# The Geometry of the Modular Bootstrap

Li-Yuan Chiang, Tzu-Chen Huang, Yu-tin Huang, Wei Li, Laurentiu Rodina, He-Chen Weng

arXiv:2308.11692v3 (2023-08-22), http://arxiv.org/abs/2308.11692v3
###### Abstract
We explore the geometry behind the modular bootstrap and its image in the space of Taylor coefficients of the torus partition function. In the first part, we identify the geometry as an intersection of planes with the convex hull of moment curves on \(R^{+}\otimes\mathbb{Z}\), with boundaries characterized by the total positivity of generalized Hankel matrices. We phrase the Hankel constraints as a semi-definite program, which has several advantages, such as constant computation time with increasing central charge. We derive bounds on the gap, twist-gap, and the space of Taylor coefficients themselves. We find that if the gap is above \(\Delta^{*}_{\text{gap}}\), where \(\frac{c-1}{12}<\Delta^{*}_{\text{gap}}<\frac{c}{12}\), all coefficients become bounded on both sides and kinks develop in the space. In the second part, we propose an analytic method of imposing the integrality condition for the degeneracy number in the spinless bootstrap, which leads to a non-convex geometry. We find that even at very low derivative order this condition rules out regions otherwise allowed by bootstraps at high derivative order.
## 1 Introduction
For theories whose physical observables can be constrained uniquely via the systematic imposition of unitarity, locality and symmetries, one often finds that the observable can be identified with a well-defined mathematical entity. The most well studied example is the amplituhedron for the scattering amplitude of maximal super Yang-Mills [1], where the amplitude is identified as the canonical form on a positive geometry. Other examples include the associahedron for bi-adjoint \(\phi^{3}\) theory [2], the amplituhedron for ABJM theory [3; 4], and the cosmological polytope for the wave function of conformally coupled scalars [5].
In some cases general physical principles are not sufficient to uniquely determine specific theories, but instead allow a landscape of possibilities. Remarkably, even this "theory space" is in many cases an interesting geometry. A natural question is then how much of allowed space is saturated by known theories, and if the remaining space either contains new unknown theories, or implies some new principle must shrink the space further.
One such example is the space of Wilson coefficients in the low energy effective field theory of a consistent UV completion. After the pioneering result in [6], it was recently uncovered that all coefficients satisfy an infinite number of inequalities, forming the so called EFThedron [7]. Bounds on these coefficients were obtained in various ways, using both numerical and analytical methods [8; 9; 10; 11; 12; 13]. Work on the EFT bootstrap benefited from renewed interest in CFT bootstrap initiated in [14], which was able to constrain the conformal dimensions of low lying operators through the method of optimal functionals.1 The geometric nature of the CFT bootstrap was also explored in [18; 19], finding many similarities with the EFThedron.
Footnote 1: See also [15; 16; 17] for recent reviews on the CFT bootstrap.
In this paper we import recent lessons of the EFT bootstrap and apply them to obtain a geometric interpretation of the modular bootstrap for 2D CFTs [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. This bootstrap applies the constraint that the CFT can be consistently defined on arbitrary Riemannian manifolds, and so its partition function must satisfy modular invariance of the torus. It was introduced by Hellerman to obtain bounds on the lowest primary [37], yielding \(\frac{c}{6}\) for \(c\gg 1\), where \(c\) is the central charge. This bound was subsequently improved through a series of works [38; 39; 40], with the strongest current result \(\frac{c}{9.1}\), obtained in [41]. These bounds are weaker than expected from holography where the Banados-Teitelboim-Zanelli (BTZ) black holes [42] serve as the lowest state of the vacuum theory, having \(\Delta=\frac{c}{12}\). While it is now generally believed that the modular bootstrap is not sufficient to improve the bound much further beyond \(\frac{c}{9.1}\), several open questions remain, such as the exact implications of integer degeneracy, which has not been systematically studied except in rational CFT [43].
One perhaps surprising aspect we will quickly observe is that the modular CFT and EFT bootstraps are in fact almost identical mathematical problems. Both bootstraps can be solved efficiently via SDPB, as was done in [9], but also by phrasing them as moment problems [44].
To see just how closely related the EFT and CFT bootstraps are, we will show that the Taylor coefficients of the partition function \(Z(\tau,\bar{\tau})\), expanded around the self-dual point \(\tau=i\), can be put in a form very similar to the EFT dispersion relations derived in [13]
\[Z_{p,q}^{\rm CFT}=\sum_{i}n_{i}\Delta_{i}^{p}J_{i}^{q}\qquad\qquad g_{p,q}^{ \rm EFT}=\sum_{i}p_{i}m_{i}^{p}J_{i}^{q}\,. \tag{1}\]
On the CFT side, \(\Delta_{i}=h_{i}+\bar{h}_{i}\geq\Delta_{\rm gap}\) is the scaling dimension of state \(i\), and \(n_{i}\geq 0\) is the (rescaled) degeneracy of this state. Modular invariance implies that the spin \(J_{i}=h_{i}-\bar{h}_{i}\) is an integer, and also that particular linear combinations of the \(Z_{p,q}\) vanish. On the EFT side, \(m_{i}\geq M\) and \(J_{i}\) are the mass and spin of UV state \(i\), with \(M\) being the UV scale; unitarity implies \(p_{i}\geq 0\), while crossing symmetry requires linear combinations of the \(g_{p,q}\) to vanish. The vanishing of these linear combinations is what we will refer to as modular planes, or null planes, and imposing these to high order is one of the main challenges in both bootstrap programs. We will label the highest total derivative order imposed by \(\Lambda_{\rm max}\equiv\max[p+q]\). Bounds obtained are always valid for any \(\Lambda_{\rm max}\), but they become stronger the higher in \(\Lambda_{\rm max}\) one can go, until at some point they converge. Because eq.(1)
includes spin integrality as a constraint, solving it is referred to as the _spinning_ bootstrap. A much simpler problem is ignoring spin altogether, and considering only a subspace of this problem
\[Z_{p,0}=\sum_{i}n_{i}\Delta_{i}^{p}\,, \tag{2}\]
which is referred to as the _spinless_ bootstrap.
Crucially, eq.(2) constitutes a single-variable moment problem, i.e. the question of what the sufficient conditions on \(Z\) are for the representation on the RHS of eq.(2) to exist. The solution to such a single moment problem is given in terms of positivity conditions on Hankel matrices. Similarly, the expressions in eq.(1) constitute a double moment problem, with solutions in terms of generalized Hankel matrices. The moment problem formulation provides valid and analytic answers to the bootstrap. At low order in \(\Lambda_{\rm max}\), the conditions can even be solved in closed form, but are typically not optimal. However, such solutions may reveal fine details that numerical methods potentially miss. Besides investigating low order closed form results, in this paper we also take a step further in the moment problem approach, and develop a new method to solve the (generalized) Hankel matrix positivity constraints to high order, by phrasing them as a semi-definite program (SDP). This allows us to compare the moment problem solutions of the bootstrap to traditional SDPB approaches at high derivative order. Since positivity of the (generalized) Hankel matrices is a necessary condition, any space or spectrum that is ruled out attains a _certificate of infeasibility_, irrespective of the order of the derivative- or spin-truncation.
As an example, we test this setup by computing the gap and comparing to previous results. Our approach allows us to directly compute bounds at arbitrary \(c\), for both spinless and spinning bootstrap. In the current paper we have computed up to \(c=1200\) and \(\Lambda_{\rm max}=11\) for spinning and \(c=2000\) and \(\Lambda_{\rm max}=43\) for spinless. Evaluating at \(c\gg 1\) does not introduce any slow down in computation time, which demonstrates the utility of our approach for large \(c\) bootstrap. Much higher \(\Lambda_{\rm max}\) can be reached by implementing our program on a cluster.
For spinless, the gap we find is considerably weaker compared to [41], which computed up to \(\Lambda_{\rm max}=2000\). However, our method allows directly checking much higher \(c\), reducing errors due to extrapolation. For the spinning bootstrap, the state of the art was given in [39], at \(c=20\) and \(\Lambda_{\rm max}=175\). Our results support the findings in [39] that imposing spin integrality does not substantially improve bounds on the gap at large central charge.
Next we study the space of consistent CFT, defined in terms of the space of allowed \(Z_{p,q}\). In general, this space is unbounded. However, if the spectrum has a gap that is above some threshold, we find the space transitions from unbounded to bounded. This motivates us to term the threshold the critical gap \(\Delta_{\rm gap}^{*}\). As an example at \(\Lambda=11\), the upper bound for the partition function evaluated at the self-dual point \(\hat{Z}_{0}\) is shown below in Figure 1 for \(c=100\).
We also notice the appearance of kinks between \(\Delta_{\rm gap}^{*}\) and the maximal gap. These kinks become ubiquitous as \(c\gg 1\), and their positions appear stable with respect to derivative truncation, suggesting a correlation with physical theories.
In the last part of the paper we take the first steps towards imposing the integer degeneracy condition. While on general grounds it is believed this condition will not affect the gap bounds at large central charge [39], we will show that it drastically reduces allowed space at \(c\sim 1\), even when considered at very low derivative order. We will impose integrality by viewing the partition function as a Minkowski sum of individual states, similar to the method proposed by some of the authors to impose the unitarity upper bound in the EFT bootstrap [45]. Focusing on the spinless case given by eq.(2), the "allowed space" of each state \(i\) contributing to the partition function is a curve, parameterized by \(\Delta_{i}\in[\Delta_{\rm gap},\infty)\). The allowed space for the whole partition function is therefore a Minkowski sum of such curves, each rescaled by a number \(n_{i}\). We will show that the boundaries of this problem are simple to determine in arbitrary dimension. This more constrained geometry is non-convex, lying inside the previously derived convex geometry, generally touching the original boundary only in isolated points.
The intersection with modular planes is however more difficult to perform in general. For one null plane, this intersection can be solved analytically, while for more null planes it must be solved numerically. We solve the intersection with two null planes, and find that the integrality condition already excludes extra regions compared to traditional spinless or spinning bootstraps at much higher derivative order. For \(c=1\), we find the boundaries significantly approach the points corresponding to the known free boson theories. This is shown in Figure 2 below. It is important to note that non-integer bootstraps can only produce convex regions, while at least this \(c=1\) case demonstrates that the full picture must necessarily include integrality.
Figure 1: The upper bound on \(\hat{Z}_{0}\) plotted against the gap for \(c=100\). There are multiple kinks, situated close to the lower-derivative order critical gaps.
At large \(c\), the relative effect of integrality with just two null planes appears to wane. However, it is generally expected that high \(c\) also requires more null planes for convergence, so computing higher order intersections would be required before making a definitive statement on the role of integrality at high \(c\).
To summarize our main results:
* The spinless/spinning modular bootstrap can be formulated as a single/double moment problem, which is solved in terms of Hankel matrices
* At low derivative order, the Hankel positivity can be solved in closed form, while at high order via SDP. This provides some advantages over SDPB, such as eliminating the spin truncation convergence check, and stable efficiency with increasing \(c\).
* A critical gap exists, above which the allowed space becomes bounded from all directions. Above this gap the space exhibits numerous kinks
* The integer degeneracy condition leads to a non-convex geometry that significantly reduces allowed space even at very low derivative order.
Added note: During the preparation of this draft, we became aware of the work [46] where integrality was imposed over states within a small range of conformal dimensions around the gap. One then requires that at least one integer exists between the lower and
Figure 2: Comparing regions carved out by SDPB with the region carved out by the (spinless) degeneracy integrality condition at \(\Lambda_{\rm max}=3\). The integrality condition rules out new regions in the space, and significantly approaches the circle branch theories.
upper bounds of the spectral density. It was found that this condition slightly improves the gap at small central charge. In our work, integrality is imposed across the entire spectrum.
The paper is organized as follows. In Section 2 we introduce the torus partition function of 2D CFT, which is our object of study. In Section 3 we describe how the single and double moment problems can be solved in terms of Hankel matrices, and in Section 4 we show how the partition function can be related to such moment problems. We propose an SDP that can solve positivity of Hankel matrices, and explore bounds on the gap, twist gap, as well as introduce the notion of a critical gap. In Section 5 we explore the integrality constraint, which we impose by constructing partition function space via iterative Minkowski sums. We compute the intersection with two modular planes, and compare to results in the previous sections and known theories at \(c=1\). We conclude in Section 6.
## 2 Setup
The torus partition function in 2D CFT with central charge \(c>1\) is given by
\[Z(\tau,\bar{\tau})=\chi_{0}(q)\bar{\chi}_{0}(\bar{q})+\sum_{h,\bar{h}}n_{h,\bar {h}}\chi_{h}(q)\chi_{\bar{h}}(\bar{q}),\quad q=e^{i2\pi\tau}\,, \tag{1}\]
where \(\tau\) is the torus modulus, and \(n_{h,\bar{h}}\) is the degeneracy of the state with scaling dimensions \(\Delta=h{+}\bar{h}\) and spin \(J=|h{-}\bar{h}|\), i.e. it is a positive number with \(n_{0,\bar{0}}=1\). The functions \(\chi_{h}(q)\) are the Virasoro characters, which sum up the contribution to the partition function from the descendants of each Virasoro primary. With \(c>1\), they are given by
\[\chi_{0}(q)=q^{-\frac{(c-1)}{24}}\frac{1-q}{\eta(q)},\quad\chi_{h}(q)=q^{h- \frac{(c-1)}{24}}\frac{1}{\eta(q)}\quad\forall h>0\,, \tag{2}\]
where \(\chi_{0}(q)\) is the vacuum character and \(\eta(q)\) is the Dedekind eta function, satisfying \(\eta(-\frac{1}{\tau})=\sqrt{-i\tau}\eta(\tau)\). For \(c=1\), degenerate characters appear, which require subtraction, spoiling the form of eq.(1). We leave this generalization to future work, and in this paper focus only on partition functions of the above form, which generally means \(c>1\), but also includes a subset of free boson theories at \(c=1\), as will be discussed in detail later.
Modular invariance on the torus simply implies invariance under the action of \(\mathrm{PSL}(2,\mathbb{Z})\) on \((\tau,\bar{\tau})\):
\[Z(\tau,\bar{\tau})=Z\left(\frac{a\tau{+}b}{c\tau{+}d},\frac{a\bar{\tau}{+}b}{ c\bar{\tau}{+}d}\right)\,,\quad\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{PSL}(2,\mathbb{Z})\,. \tag{3}\]
The group is generated by \(S\) and \(T\) transformations
\[S:\tau\rightarrow-\frac{1}{\tau},\quad T:\tau\rightarrow\tau{+}1\,. \tag{4}\]
In terms of the representation in eq.(1), \(T\) invariance is ensured by the spin \(J\) being integer. One can simply focus on \(S\) invariance \(Z(\tau,\bar{\tau})=Z(-\frac{1}{\tau},-\frac{1}{\bar{\tau}})\). This constraint is usually studied by expanding around the \(S\) self-dual point
\[(\tau=i,\;\bar{\tau}=-i)\, \tag{5}\]
implying [37]:
\[\left(\tau\frac{\partial}{\partial\tau}\right)^{m}\left(\bar{\tau}\frac{\partial }{\partial\bar{\tau}}\right)^{n}Z(\tau,\bar{\tau})|_{\tau=i}=0\,,\quad\forall\, m+n\in odd\,. \tag{6}\]
That is, the Taylor expansion allows one to discretize the constraint: instead of a functional identity one now has a vector (which can be taken to be infinite dimensional) identity.
In the bootstrap program, one imposes modular invariance on eq.(1) and extracts constraints on \(n_{h,\bar{h}}\) and gaps in \(h,\bar{h}\). Here, we would like to approach the problem from another viewpoint, by considering the "space" of consistent partition functions, defined as the set of \(Z(\tau,\bar{\tau})\) that admit a representation of the form eq.(1) and respect modular invariance. We begin by discretizing eq.(1) around the self-dual point:
\[z^{(a,b)}=\sum_{h,\bar{h}}n_{h,\bar{h}}\left(\chi_{h}^{(a)}\right)\left(\chi_{ \bar{h}}^{(b)}\right)\,, \tag{7}\]
where \(z^{(a,b)}\equiv(\tau\partial_{\tau})^{a}(\bar{\tau}\partial_{\bar{\tau}})^{b} Z|_{\tau=i,\bar{\tau}=-i}\), \(\chi_{h}^{(a)}\equiv(\tau\partial_{\tau})^{a}\chi_{h}|_{\tau=i}\). The space of partition functions is now defined by the allowed range for the set of Taylor coefficients
\[[{\bf Z}]\equiv\left(\begin{array}{cccc}z^{(0,0)}&z^{(0,1)}&z^{(0,2)}&\dots \\ z^{(1,0)}&z^{(1,1)}&z^{(1,2)}&\dots\\ z^{(2,0)}&z^{(2,1)}&z^{(2,2)}&\dots\\ \vdots&\vdots&\vdots&\vdots\end{array}\right)\,. \tag{8}\]
By matching order by order in Taylor coefficients on both sides of the equality in eq.(7), we have:
\[[{\bf Z}]=\sum_{h,\bar{h}}n_{h,\bar{h}}\ \ \vec{\chi}_{h}\ (\vec{\chi}_{\bar{h}})^{T}, \vec{\chi}_{h}\equiv\left(\begin{array}{c}\chi_{h}^{(0)}\\ \chi_{h}^{(1)}\\ \chi_{h}^{(2)}\\ \vdots\\ \chi_{h}^{(d)}\end{array}\right),\,n_{h,\bar{h}}>0\,, \tag{9}\]
where \(d\) is the degree of truncation in the Taylor expansion. Since \(n_{h,\bar{h}}>0\), the above simply states that the partition function \([{\bf Z}]\) must lie within the convex hull of tensored vectors \((\vec{\chi}_{h},\vec{\chi}_{\bar{h}})\). The convex hull of vectors is a polytope, and here we essentially have the geometry of tensored polytopes. To understand the space we will need to find the boundaries that "carve" out the hull.
## 3 The convex hull of moments
So far the components for the vectors \(\vec{\chi}_{h},\vec{\chi}_{\bar{h}}\) on the RHS of eq.(9) are polynomials in \(h\). As we will soon see, after a series of simple GL transformations, these can be transformed into points on a \(d\)-dimensional moment curve, i.e. each vector is a collection of monomials
with increasing degree:
\[\left(\begin{array}{c}1\\ h\\ h^{2}\\ \vdots\\ h^{d}\end{array}\right)\,. \tag{3.1}\]
The convex hull of such moments is well studied in the mathematical literature, where its characterization is termed the moment problem. In its most general form the moment problem identifies the necessary and sufficient conditions on a vector \(\vec{y}=(y_{0},y_{1},\cdots,y_{n})\), such that
\[y_{n}=\int_{I}\rho(x)x^{n}dx,\quad x\in I\,, \tag{3.2}\]
admits a solution \(\rho(x)\) that is non-negative. Here \(I\) is some region on the real axis, which falls into three categories: the Hamburger \(I=(-\infty,+\infty)\), Stieltjes \(I=[0,+\infty)\), and Hausdorff \(I=[0,1]\) moment problems. Equivalently, these conditions are then the co-dimension one boundaries for the convex hull of points on a moment curve. For finite \(n\), this is called the truncated moment problem, and sufficient conditions can be phrased in terms of the symmetric Hankel matrix of the \(y_{n}\).
One can generalize to the bi-variate moment problem, which will be relevant for the spinning bootstrap. In this case, we ask for conditions on \([y]_{n\times n}\) such that
\[y_{n,m}=\int_{I}\rho(x,y)x^{n}y^{m}\,dxdy,\quad x,y\in I\,, \tag{3.3}\]
admits a non-negative solution for \(\rho(x,y)\). For the bi-variate problem, only necessary conditions are known for the most general truncated moment problems, if we require a finite number of constraints. However, we will numerically demonstrate that these conditions lead to constraints that are close to the true boundary.
### The single moment problem
Let us first review the geometry associated with the convex hull of moment curves, or the single moment problem:
\[\vec{y}=\left(\begin{array}{c}y_{0}\\ y_{1}\\ y_{2}\\ \vdots\\ y_{d}\end{array}\right)=\int_{I}\rho(x)\left(\begin{array}{c}1\\ x\\ x^{2}\\ \vdots\\ x^{d}\end{array}\right)dx,\quad\rho(x)>0,\quad\forall x\in I\,. \tag{3.4}\]
As shown in [47], the image of the hull is carved out by the statement that the symmetric Hankel matrix \(K[\vec{y}]\), defined as
\[K_{n}[\vec{y}]\equiv\left(\begin{array}{cccc}y_{0}&y_{1}&\cdots&y_{n}\\ y_{1}&y_{2}&\cdots&y_{n+1}\\ \vdots&\vdots&\vdots&\vdots\\ y_{n}&y_{n+1}&\cdots&y_{2n}\end{array}\right)\,, \tag{3.5}\]
is a positive semi-definite matrix, which implies its leading principal minors are non-negative. If we further constrain ourselves to the convex hull of half moment curves, where \(x_{i}\in\mathbb{R}^{+}\), we then have that the Hankel and shifted Hankel matrices are totally positive, i.e. all minors are non-negative. The independent conditions are given as:
\[\text{Det }(K_{n}[\vec{y}])\geq 0,\quad\text{Det }\left(K_{n}^{\text{shift}}[\vec{y}]\right)\equiv\text{Det }(K_{n}[\vec{y}])\,|_{y_{i}\to y_{i+1}}\geq 0\,. \tag{3.6}\]
That these are necessary conditions can be derived as follows: the Hankel matrices can be written as
\[K_{n}=\int_{I}\,\rho(x)\,\vec{x}\vec{x}^{T}\;dx\,, \tag{3.7}\]
where \(\vec{x}^{T}=(1,x,\cdots,x^{d})\). Sandwiching \(K_{n}\) with an arbitrary vector \(\vec{v}\) one easily sees that \(\vec{v}^{T}K_{n}\vec{v}\geq 0\), and hence \(K_{n}\) is a positive semi-definite matrix. The shifted Hankel can be written as:
\[K_{n}^{\text{shift}}=\int_{I}\,x\rho(x)\,\vec{x}\vec{x}^{T}\;dx\,. \tag{3.8}\]
If \(x\in\mathbb{R}^{+}\) then \(x\rho(x)\geq 0\), and one similarly concludes that \(K_{n}^{\text{shift}}\) is a positive semi-definite matrix.
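As a quick numerical illustration of these necessary conditions, the following sketch (with a made-up spectrum, not taken from the paper) builds the moments of a toy positive measure on \(\mathbb{R}^{+}\) and verifies that the Hankel matrix of eq.(3.5) and its shifted counterpart are indeed positive semi-definite:

```python
# Minimal numerical check (toy spectrum, not from the paper): build the moments
# y_n of a non-negative measure supported on R^+ and verify that the Hankel
# matrix and its shifted version are both positive semi-definite.
import numpy as np

x   = np.array([0.3, 1.0, 2.5])   # toy support points x_i >= 0
rho = np.array([0.7, 1.2, 0.4])   # toy weights rho_i > 0

p = 2                                                         # Hankel size (p+1)x(p+1)
y = np.array([np.sum(rho * x**n) for n in range(2*p + 2)])    # moments y_0,...,y_{2p+1}

def hankel(y, p, shift=0):
    """(p+1)x(p+1) Hankel matrix K_p, optionally shifted y_i -> y_{i+1}."""
    return np.array([[y[i + j + shift] for j in range(p + 1)] for i in range(p + 1)])

K, K_shift = hankel(y, p), hankel(y, p, shift=1)
print(np.linalg.eigvalsh(K).min() >= -1e-9,
      np.linalg.eigvalsh(K_shift).min() >= -1e-9)   # True True
```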
Since the boundaries are associated with singular Hankel matrices, this implies that the boundary is parameterized by the convex hull of finitely many elements. Consider \(d=2p{+}1\), where the space is carved out by
\[K_{m}\succeq 0,\quad K_{m}^{\text{shift}}\succeq 0,\quad\forall m\leq p\,. \tag{3.9}\]
The space is projective \(\mathbb{P}^{2p+1}\) and the co-dimension one boundary is naturally associated with \(\text{Det}K_{p}=0\) or \(\text{Det}K_{p}^{\text{shift}}=0\). We can readily read off the spectrum on these boundaries
\[\text{Det}K_{p} = 0,\quad(\Delta_{1},\cdots,\Delta_{p},\Delta_{\infty})\,,\] \[\text{Det}K_{p}^{\text{shift}} = 0,\quad(\Delta_{\text{gap}},\Delta_{1},\cdots,\Delta_{p})\,, \tag{3.10}\]
where \(\Delta_{i}\) represents a state with \((1,\Delta_{i},\Delta_{i}^{2},\cdots\Delta_{i}^{2p+1})\), \(\Delta_{\infty}\) being \((0,0,\cdots,0,1)\) and \(\Delta_{\text{gap}}\) corresponds to the state \((1,0,0,\cdots,0)\). The set of states in eq.(3.10) can be easily understood from the fact that \(\Delta_{\infty}\) does not contribute to \(K_{p}\) while \(\Delta_{\text{gap}}\) does not contribute to \(K_{p}^{\text{shift}}\). The free parameters that parameterize the hull are the \(2p\) parameters \((\Delta_{i},\rho_{i})\) and \(\rho_{\infty}\) or \(\rho_{\text{gap}}\), with one degree of freedom removed due to the projective nature.
### The double moment problem
We now move on to the bi-variate problem. More precisely, we are interested in the space of \([\mathbf{Y}]_{d+1\times d+1}\) given by:
\[\left[\mathbf{Y}\right]_{d+1\times d+1}=\left(\begin{array}{cccc}y^{(0,0)}&y^{(0,1)}&\cdots&y^{(0,d)}\\ y^{(1,0)}&y^{(1,1)}&\cdots&y^{(1,d)}\\ \vdots&\vdots&\vdots&\vdots\\ y^{(d,0)}&y^{(d,1)}&\cdots&y^{(d,d)}\end{array}\right)=\int_{I}\,\rho(x,\tilde{x})\,\vec{x}\,\vec{\tilde{x}}^{\,T}dxd\tilde{x}\,, \tag{3.11}\]
such that a non-negative \(\rho(x,\tilde{x})\geq 0\) for \(x,\tilde{x}\in I\) exists. For each fixed row or column, we can construct a Hankel and a shifted Hankel matrix, which are positive semi-definite by our previous argument. Furthermore, consider the following "generalized Hankel matrix"
\[{\bf K}[{\bf Y}]\equiv\left(\begin{array}{ccccc}y^{(0,0)}&y^{(0,1)}&y^{(1,0)}& y^{(0,2)}&\cdots\\ y^{(0,1)}&y^{(0,2)}&y^{(1,1)}&y^{(0,3)}&\cdots\\ y^{(1,0)}&y^{(1,1)}&y^{(2,0)}&y^{(1,2)}&\cdots\\ y^{(0,2)}&y^{(0,3)}&y^{(1,2)}&y^{(0,4)}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\cdots\end{array}\right)\,. \tag{3.12}\]
If \(d=2m{+}1\), the generalized Hankel matrix will be \((m{+}1)(2m{+}3){\times}(m{+}1)(2m{+}3)\). If a solution to eq.(3.11) exists, we claim that the generalized Hankel is also a positive semi-definite matrix. Indeed substituting the RHS of eq.(3.11) into \({\bf K}[{\bf Y}]\) we see that
\[{\bf K}[{\bf Y}]=\int_{I}\,\rho(x,\tilde{x})\vec{\bf X}(\vec{\bf X})^{T}\;dxd \tilde{x}\,, \tag{3.13}\]
where \((\vec{\bf X})^{T}=(1,x,\tilde{x},x^{2},\tilde{x}x,\tilde{x}^{2},\cdots)\). Then, since
\[{\bf V}^{T}{\bf K}[{\bf Y}]{\bf V}=\int_{I}\,\rho(x,\tilde{x})({\bf V}^{T}\vec {\bf X})^{2}\geq 0\,, \tag{3.14}\]
for any real vector \({\bf V}\), we conclude that \({\bf K}[{\bf Y}]\) is a positive semi-definite matrix. If we further restrict \(x,\tilde{x}\in\mathbb{R}^{+}\), then the shifted generalized Hankel matrices, shifted either by one column or one row,
\[{\bf K}[{\bf Y}]^{\rm shift_{q}}={\bf K}[{\bf Y}]|_{y^{(p,q)}\to y^{(p,q+1)}},\quad{\bf K}[{\bf Y}]^{\rm shift_{p}}={\bf K}[{\bf Y}]|_{y^{(p,q)}\to y^{(p+1,q)}}\,, \tag{3.15}\]
must also be positive semi-definite. For the bi-variate moment problem, the vanishing of the minors for the generalized Hankel no longer leads to simple constraint on the number of elements in the hull. To see this consider
\[{\rm Det}\left(\begin{array}{cc}y^{(0,0)}&y^{(0,1)}\\ y^{(0,1)}&y^{(0,2)}\end{array}\right)\,. \tag{3.16}\]
This can vanish with an infinite number of states with distinct \(x\) but the same \(\tilde{x}\).
The sufficient conditions are less understood for the bi-variate moment problem. On general grounds, we expect that the boundaries are parameterized by a finite number of elements in the hull. That is, the degrees of freedom that parameterize the boundary should be the \((\rho_{i},x_{i},\tilde{x}_{i})\) of the finite set of elements. For the single moment problem, the Hankel matrix being singular puts a bound on the number of elements, and thus these conditions serve as the boundary or define limiting points of the boundary. For the generalized Hankel of the double moment problem, this is no longer true. At truncated order, we no longer expect that the singular generalized Hankel describes the true boundary. However, if we have a collection of singular Hankel matrices, one can infer a bound on the number of elements. Indeed, this expectation is verified when compared to numerical SDPB results in the next section.
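The necessary condition can again be checked numerically. The following sketch (toy two-variable spectrum; the monomial ordering is a convention choice and does not affect positive semi-definiteness) builds the generalized Hankel matrix of eq.(3.12) directly from the double moments \(y^{(p,q)}\):

```python
# Toy check of the generalized Hankel condition (illustrative data only): the
# matrix with entries y^{(p1+p2, q1+q2)}, indexed by monomials x^p xtilde^q of
# total degree <= deg, is PSD when the double moments come from a non-negative
# measure on R^+ x R^+, cf. eqs.(3.13)-(3.14).
import numpy as np

pts = np.array([[0.5, 1.0], [2.0, 0.3], [1.5, 1.5]])   # toy (x, xtilde) support
rho = np.array([1.0, 0.6, 0.2])                        # toy weights

def y(p, q):                                           # double moment y^{(p,q)}
    return np.sum(rho * pts[:, 0]**p * pts[:, 1]**q)

deg = 2
monos = [(a, s - a) for s in range(deg + 1) for a in range(s + 1)]   # (p, q) with p+q <= deg
K = np.array([[y(p1 + p2, q1 + q2) for (p2, q2) in monos] for (p1, q1) in monos])
print(np.linalg.eigvalsh(K).min() >= -1e-9)   # True: the generalized Hankel is PSD
```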
## 4 The modular-hedron
We are now ready to map the modular bootstrap problem to the moment problem. It will be more convenient to begin with the reduced partition function \(\hat{Z}(\tau,\bar{\tau})\), defined as
\[\hat{Z}(\tau,\bar{\tau})=|\tau|^{\frac{1}{2}}|\eta(\tau)|^{2}Z(\tau,\bar{\tau})\,. \tag{4.1}\]
As the pre-factor \(|\tau|^{1/2}|\eta(\tau)|^{2}\) is invariant under \(S\)-transform \(\tau\to-1/\tau\), the reduced partition function retains its invariance. Similarly, the expansion is now defined on the reduced characters, which are independent of the \(\eta\) functions:
\[\hat{Z}(\tau,\bar{\tau})=\hat{\chi}_{0}(\tau)\hat{\bar{\chi}}_{0}(\bar{\tau})+\sum_{h,\bar{h}>0}n_{h,\bar{h}}\hat{\chi}_{h}(\tau)\hat{\bar{\chi}}_{\bar{h}}(\bar{\tau})\,, \tag{4.2}\]
where
\[\hat{\chi}_{0}(\tau)\hat{\chi}_{0}(\bar{\tau})=|\tau|^{1/2}|q^{-\frac{c-1}{24}}(1-q)|^{2},\quad\hat{\chi}_{h}(\tau)\hat{\chi}_{\bar{h}}(\bar{\tau})=|\tau|^{1/2}q^{h-\frac{c-1}{24}}\bar{q}^{\bar{h}-\frac{c-1}{24}}\,. \tag{4.3}\]
Again we expand both sides of eq.(4.2) around the self-dual point. First for the reduced characters,
\[\left.\begin{array}{rcl}\hat{\chi}_{h}^{(n)}\equiv\left.(\tau\partial_{\tau})^{n}\hat{\chi}_{h}(\tau)\right|_{\tau=i}&=&(-1)^{\frac{1}{8}}e^{\frac{(c-1-24h)\pi}{12}}\left(a_{n,n}+X(a_{n,n-1}+X(\cdots(a_{n,2}+X(a_{n,1}+X))\cdots))\right)\,,\\ \hat{\chi}_{\bar{h}}^{(n)}\equiv\left.(\bar{\tau}\partial_{\bar{\tau}})^{n}\hat{\chi}_{\bar{h}}(\bar{\tau})\right|_{\bar{\tau}=-i}&=&-(-1)^{\frac{7}{8}}e^{\frac{(c-1-24\bar{h})\pi}{12}}\left(a_{n,n}+\bar{X}(a_{n,n-1}+\bar{X}(\cdots(a_{n,2}+\bar{X}(a_{n,1}+\bar{X}))\cdots))\right)\,,\end{array}\right. \tag{4.4}\]
where \(X=(c-1-24h)\pi\), \(\bar{X}=(c-1-24\bar{h})\pi\) and \(a_{n,q}\) are numbers independent of \((c,h,\bar{h})\). Up to a universal pre-factor \(\alpha_{c,h}\equiv(-1)^{\frac{1}{8}}e^{\frac{(c-1-24h)\pi}{12}}\) and \(\alpha_{c,\bar{h}}\equiv-(-1)^{\frac{7}{8}}e^{\frac{(c-1-24\bar{h})\pi}{12}}\), we see that the \(\hat{\chi}_{h}^{(n)}\)s (\(\hat{\chi}_{\bar{h}}^{(n)}\)s) are proportional to a polynomial of degree \(n\) in \(h\) (\(\bar{h}\)). As an example
\[\hat{\chi}_{h}^{(0)} = \alpha_{c,h},\quad\hat{\chi}_{h}^{(1)}=\alpha_{c,h}\frac{1}{12}(3+X)\,,\] \[\hat{\chi}_{h}^{(2)} = \alpha_{c,h}\frac{1}{(12)^{2}}(9+X(18+X)),\quad\hat{\chi}_{h}^{(3)}=\alpha_{c,h}\frac{1}{(12)^{3}}(27+X(279+X(45+X)))\,. \tag{4.5}\]
Collecting \(\hat{\chi}_{h}^{(n)}\) into a vector, \(\vec{\hat{\chi}}_{h}=(\hat{\chi}_{h}^{(0)},\hat{\chi}_{h}^{(1)},\cdots)\), via a universal \(c\) dependent GL transformation \(M(c)\) we can rotate it into a moment form. For example, truncating at the third derivative order :
\[M(c)=\left(\begin{array}{ccc}1&0&0\\ \frac{\pi(c-1)+3}{24\pi}&-\frac{1}{2\pi}&0\\ \frac{\pi(c-1)(\pi(c-1)+6)+45}{576\pi^{2}}&\frac{-\pi c+\pi-9}{24\pi^{2}}& \frac{1}{4\pi^{2}}\end{array}\right)\] \[\to\quad M(c)\left(\begin{array}{c}\hat{\chi}_{h}^{(0)}\\ \hat{\chi}_{h}^{(1)}\\ \hat{\chi}_{h}^{(2)}\end{array}\right)=\alpha_{c,h}M(c)\left(\begin{array}{c }1\\ \frac{1}{12}(3+X)\\ \frac{1}{(12)^{2}}(9+X(18+X))\end{array}\right)=\alpha_{c,h}\left(\begin{array}{c}1\\ h\\ h^{2}\end{array}\right)\,. \tag{4.6}\]
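The rotation in eq.(4.6) can be verified symbolically; the following is a small sketch assuming the sympy package (not part of the paper's toolchain):

```python
# Symbolic consistency check of the GL rotation in eq.(4.6): stripping the
# universal prefactor alpha_{c,h}, M(c) maps the truncated derivative vector of
# the reduced character to the moment vector (1, h, h^2).
import sympy as sp

c, h = sp.symbols('c h', positive=True)
pi = sp.pi
X = (c - 1 - 24*h) * pi

chi = sp.Matrix([1,
                 (3 + X) / 12,
                 (9 + X*(18 + X)) / 144])

M = sp.Matrix([[1, 0, 0],
               [(pi*(c - 1) + 3)/(24*pi), -1/(2*pi), 0],
               [(pi*(c - 1)*(pi*(c - 1) + 6) + 45)/(576*pi**2),
                (-pi*c + pi - 9)/(24*pi**2), 1/(4*pi**2)]])

diff = (M*chi - sp.Matrix([1, h, h**2])).applyfunc(sp.simplify)
print(diff)   # Matrix([[0], [0], [0]])
```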
Applying the same double expansion for the partition function and the vacuum character, eq.(4.2) can be reorganized as:
\[\boxed{\begin{array}{c}[\widetilde{\mathbf{Z}}]_{h,\bar{h}}=M(c)\left([\mathbf{ \hat{Z}}]-\vec{\tilde{\chi}}_{0}\vec{\tilde{\chi}}_{0}^{T}\right)M^{\,T}(c)= \sum_{h,\bar{h}}\ \tilde{n}_{h,\bar{h}}\ \left(\begin{array}{c}1\\ h\\ h^{2}\\ \vdots\end{array}\right)\left(\begin{array}{c}1\\ \bar{h}\\ \bar{h}^{2}\\ \vdots\end{array}\right)^{T},\\ \end{array}} \tag{4.7}\]
where \(\tilde{n}_{h,\bar{h}}=n_{h,\bar{h}}e^{\frac{(c-1-12(h+\bar{h}))\pi}{6}}\). The subscript for \([\widetilde{\mathbf{Z}}]_{h,\bar{h}}\) indicates the double moment is in \((h,\bar{h})\). We see that after the double GL rotation, the vacuum-subtracted partition function is given by the convex hull of a tensor product of half-moment curves. We will term this space the "_modular-hedron_". It is sometimes more convenient to consider states labeled by \((\Delta,\mathcal{J})\). The geometry for \([\widetilde{\mathbf{Z}}]_{\Delta,\mathcal{J}}\) is simply a linear transform of \([\widetilde{\mathbf{Z}}]_{h,\bar{h}}\). For example,
\[\tilde{z}^{(0,0)}_{\Delta,\mathcal{J}}=\tilde{z}^{(0,0)}_{h,\bar{h}},\quad \tilde{z}^{(0,1)}_{\Delta,\mathcal{J}}=\tilde{z}^{(1,0)}_{h,\bar{h}}-\tilde{z }^{(0,1)}_{h,\bar{h}},\quad\tilde{z}^{(1,0)}_{\Delta,\mathcal{J}}=\tilde{z}^{ (1,0)}_{h,\bar{h}}+\tilde{z}^{(0,1)}_{h,\bar{h}}\,. \tag{4.8}\]
In this basis, the geometry is associated with the tensor product of a half- and a discrete moment curve, where \(\mathcal{J}\in\mathbb{Z}\).
#### The spinless modular polytope
A simple limit of the geometry is the case where we only consider \(\tilde{z}^{(q,0)}\), i.e. where the spin information is suppressed. Then the double moment reduces back to the single moment. This corresponds to the spinless geometry where we consider the torus modulus being purely imaginary, \(\tau=i\beta\), and
\[Z[\beta]=\sum_{\Delta}n_{\Delta}e^{-2\pi\beta\left(\Delta-\frac{c-1}{12}\right)}\,. \tag{4.9}\]
We then have the modular-polytope
\[\vec{\mathcal{Z}}^{\text{spinless}}=\sum_{\Delta}n_{\Delta}\begin{pmatrix}1 \\ \Delta\\ \vdots\\ \Delta^{d}\end{pmatrix}\,, \tag{4.10}\]
where we identify \(\mathcal{Z}^{\text{spinless}}_{q}=\tilde{z}^{(q,0)}_{\Delta,\mathcal{J}}=\sum _{i=0}^{q}\begin{pmatrix}q\\ i\end{pmatrix}\tilde{z}^{(i,q-i)}_{h,\bar{h}}\).
The character representation implies that the Taylor coefficients of the partition function must lie _inside_ the hull of tensored moment curves as in eq.(4.7). This geometry encodes the positivity of the degeneracy number, as well as unitarity \(h,\bar{h}\geq 0\). Modular invariance is then reflected in the additional requirement that \(\hat{z}^{(i,j)}=0\) for \(i+j\in odd\).
That is, the discretized partition function lives on a "_modular subplane_"
\[[\mathbf{\hat{Z}}]_{\rm mod}\equiv\left(\begin{array}{ccccc}\hat{z}^{(0,0)}&0& \hat{z}^{(0,2)}&0&\cdots\\ 0&\hat{z}^{(1,1)}&0&\hat{z}^{(1,3)}&\cdots\\ \hat{z}^{(2,0)}&0&\hat{z}^{(2,2)}&0&\cdots\\ 0&\hat{z}^{(3,1)}&0&\hat{z}^{(3,3)}&\cdots\\ \vdots&\vdots&\vdots&\vdots\end{array}\right)\,. \tag{4.11}\]
The geometry for the modular bootstrap is an intersection problem, taken between the modular plane \([\mathbf{\hat{Z}}]_{\rm mod}\) with the modular-hedron \([\mathbf{\hat{Z}}]\). The boundary of this intersection will be determined by the boundaries of the modular-hedron. Restricting ourselves to the spinless geometry, i.e. the modular-polytope, this immediately implies that for \(d=2p{+}1\), the boundary of the intersection would be given by \(p\) states, as observed in [41].
In the following, we will systematically analyze the geometry using both analytical and numerical methods. We will also impose further constraints on the spectrum such as,
* A gap for either \(\Delta\) or \(min[h,\bar{h}]\).
* Spin integrality.
* Integrality for the degeneracy number.
This will allow us to glean information with regards to the twist-gap, gap, and bounds on the value of the partition function at the self-dual point.
### The semi-definite program setup
Here we summarize the constraints developed in the previous sections as follows:
* Modular-hedron geometry: \[\mathbf{K}\left[[\mathbf{\widetilde{Z}}]_{h,\bar{h}}\right]\succeq 0\,.\] (4.12) where \([\mathbf{\widetilde{Z}}]_{h,\bar{h}}\) is defined in eq.(4.7), and \(\mathbf{K}\) is defined in eq.(3.12).
* Unitarity: \(h,\bar{h}\geq 0\) \[\mathbf{K}\left[[\mathbf{\widetilde{Z}}]_{h,\bar{h}}\right]^{\rm shift_{h}} \succeq 0,\quad\mathbf{K}\left[[\mathbf{\widetilde{Z}}]_{h,\bar{h}}\right]^{ \rm shift_{\bar{h}}}\succeq 0\,.\] (4.13)
* Integer spin: \(\mathcal{J}\in\mathbb{Z}\) \[\mathbf{K}\left[[\mathbf{\widetilde{Z}}]_{h,\bar{h}}\right]^{l,l+1}\succeq 0 \quad\forall l\in\mathbb{Z}\,,\] (4.14) where \(\mathbf{K}[\mathbf{\tilde{Z}}]^{l,l+1}\) is given in eq.(4.20).
* Modular invariance: \[\hat{z}^{(i,j)}=0\quad\forall\,i{+}j\in{\rm odd}\,.\] (4.15)
These constraints, together with a linear function of \(\hat{z}^{(i,j)}\) to be extremized, constitute a semi-definite program (SDP). In particular, we begin with the relevant Hankel matrices parameterized by the modular plane, i.e. \(\hat{z}^{(i,j)}\) with \(i+j=even\), with suitable deformations to incorporate a gap for the spectrum. For fixed values of gap, this sets up a feasibility problem for SDP. In addition to finding the optimal value for the discretized partition function, it is also possible to obtain the optimal gap. The idea is to make an assumption about the gap and then find a set of \(\hat{z}^{(i,j)}\) that satisfies all the constraints. If a certificate of infeasibility to the problem is found, the proposed spectrum is then ruled out. See Appendix B for a detailed discussion of SDP and the feasibility problem.
In our notation, the traditional SDPB approach starts with the modular constraint:
\[\sum_{h,\bar{h}}n_{h,\bar{h}}f_{a,b}(h,\bar{h})\equiv\sum_{h,\bar{h}}n_{h,\bar{h}}\chi_{h}^{(a)}\chi_{\bar{h}}^{(b)}=0\quad\forall\ a+b=\text{odd}\,. \tag{4.16}\]
One then constructs linear combinations of \(f_{a,b}(h,\bar{h})\) that are positive above a gap. Each such functional rules out any spectrum whose lowest twist (or dimension) lives above the corresponding gap. The combination that gives the minimal gap is then the optimal functional. Translated to our geometry, what SDPB probes is the intersection geometry of a co-dimension \(n\) modular plane, where \(n\) is the number of modular constraints, with the _infinite dimensional_ double moment problem. By comparing the result to our geometric bounds, it provides a measure of how far the generalized Hankel constraints are from the true boundary.
### Bounding the gap
As a warm up, using the semi-definite programming setup of the previous subsection, we study the gap and twist gap at fixed derivative order up to \(\Lambda=43\). This is far from the \(\Lambda\sim 2000\) studied in [41], and we do not expect novel bounds. The main purpose is to demonstrate the feasibility of this approach and to compare spinless vs. spinning results, as well as the effects of spin integrality.
To impose a gap on the spectrum, recall that total positivity of the Hankel matrices stems from its representation as a convex hull:
\[K\left[[\widetilde{\mathbf{Z}}]_{h,\bar{h}}\right]=\sum_{h,\bar{h}}\tilde{n}_ {h,\bar{h}}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}^{T}\,\,. \tag{4.17}\]
If the spectrum has a gap, say \(\Delta_{\text{gap}}\), then we immediately see that
\[K\left[[\widetilde{\mathbf{Z}}]_{h,\bar{h}}\right]^{\text{twist}_{\Delta}}= \sum_{h,\bar{h}}\tilde{n}_{h,\bar{h}}(h+\bar{h}-\Delta_{\text{gap}})\begin{pmatrix} 1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}^{T}\,, \tag{4.18}\]
also constitutes a positive semi-definite matrix. In terms of the original coordinates, we can for example write:
\[\begin{pmatrix}\tilde{z}^{(0,1)}&\tilde{z}^{(1,1)}&\tilde{z}^{(0,2)}\\ \tilde{z}^{(1,1)}&\tilde{z}^{(2,1)}&\tilde{z}^{(1,2)}\\ \tilde{z}^{(0,2)}&\tilde{z}^{(1,2)}&\tilde{z}^{(0,3)}\end{pmatrix}+\begin{pmatrix} \tilde{z}^{(1,0)}&\tilde{z}^{(2,0)}&\tilde{z}^{(1,1)}\\ \tilde{z}^{(2,0)}&\tilde{z}^{(3,0)}&\tilde{z}^{(2,1)}\\ \tilde{z}^{(1,1)}&\tilde{z}^{(2,1)}&\tilde{z}^{(1,2)}\end{pmatrix}-\Delta_{ \rm gap}\begin{pmatrix}\tilde{z}^{(0,0)}&\tilde{z}^{(1,0)}&\tilde{z}^{(0,1)}\\ \tilde{z}^{(1,0)}&\tilde{z}^{(2,0)}&\tilde{z}^{(1,1)}\\ \tilde{z}^{(0,1)}&\tilde{z}^{(1,1)}&\tilde{z}^{(0,2)}\end{pmatrix}\succeq 0\,. \tag{4.19}\]
For a fixed truncation order \(\Lambda_{\rm max}\), i.e. the total degree in \(h\) and \(\bar{h}\), we solve the feasibility problem for various different values of the gap. For the spinless bootstrap, \(\Lambda_{\rm max}\) is just the highest degree in \(\Delta\). This enables us to determine an upper bound for \(\Delta_{\rm gap}\), and similarly for the twist-gap.
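For the spinless case, the structure of this feasibility problem can be illustrated with a minimal sketch, assuming the cvxpy package (not necessarily the implementation used for the paper's results; in particular, the modular-plane relations eq.(4.15) and the vacuum subtraction, which make the actual problem nontrivial, are omitted here): given a normalization and a proposed gap, one asks whether moments exist such that both the Hankel matrix and its gap-shifted version (cf. eqs.(4.18)-(4.19)) are positive semi-definite; a certificate of infeasibility rules out the proposed gap.

```python
# Schematic feasibility SDP (sketch assuming cvxpy; the modular-plane constraints
# and vacuum subtraction of the actual bootstrap are omitted): do moments
# y_1,...,y_{2p+1} exist, with y_0 normalized, such that the Hankel matrix and
# its gap-shifted version are both PSD?
import cvxpy as cp

p = 2                        # Hankel size (p+1) x (p+1), truncation order 2p+1
gap = 0.5                    # proposed gap on the support of the spectral measure
y = cp.Variable(2*p + 2)     # moments y_0, ..., y_{2p+1}

K = cp.bmat([[y[i + j] for j in range(p + 1)] for i in range(p + 1)])
K_shift = cp.bmat([[y[i + j + 1] for j in range(p + 1)] for i in range(p + 1)])

prob = cp.Problem(cp.Minimize(0),
                  [y[0] == 1, K >> 0, K_shift - gap * K >> 0])
prob.solve()
print(prob.status)           # 'optimal' -> feasible; 'infeasible' -> gap ruled out
```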
It is also straightforward to impose integrality on the spins. We define \(\mathcal{J}\equiv h-\bar{h}\in\mathbb{Z}\). Suppose the spin \(\mathcal{J}\) does not take values between \(l\) and \(l+1\); then the following matrix will be positive semi-definite:
\[\mathbf{K}\left[\left[\widetilde{\mathbf{Z}}\right]_{h,\bar{h}}\right]^{l,l+ 1}=\sum_{h,\bar{h}}\tilde{n}_{h,\bar{h}}(h-\bar{h}-l)(h-\bar{h}-l-1)\begin{pmatrix} 1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}^{T}\succeq 0\,. \tag{4.20}\]
By imposing the positivity of \(\mathbf{K}[\widetilde{\mathbf{Z}}]^{l,l+1}\) for each pair of adjacent spins \((l,l+1)\), one effectively requires that \(\mathcal{J}\) take only integer values. In practice, to implement these constraints, one considers only a finite number of spins \(-l_{\rm max}\leq l\leq l_{\rm max}\). It is however worth noting that the Hankel matrix positivity, in contrast with the polynomial program approach of SDPB, gives valid upper bounds on \(\Delta_{\rm gap}\) for any spin-truncation \(l_{\rm max}\) and truncation order. Increasing \(l_{\rm max}\) only _improves_ the bound. This is because the positivity of the truncated Hankel matrices is always a necessary condition for the double-moment geometry, and we are bounding the gap by ruling out configurations of the spectrum.
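A quick numerical sanity check of eq.(4.20) (toy states, not from the paper): for integer spins \(\mathcal{J}=h-\bar{h}\), the weight \((\mathcal{J}-l)(\mathcal{J}-l-1)\) is non-negative for every integer \(l\), so the sum below is automatically positive semi-definite, while a non-integer spin can violate the condition for the bracketing value of \(l\).

```python
# Toy check of the integer-spin condition eq.(4.20): (J - l)(J - l - 1) >= 0 for
# integer J and integer l, so K^{l,l+1} is a sum of PSD rank-one matrices.
import numpy as np

states = [(1.3, 0.3, 1.0), (2.7, 0.7, 0.5)]           # (h, hbar, n) with J = 1, 2
vec = lambda h, hb: np.array([1.0, h, hb, h*h, h*hb, hb*hb])

def K_spin(l):
    return sum(n * (h - hb - l) * (h - hb - l - 1) * np.outer(vec(h, hb), vec(h, hb))
               for (h, hb, n) in states)

print(all(np.linalg.eigvalsh(K_spin(l)).min() >= -1e-9 for l in range(-3, 4)))  # True
```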
#### Bounding the gap of scaling dimension
Let us begin with the simplest setup, the spinless bootstrap at \(\Lambda=3\), where the geometry is in \(\mathbb{P}^{3}\). Let us consider the geometry \(\{\hat{Z}_{0},\hat{Z}_{1},\hat{Z}_{2},\hat{Z}_{3}\}\), where we can analytically solve for the boundary. Because of modular invariance, there are only three independent parameters \(\{\Delta_{\rm gap},\hat{Z}_{0},\hat{Z}_{2}\}\), with
\[\hat{Z}_{0}=\hat{z}^{(0,0)},\quad\hat{Z}_{2}=2\hat{z}^{(1,1)}+\hat{z}^{(0,2)}+ \hat{z}^{(2,0)}\,. \tag{4.21}\]
This space is carved out by the following set of conditions,
\[\det K_{n}\geq 0\ \text{ and }\ \det K_{n}^{\rm twist_{\Delta}}\geq 0,\quad\text{for }n=0,1\,, \tag{4.22}\]
where the explicit entries of the \(2\times 2\) Hankel matrix \(K_{1}=K_{1}[\vec{y}]\) are given by,
\[\vec{y}=M(c).\left[\begin{pmatrix}\hat{Z}_{0}\\ 0\\ \hat{Z}_{2}\\ 0\end{pmatrix}-\vec{G}_{\rm vac}\right]\,, \tag{4.23}\]
where \(\vec{G}_{\rm vac}\) is the contribution from the vacuum character and \(M(c)\) is the associated GL transformation that converts the character expansion into the moment form. They are explicitly given as:
\[\vec{G}_{\rm vac}=\begin{pmatrix}\left(e^{2\pi}-1\right)^{2}e^{\frac{1}{6}c_{1}}\\ \frac{1}{6}\left(e^{2\pi}-1\right)e^{\frac{1}{6}c_{1}}\left(-c_{1}-3+e^{2\pi}(c_{3}+3)\right)\\ \frac{1}{36}e^{\frac{1}{6}c_{1}}\left(c_{1}^{2}+12c_{1}+9-2e^{2\pi}\left(c_{2}^{2}+12c_{2}+9\right)+e^{4\pi}\left(c_{3}^{2}+12c_{3}+9\right)\right)\\ \frac{1}{216}e^{\frac{1}{6}c_{1}}\left(c_{1}(c_{1}(c_{1}+27)+117)+27-2e^{2\pi}(c_{2}(c_{2}(c_{2}+27)+117)+27)+e^{4\pi}(c_{3}(c_{3}(c_{3}+27)+117)+27)\right)\end{pmatrix}\,,\]
\[M(c)=\begin{pmatrix}1&0&0&0\\ \frac{c_{3}+3}{12\pi}&-\frac{1}{2\pi}&0&0\\ \frac{(c_{3}+6)c_{3}+27}{144\pi^{2}}&\frac{-c_{3}-6}{12\pi^{2}}&\frac{1}{4\pi^{2}}&0\\ \frac{((c_{3}+9)c_{3}+81)c_{3}+405}{1728\pi^{3}}&-\frac{(c_{3}+12)c_{3}+69}{96\pi^{3}}&\frac{c_{3}+9}{16\pi^{3}}&-\frac{1}{8\pi^{3}}\end{pmatrix}\,, \tag{4.24}\]
where \(c_{1}=\pi(c{-}25)\), \(c_{2}=\pi(c{-}13)\) and \(c_{3}=\pi(c{-}1)\) respectively. Now the boundary of the space \(\{\hat{Z}_{0},\hat{Z}_{2}\}\) is given by,
\[\boxed{\det K_{1}=0\text{ or }\det K_{1}^{\rm twist_{\Delta}}=0}\,. \tag{4.25}\]
In Figure 3 we show an example of the allowed region of \(\{\hat{Z}_{0},\hat{Z}_{2}\}\) in \(c=1\) and \(\Delta_{\rm gap}=0\). First, note that the region is unbounded with the boundaries given by either \(K_{1}\) or \(K_{1}^{\rm twist_{\Delta}}\) being rank 1. As we will see, we can actually solve for optimal \(\Delta_{\rm gap}\) analytically in \(c\to\infty\) limit.
Beyond the maximal gap we expect that there are no consistent solutions, i.e. the space of partition functions is empty. This means that the boundaries in \(\mathbb{P}^{3}\) become degenerate, \(\det K_{1}=\det K_{1}^{\rm twist_{\Delta}}=0\). This condition is satisfied if there is only one state: the gap state. We can then rewrite (4.23) as,
\[n_{\Delta_{\rm gap}}M^{-1}(c)\cdot\begin{pmatrix}1\\ \Delta_{\rm gap}\\ \Delta_{\rm gap}^{2}\\ \Delta_{\rm gap}^{3}\end{pmatrix}+\vec{G}_{\rm vac}=\begin{pmatrix}\hat{Z}_{ 0}\\ 0\\ \hat{Z}_{2}\\ 0\end{pmatrix}\,. \tag{4.26}\]
Next, focus on the second and fourth rows of this vector equation, and denote the corresponding two-component pieces of the state and vacuum contributions by \(\vec{F}_{\Delta_{\rm gap}}\) and \(\vec{F}_{\rm vac}\),
\[n_{\Delta_{\rm gap}}\vec{F}_{\Delta_{\rm gap}}+\vec{F}_{\rm vac}=\begin{pmatrix} 0\\ 0\end{pmatrix}\,. \tag{4.27}\]
The zero on the RHS indicates that the two vectors are collinear,
\[\omega(\Delta_{\rm gap})\equiv\det[\vec{F}_{\rm vac},\vec{F}_{\Delta_{\rm gap }}]=0\,. \tag{4.28}\]
Now \(\omega(\Delta_{\rm gap})\) is a degree 3 polynomial in \(\Delta_{\rm gap}\). We can analyse the \(c\to\infty\) behavior by first assuming \(\Delta_{\rm gap}=a_{1}c+a_{0}+\frac{a_{-1}}{c}+\dots\), and then solving for the largest root \(\Delta_{*}\) order by order. At leading order we have,
\[a_{1}(1+18a_{1}(4a_{1}-1))=0\Rightarrow a_{1}=0,\frac{1}{12},\frac{1}{6}\,. \tag{4.29}\]
Since \(\Delta_{\rm gap}\) will be the largest zero, we take \(a_{1}=\frac{1}{6}\). We can then continue to solve the next-order equation,
\[-12+\pi(7+6a_{0}-6\coth(\pi))=0\Rightarrow a_{0}=\frac{12-7\pi+6\pi\coth(\pi)}{6 \pi}\,. \tag{4.30}\]
The first two orders of \(\Delta_{\rm gap}\) as \(c\rightarrow\infty\) are,
\[\Delta_{gap}=\frac{c}{6}+\frac{12-7\pi+6\pi\coth(\pi)}{6\pi}+\mathcal{O}(1/c)\,. \tag{4.31}\]
This precisely matches the early result in [37]: the \(\mathcal{O}(c^{0})\) term evaluates to the numerical value \(0.473695\) quoted therein.
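The \(\mathcal{O}(c^{0})\) coefficient in eq.(4.31) is easy to evaluate directly:

```python
# Numerical evaluation of the O(c^0) term in eq.(4.31).
import numpy as np
print((12 - 7*np.pi + 6*np.pi/np.tanh(np.pi)) / (6*np.pi))   # 0.473695...
```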
In Figure 4 we display the results for the optimal gap at higher truncation order (\(\Lambda_{\rm max}=11\)) for varying central charge. We compare the geometric modular-polytope/hedron approach against SDPB. For the spinless analysis, we see that the modular polytope matches the spinless SDPB. This is expected since for the case of a single moment, the Hankel positivities are necessary and sufficient conditions for the truncated moment problem. For the spinning analysis we see that at finite central charge, for example \(c\sim 10\) in the inset of Figure 4, the modular-hedron result is slightly weaker than the spinning SDPB. Again this reflects that at finite truncation order, the positivity of the Hankel matrices need not be a sufficient condition for the moment geometry. However, such differences appear to be negligible at large central charge \(c\gg 1\). In fact, as we extend to large central charge, there is negligible difference between the spinless and spinning bootstrap.
In Figure 5 we give the results for various truncation orders up to \(\Lambda_{\rm max}=43\) and extrapolate \(\Lambda_{\rm max}\) to infinity with the fitting function \(\Delta_{\rm gap}/c=A+B\exp\left(-D\Lambda_{\rm max}\right)\). However, we find that the extrapolated result is not stable. For instance, at central charge 2000, the extrapolated result improves when \(\Lambda_{\rm max}\) increases from 39 to 43. As shown in Figure 6, the fitting function derived from the \(\Lambda_{\rm max}\leq 39\) data deviates from the \(\Lambda_{\rm max}=43\) data point. This indicates that the extrapolated result has yet to converge, and it will continue to decrease as \(\Lambda_{\rm max}\) increases.
We continue to fit the extrapolated result and obtain an estimation of \(\Delta_{\rm gap}/c\) at the limit \(c\rightarrow\infty\). We fit the data within the range \(500\leq c\leq 2000\). The details of the fitting
Figure 4: Bound on \(\Delta_{\rm gap}\) vs. central charge \(c\) for truncation order \(\Lambda_{\rm max}=11\). We observe that for \(c>100\) there is very little difference between the four bootstraps.
Figure 5: Bound on \(\Delta_{\rm gap}\) vs. central charge \(c\) for different truncation orders \(\Lambda_{\rm max}\). The color dots are the data points of truncation order \(\Lambda_{max}=7,\,11,\,15,\,19,\,23,\,27,\,31,\,35,\,39,\,43.\) The fast bootstrap data are extracted from [41]. It is clear the bounds have not yet converged, but we are able to directly verify the \(c\gtrsim 2000\) regime.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Fitting Function & \(1-R_{adj}^{2}\) & \(A\) \\ \hline \(A+B/c+D/c^{2}\) & \(10^{-7}\) & \(1/6.60\) \\ \hline \(A+B/c+D\,log(c)/c\) & \(4\times 10^{-8}\) & \(1/6.44\) \\ \hline \end{tabular}
\end{table}
Table 1: The details of the fitting function.
functions, which were also used in [41], are described in Table 1. The two estimates differ since different fitting functions were used. However, both fitting functions estimate that the ratio \(\Delta_{\rm gap}/c\) is approximately \(1/6.60\sim 1/6.44\) in the infinite \(c\) limit. We expect that the result will improve as we increase the \(\Lambda_{\rm max}\) truncation.
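For concreteness, the large-\(c\) extrapolation described around Table 1 amounts to a standard least-squares fit; the following sketch assumes the scipy package, and the arrays below are placeholder numbers purely to illustrate the call, not the paper's data:

```python
# Sketch of the large-c extrapolation with the first fitting function of Table 1
# (placeholder data; scipy assumed): fit Delta_gap/c against c and read off the
# c -> infinity value A.
import numpy as np
from scipy.optimize import curve_fit

c_vals = np.array([500., 1000., 1500., 2000.])      # placeholder central charges
ratio  = np.array([0.158, 0.155, 0.154, 0.1535])     # placeholder Delta_gap/c values

model = lambda c, A, B, D: A + B / c + D / c**2
(A, B, D), _ = curve_fit(model, c_vals, ratio)
print(A)   # estimate of Delta_gap/c in the infinite-c limit
```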
**Bounding the twist-gap** The twist-gap corresponds to \(\min[h,\bar{h}]\geq h_{\rm gap}\), which means \(h\geq h_{\rm gap}\) and \(\bar{h}\geq h_{\rm gap}\). Translated to our geometry, one modifies the Hankel in eq.(4.13) to the twisted Hankel \({\bf K}[\hat{\bf Z}_{h_{gap}}]\) by identifying
\[{\bf K}[\hat{\bf Z}_{h_{gap}}]^{\rm twist_{h}}=\sum_{h,\bar{h}}\tilde{n}_{h, \bar{h}}(h-h_{gap})\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}^{T}\succeq 0\,. \tag{4.32}\]
Similarly, to impose \(\bar{h}\geq h_{\rm gap}\),
\[{\bf K}[\hat{\bf Z}_{h_{gap}}]^{\rm twist_{\bar{h}}}=\sum_{h,\bar{h}}\tilde{n}_{h,\bar{h}}(\bar{h}-h_{gap})\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}\begin{pmatrix}1\\ h\\ \bar{h}\\ \vdots\end{pmatrix}^{T}\succeq 0\,. \tag{4.33}\]
We demonstrate the results in Figure 7, again for various central charges. Here, we compare the result from the modular-hedron geometry with or without integer spin constraints. As evident in the plot, the difference is negligible when \(c>10\). At \(c=100\) the difference is less than \(0.1\%\), so for sufficiently large central charge, spin integrality will not significantly improve the bounds.
Figure 6: Bounds on \(\Delta_{\rm gap}/c\) vs. \(\Lambda_{\rm max}\) at central charge equals 2000. The fitting function is obtained from \(\Lambda_{max}\leq 39\) data. The \(\Lambda_{max}=43\) data point deviates from the fitting function. This implies that the extrapolated result has not stabilized.
### Two-sided bounds for \(\hat{Z}_{0}\) and the critical gap
So far, we've demonstrated how to extract the optimal gap or twist gap from our geometric approach. It is also interesting to focus on the region of intersection itself, representing the "space" of partition functions. For this purpose it will be sufficient to consider the modular-polytope \(\vec{\mathcal{Z}}^{\text{spinless}}\) eq.(4.10). Once again we simply have that the Hankel and twisted-Hankel must be positive:
\[\mathbf{K}[\vec{\mathcal{Z}}^{\text{spinless}}]\succeq 0,\quad\mathbf{K}[\vec{ \mathcal{Z}}^{\text{spinless}}]^{\text{twist}\Delta}\succeq 0\,. \tag{4.34}\]
First we claim that \(\mathcal{Z}^{\text{spinless}}_{q}\) is bounded from both sides. Recall that modular invariance implies that \(\hat{z}^{(m,n)}=0\) for \(m{+}n\) odd. Due to the GL transformation, \(\mathcal{Z}^{\text{spinless}}_{q}\) for \(q\) odd will be a linear combination of \(\mathcal{Z}^{\text{spinless}}_{i}\) with \(i<q\). Now, starting with \(\mathcal{Z}^{\text{spinless}}_{0}\), every time we introduce two more derivatives to the partition function, the Hankel conditions are enlarged to include two new inequalities, respectively linear and quadratic in the new variable \(\mathcal{Z}^{\text{spinless}}_{n}\). The highest unshifted Hankel determinant reads
\[\det\begin{pmatrix}K_{n-1}&a\\ a^{T}&\mathcal{Z}^{\text{spinless}}_{n}\end{pmatrix}\geq 0\,,\]
which is linear in the new variable \(\mathcal{Z}^{\text{spinless}}_{n}\), with non-negative coefficient \(\det K_{n-1}\). On the other hand, the highest twisted Hankel determinant reads
\[\det\begin{pmatrix}K^{\text{twist}\Delta}_{n-2}&*&*\\ *&*&\mathcal{Z}^{\text{spinless}}_{n}\\ *&\mathcal{Z}^{\text{spinless}}_{n}&\mathcal{Z}^{\text{spinless}}_{n+1} \end{pmatrix}\geq 0\,.\]
This gives a quadratic inequality whose leading coefficient for \((\mathcal{Z}^{\text{spinless}}_{n})^{2}\) is \(-\det K^{\text{twist}_{\Delta}}_{n-2}\), which is non-positive. Thus the former condition gives a lower bound for \(\mathcal{Z}^{\text{spinless}}_{n}\), while the latter gives an upper bound.
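This mechanism can be made explicit in a small example (a sketch assuming sympy, with \(\Delta_{\rm gap}=0\) so that the twisted Hankel reduces to the shifted one, and with made-up lower moments): treating the top even moment as the unknown, the unshifted determinant is linear in it and bounds it from below, while the shifted determinant is a concave quadratic and bounds it from above.

```python
# Illustration (toy numbers) of the two-sided bound: with y_0,...,y_3 and y_5
# fixed, det K >= 0 is linear in the unknown y_4 (lower bound) while
# det K_shift >= 0 is quadratic with negative leading coefficient (upper bound).
import sympy as sp

z = sp.symbols('z', real=True)                       # the unknown moment y_4
y = [sp.Rational(3, 2), sp.Rational(5, 2), sp.Rational(11, 2),
     sp.Rational(29, 2), z, 125]                     # y_0..y_3 from a toy 2-state spectrum

K       = sp.Matrix(3, 3, lambda i, j: y[i + j])
K_shift = sp.Matrix(3, 3, lambda i, j: y[i + j + 1])

# prints a finite interval for y_4, containing the value 83/2 realized by the toy spectrum
print(sp.reduce_inequalities([K.det() >= 0, K_shift.det() >= 0]))
```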
Figure 7: Bound on the twist gap vs. central charge \(c\) at different derivative orders.
Therefore the boundedness of the space boils down to \(\mathcal{Z}_{0}^{\rm spinless}\), which is always bounded from below since it is manifestly non-negative. As we will see, there is a critical gap \(\Delta_{\rm gap}^{*}\) above which \(\mathcal{Z}_{0}^{\rm spinless}\) develops an upper bound. Once again let us consider the \(\mathbb{P}^{3}\) modular polytope geometry in more detail. The constraints \(\det K_{0}^{\rm twist_{\Delta}}\geq 0\) and \(\det K_{0}\geq 0\) in (4.22) take the form
\[\det K_{0}\geq 0\Rightarrow\hat{Z}_{0}\geq\left(e^{2\pi}-1\right)^{2}e^{\frac{1}{6}c_{1}}\,,\] \[\det K_{0}^{\rm twist_{\Delta}}\geq 0\Rightarrow\frac{\hat{Z}_{0}\left(\pi(c{-}1{-}12\Delta_{\rm gap}){+}3\right)}{12\pi}+e^{\frac{1}{6}c_{1}}(\Delta_{\rm gap}-2)-2e^{\frac{1}{6}c_{2}}(\Delta_{\rm gap}-1)+e^{\frac{1}{6}c_{3}}\Delta_{\rm gap}\geq 0\,. \tag{4.35}\]
From (4.35), we see that when the coefficient of \(\hat{Z}_{0}\) turns negative the constraint becomes an upper bound on \(\hat{Z}_{0}\). This occurs when the gap is above a critical value,
\[\boxed{\Delta_{\rm gap}\geq\frac{c-1}{12}+\frac{1}{4\pi}}\,. \tag{4.36}\]
In this case, \(\hat{Z}_{0}\) will be bounded from two sides,
\[e^{\frac{c_{1}}{6}}(e^{2\pi}-1)^{2}\leq\hat{Z}_{0}\leq\frac{12\left(e^{2\pi}-1\right)\pi e^{\frac{1}{6}c_{1}}\left(\left(e^{2\pi}{-}1\right)\Delta_{\rm gap}{+}2\right)}{12\pi\Delta_{\rm gap}-\pi(c-1)-3}\,. \tag{4.37}\]
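For orientation, the bound (4.37) is easy to evaluate numerically; the sketch below simply plugs in the illustrative parameters of the \(c=1\), \(\Delta_{\rm gap}=1/10\) configuration shown in Figure 8:

```python
# Numerical evaluation of the two-sided bound (4.37) at illustrative parameters.
import numpy as np

c, gap = 1.0, 0.1                       # central charge and gap (above the critical gap)
c1 = np.pi * (c - 25)
lower = (np.exp(2*np.pi) - 1)**2 * np.exp(c1/6)
upper = (12*np.pi*(np.exp(2*np.pi) - 1) * np.exp(c1/6) * ((np.exp(2*np.pi) - 1)*gap + 2)
         / (12*np.pi*gap - np.pi*(c - 1) - 3))
print(lower, upper)                     # roughly 0.996 <= Z_0_hat <= 5.06
```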
Now let's move to other constraints \(\det K_{1}\geq 0\) and \(\det K_{1}^{\rm twist_{\Delta}}\geq 0\) in (4.22),
\[36\left(\hat{Z}_{0}-\left(e^{2\pi}{-}1\right)^{2}e^{\frac{1}{6}c_{1}}\right)\hat{Z}_{2}+18\hat{Z}_{0}^{2}-e^{\frac{1}{6}c_{1}}(c_{1}(c_{1}+6)+27)\hat{Z}_{0}+2e^{\frac{1}{6}c_{2}}(c_{2}(c_{2}+6)+27)\hat{Z}_{0}\] \[-e^{\frac{1}{6}c_{3}}(c_{3}(c_{3}+6)+27)\hat{Z}_{0}-288\pi^{2}e^{\frac{1}{3}\pi(c-19)}+576\pi^{2}e^{\frac{1}{3}c_{2}}-288\pi^{2}e^{\frac{1}{3}\pi(c-7)}\geq 0\,,\] \[-\hat{Z}_{2}^{2}+f_{1}(\hat{Z}_{0})\hat{Z}_{2}+f_{0}(\hat{Z}_{0})\geq 0\,, \tag{4.38}\]
where \(\det K_{1}\) is linear and \(\det K_{1}^{\rm twist_{\Delta}}\) is quadratic in \(\hat{Z}_{2}\). The lower bound for \(\hat{Z}_{0}\) in (4.37) ensures that the coefficient for \(\hat{Z}_{2}\) is positive in the first inequality above, implying a lower bound. From the second inequality, the discriminant for the quadratic polynomial in \(\hat{Z}_{2}\) is always positive as long as \(\hat{Z}_{0}\) satisfies (4.37), leading to an upper bound for \(\hat{Z}_{2}\). We see that the space of \(\{\hat{Z}_{0},\hat{Z}_{2}\}\) is finite once the gap of the spectrum satisfies (4.36). Figure 8 shows an example at \(c=1,\Delta_{\rm gap}=1/10\). As we increase \(\Delta_{\rm gap}\), the region shrinks, until it vanishes completely when \(\Delta_{\rm gap}\) reaches the maximal allowed value.
**Critical gap and HKS bound** The presence of a _critical gap_, above which the partition function is bounded, can be derived from the bound of Hartman, Keller, and Stoica [20; 27] (which we will refer to as the HKS bound from now on). To see this, we start with (6.6) in [27]2
Footnote 2: The \(\beta\) here is different than the \(\beta\) in (6.6) in [27] by \(2\pi\), and we are using the reduced partition which subtracted the Dedekind eta functions. So the exact form of (4.39) will be different from (6.6) in [27].
\[Z[\beta]<Z_{L}(1/\beta)+\frac{Z_{L}[1/\beta]-Z_{L}[\beta]}{\beta e^{\pi/6( \beta-1/\beta)(c-1-12\Delta_{H})}-1},\quad(\Delta_{H}>\frac{c-1}{12},\beta\geq 1 )\,, \tag{4.39}\]
where \(Z[\beta]\) is defined in (4.9) and \(Z_{L}(\beta)\) comes from summing over all the contributions from states having scaling dimensions below \(\Delta_{H}\),
\[Z_{L}[\beta]=\beta^{1/2}e^{2\pi\beta\frac{c-1}{12}}(1-e^{-2\pi\beta})^{2}+\sum_{\Delta\leq\Delta_{H}}n_{\Delta}e^{-2\pi\beta\left(\Delta-\frac{c-1}{12}\right)}\,. \tag{4.40}\]
Here the separation between low and high energy states can be arbitrary as long as the threshold \(\Delta_{H}\) is larger than \(\frac{c-1}{12}\). Now we take the self-dual point limit \(\beta\to 1\). (4.39) becomes,
\[\hat{Z}_{0}\leq Z_{L}[1]-\frac{Z_{L}^{{}^{\prime}}[1]}{3+\pi(c-1-12\Delta_{H}) },\quad\Delta_{H}>\frac{c-1}{12}\,. \tag{4.41}\]
Without knowing the low energy spectrum, we cannot guarantee an upper bound from (4.41), since one can always assume a state with small scaling dimension and large degeneracy, in which case the upper bound of \(\hat{Z}_{0}\) can become arbitrarily large. Now consider the case where the first operator in the CFT has scaling dimension \(\Delta_{\text{gap}}\) larger than \(\frac{c-1}{12}\), and set \(\Delta_{H}=\Delta_{\text{gap}}\) in (4.41). Then the RHS only receives contributions from the vacuum character,
\[\hat{Z}_{0}\leq\frac{12\left(e^{2\pi}-1\right)\pi e^{\frac{1}{6}\pi(c-25)} \left(\left(e^{2\pi}-1\right)\Delta_{\text{gap}}+2\right)}{-\pi c+12\pi\Delta_ {\text{gap}}+\pi-3},\quad\Delta_{\text{gap}}=\Delta_{H}>\frac{c-1}{12}\,. \tag{4.42}\]
In order for this to be a valid upper bound, the denominator in (4.42) must be positive, which implies a critical gap \(\Delta_{\text{gap}}^{*}\)
\[\Delta_{\text{gap}}^{*}=\frac{c-1}{12}+\frac{1}{4\pi}\,. \tag{4.43}\]
We see that from the HKS bound (4.39) we extract an upper bound for \(\hat{Z}_{0}\),
\[\boxed{\hat{Z}_{0}\leq\frac{12\left(e^{2\pi}-1\right)\pi e^{\frac{1}{6}\pi(c- 25)}\left(\left(e^{2\pi}-1\right)\Delta_{\text{gap}}+2\right)}{-\pi c+12\pi \Delta_{\text{gap}}+\pi-3},\quad\Delta_{\text{gap}}\geq\Delta_{\text{gap}}^{ *}}\,, \tag{4.44}\]
with \(\Delta_{\text{gap}}^{*}=\frac{c-1}{12}+\frac{1}{4\pi}\). This exactly matches with our result in (4.37) and (4.36) respectively.
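As a quick numerical illustration, the window (4.37) can be evaluated in the explicit form of (4.44), reading off \(e^{\frac{1}{6}c_{1}}=e^{\frac{1}{6}\pi(c-25)}\) from the match between the two expressions. The Python sketch below only reproduces these \(\Lambda_{max}=3\) bounds and the opening of the upper bound at the critical gap (4.36); it is not the optimal bound at higher derivative order.

```python
import numpy as np

def z0_window(c, gap):
    """Two-sided window for Z0_hat from (4.37)/(4.44); the upper bound exists only above the critical gap."""
    pref = np.exp(np.pi * (c - 25) / 6.0)                 # vacuum prefactor e^{pi(c-25)/6}
    lower = (np.exp(2 * np.pi) - 1) ** 2 * pref           # from det K_0 >= 0
    denom = -np.pi * c + 12 * np.pi * gap + np.pi - 3     # denominator of (4.44)
    if denom <= 0:                                        # gap below the critical value: unbounded above
        return lower, np.inf
    upper = 12 * (np.exp(2 * np.pi) - 1) * np.pi * pref * ((np.exp(2 * np.pi) - 1) * gap + 2) / denom
    return lower, upper

c = 12.0
crit = (c - 1) / 12 + 1 / (4 * np.pi)                     # critical gap of (4.36)
for gap in [crit - 0.05, crit + 0.01, crit + 0.2]:
    print(f"gap = {gap:.4f}: window = {z0_window(c, gap)}")
```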
While the bound derived from HKS matches our analytic result at \(\Lambda_{max}=3\), we expect the bounds to improve significantly as we increase \(\Lambda\). To numerically determine the critical gap, we decrease the value of \(\Delta_{\rm gap}\) until no upper bound on \(\hat{Z}_{0}\) can be found. A more detailed discussion of the method for certifying unboundedness is given in Appendix B. In Figure 9, the critical gap at various derivative orders is shown for \(c=12\) and \(c=100\). Using \(\Delta_{\rm gap}^{*}/c=a+b\Lambda_{\rm max}^{-p}\) as an extrapolation function, we extrapolate to the infinite-\(\Lambda\) limit, giving \(\Delta_{\rm gap}^{*}/c\approx 0.07606\) and \(\Delta_{\rm gap}^{*}/c\approx 0.08247\) for \(c=12\) and \(c=100\), respectively, both being nonzero. Note that for \(c=100\), \(\Delta_{\rm gap}^{*}/c\) is close to \(\frac{1}{12}\). We plot the value of the critical gap against the central charge in Figure 10, which shows that \(\Delta_{\rm gap}^{*}\) lies between \(\frac{c-1}{12}\) and \(\frac{c}{12}\), and converges towards \(\frac{c}{12}\) at large \(c\).
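The extrapolation itself is a standard fit; as a sketch (with invented placeholder data points standing in for the measured critical gaps of Figure 9), it can be done as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: critical gap over c at increasing derivative order (not the actual values of Figure 9).
lam = np.array([11, 15, 19, 23, 27, 31, 35, 39, 43], dtype=float)
gap_over_c = np.array([0.0815, 0.0800, 0.0790, 0.0783, 0.0778, 0.0774, 0.0771, 0.0769, 0.0767])

def model(x, a, b, p):
    return a + b * x ** (-p)

(a, b, p), _ = curve_fit(model, lam, gap_over_c, p0=[0.076, 0.1, 0.6])
print(f"extrapolated critical gap / c at infinite derivative order: a = {a:.5f}, exponent p = {p:.2f}")
```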
The fact that the partition function is bounded from above for theories with gap above \(\frac{c-1}{12}\) is closely connected to the degeneracies being bounded. Indeed, as shown in [39], for a gap above \(\frac{c-1}{12}\) and below the optimal gap at a given derivative order, it is possible to find a functional for the modular constraint that is positive above the gap and negative only on the identity block. The ratio between the functional evaluated on the identity and on a particular conformal dimension then gives the upper bound on the degeneracy number for that state. The lower limit \(\frac{c-1}{12}\) reflects the existence of Liouville theory, which has a non-normalizable vacuum and hence an effectively unbounded degeneracy number.
**Kinks between \(\frac{c}{12}\) and the critical gap.** Interestingly, for sufficiently large \(c\) the allowed region of \(\Delta_{\rm gap}\) can be split into two parts depending on the behaviour of \(\hat{Z}_{0}\)'s upper bound: (I) between \(\Delta_{\rm gap}^{*}\) and \(c/12\), and (II) between \(c/12\) and \(\max\{\Delta_{\rm gap}\}\). In region (II) the upper bound varies smoothly with \(\Delta_{\rm gap}\), while in region (I) we observe the presence of _kinks_, where the slope of the curve develops discontinuities as shown in Figure 11. Zoomed-in plots for \(c=12\) and \(c=100\), shown in Figures 12 and 13, show that kinks can be found around the lower-derivative critical gaps. For \(c=12\), the upper bound of \(\hat{Z}_{0}\) is more stable against \(\Lambda_{\rm max}\), but the kinks shift slightly. On the other hand, for \(c=100\), the upper bound decreases significantly as we increase \(\Lambda_{\rm max}\), but the values of \(\Delta_{\rm gap}\) where the kinks are located are relatively stable.
Figure 9: Estimation of the critical gap, where the fitting function \(\Delta_{\rm gap}^{*}/c=a+b\Lambda_{\rm max}^{-p}\) is used. \(a=0.076063\) is obtained for \(c=12\), and \(a=0.082466\) for \(c=100\). In both cases we have \(p\approx 0.6\).
For \(c=12\), the upper bound is more sensitive to the integer-spin constraints. As seen in Figure 12, the spinning bootstrap yields stronger results compared with the spinless bootstrap even at lower derivative order. Imposing the spinning constraints, the sharp kinks turn into smooth bumps. However, for large central charge, imposing integer-spin constraints does not significantly change the upper bound of \(\hat{Z}_{0}\), and the kinks survive. The fact that bounds at higher central charge are insensitive to the integer-spin constraint is consistent with our earlier observation on bounding the gap at large central charge.
Figure 11: Upper and lower bounds on \(Z^{(0,0)}\) plotted against the gap for \(c=12\). The space vanishes when reaching the maximal gap. The upper bound grows super-exponentially, becoming infinite at the critical gap. Notice the kink near \(\Delta_{\rm gap}=\frac{c}{12}\), which divides the allowed region of \(\Delta_{\rm gap}\) into two parts.
Figure 10: Critical gap \(\Delta_{\rm gap}^{*}\) vs. central charge \(c\). Numerical results at large \(c\) suggest an asymptotic convergence of the critical gap toward \(c/12\), and we observe that the critical gaps at various derivative orders always lie above \(\frac{c-1}{12}\).
In summary, we find that the space of partition functions is bounded once the gap of the spectrum is above the critical gap. The value of the critical gap lies between \(\frac{c-1}{12}\) and \(\frac{c}{12}\), and we observe kinks emerging that are stable under increase of the derivative truncation. If we conjecture that the stable kinks point to physical theories, the fact that they only appear below \(c/12\) for sufficiently large central charge is consistent with the holographic expectation that physical theories can only admit \(\Delta_{\rm gap}\leq c/12\) in the large-\(c\) limit.
## 5 Integer degeneracy number
So far, we have only considered the convex geometry implied by imposing the positivity of the degeneracy number. In this section we explore implications of requiring the degeneracy number to be an integer. We will solve this problem geometrically by using the method introduced in [45; 48], which constructs the allowed space iteratively via Minkowski sums of all possible states contributing to the partition function.
Returning to eq.(4.1)
\[\hat{Z}^{(p,q)}=\hat{\chi}_{0}^{(p)}\hat{\chi}_{0}^{(q)}+\sum_{i}n_{h_{i},\bar{h}_{i}}\hat{\chi}_{h_{i}}^{(p)}\hat{\chi}_{\bar{h}_{i}}^{(q)}\,, \tag{5.1}\]
we now require \(n_{h_{i},\bar{h}_{i}}\in\mathbb{N}\). We will focus on the spinless problem in \(\Delta\) space, which amounts to studying:
\[\hat{Z}_{k}=G_{vac}^{(k)}+\sum_{i}n_{i}\hat{\chi}_{i}^{(k)},\quad n_{\Delta_{ i}}\in\mathbb{N}\,, \tag{5.2}\]
where \(\hat{Z}_{k}=\sum_{p+q=k}\left(\begin{array}{c}k\\ q\end{array}\right)\hat{Z}^{(p,q)}\), and similarly for \(\hat{\chi}^{(k)}\). Explicitly, \(G_{vac}\) is given in eq.(4.24), and
\[\hat{\chi}_{i}=e^{(c-1-12\Delta_{i})\frac{\pi}{6}}\left(\begin{array}{c}1\\ \frac{1}{6}(3+X)\\ \frac{1}{6^{2}}(9+12X+X^{2})\\ \frac{1}{6^{3}}(27+117X+27X^{2}+X^{3})\\ \vdots\end{array}\right)\,, \tag{5.3}\]
where \(X=(c-1-12\Delta_{i})\pi\). Unlike the case where \(n_{i}\) is simply assumed to be positive, the exponential factor \(e^{(c-1-12\Delta_{i})\frac{\pi}{6}}\) can no longer be scaled away if we wish to impose \(n_{i}\in\mathbb{N}\).
Our first goal is to find the boundaries of the space \((\hat{Z}_{0},\hat{Z}_{1},\hat{Z}_{2},\ldots)\). We focus on the space of consecutive coefficients, as only in this space do the boundaries have no non-trivial self-intersections. Once we have the boundaries, all that remains is computing their intersection with the modular planes, \(\hat{Z}_{odd}=0\). The intersection with one plane (\(\Lambda_{max}=2\)) can remarkably be solved in closed form, and we are also able to numerically solve the intersection with two null planes (\(\Lambda_{max}=3\)). We will find that in some regions these results are already stronger than the spinning bootstrap at large \(\Lambda_{\rm max}\). Intersections with even more null planes would impose even stronger constraints, and can still be carried out using our methods, but the computation time may be high.
### 5.1 A Minkowski sum of curves
First let us briefly explain how the allowed space of \(\hat{Z}_{k}\) can be viewed as a Minkowski sum of curves and introduce some notation. Slightly rewriting eq.(5.2), define
\[\mathbf{A}(n_{i})(\Delta_{i})\equiv\mathbf{G}_{vac}+n_{0}\hat{\boldsymbol{ \chi}}(\Delta_{\rm gap})+\sum_{i=1}^{\infty}n_{i}\hat{\boldsymbol{\chi}}( \Delta_{i})\,, \tag{5.4}\]
assuming \({\rm wlog}\,\Delta_{j}\geq\Delta_{i}\geq\Delta_{\rm gap}\) for \(j\geq i\). Any given configuration of \(n_{i}\) and \(\Delta_{i}\) corresponds to a particular point in partition function space, and so the allowed space for \(\hat{Z}_{k}\), for some
given gap \(\Delta_{\rm gap}\), is simply the image of \(\Delta_{i}\geq\Delta_{\rm gap}\) and \(n_{i}\in\mathbb{N}\)3 under eq.(5.4). Our task is therefore to compute this image, and in particular to find its boundaries.
Footnote 3: If any particular state \(i\) is known to be present (typically the gap state), we should also require \(n_{i}\geq 1\). We will return to this point later.
It is in fact straightforward to show that the problem reduces to finding boundaries only for configurations of the type
\[\mathbf{A}_{N}\equiv\mathbf{A}(0,\underbrace{1,1,\ldots,1}_{N})=\mathbf{G}_{vac}+\sum_{i=1}^{N}\hat{\boldsymbol{\chi}}(\Delta_{i})\,. \tag{5.5}\]
The crucial observation is that any \(\mathbf{A}(n_{i})(\Delta_{i})\) can be obtained from some \(\mathbf{A}(0,1,1,1,\ldots)(\Delta_{i}^{\prime})\) by simply fixing the \(\Delta_{i}^{\prime}\) appropriately. For instance
\[\mathbf{A}(n_{0},n_{1},0,\ldots)(\Delta_{1},\ldots)=\mathbf{A}(0,\underbrace{1,\ldots,1}_{n_{0}},\underbrace{1,\ldots,1}_{n_{1}},1,\ldots)(\underbrace{\Delta_{\rm gap},\ldots,\Delta_{\rm gap}}_{n_{0}},\underbrace{\Delta_{1},\ldots,\Delta_{1}}_{n_{1}},\infty,\ldots)\,. \tag{5.6}\]
States with large \(\Delta\) (in general, with \(\Delta{\gg}c\)) have vanishing contribution to the partition function. For \(N\to\infty\), the converse is immediately also true. Therefore we conclude that the space \(\mathbf{A}(n_{i})(\Delta_{i})\) is equivalent to the space \(\mathbf{A}_{N\to\infty}\). We will obtain the space for \(\mathbf{A}_{\infty}\) by constructing \(\mathbf{A}_{N}\) iteratively, and taking the limit \(N\to\infty\).
Next we discuss how the image of \(\Delta_{i}\) may be computed.
#### 5.1.1 Two dimensions
Let us begin with the 2D space \((\hat{Z}_{0},\hat{Z}_{1})\), and \(N=1\), ie. the space of just one state satisfying \(\Delta\geq 0\), plus the vacuum. We have
\[\mathbf{A}_{1}=\mathbf{A}(0,1)=\mathbf{G}_{vac}+\hat{\boldsymbol{\chi}}(\Delta_{1})=\begin{pmatrix}G_{vac}^{(0)}+\hat{\chi}_{\Delta_{1}}^{(0)}\\ G_{vac}^{(1)}+\hat{\chi}_{\Delta_{1}}^{(1)}\end{pmatrix}\,. \tag{5.7}\]
This space is a curve, parameterized by \(\Delta_{1}\geq 0\), which we plot in Figure 14 as the blue curve. For \(N=2\) we now have
\[\mathbf{A}_{2}=\mathbf{A}(0,1,1)=\mathbf{G}_{vac}+\hat{\boldsymbol{\chi}}(\Delta_{1})+\hat{\boldsymbol{\chi}}(\Delta_{2})\,. \tag{5.8}\]
Since we need the image of both \(\Delta_{1}\) and \(\Delta_{2}\), this is simply the Minkowski sum of the previous curve \(\mathbf{A}_{1}\) with another curve \(\hat{\boldsymbol{\chi}}\). The result is a 2D surface, parameterized by the two \(\Delta_{i}\). It is straightforward to plot the space, and one finds a 2D region with three boundaries, given by \(\mathbf{A}(0;1)\), \(\mathbf{A}(1;1)\) and \(\mathbf{A}(0;2)\). This is shown in Figure 14.
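This two-state example is easy to reproduce numerically. In the sketch below, chi and G_vac are simple decaying placeholders standing in for the reduced characters \(\hat{\boldsymbol{\chi}}\) of eq.(5.3) and the vacuum vector of eq.(4.24); only the Minkowski-sum logic and the three boundary families are meant literally.

```python
import numpy as np
import matplotlib.pyplot as plt

def chi(delta):
    """Placeholder two-component 'character' (chi^(0), chi^(1)); any positive decaying curve illustrates the point."""
    return np.array([np.exp(-delta), (1 + delta) * np.exp(-delta)])

G_vac = np.array([1.0, 0.5])                      # placeholder vacuum vector (G_vac^(0), G_vac^(1))

deltas = np.linspace(0.0, 10.0, 200)
pts = np.array([G_vac + chi(d1) + chi(d2) for d1 in deltas for d2 in deltas])
plt.scatter(pts[:, 0], pts[:, 1], s=0.2, color="lightgray")

# The three boundary families: A(0,1) (Delta_2 -> infinity), A(1,1) (Delta_1 = 0), A(0,2) (Delta_1 = Delta_2).
for bdry in [np.array([G_vac + chi(d) for d in deltas]),
             np.array([G_vac + chi(0.0) + chi(d) for d in deltas]),
             np.array([G_vac + 2 * chi(d) for d in deltas])]:
    plt.plot(bdry[:, 0], bdry[:, 1], lw=1.5)
plt.xlabel(r"$\hat{Z}_0$"); plt.ylabel(r"$\hat{Z}_1$"); plt.show()
```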
In general, boundaries of Minkowski sums can be of two types. The first type is associated to bounds of the parameters themselves. Indeed \(\mathbf{A}(0,1)\) can be obtained from \(\mathbf{A}(0,1,1)\) by taking \(\Delta_{2}=\infty\), and \(\mathbf{A}(1,1)\) can be obtained by taking \(\Delta_{1}=0\). The second type of boundaries can be obtained by extremizing in one direction, keeping the other directions fixed. This can be done via Lagrange multipliers, which in 2D entails solving4
Footnote 4: One can also compute the bordered Hessian to establish whether a boundary is upper or lower, but we do not find it necessary in our low dimension examples.
\[\det\begin{pmatrix}\partial_{\Delta_{1}}A^{(0)}(0,1,1)&\partial_{\Delta_{2}}A^{(0)}(0,1,1)\\ \partial_{\Delta_{1}}A^{(1)}(0,1,1)&\partial_{\Delta_{2}}A^{(1)}(0,1,1)\end{pmatrix}\propto(\Delta_{1}-\Delta_{2})=0\,. \tag{5.9}\]
We find the extremal solution \(\Delta_{1}=\Delta_{2}\), corresponding to the third boundary \({\bf A}(0,2)\). In fact, even in higher dimensions and arbitrary number of curves, solutions obtained via extremizing will always be of the type \(\Delta_{i}=\Delta_{j}\), ensuring remarkably simple boundaries.
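That \(\Delta_{i}=\Delta_{j}\) always solves the extremization condition can be checked symbolically for a completely generic curve (whether it is the only solution does depend on the specific characters); here F0 and F1 are abstract stand-ins for the two components of \(\hat{\boldsymbol{\chi}}\).

```python
import sympy as sp

d1, d2 = sp.symbols("Delta_1 Delta_2", positive=True)
F0, F1 = sp.Function("F0"), sp.Function("F1")      # generic curve components

# Jacobian of (A^(0), A^(1)) = (G + F0(d1)+F0(d2), G + F1(d1)+F1(d2)) with respect to (d1, d2):
J = sp.Matrix([[sp.diff(F0(d1), d1), sp.diff(F0(d2), d2)],
               [sp.diff(F1(d1), d1), sp.diff(F1(d2), d2)]])

print(sp.simplify(J.det().subs(d2, d1)))           # -> 0, so Delta_1 = Delta_2 is always a critical point
```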
It is important that these boundaries have no intersections, except at the parameter bounds, \(\Delta=0\) or \(\Delta=\infty\). Again we emphasize this is only valid for our particular curves \(\hat{\boldsymbol{\chi}}\) and for spaces \(\hat{Z}_{k}\) with consecutive \(k\). As a counter-example, one can check that in the space \((\hat{Z}_{0},\hat{Z}_{2})\) the boundaries do have nontrivial self-intersections.
Now let us move on to \({\bf A}_{3}={\bf A}(0;1,1,1)\). Using the fact that the boundary of a Minkowski sum of objects is included in the Minkowski sum of the objects' boundaries, we simply compute the Minkowski sum of the boundaries of \({\bf A}_{2}\) and an extra curve. The resulting space is shown in Figure 14(b), where the boundaries are \({\bf A}(0;3)\), \({\bf A}(0;1)\), \({\bf A}(1;1)\), and \({\bf A}(2;1)\).
This pattern persists as we consider \(N\) curves, where we find the boundaries given by
\[\text{upper bdy}: {\bf A}(n_{0},1)\,\ n_{0}=\left\{0,1,2,\ldots,N\right\},\] \[\text{lower bdy}: {\bf A}(0,N)\ . \tag{5.10}\]
This is shown in Figure 15, with the lower boundary being pushed successively downward. In the \(N\gg 1\) limit, the lower boundary (corresponding to \({\bf A}(0,\infty)\)) becomes a straight vertical line. Meanwhile the upper boundary is simply the same periodic structure, repeated infinitely. We therefore obtain the boundaries of \({\bf A}_{\infty}\) in the 2D space \(\{\hat{Z}_{0},\hat{Z}_{1}\}\) as
\[\text{upper bdy}: {\bf A}(n_{0},1)\,\ n_{0}=\left\{0,1,2,\ldots\right\},\] \[\text{lower bdy}: {\bf A}(0,\infty)\ . \tag{5.11}\]
We also compare the \(c=1\) and \(c=2\) boundaries of this space in Figure 16.
Figure 14: Allowed space of two states \({\bf A}_{2}\) and respectively three states \({\bf A}_{3}\). Note \({\bf A}_{2}\) is a subspace of \({\bf A}_{3}\). We have fixed \(c=1\) for this figure.
#### 5.1.2 Three dimensions
In 3D, for the space \((\hat{Z}_{0},\hat{Z}_{1},\hat{Z}_{2})\), we begin with the Minkowski sum of three curves \({\bf A}_{3}\equiv{\bf A}(0;1,1,1)\), which corresponds to a 3D region, parameterized by the three \(\Delta_{i}\). As before, the boundaries of this space can originate either from bounds of the parameters \(\Delta=\{0,\infty\}\), or by extremizing via a Lagrange multiplier. We find the co-dimension one boundaries are given by \({\bf A}(0,2,1)\), \({\bf A}(0,1,2)\), \({\bf A}(1;1,1)\), and \({\bf A}(0;1,1)\), and illustrate them in Figure 17.
Figure 15: The lower boundary converges to a vertical line for \(N\to\infty\)
The generalization for \(N\) curves can be obtained just as before by adding one extra curve at a time. We obtain the boundaries of \(\mathbf{A}_{N}\) as
\[\text{lower bdy}:\quad\mathbf{A}(0,n_{1},1)\quad\,\quad n_{1}=\{1,2,\ldots,N{-}1\}\,,\] \[\text{upper bdy}:\ \ \mathbf{A}(n_{0},1,N{-}n_{0})\,\quad n_{0}=\{0,1,2,\ldots,N{-}1\}\,. \tag{5.12}\]
As a consistency check, it is easy to verify that the lower and upper boundaries in 3D intersect along the co-dimension 2 boundaries given by eq.(5.10).
Taking \(N\to\infty\) the upper boundary becomes \(\mathbf{A}(n_{0},1,\infty)\), which is a curved infinite strip in the \(\hat{Z}_{2}\) direction for each \(n_{0}\). The three components of the upper boundary can be parameterized as:
\[\begin{pmatrix}A^{(0)}(n_{0},1,\infty)\\ A^{(1)}(n_{0},1,\infty)\\ A^{(2)}(n_{0},1,\infty)\end{pmatrix}=\begin{pmatrix}A^{(0)}(n_{0},1)\\ A^{(1)}(n_{0},1)\\ A^{(2)}(n_{0},1)\end{pmatrix}+\begin{pmatrix}0\\ 0\\ x\end{pmatrix}\,, \tag{5.13}\]
with \(x\geq 0\) an independent parameter. This parameterization makes the relation between the geometry of \(2D\) and \(3D\) clear: projecting out \(\hat{Z}_{2}\), we are left with \(A(n_{0},1)\), so the upper boundary from \(3D\) becomes the lower boundary in \(2D\).
We therefore obtain the boundaries of \(\mathbf{A}_{\infty}\) in 3D as
\[\text{lower bdy:}\quad\mathbf{A}(0,n_{1},1)\,,\quad n_{1}=\{1,2,3,\ldots\}\,,\] \[\text{upper bdy:}\quad\mathbf{A}(n_{0},1,\infty)\,,\quad n_{0}=\{0,1,2,\ldots\}\,. \tag{5.14}\]
Figure 17: Boundaries for the sum of three curves. The co-dimension 1 boundaries intersect along co-dimension 2 boundaries, which in turn are just the previous boundaries of the 2D space.
We illustrate these in Figure 18, together with the null plane \(\hat{Z}_{1}=0\). We will return to null planes shortly.
#### 5.1.3 Higher dimensions
In 4D, the boundaries follow a similar pattern, and we conjecture this holds for arbitrary dimensions as well. We summarize our findings below, for a space \((\hat{Z}_{0},\hat{Z}_{1},\ldots,\hat{Z}_{D-1})\)
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(D=2\) & \(D=3\) & \(D=4\) & \(D=5\) & \(\ldots\) \\ \hline upper & \({\bf A}(n_{0};1)\) & \({\bf A}(n_{0},1,\infty)\) & \({\bf A}(n_{0},1,n_{2},1)\) & \({\bf A}(n_{0},1,n_{2},1,\infty)\) & \(\ldots\) \\ lower & \({\bf A}(0;\infty)\) & \({\bf A}(0,n_{1},1)\) & \({\bf A}(0,n_{1},1,\infty)\) & \({\bf A}(0,n_{1},1,n_{2},1)\) & \\ \hline \end{tabular} where \(n_{0}=\{0,1,\ldots\}\), \(n_{i}=\{1,2,\ldots\}\) for \(i>0\), and \({\bf A}(n_{0},n_{1},\ldots)\) is given in eq.(5.4), and parametrized by \(\Delta_{i}\geq 0\). We next discuss the generalization when \(\Delta\geq\Delta_{\rm gap}\), and when a gap state is assumed to be present.
### 5.2 Integrality with a gap
The presence of a gap further reduces the space. There are two possible independent changes to be made.
First, if we only assume that all states satisfy a bound \(\Delta\geq\Delta_{\rm gap}\), the boundary structure remains unchanged, but the range of parameters is now \(\Delta\in[\Delta_{\rm gap},\infty)\). This does not automatically assume the state corresponding to \(\Delta_{\rm gap}\) is actually present. Rather, the space contains all theories in which the lowest state is anywhere above this gap.
Figure 18: Boundaries of \({\bf A}_{\infty}\) in three dimensions. \(\hat{Z}_{1}=0\) is the first modular plane, intersecting the lower boundaries along the green line. This intersection can be worked out in closed form.
Second, if we know that the state \(\Delta_{\rm gap}\) itself is present, the space is modified further. This is only true for integer degeneracy, since if some state is present, it must have degeneracy equal to at least 1, and so must have a finite contribution to the partition function. Therefore, if we assume a gap state is present, all boundaries we found previously must have \(n_{0}\geq 1\). This amounts to translating the whole space by the vector \(\hat{\boldsymbol{\chi}}(\Delta_{\rm gap})\).
### 5.3 Null planes
In this geometrical approach, modular invariance acts as a series of null planes intersecting the geometry we described earlier. The intersection with the first null plane at \(\hat{Z}_{1}=0\) was sketched in Figure 18, and in fact can be solved analytically in terms of the Lambert \(W_{k}(x)\), or product log, function. This function is the solution to the equation
\[we^{w}=x\,, \tag{5.15}\]
and has two real branches: the principal \(k=0\) branch, which is real for \(x\geq-e^{-1}\), and the \(k=-1\) branch, which is real for \(-e^{-1}\leq x<0\). Intersecting a boundary of the form \(A(0,n_{1},1)\) with this null constraint implies solving
\[A^{(1)}(0,n_{1},1)=G^{(1)}_{vac}+n_{1}\hat{\chi}^{(1)}(\Delta_{1})+n_{2}\hat{\chi}^{(1)}(\Delta_{2})=0\,, \tag{5.16}\]
giving
\[\Delta_{2}=\frac{c-1}{12}+\frac{1}{4\pi}-\frac{1}{2\pi}W_{k}\left(\sqrt{e}\left(-G^{(1)}_{vac}-n_{1}\hat{\chi}^{(1)}(\Delta_{1})\right)\right),\quad k=0,-1\,, \tag{5.17}\]
which at \(c=1\) we find gives valid solutions for \(n_{1}\geq 2\). This solution corresponds to the green intersection line in Figure 19. For higher \(c\), the first intersection occurs at higher \(n_{1}\), with \(n_{1}=7\) at \(c=2\) for example.
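A minimal numerical implementation of eq.(5.17), using scipy's Lambert \(W\), could look as follows. Here \(\chi^{(1)}\) is read off from eq.(5.3), while the numerical value passed for \(G^{(1)}_{vac}\) is a dummy placeholder: the true value must be taken from eq.(4.24), and physical solutions additionally require \(\Delta_{2}\geq\Delta_{\rm gap}\).

```python
import numpy as np
from scipy.special import lambertw

def chi1(delta, c):
    """First-derivative component of the reduced character, read off from eq.(5.3)."""
    X = (c - 1 - 12 * delta) * np.pi
    return np.exp(X / 6) * (3 + X) / 6

def delta2_on_null_plane(delta1, n1, c, Gvac1, branch=0):
    """Eq.(5.17): Delta_2 solving A^(1)(0, n1, 1) = 0 on the k = 0 or k = -1 branch."""
    arg = np.sqrt(np.e) * (-Gvac1 - n1 * chi1(delta1, c))
    w = lambertw(arg, k=branch)
    if abs(w.imag) > 1e-10:
        return None                                   # no real solution on this branch
    return (c - 1) / 12 + 1 / (4 * np.pi) - w.real / (2 * np.pi)

# Example call with a dummy Gvac1, only to illustrate the interface.
for k in (0, -1):
    print(k, delta2_on_null_plane(delta1=0.8, n1=2, c=1.0, Gvac1=-0.5, branch=k))
```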
The intersection with higher order null planes can only be solved numerically. For instance, intersecting the 4D geometry upper boundaries with two null planes requires solving the intersection
\[A^{(1)}(n_{0},1,n_{2},1)=A^{(3)}(n_{0},1,n_{2},1)=0\,. \tag{5.18}\]
We accomplish this in Mathematica by placing \(\Delta_{3}\) on a grid, and using NSolve to find \(\Delta_{1}\) and \(\Delta_{2}\) for all values of \(n_{0}\), \(n_{2}\), and \(\Delta_{3}\). Since high values of \(\Delta\) are exponentially suppressed, we can optimize the search by limiting to low values. Experimentally we find \(\Delta<10\) is sufficient. Once a valid solution for some values \((n_{0},n_{2})\) is found, the complete space can be quickly determined using the fact that consecutive values for \(n_{0},n_{2}\) correspond to neighboring boundaries.
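An equivalent grid search can be set up in Python; in the sketch below, A1 and A3 are user-supplied stand-ins for the boundary components \(A^{(1)}(n_{0},1,n_{2},1)\) and \(A^{(3)}(n_{0},1,n_{2},1)\), which have to be assembled from eqs.(4.24) and (5.3).

```python
import numpy as np
from scipy.optimize import fsolve

def solve_intersection(A1, A3, n0, n2, d3_grid, guess=(1.0, 1.0)):
    """For each Delta_3 on the grid, solve A^(1) = A^(3) = 0 for (Delta_1, Delta_2)."""
    sols = []
    for d3 in d3_grid:
        eqs = lambda x: [A1(x[0], x[1], d3, n0, n2), A3(x[0], x[1], d3, n0, n2)]
        root, info, ier, msg = fsolve(eqs, guess, full_output=True)
        if ier == 1 and np.all(root >= 0) and np.all(root < 10):   # Delta < 10 cutoff as in the text
            sols.append((root[0], root[1], d3))
            guess = root                                            # warm-start the next grid point
    return sols
```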
We show the result for \(c=1\) with zero, one, and two null plane intersections in Figure 19, comparing with the case \(n_{i}\in\mathbb{R}\) as well. We observe that the integrality condition rules out significant regions of allowed space, and only intersects the original boundary in isolated points.
For even higher null constraints, the task becomes computationally intensive, and would require more specialized approaches that can efficiently find roots of polynomial-exponential equations. Note that we are not claiming the bounds due to integrality have converged at \(\Lambda_{max}=3\). However, they are still valid bounds, as increasing the number of null plane intersections can only reduce the allowed space further.
**Integrality at large \(c\).** As we increase \(c\), keeping \(\Lambda_{\rm max}=3\), we find that the relative importance of the integrality condition diminishes. In Figure 20 we compare the boundaries for \(c=1\) and \(c=2\).
Figure 19: Intersection of the geometry with 0, 1, and 2 null planes for \(c=1\), comparing integer and non-integer degeneracy boundaries.
Figure 20: Integer degeneracy boundaries for \(c=1\) and \(c=2\), at \(\Lambda_{\rm max}=3\).
However, it is known from the usual bootstrap approaches that as \(c\) increases, we have to also increase \(\Lambda_{\rm max}\) for bounds to converge. Our current analysis is therefore not sufficient to make any claim on the effect of integrality at large \(c\), but it leaves open the possibility of significant effects there as well.
**Bounding the gap.** By increasing \(\Delta_{\rm gap}\), eventually the null planes no longer intersect the geometry, allowing us to find bounds on the gap. With two null planes and \(c=1\), we find a gap \(\Delta_{\rm gap}\leq 0.605\), compared with \(\Delta_{\rm gap}\leq 0.615\) from the usual spinless bootstrap with no restriction on degeneracy, when using the same number of null planes. Using more null planes, the spinless bootstrap gap bound converges to \(\Delta_{\rm gap}\leq 0.603\), while the spinning bootstrap converges to \(\Delta_{\rm gap}\leq 0.5\), saturating the free boson theory. Since the bound is already saturated by the spinning bootstrap, there is no improvement to be found; however, we find it encouraging that the integer condition is stronger than the spinless bootstrap at equivalent derivative order. This leaves open the possibility that at large \(\Lambda_{\rm max}\) the integer condition improves the spinless result, and that for large \(c\) it may also improve the spinning bootstrap bound, given the reductions in allowed theory space we found previously.
### 5.4 Comparing with non-integer bootstraps and known theories at \(c=1\)
We can now compare the integer bootstrap at \(\Lambda_{\rm max}=3\) with the SDP bootstrap at much higher orders, and also to the known \(c=1\) free boson theory.
For \(c=1\), the character representation in eq.(5.1) is no longer valid. This is because for \(c=1\), primaries with \(h=n^{2}\) and \(h=(n+\frac{1}{2})^{2}\), \(n\in\mathbb{Z}\), have extra null states that require removal, introducing minus signs in the usual character expansion. Therefore, in general for \(c=1\), our geometry requires modification to include these special primaries.
However, for compactified free bosons at special radii the partition function can be positively expanded on the usual characters. Indeed the reduced partition function is given by [49]
\[Z(R)=(\tau\bar{\tau})^{\frac{1}{4}}\sum_{a,b=-\infty}^{\infty}q^{(a/R+bR)^{2}/4}\bar{q}^{(a/R-bR)^{2}/4}\,, \tag{5.19}\]
containing states
\[(h,\bar{h})=\left(\frac{1}{4}\left(\frac{a}{R}+bR\right)^{2},\frac{1}{4}\left(\frac{a}{R}-bR\right)^{2}\right)\,, \tag{5.20}\]
with \(a,b\in\mathbb{Z}\). To express this partition function in the form of eq.(5.1), we need to subtract the vacuum \((1-q)(1-\bar{q})\), and for general \(R\) the term \(q\bar{q}\), corresponding to a state \((h,\bar{h})=(1,1)\), appears with a minus sign. However, for \(R=\frac{n}{2}\), \(n\geq 2\), \(n\in\mathbb{Z}\), the term becomes positive. Therefore, it becomes a valid question whether these specific theories can be bootstrapped. There are three interesting cases we can consider for the bootstrap, depending on our assumptions about the gap state.
**\(\Delta\geq 0\), no gap state assumed.** We compare the case where the only assumption on the scaling dimensions is \(\Delta\geq 0\), but we do not assume a state with \(\Delta=0\) is necessarily present. In other words, the allowed region includes all possible theories with any gap. This is shown in Figure 21. First, we explore convergence of the usual SDPB approach without imposing
integrality. We find that the spinless bootstrap converges quickly around \(\Lambda_{\rm max}=15\), while the spinning bootstrap has still not converged in the bottom region even at \(\Lambda_{\rm max}=25\). It appears plausible that this lower boundary will converge precisely onto the circle branch theories.
Next we plot the boundary due to imposing the integrality (up to \(\Lambda_{\rm max}=3\)) on the spinless bootstrap. We observe this boundary is stronger in many regions compared to both the spinless and spinning bootstrap at much higher order. Since the circle branch points can only be saturated by non-convex conditions, it is possible that integrality at high \(\Lambda_{\rm max}\) may converge exactly on these isolated points.
Figure 21: Regions carved out by SDPB, for \(c=1\), with no assumption on the scaling dimension except \(\Delta\geq 0\). The black dots represent the circle branch of the free boson theory, evaluated at particular \(R=\frac{n}{2}\), \(n\in\mathbb{N}\). The spinless bootstrap has already converged around \(\Lambda_{\rm max}=15\), while the spinning bootstrap region is still shrinking. At large \(\Lambda_{\rm max}\) we conjecture the lower boundary will reach the circle branch theories.
**\(\Delta\geq 0.1\), gap state not present.** It is also instructive to explore this space when we require all \(\Delta\geq 0.1\), as used for Figure 8. Here we are still not assuming the presence of any particular gap state, only that it must be somewhere above \(0.1\). In the circle branch, the lowest state in scaling dimension corresponds to \(a=1,b=0\), implying that the circle branch satisfies \(\Delta\geq\Delta_{\rm gap}\) if \(R\leq(2\Delta_{\rm gap})^{-\frac{1}{2}}\), leading to three allowed theories at \(R=\{1,3/2,2\}\) for this particular gap.
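This counting is straightforward to verify by enumerating the circle-branch spectrum of eq.(5.20); the truncation nmax below is ours and harmless, since large \(|a|,|b|\) only raise \(\Delta\).

```python
import itertools

def min_gap(R, nmax=5):
    """Smallest nonzero scaling dimension Delta = h + hbar in the circle-branch spectrum of eq.(5.20)."""
    dims = []
    for a, b in itertools.product(range(-nmax, nmax + 1), repeat=2):
        if (a, b) == (0, 0):
            continue
        h = (a / R + b * R) ** 2 / 4
        hbar = (a / R - b * R) ** 2 / 4
        dims.append(h + hbar)
    return min(dims)

for n in range(2, 7):
    R = n / 2
    print(f"R = {R}: lowest Delta = {min_gap(R):.4f}, compatible with gap 0.1: {min_gap(R) >= 0.1}")
```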
Figure 22: Comparing regions carved out by SDPB with the region carved out by the (spinless) degeneracy integrality condition at \(\Lambda_{\rm max}=3\). The integrality condition significantly approaches the circle branch theories.
**\(\Delta_{\rm gap}=0.1\), gap state present.** In Figure 24 we plot the allowed region of theories which have the gap state at \(\Delta_{\rm gap}=0.1\). For this particular gap there is no known theory. We see that the integer condition rules out an even larger piece of the region allowed by the spinning bootstrap.
Figure 23: States satisfy \(\Delta\geq 0.1\), but gap state is not necessarily at \(\Delta=0.1\). Three circle branch theories are compatible with this restriction.
Figure 24: Gap state has \(\Delta_{\rm gap}=0.1\), and is assumed to be present, significantly pushing the integer boundary. There are no known theories satisfying our constraints for this gap.
**Linear combinations of circle branches.** As observed in [39], it is possible to take linear combinations of circle branch theories which have positive expansion on non-degenerate characters for any \(R\). Let us focus on the space for \(\Delta\geq 0.1\), where the only allowed partition functions compatible with our assumptions are for \(R=\{1,3/2,2\}\), and consider the combination
\[Z^{\prime}(R)=(1-x-y-z)Z(R)+xZ(1)+yZ(3/2)+zZ(2)\,. \tag{5.21}\]
It is easy to verify that for particular ranges of \(x,y,z\) the above partition function has a positive expansion on the non-degenerate characters, as well as the correct vacuum normalization. Clearly, for generic values of \(R\) it will not satisfy integer degeneracy, and so does not constitute a valid CFT, but it will be compatible with the spinning bootstrap. We plot this in Figure 25, and note that this simple linear combination covers most of the space except a corner, at least for \(\Lambda_{\text{max}}=25\). This is similar to what was observed in the EFT bootstrap, where a linear combination of two theories spanned most of the allowed space [9]. We can also view this result in the other direction: the bootstrap, based only on convex conditions like positivity, will never be able to rule out the region covered by this linear combination. Adding the integrality condition is therefore necessary to reduce this space further, and we can see that this occurs already at \(\Lambda_{\text{max}}=3\).
## 6 Conclusions
In this paper, we elucidate the geometry behind the modular bootstrap, where the constraint of modular invariance is imposed on the derivative expansion of the torus partition function around the self dual point. We show that the character representation of the torus
Figure 25: The space covered by eq.(5.21) covers most of the space allowed by the spinning bootstrap. The integrality condition starts ruling out some of this space.
partition function can be mapped to the convex hull of a double moment curve on \(\mathbb{R}^{+}\otimes\mathbb{Z}\), while modular invariance acts as a series of planes intersecting this space.
The spinless bootstrap corresponds to a simpler single moment problem, where sufficient conditions are well known. For the double moment problem which includes the spin related constraints, we only have a collection of necessary conditions. In both cases, these conditions are phrased in terms of the total positivity of (generalized) Hankel matrices constructed from the Taylor coefficients of the partition function. This allows us to efficiently identify the boundary analytically at low derivative order, but also at much higher order via numeric methods such as semi-definite programming.
We extract the optimal gap and twist gap for a wide range of values for the central charge, at derivative order \(\Lambda_{\rm max}\) up to 43. This is not sufficient to match results from the highly efficient fast bootstrap method. However, one advantage of our approach is that the value of central charge does not affect the computation time, allowing us to directly obtain bounds above \(c\gtrsim 2000\). Furthermore, while the bounds we derive depend on spin truncation, they are always valid bounds. This is because by our construction raising the spin truncation can only tighten the bounds. As such one can eliminate the step of checking convergence with spin truncation, which is necessary in traditional SDPB approaches.
Next, we observe that the allowed region of Taylor coefficients is in general unbounded unless the spectrum has a sufficiently large gap. This allows us to introduce the notion of a critical gap \(\Delta^{*}_{\rm gap}\) above which \(\hat{Z}_{0}\) develops an upper bound and the partition function is bounded from all directions. Such an upper bound can also be extracted from the HKS bound [20; 27], evaluated at \(\tau=i\), leading to \(\Delta^{*}_{\rm gap}=\frac{c-1}{12}+\frac{1}{4\pi}\). We demonstrate that the "optimal" critical gap is in fact lower than this value, and that it tends to \(\frac{c}{12}\) in the large \(c\) limit. Furthermore, in the latter limit we observe multiple kinks for \(\Delta^{*}_{\rm gap}\leq\Delta_{\rm gap}\leq\frac{c}{12}\) that are stable against increase in the derivative expansion. It will be interesting to understand the nature of these kinks and whether they correspond to physical theories. For pure quantum gravity the gap is at \(\frac{c-1}{12}\), thus the space is unbounded and we only have a lower bound for \(\hat{Z}_{0}\). Preliminary results indicate that:
\[\hat{Z}_{0}\gtrsim 1.180 \tag{6.1}\]
with mild dependence on \(c\). This can be compared to the newly proposed partition functions in [50], which are conjectured to be unitary at particular values of \(c\). We leave the detailed analysis of such lower bounds for future work.
We then compare the allowed space obtained via the bootstrap to known theories at \(c=1\). We find that even though the spinning bootstrap does not improve the gap bound, it still drastically reduces the allowed space, with the free boson theories living partially on the boundary of this space. Curiously, linear combinations of these theories in fact almost saturate the allowed space. These combinations are taken such that they respect positivity and vacuum normalization, but generically they do not satisfy integer degeneracy, and so are not consistent CFTs.
This motivates us to explore the consequences of integrality, and we indeed find it can be used to rule out parts of allowed space, even starting at very low derivative order. We show that this integrality condition can be imposed by computing a Minkowski sum of
identical curves, which leads to a non-convex geometry. Intersecting this geometry with up to two modular planes (\(\Lambda_{\rm max}=3\)) we find that integrality rules out regions allowed by the spinning bootstrap at much higher \(\Lambda_{\rm max}\). We speculate that at high \(\Lambda_{\rm max}\), integrality could in fact reduce the space to precisely the free boson theories. On bounding the gap, we find integrality is stronger than the spinless bootstrap at equivalent derivative order, but at high \(\Lambda\) or high \(c\) it is still not clear what the effect of integrality is. The reductions in theory space support the possibility that integrality should also improve bounds on the gap, but direct computation is required for a definitive conclusion.
There are several questions we have not yet answered about integrality. The first would be pushing the derivative order, i.e., the number of null plane intersections. Since the boundaries are known in arbitrary dimension, this problem is purely computational, where one has to solve systems of polynomial-exponential equations. More efficient and specialized algorithms could potentially increase \(\Lambda_{\rm max}\), hopefully to the point where bounds begin to converge and become relevant also at large \(c\). Finally, we have not explored the combination of imposing both integer spin and integer degeneracy. Our approach of finding boundaries via Minkowski sums can still be used in principle, but organising the geometry in such a high dimensional space becomes a very difficult task. Solving both integer spin and integer degeneracy to high order in \(\Lambda_{\rm max}\) would finally answer whether the modular bootstrap alone can fix \(\Delta_{\rm gap}=\frac{c}{12}\), as expected from holography, and more generally how the space of known theories compares to the space of allowed theories.
It should also be feasible to apply our approach to superconformal characters. For \(\mathcal{N}=2\) superconformal symmetry, the states are now labeled by \((h,\bar{h},Q,\bar{Q})\), where \(Q\) is the charge under the U(1) R-symmetry, and the partition function is given by sums over BPS and non-BPS representations (see [38; 51] for details). For the spinless bootstrap, the space is given by four distinct moment curves, subject to modular constraints that intertwine the four curves. It will be interesting to see if the space of consistent partition functions also exhibits kink-like behaviour with some critical gap.
## Acknowledgements
We thank Scott Collier, Shu-Heng Shao and Yuan Xin for discussions. LYC, YTH and HCW are supported by MoST Grant No. 109-2112-M-002 -020 -MY3 and 112-2811-M-002 -054 -MY2. LR is supported by the European Union Horizon 2020 Marie Sklodowska-Curie Individual Fellowships, Grant No. 101025095. WL is supported by the US Department of Energy Office of Science under Award Number DE-SC0015845, and the Simons Collaboration on the Non-Perturbative Bootstrap. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632.
## Appendix A Flat extensions of the single moment problem
Having positive semi-definite Hankel and shifted-Hankel matrices is not a sufficient constraint for the single moment problem. This is best illustrated with an example. Consider
\(\vec{y}=(2,\sqrt{2},1,1)\), which gives the following Hankel and shifted Hankel matrices:
\[K_{1}[\vec{y}]=\left(\begin{array}{cc}2&\sqrt{2}\\ \sqrt{2}&1\end{array}\right)\,,\quad K_{1}^{\rm shift}[\vec{y}]=\left(\begin{array}{cc}\sqrt{2}&1\\ 1&1\end{array}\right)\,, \tag{A.1}\]
both satisfying eq.(10). However \(\vec{y}\) is not in the hull: since \(\operatorname{Det}K_{1}[\vec{y}]=0\), any representing measure could contain only a single element. But then \(\operatorname{Det}K_{1}^{\rm shift}[\vec{y}]\) would have to vanish as well, leading to a contradiction.
This example provides a hint towards the correct sufficient conditions. First of all, we see that contradictions can only occur when minors vanish, implying a bound on the rank. The Hankel and shifted-Hankel matrices being positive definite are then sufficient conditions for a solution to the moment problem. If some minors vanish, then one needs to further require that the extended (larger) minors also vanish. Indeed as discussed by Curto and Fialkow [52], the sufficient condition for singular Hankel matrices is the requirement that there exists a _flat extension_ of \(K_{n}\) to \(K_{n+1}\), such that the latter is totally non-negative **and** has the same rank as \(K_{n}\).
Starting with a singular \(K_{n}\), write
\[K_{n+1}=\left(\begin{array}{c|c}K_{n}&b\\ \hline b^{T}&c\end{array}\right),\]
For \(K_{n+1}\) to be positive semi-definite, we must have \(v^{T}K_{n+1}v\geq 0\) for all \(v\). Let us write \(v=\begin{pmatrix}x\\ y\end{pmatrix}\). Then we have
\[x^{T}K_{n}x+2(x\cdot b)y+cy^{2}\geq 0.\]
Completing the square in \(y\) (for \(c>0\)) gives
\[x^{T}K_{n}x+c\left(y+\frac{x\cdot b}{c}\right)^{2}-\frac{(x\cdot b)^{2}}{c}\geq 0.\]
If \(x\) lies in the null space of \(K_{n}\), the first term vanishes, and choosing \(y=-\frac{x\cdot b}{c}\) forces \(x\cdot b=0\). For the above to hold for all \(y\), \(b\) must therefore be orthogonal to all vectors in the null space of \(K_{n}\), which means \(b\) must lie in the range of \(K_{n}\). Furthermore, with \(b\) spanned by the columns of \(K_{n}\), \(K_{n+1}\) remains singular, as it must be for a flat extension.
In particular, the sequence \((2,\sqrt{2},1,1)\) fails to satisfy this requirement since \((1,1)\notin\operatorname{Ran}(K_{1})=\{t(\sqrt{2},1)\,|\,t\in\mathbb{R}\}\).
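The failure of this example is easy to confirm numerically:

```python
import numpy as np

K1 = np.array([[2.0, np.sqrt(2)], [np.sqrt(2), 1.0]])
K1_shift = np.array([[np.sqrt(2), 1.0], [1.0, 1.0]])

print(np.linalg.eigvalsh(K1))        # [0, 3]: K1 is positive semi-definite but singular (rank 1)
print(np.linalg.det(K1_shift))       # sqrt(2) - 1 > 0: the shifted Hankel matrix has rank 2
# Flat-extension test: (1, 1) would need to be proportional to (sqrt(2), 1), i.e. lie in Ran(K1); it is not.
```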
## Appendix B Explicit details for SDP
**Standard form.** Any semi-definite program can be brought into the standard form
Min \[\mathbf{c}^{T}\mathbf{x}\] (B.1) subject to \[\sum_{i=1}^{m}x_{i}A_{i}^{(l)}-C^{(l)}\succeq 0,\quad l=1,...,L,\] (B.2) \[B^{T}\mathbf{x}=\mathbf{b},\] (B.3)
where
\[\mathbf{x}\in\mathbb{R}^{m},\quad A_{i}^{(l)}\in\mathbb{S}^{k^{(l)}},\quad B\in\mathbb{R}^{m\times n},\quad b\in\mathbb{R}^{n}. \tag{B.4}\]
**Carving out the theory space.** As seen in Section 4, the problem of bounding the partition function can be viewed as a semi-definite program, where the optimization variable \(\mathbf{x}\) consists of the discretized partition functions, i.e. the vectorization of (2.8). In this basis the equality constraint (B.3), which corresponds to the modular plane, becomes trivial. We simply remove the vanishing \(z^{(i,j)}\) with odd \(i+j\) and reduce the dimension of the problem by roughly half. Therefore, we almost never need to work with equality constraints explicitly. \(A_{i}^{(l)}\) and \(C^{(l)}\) are symmetric matrices that depend on the gap and the central charge, and \(L\) counts the number of positivity constraints, which is equal to the number of generalized Hankel matrices. Solving (B.1) then enables us to bound arbitrary linear combinations of the discretized partition functions.
**Bounding the gap.** To find the maximal possible value of, say, the gap of the scaling dimension \(\Delta_{\text{gap}}\), we start from a small value of \(\Delta_{\text{gap}}\) and gradually increase its value until no solution \(\mathbf{x}\) that satisfies the constraints (B.2) can be found. Therefore, to obtain an upper bound for \(\Delta_{\text{gap}}\), one needs a certificate of _infeasibility_. To achieve this, we can introduce an auxiliary variable \(t\) which measures the violation of positivity, and solve the following optimization problem:
Min \[t\] (B.5) s.t. \[\sum_{i=1}^{m}x_{i}A_{i}^{(l)}-C^{(l)}+tI\succeq 0,\quad l=1,...,L.\] (B.6)
If the optimal value \(t^{*}\) is positive, the problem is infeasible; otherwise, it is feasible.
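As an illustration only (using the Python modeling package cvxpy rather than our Julia solver, and random placeholder matrices in place of the actual generalized Hankel data), the infeasibility test (B.5)-(B.6) can be set up as follows.

```python
import cvxpy as cp
import numpy as np

m, k, L = 3, 4, 2                                        # toy sizes; placeholders for the real problem data
rng = np.random.default_rng(0)
raw = [[rng.standard_normal((k, k)) for _ in range(m)] for _ in range(L)]
A = [[(M + M.T) / 2 for M in row] for row in raw]        # symmetric A_i^(l)
C = [0.1 * np.eye(k) for _ in range(L)]

x, t = cp.Variable(m), cp.Variable()
S = [cp.Variable((k, k), PSD=True) for _ in range(L)]    # slack PSD matrices encode the ">= 0" constraints
constraints = [sum(x[i] * A[l][i] for i in range(m)) - C[l] + t * np.eye(k) == S[l] for l in range(L)]
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
print("infeasible" if t.value is not None and t.value > 0 else "feasible", t.value)
```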
**Certifying unboundedness.** If we obtain an optimal solution to (B.1), it automatically serves as a certificate of boundedness of \(\mathbf{c}^{T}\mathbf{x}\). However, the optimization problem (B.1) may also be unbounded, for example when the gap \(\Delta_{\text{gap}}\) is tuned to some value below the critical gap. The following feasibility problem can then serve as a certificate of unboundedness:
find \[\mathbf{y}\] (B.7) s.t. \[\mathbf{c}^{T}\mathbf{y}<0,\] (B.8) \[\sum_{i=1}^{m}y_{i}A_{i}^{(l)}\succeq 0.\] (B.9)
Indeed, if such a point \(\mathbf{y}\) exists, then \(\mathbf{x}+\alpha\mathbf{y}\) for any \(\alpha>0\) is again a feasible point of problem (B.1), implying that the objective function is unbounded from below.
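The unboundedness certificate (B.7)-(B.9) can be phrased in the same way; the normalization \(\mathbf{c}^{T}\mathbf{y}\leq-1\) below is our way of encoding the strict inequality, which is allowed since the constraints are homogeneous in \(\mathbf{y}\).

```python
import cvxpy as cp

def unboundedness_certificate(A, cvec, k):
    """Search for y with c^T y < 0 and sum_i y_i A_i^(l) PSD for every l, as in (B.7)-(B.9).

    A is a list (over l) of lists of symmetric k x k numpy matrices; cvec is a numpy cost vector."""
    L, m = len(A), len(A[0])
    y = cp.Variable(m)
    S = [cp.Variable((k, k), PSD=True) for _ in range(L)]
    cons = [sum(y[i] * A[l][i] for i in range(m)) == S[l] for l in range(L)]
    cons.append(cvec @ y <= -1)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return y.value                                       # a ray certifying unboundedness, or None if none exists
```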
**Implementation.** There are many freely available solvers for semi-definite programs, for example SDPA[53] and SDPB[54]. Designed especially for polynomial matrix programs, the SDPB solver is widely used in the bootstrap community. Ordinary semi-definite programs, which we frequently encounter in this paper, are a special case of polynomial programs. While SDPB is highly efficient and optimized for polynomial programs, it is overkill for solving ordinary SDPs. Instead, we needed a solver that could handle not only a large
number of spins but also large matrix sizes. To this end, we developed SDPJ, a native Julia SDP solver based on the primal-dual interior point method, which is parallelized and supports arbitrary precision, with a modified parallelization architecture to suit our needs. The code is publicly available on GitHub [55]. The choice of parameters in this paper is shown in Table 2.
|
2305.00763 | SGX Switchless Calls Made Configless | Intel's software guard extensions (SGX) provide hardware enclaves to
guarantee confidentiality and integrity for sensitive code and data. However,
systems leveraging such security mechanisms must often pay high performance
overheads. A major source of this overhead is SGX enclave transitions which
induce expensive cross-enclave context switches. The Intel SGX SDK mitigates
this with a switchless call mechanism for transitionless cross-enclave calls
using worker threads. Intel's SGX switchless call implementation improves
performance but provides limited flexibility: developers need to statically fix
the system configuration at build time, which is error-prone and
misconfigurations lead to performance degradations and waste of CPU resources.
ZC-SWITCHLESS is a configless and efficient technique to drive the execution of
SGX switchless calls. Its dynamic approach optimises the total switchless
worker threads at runtime to minimise CPU waste. The experimental evaluation
shows that ZC-SWITCHLESS obviates the performance penalty of misconfigured
switchless systems while minimising CPU waste. | Peterson Yuhala, Michael Paper, Timothée Zerbib, Pascal Felber, Valerio Schiavoni, Alain Tchana | 2023-05-01T10:45:24Z | http://arxiv.org/abs/2305.00763v2 | # SGX Switchless Calls Made Configless
###### Abstract
Intel's software guard extensions (SGX) provide hardware enclaves to guarantee confidentiality and integrity for sensitive code and data. However, systems leveraging such security mechanisms must often pay high performance overheads. A major source of this overhead is SGX enclave transitions which induce expensive cross-enclave context switches. The Intel SGX SDK mitigates this with a _switchless_ call mechanism for transitionless cross-enclave calls using worker threads. Intel's SGX switchless call implementation improves performance but provides limited flexibility: developers need to statically fix the system configuration at build time, which is error-prone and misconfigurations lead to performance degradations and waste of CPU resources. ZC-Switchless is a configless and efficient technique to drive the execution of SGX switchless calls. Its dynamic approach optimises the total switchless worker threads at runtime to minimise CPU waste. The experimental evaluation shows that ZC-Switchless obviates the performance penalty of misconfigured switchless systems while minimising CPU waste.
Intel SGX, trusted execution environments, SGX switchless calls, multithreading
## I Introduction
Cloud computing increases privacy concerns for applications offloaded to the cloud, as sensitive user data and code are largely exposed to potentially malicious cloud services. Trusted execution environments (TEEs) like Intel SGX [3, 9, 11, 19] provide secure _enclaves_, offering confidentiality and integrity guarantees to sensitive code and data. Enclaves cannot be accessed by privileged or compromised software stacks, including the operating system (OS) or hypervisor. However, SGX enclaves introduce overheads, primarily from _enclave context switches_, upon every CPU transition between non-enclave and enclave code. The Intel SGX SDK [6] provides specialised function call mechanisms (_i.e._, ecalls and ocalls), to enter, and respectively exit, an enclave. Enclave switches cost up to 14,000 CPU cycles [32, 33], on average 56\(\times\) more expensive compared to regular system calls on similar Intel CPUs (_i.e._, 250 cycles [21]). This represents a serious performance bottleneck for applications which perform many enclave context switches, _e.g._, to perform system calls via ocalls [33].
Techniques exist [4, 28, 33] which circumvent expensive enclave switches by leveraging _worker threads_ in and out of the enclave. _Client threads_, _i.e._, inside or outside of the enclave, send their requests to the opposite side via shared memory. Worker threads handle such requests and send the appropriate responses once completed. The Intel SGX SDK implements this technique via the _switchless call library_[8]. While this approach improves the performance of enclave context switches, our practical experience revealed the following problems. First, Intel SGX switchless calls **must be manually configured** at build time and misconfigurations can lead to performance degradations and waste of CPU resources [8]. Second, manually configuring ecalls or ocalls as switchless at build time is not ideal: developers rarely know the frequency nor the duration of the calls, which are typically application and workload specific.
To mitigate these problems, we propose ZC-Switchless (_zero-config_ switchless), a system to **dynamically select switchless routines** at run time and **configure the most appropriate number of worker threads on the fly**. This approach prevents static configuration of switchless routines and worker threads, and obviates the performance penalty of misconfigured switchless systems, while minimizing CPU waste.
ZC-Switchless leverages an application level scheduler that monitors runtime performance metrics (_i.e._, number of wasted cycles or fallback calls to regular non-switchless routines, _etc._); these are used to dynamically configure an optimal number of worker threads to fit the current workload while minimising CPU waste.
We further analysed the Intel SGX SDK implementation, and derived practical configuration tips for Intel SGX switchless calls. In addition, we tracked performance issues with Intel's SDK memcpy implementation, and propose an optimised version which removes a major source of performance overhead for both switchless and regular SGX transition routines.
In summary, our **contributions** are: (1) The design and the implementation of ZC-Switchless, a configless and efficient system to dynamically select and configure switchless calls, to be released as open-source. (2) An optimised implementation of the Intel SGX SDK's memcpy, also to be open-sourced. (3) An extensive experimental evaluation demonstrating the effectiveness of our approach via micro- and macro-benchmarks.
## II Background on Intel SGX
Intel SGX extends Intel's ISA to enable secure memory regions called _enclaves_. The CPU reserves an encrypted portion of DRAM, called the _enclave page cache_ (EPC), to store enclave code and data at runtime. EPC pages are only decrypted inside the CPU by the _memory encryption engine_ (MEE) once loaded in a CPU cache line.
Enclaves operate only in user mode and thus cannot issue system calls (syscalls) [9] directly. The ecalls and ocalls are specialised function calls to enter (using the EENTER CPU instruction) and exit (using the EEXIT CPU instruction) the enclave, respectively. The Intel SGX SDK provides a secure version of the C standard library, the _trusted libc_ (tlibc), for in-enclave applications. The tlibc only relies on trusted functions, and removes instructions forbidden by Intel SGX [8].
Enclave applications can be deployed following two approaches: _(1)_ running the entire application in the enclave (_e.g._, SCONE [4], Graphene-SGX [5], Occlum [27], SGX-LKL [23]), or _(2)_ splitting the application into a trusted and untrusted part which respectively execute inside and outside of the enclave (_e.g._, Glamdring [15], Civet [30], Montsalvat [34]).
For both cases, unsupported routines not implemented by the tlibc must be relayed to the untrusted part via ocalls, to be executed in non-enclave mode.
Both ecalls and ocalls perform expensive CPU context switches, _i.e._, switching from enclave to non-enclave mode, and vice-versa. The overhead of enclave transitions is mainly due to the CPU flushing its caches as well as all TLB entries containing enclave addresses, so as to preserve confidentiality [9]. More precisely: an ocall \(=\) EEXIT \(+\) untrusted host processing \(+\) EENTER. EEXIT flushes the CPU caches and possibly invalidates branch predictors and TLBs. Similarly, the EENTER instruction performs many checks and requires hardware-internal synchronization of CPU cores [26].
In the following, we focus on ocalls as they are usually the principal source of overhead due to enclave context switches, based on our experience. However, the techniques we propose in this paper can equally be used for ecalls.
**Switchless ocalls.** The Intel SGX SDK provides two variants of ocalls. First, _regular ocalls_ perform costly context switches (up to 14,000 cycles [32]) from enclave mode to non-enclave mode. Second, _switchless ocalls_[8] provide a mechanism to do cross-enclave calls without performing an enclave context switch. In a nutshell (_i.e._, Fig. 1), _worker threads_ outside the enclave perform unsupported functionality (_e.g._, syscalls) on behalf of in-enclave client threads. The latter send ocall requests to a task pool in untrusted memory. The worker threads outside the enclave wait for pending tasks in the task pool, and execute them until the task pool is empty [8]. These operations are done without performing any enclave transition.
## III Limitations of Intel SGX switchless calls
In the following, we identify three main problems and limitations with the Intel switchless calls implementation: _(i)_ switchless call selection (SSIII-A), _(ii)_ worker thread pool sizing (SSIII-B), and _(iii)_ Intel SDK parameterisation (SSIII-C).
**Setup.** We use a 4-core (8 hyper-threads) Intel Xeon CPU E3-1275 v6 clocked at 3.8 GHz, 8 MB L3 cache, 16 GB of RAM, with support for Intel SGX v1. We deploy Linux Ubuntu 18.04.5 LTS with kernel 4.15.0-151 and Intel SGX SDK v2.14. We report the median over 10 executions.
### _Switchless calls selection_
At build-time, developers must specify the routines (_i.e._, ecalls or ocalls) to be handled switchlessly at run time. The Intel SGX reference [8] suggests to configure a routine as switchless if it has _short_ duration and is _frequently called_. However, these details are hardly available to the developer at build time. Auto-tuning tools [18] cannot spot potential switchless routines by relying solely on duration and frequency, as it would require a full code path exploration, a costly operation for large systems. Further, the execution frequency of a specific application routine is workload specific and hard to configure at build time.
Improperly selected switchless routines degrade the performance of SGX applications. We show this behaviour with a synthetic benchmark. It executes \(n\) ocalls to 2 functions as follows: \(\alpha\) calls to function \(f\) which is known to benefit from switchless calls (as shown in [28]), while \(\beta\) calls to \(g\) which should run as a regular ocall. If \(\alpha=\beta\), the sum of the execution times of calls to \(f\) is negligible compared to the same sum for calls to \(g\). Hence, to better highlight the performance gains when executing \(f\) switchlessly, we set \(\alpha=3\beta\) and \(n=\alpha+\beta\). In this test, \(f\) is an empty function (_i.e._, void f(void){}). On the other hand, \(g\) routine executes asm("pause") in a loop, _i.e._, a busy-wait loop.
We evaluate five different configurations: C1-C5. In C1, all \(f\) functions run switchlessly, while \(g\) functions run as regular ocalls. We expect C1 to perform best. In C2, only
Fig. 1: Intel SGX switchless ocall architecture.
the \(g\) functions run switchlessly, and we expect C2 to be the worst. In the C3 case, \(\frac{a}{2}\)\(f\) and \(\frac{\beta}{2}\)\(g\) functions run switchlessly while the other \(f\) and \(g\) functions run as regular ocalls. Finally, in C4 and C5 all functions run switchlessly or regularly, respectively. We observe the following results when executing 100'000 ocalls (_i.e._, switchless and regular ocalls combined). C1 is the fastest configuration (0.9 s), \(\approx\) 1.8\(\times\) faster than C2, indeed the worst (1.6 s). C3 and C4 complete in 1.3 s (1.44\(\times\) slower than C1). Finally, C5 completes in 1 s.
_Take-away 1 : An improper selection of switchless coroutines degrades the performance of SGX applications._
### _Worker thread pool sizing_
Developers must specify the total number of worker threads allowed to the SGX switchless call mechanism at build time. However, an overestimation of worker threads for ocalls can lead to a waste of CPU resources due to busy-waiting of worker threads awaiting switchless requests (see [8], page 71). The latter will limit the number of applications that can be co-located on the same server or interfere with application threads which will be deprived of CPU resources. Similarly, an underestimation can lead to poor application performance as more ocalls will perform costly enclave switches.
Using the same synthetic benchmark from §III-A, we validate these effects for a varying number of worker threads (see Fig. 2). The optimal number of workers strictly depends on the specific configuration. At its best (C1), the fewer the workers, the better the performance: this is expected, as there are as many in-enclave threads as hardware cores. In the other cases, we observe better results when executing functions as regular ocalls, where the best overall result is obtained with 4 worker threads.
Fig. 3 presents the execution time of the application depending on both the number of worker threads (from 1 to 5) and the duration of \(g\) (from 0 to 500 asm("pause") instructions, each spending 140 CPU cycles). We report 4 configurations (we omit C3 for clarity; its results fall between those of C1 and C2). We can make the following observations. C5 (_i.e._, both \(f\) and \(g\) as regular ocalls) performs worst on average for the shortest \(g\) function (_i.e._, 0 pauses), but it is best in several cases for longer \(g\) functions, regardless of the number of workers. C1 (\(f\) switchless, \(g\) regular) is the best when \(g\) is longer than 200 pauses. Executing all functions switchlessly (C4) is good for short \(g\) functions, scaling with the available worker threads.
_Take-away 2 : Switchless calls perform best when the calls are short, relative to the cost of an enclave transition._
### _Intel SDK parameterisation_
If a switchless task pool is full or all worker threads are busy, a switchless call falls back to a regular ecall or ocall. The Intel SGX SDK defines the variable retries_before_fallback (rbf) for the number of retries client threads perform in a busy-wait loop, waiting for a worker thread to start executing a switchless call request, before falling back to a regular ecall/ocall [8]. Similarly, the SDK defines retries_before_sleep (rbs), for the number of asm("pause") instructions done by a worker while waiting for a switchless request, before going to sleep. The SDK's default values for both rbf and rbs are set at 20,000 retries.
However, the default value of rbf in particular is problematic, both in theory (as we explain next) and in practice: between successive retries, a caller thread executes an asm("pause") instruction, which has an estimated latency of up to 140 cycles on Skylake-based microarchitectures (where SGX extensions were first introduced, see [https://intel.ly/3hTVEMG](https://intel.ly/3hTVEMG), page 58). A caller thread can hence wait more than 2.8 M cycles before its call is handled by a worker thread. This is about 200\(\times\) the cost of a regular ocall transition (\(\approx\)14,000 cycles), and defeats the purpose of using switchless calls, _i.e._, to avoid the expensive transition. Similarly for rbs, a worker thread will wait for 2.8 M cycles before going to sleep.
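A simplified sketch of the caller-side wait-then-fallback behaviour described above makes the cost structure explicit; this is an illustration only, not the actual Intel SDK code.

```c
/* Illustrative only: simplified caller-side wait-then-fallback logic.
 * With rbf = 20,000 and ~140 cycles per pause, a caller may burn
 * roughly 2.8 M cycles before giving up on the switchless path. */
static int try_switchless(volatile int *accepted, unsigned rbf) {
    for (unsigned i = 0; i < rbf; i++) {
        if (*accepted)                      /* a worker picked up the request */
            return 1;                       /* handled switchlessly           */
        __asm__ __volatile__("pause");      /* ~140 cycles per iteration      */
    }
    return 0;                               /* fall back to a regular ocall   */
}
```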
While developers can easily tune the rbf and rbs values at build time, the Intel SDK lacks proper guidance.
_Take-away 3 : The proper configuration of the Intel SGX switchless-related parameters remains hard and misconfigurations lead to poor performance._
Fig. 3: Runtime for \(100,000\) ocalls with 8 in-enclave threads for different durations of \(g\) function.
Fig. 2: Runtime for 75,000 switchless ocalls to \(f\) and 25,000 regular ocalls to \(g\).
## IV ZC-Switchless
The main goal of ZC-Switchless is to provide a resource-efficient implementation for SGX switchless calls.
We first present its internal scheduler in §IV-A to §IV-C. In addition, we detail a more efficient implementation of tlibc's memcpy (§IV-F), used for intra-enclave data copying, as well as for data exchange between the enclave and the outside world.
Fig. 4 shows an overview of ZC-Switchless. In a nutshell, we consider _any function_ (Fig. 4-❶) as a potential candidate to run switchlessly, thus avoiding the need for manual selection by developers at build time. Our design allows the number of worker threads to be tuned dynamically, to minimise CPU waste while improving performance via the switchless call technique. We do so by having the scheduler implement a feedback loop (Fig. 4-❷) that periodically collects ocall statistics (Fig. 4-❸), determines the optimal number of worker threads (Fig. 4-❹), and finally applies its decision (Fig. 4-❺). Inside the enclave, a call is executed in a switchless manner if the caller finds at least one idle worker thread. The remainder of this section details these aspects further. Unless indicated otherwise, the term _scheduler_ refers to ZC-Switchless's scheduler.
### ZC-Switchless's scheduler
The main objective of the scheduler is to minimise wasted CPU cycles. We define a _wasted CPU cycle_ as one spent by a CPU core doing something that does not make the application (_i.e._, caller thread) move forward in its execution [16].
In the case of Intel SGX, we identify two potential sources of wasted CPU cycles: _(1)_ transitions between enclave and non-enclave mode in the case of regular ocalls, and _(2)_ busy-waiting in the case of switchless ocalls. The overhead of regular ocalls has been evaluated extensively in past research [33], and it varies also according to the specific CPU and micro-code version. We evaluated this overhead to be \(\sim\)13,500 CPU cycles for our experimental setup (see §III).
Each active worker can be at any point in time in one of two states: either the worker is handling an ocall, in which case the enclave thread (which made the ocall) is busy-waiting, or the worker is busy-waiting for incoming ocall requests. Therefore, for every active worker thread, there is always exactly one thread busy-waiting. The extra cost of having \(M\) worker threads is thus \(M\) multiplied by the number of cycles during which they have been active.
The scheduler periodically computes the number of worker threads that minimises the number of wasted CPU cycles. Throughout its lifetime, the scheduler switches between two phases (see Fig. 5): a _scheduling phase_ that lasts a scheduler quantum, \(Q\), during which it sets an optimal number of switchless workers, and a _configuration_ phase, during which it calculates the optimal number of switchless workers for the next scheduling phase.
We denote by \(T_{es}\) the duration of an enclave switch, \(F\) the number of calls not being handled switchlessly (_i.e._, fallback), \(N\) the number of cores on the machine and \(M\) the number of worker threads set during ZC-Switchless's scheduler quantum (\(Q\), set empirically to 10 ms).
The number of wasted cycles during \(T\) cycles is: \(U=F\cdot T_{es}+M\cdot T\). At every quantum (_i.e._, during a _configuration phase_), the scheduler thread estimates the optimal number of workers for the next quantum (_i.e._, _scheduling phase_).
To estimate the number of workers during a _configuration phase_, the scheduler sleeps for \(\frac{N}{2}+1\) micro-quanta, of length \(\mu\cdot Q\) each, with a different number of workers \(i\) each time, \(0\leq i\leq\frac{N}{2}\) (_i.e._, \(\frac{N}{2}+1\) possible values). The constant \(\mu\) is a small time period, so the configuration phase can be quick, but still long enough to capture the needs (in terms of CPU resources) of the application at a given time. We empirically set \(\mu=\frac{1}{100}\). During every micro-quantum, the number of non-switchless calls is recorded so that the scheduler can compute \(U_{i}=F_{i}\cdot T_{es}+i\cdot\mu\cdot Q\cdot CPU\_FREQ\) once awake. Finally, the scheduler keeps \(M^{\prime}\) workers for the next _scheduling phase_, where \(M^{\prime}\) is such that \(U_{M^{\prime}}=\min_{i}U_{i}\).
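The configuration-phase computation can be sketched as follows; this is a simplified illustration of the logic described above (with \(F_{i}\) recorded per micro-quantum), not the exact ZC-Switchless source.

```c
#include <stddef.h>

/* Sketch of one configuration phase: fallbacks[i] holds F_i, the number of
 * regular (fallback) ocalls observed while running with i workers for a
 * micro-quantum of mu*Q seconds; the array has n_cores/2 + 1 entries. */
static size_t pick_workers(const unsigned long *fallbacks, size_t n_cores,
                           double mu, double q_seconds,
                           double t_es_cycles, double cpu_freq_hz) {
    size_t best = 0;
    double best_waste = -1.0;
    for (size_t i = 0; i <= n_cores / 2; i++) {
        /* U_i = F_i * T_es + i * mu * Q * CPU_FREQ (wasted cycles) */
        double waste = fallbacks[i] * t_es_cycles
                     + (double)i * mu * q_seconds * cpu_freq_hz;
        if (best_waste < 0.0 || waste < best_waste) {
            best_waste = waste;
            best = i;
        }
    }
    return best;   /* M' workers for the next scheduling phase */
}
```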
To deactivate a worker thread, the scheduler sets a value in the worker's buffer (see §IV-B). The worker's loop function will eventually check this value and, if it is set and no caller thread has reserved (or is using) the worker, the worker will pause. To re-activate a paused worker, the scheduler sends a signal to wake up the corresponding worker thread.
### _Worker thread state machine_
We associate to each worker a buffer structure that consists of 4 main fields: an untrusted memory pool (preallocated) used by callers to allocate switchless requests, a field to hold the most recent switchless request, a status field to track the worker status, and a field used to communicate with the scheduler. Fig. 6 summarises the status transitions of a worker thread.
Fig. 4: ZC-Switchless general overview.
Fig. 5: ZC-Switchless scheduler phases.
Fig. 6: Worker thread state transitions.
A worker is initially in the UNUSED state. When a caller needs to make a switchless call, it finds an UNUSED worker and switches the worker's state to RESERVED. We rely on GCC atomic built-in operations [10] for thread synchronisation. The caller allocates a switchless request structure from the corresponding memory pool. A switchless request comprises: an identifier of the function to be called, the function arguments (if present), and the return result (if present).
The caller copies its request to the worker's buffer and changes the worker's state from RESERVED to PROCESSING. At this point, the worker reads the request and calls the desired function with the corresponding arguments. Once the function call completes, the worker updates the request with the returned results (if present) and switches from the PROCESSING to the WAITING state.
Finally, the caller copies the returned results into enclave memory and changes the worker's state to UNUSED.
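A hedged sketch of the worker buffer and of the caller-side reservation step, using the GCC atomic built-ins mentioned above, could look as follows; the field and function names are illustrative assumptions, not the exact ZC-Switchless structures.

```c
#include <stdbool.h>

/* Worker states as described in Fig. 6 (names illustrative). */
enum zc_worker_state { UNUSED, RESERVED, PROCESSING, WAITING, EXIT };

struct zc_worker {
    int   state;      /* current state, updated atomically             */
    void *pool;       /* preallocated untrusted memory pool            */
    void *request;    /* most recent switchless request                */
    int   sched_cmd;  /* field used to communicate with the scheduler  */
};

/* Caller side: try to claim an UNUSED worker. On failure the call simply
 * falls back to a regular ocall, without any busy-waiting. */
static bool zc_reserve(struct zc_worker *w) {
    int expected = UNUSED;
    return __atomic_compare_exchange_n(&w->state, &expected, RESERVED,
                                       false, __ATOMIC_ACQ_REL,
                                       __ATOMIC_ACQUIRE);
}
```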
The memory pools of worker buffers are freed and re-allocated when full via an ocall. Using preallocated memory pools prevents callers from performing ocalls to allocate untrusted memory for each switchless request, which would defeat the purpose of using a switchless system.
Upon program termination, the scheduler sets a value in workers' buffers so the workers can switch to the EXIT state. At this stage, the workers perform final cleanup operations (_e.g._, freeing memory) and then terminate.
### _Switchless call selection_
In zc, any routine can be run as switchless if the corresponding enclave caller thread finds an available/unused worker thread. Otherwise, the call immediately falls back to a regular ocall without any busy waiting.
### _Integrating ZC switchless with other TEE implementations_
Other popular TEE implementations operate following a very similar architecture to Intel SGX. For example in ARM TrustZone (Armv8-M architecture) [22], the application is divided into two parts: a secure world (_i.e._, enclave) with very limited system functionality, and a normal world (untrusted world) with a richer system API. Similar to Intel SGX, CPU thread transitions between the secure and normal worlds require extra security checks to guarantee data confidentiality and integrity. So conceptually, the design proposed in ZC can be applied here. This will, however, require some modifications at the implementation level.
### _Security analysis of Zc-Switchless_
The security analysis of Zc-Switchless is similar to that presented in [33]. All switchless call designs (_i.e._, Intel switchless, Zc-Switchless, hotcalls [33]) are based on threads in and out of the enclave communicating via plaintext shared memory. Thus, the switchless design proposed by Zc-Switchless is no less secure than that proposed by the Intel SGX switchless library.
**Impact of Zc scheduler on security.** The Zc-Switchless scheduler is located in the untrusted runtime, which leaves it vulnerable to malicious tampering. However, since the scheduler only decides how many worker threads should be used and when, the worst case scenario here will be a DoS, _e.g._, by killing worker threads. The enclave's confidentiality or integrity, however, cannot be compromised by tampering with the scheduler.
### _Trusted libc's memcpy optimisation_
For security reasons, the Intel SDK provides its own implementation of a subset of the functions from the libc, _i.e._, the tlibc. The tlibc re-implements most of the functions from the libc that do not require system calls, _e.g._, memset, memcpy, snprintf, _etc_. Because enclave code cannot be linked to dynamic libraries, these re-implementations are statically linked to the enclave application at build time.
We focus on the Intel SDK tlibc version of memcpy[2], as it is heavily used to pass ocall arguments from trusted memory to untrusted memory and back for the results [14]. Our tests highlighted huge performance gaps when using aligned buffers (_i.e._, when the src and dest arguments are congruent modulo 8) and unaligned buffers.
For instance, Fig. 7 presents the throughput when issuing 100,000 write system calls, which requires an ocall and involves memcpy calls, with varying lengths of aligned and unaligned buffers ranging from 512 B to 32 KB. We observe that the execution time for unaligned buffers is consistently higher than for aligned buffers. Moreover, when using unaligned buffers, we observe poor scalability trends for write when increasing the buffer sizes, basically plateauing at about 0.4 GB/s.
By analysing Intel's original implementation of tlibc's memcpy, we observed that it performs a _software word-by-word copy for aligned buffers_ but _a byte-by-byte copy for unaligned buffers_. We provide a revised and more efficient implementation leveraging the hardware copy instruction rep movsb, as also advised by Intel's optimization manual [1]. Listing 1 sketches our optimised approach, which we evaluate in depth in §V-D.
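Since Listing 1 is not reproduced here, the snippet below is a hedged reconstruction of the kind of rep movsb-based copy the text describes (x86-64 inline assembly); the exact ZC implementation may differ, e.g., in how it handles very small or overlapping copies.

```c
#include <stddef.h>

/* Sketch of a rep movsb based copy: on CPUs with enhanced rep movsb (ERMSB),
 * the microcoded copy is fast for both aligned and unaligned buffers,
 * unlike a byte-by-byte software loop. */
static void *zc_memcpy(void *dst, const void *src, size_t n) {
    void *ret = dst;
    __asm__ __volatile__("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");
    return ret;
}
```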
## V Evaluation
Our evaluation answers the following questions:
Fig. 7: Throughput for ocalls of the write system call to /dev/null (100,000 operations, average over 10 executions) for aligned and unaligned buffers.
* How does ZC-Switchless impact application performance for (1) static workloads (§V-A1), and (2) dynamic workloads (§V-C1)?
* What is the effect of misconfigurations of Intel switchless on application performance (§V-A1 and §V-C1)?
* What is the effect of ZC-Switchless on CPU utilisation for static (§V-A2) and dynamic (§V-C2) workloads?
* What is the performance gain of our improved memcpy implementation (§V-D)?
**Experimental setup.** All experiments use a server equipped with a 4-core Intel Xeon CPU E3-1275 v6 clocked at 3.8 GHz with hyperthreading enabled. The CPU supports Intel SGX, and ships with 32 KB L1i and L1d caches, 256 KB L2 cache and 8 MB L3 cache. The server has 16 GB of memory and runs Ubuntu 18.04 LTS with Linux kernel version 4.15.0-151. We run the Intel SGX platform software, SDK, and driver version v2.14. All our enclaves have maximum heap sizes of 1 GB. The EPC size is 128 MB (93.5 MB usable by enclaves). We use both static and dynamic benchmarks.
Our static benchmarks are based on kissdb [12] and an Intel SGX port of OpenSSL [7]: kissdb is a simple key/value store implemented in plain C without any external dependencies, while OpenSSL [24] is an open-source software library for general-purpose cryptography and secure communication.
For dynamic benchmarks, we use lmbench [20], a suite of simple, portable, ANSI/C microbenchmarks for UNIX/POSIX.
For all benchmarks, we set the initial number of worker threads to \(\frac{\#logical\_cpus}{2}\) for ZC-Switchless. This number is 4 for the SGX server used. For Intel switchless experiments, we maintain the default rbf and rbs values (_i.e._, \(20,000\)).
### _Static benchmark: kissdb_
We issue a varying number of key/value pair writes to kissdb, and we evaluate and compare the performance and CPU utilisation of ZC-Switchless with Intel switchless.
We ran our benchmark in 3 modes: without using switchless calls (no_sl), using Intel switchless calls, and using ZC-Switchless (zc for short). For Intel switchless, we consider two values for the number of switchless worker threads: 2 and 4. In kissdb, we know empirically that the 3 most frequent ocalls in the benchmarks are: fseeko, fwrite, and fread. Therefore, for kissdb we benchmark Intel's switchless in 10 (\(2\times 5\)) different configurations: only fseeko as switchless (i-fseeko-_x_, \(x\) being 2 or 4, the number of Intel switchless worker threads), only fwrite as switchless (i-fwrite-_x_), only fread as switchless (i-fread-_x_), both fread and fwrite as switchless (i-frew-_x_), and all 3 ocalls as switchless (i-all-_x_). Note that these ten configurations correspond to possible configurations an SGX developer could have set up for kissdb. We report the averages over \(5\) runs.
#### V-A1 ZC-Switchless vs. Intel switchless: Answer to Q1 & Q2
Fig. 8 (a) and (b) show the average latencies for setting a varying number of 8-byte keys and 8-byte values in kissdb, with respectively 2 and 4 switchless worker threads configured for Intel switchless.
**Observation.** Here, zc is \(1.22\times\) faster when compared to a system without switchless calls, and respectively \(1.19\times\), \(1.13\times\), \(1.13\times\), \(1.05\times\), and \(1.02\times\) faster when compared to i-fread-2, i-fwrite-2, i-fseeko-2, and i-frew-2.
However, zc is about \(1.33\times\) slower than i-all-2. We note how zc is (on average) \(1.26\times\) faster than i-fread-4, \(1.22\times\) faster than i-fwrite-4, \(1.13\times\) faster than i-fseeko-4, and \(1.10\times\) faster than i-frew-4. However, it is \(1.16\times\) slower than i-all-4.
**Discussion.** In kissdb, fseeko is the most frequent ocall, invoked almost twice as often as fread and fwrite. Further, fseeko is much shorter in duration relative to fread and fwrite, which explains the better performance of i-fseeko when compared to i-fread and i-fwrite for both 2 and 4 switchless worker threads. The i-fwrite configurations show the poorest performance for Intel's switchless in all cases.
However, when both fread and fwrite are configured as switchless (i-frew), we see an improvement in performance relative to i-fwrite and i-fread, almost equal to the i-fseeko performance. Here the combined number of fread and fwrite calls surpasses the number of fseeko invocations, which leads to a larger number of switchless calls in i-frew, and thus to performance similar to i-fseeko.
This configless strategy of zc outperforms statically misconfigured systems like i-fread and i-fwrite (for both 2 and 4 Intel switchless workers), shows performance similar to i-fseeko-2, and is faster than i-fseeko-4. The observed spikes (_e.g._, at 7,500 and 8,500 keys) in zc are due to ocall operations when reallocating full memory pools for zc buffers (see §IV-B).
Fig. 8: kissdb: average latency of key/value SET commands.
In i-all, Intel's switchless outperforms zc because it maintains a constant number of switchless workers (2 or 4), whereas zc uses fewer worker threads at various points during the application's lifetime.
_Take-away 5:_ _A well configured Intel switchless system outperforms zc, but zc obviates the performance penalty observed with misconfigured switchless systems._
#### V-A2 ZC-Switchless vs. Intel switchless: CPU usage
_Answer to Q3._ We now evaluate the CPU utilisation of ZC-Switchless and Intel switchless when running the same experiments as in §V-A1. We measured the overall CPU utilisation for the given systems from the kernel's /proc/stat. The percentage CPU utilisation is calculated by: \(\%cpu\_used=\frac{(user+nice+system)}{(user+nice+system+idle)}*100\) where: _user_ is the time spent for normal processes executing in user mode, _nice_ is the time spent for processes executing with "nice" priority in user mode, _system_ is the time spent for processes executing in kernel mode, and _idle_ is the time spent by the CPU executing the _system idle process_.
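As an illustration, a minimal sketch that samples /proc/stat and applies the formula above could read as follows; it uses only the first four fields of the aggregate cpu line, matching the formula (in practice one samples twice and uses the difference of the counters over the measurement window).

```c
#include <stdio.h>

/* Returns the %CPU used since boot according to the formula in the text,
 * or a negative value on error. */
static double cpu_used_percent(void) {
    unsigned long long user, nice, system, idle;
    FILE *fp = fopen("/proc/stat", "r");
    if (!fp)
        return -1.0;
    int n = fscanf(fp, "cpu %llu %llu %llu %llu", &user, &nice, &system, &idle);
    fclose(fp);
    if (n != 4)
        return -1.0;
    return 100.0 * (double)(user + nice + system)
                 / (double)(user + nice + system + idle);
}
```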
Fig. 9 shows the average CPU usage for setting a varying number of 8-byte keys and 8-byte values in kissdb, with respectively 2 and 4 switchless worker threads configured for Intel switchless.
**Observation.** The experimental results show that zc maintains approximately \(60\%\) CPU usage throughout the benchmark's lifetime. For 2 Intel switchless workers, all Intel switchless configurations stabilise at about \(55\%\) CPU usage, while for 4 Intel switchless workers, the Intel switchless configurations stabilise at about \(80\%\) CPU usage. All switchless systems have visibly higher CPU usage when compared to the system without switchless calls enabled (no_sl).
**Discussion.** Intel's switchless mechanism maintains a constant number of worker threads (2 or 4) throughout the application's lifetime, while zc's scheduler increases or decreases the number of worker threads (to a maximum of 2 or 4) with respect to the workload. This explains the overall lower CPU usage of zc relative to Intel switchless.
_Take-away 6:_ _zc outperforms misconfigured systems_ (e.g., i-fread, i-fwrite, i-frew) _while minimizing CPU usage._
### _Static benchmark:_ OpenSSL _file encryption/decryption_
This benchmark consists of two enclave threads encrypting and decrypting data read from files. The first thread reads chunks of plaintext from a file, encrypts these in the enclave, and writes the corresponding ciphertext to another file, while the second thread reads ciphertext from a different file, and decrypts it inside the enclave. All cryptographic operations are done with the AES-256-CBC [25] algorithm.
Similarly, we ran this benchmark in the 3 modes described above: no_sl, using Intel switchless calls (2 and 4 workers), and zc.
In the OpenSSL benchmark, we know empirically that 4 ocalls are called most frequently: fread, fwrite, fopen, and fclose. We consider 10 possible configurations (\(2\times 5\)) for Intel's switchless routines: only fread as switchless (i-fr-\(x\), \(x\) being 2 or 4, the number of Intel switchless worker threads), only fwrite as switchless (i-fw-\(x\)), both fread and fwrite (i-frw-\(x\)), both fopen and fclose (i-foc-\(x\)), and all 4 ocalls as switchless (i-frwoc-\(x\)).
Fig. 10(a) and Fig. 10(b) show the latency and CPU usage for these configurations. We highlight and discuss the essential observations.
**Observation and discussion.** In this OpenSSL benchmark, fread and fwrite are respectively called \(\approx 2700\times\) and \(\approx 1400\times\) more frequently as compared to both fopen and fclose. This explains why Intel's latency is poor (close to no_sl) with i-foc, as more ocalls perform context switches, and much better (w.r.t no_sl) with i-frw, where a larger number of ocalls are performed switchlessly. Intel performs best with i-frwoc, when the four ocalls are all
Fig. 10: Latency and CPU usage for OpenSSL.
Fig. 9: Average %CPU usage for key/value pair SET ops in kissdb
configured as switchless. However, we observe that zc is about \(1.62\times\) and \(1.82\times\) faster than Intel's best configuration i-frwoc, for 2 and 4 Intel workers respectively. This is explained by the fact that the fread and fwrite calls here are much longer (about \(6\times\) longer compared to the previous kissdb benchmark). This accentuates the effect of the poor default rbf value in Intel's switchless, as enclave threads pause for longer while waiting for a switchless worker thread to become available. This is absent in zc, where caller threads immediately fall back.
Regarding CPU usage, zc's scheduler set the number of worker threads to \(0\), \(1\), \(2\), \(3\), \(4\) for respectively \(9.4\%\), \(4.6\%\), \(84.4\%\), \(1.6\%\), and \(0\%\) of the program's lifetime. This explains the similar CPU usage in zc and Intel 2 workers (except for i-foc), while with 4 Intel workers, CPU usage for Intel's best config is about \(1.62\times\) larger than zc's, despite the latter performing better.
_Take-away 7: zc outperforms all Intel configurations when ocalls are long; this is due to the poor default rbf value in Intel's switchless library._
### _Dynamic benchmark: lmbench_
Our dynamic benchmark is based on the read and write system call benchmarks of lmbench. The read benchmark iteratively reads one word from the /dev/zero device [20], while the write benchmark iteratively writes one word to the /dev/null device. We devised a dynamic workload approach which consists of periodically (every \(\tau=0.5s\)) issuing a varied number of read and write operations to lmbench using two in-enclave caller threads (1 reader + 1 writer) over a period of \(60s\). These operations trigger ocalls, and zc scheduler adapts the number of worker threads accordingly.
The total run time of the dynamic benchmark is divided into 3 distinct phases, each lasting \(20s\): _(1) increasing operation-frequency_: the number of operations is doubled periodically, _(2) constant operation-frequency_: the number of operations remains at a constant value (the peak value from phase 1), and _(3) decreasing operation-frequency_: the number of operations is periodically decreased (reduced by half every \(\tau\)). We measure the read/write throughputs and CPU usage at different points during the benchmark's lifetime.
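The three-phase load pattern can be sketched as follows; the initial operation count and the placeholder issue_ops helper are illustrative assumptions, not the actual benchmark driver.

```c
#include <unistd.h>

/* Placeholder: in the real benchmark this would issue n read/write
 * operations (ocalls) against lmbench's /dev/zero and /dev/null tests. */
static void issue_ops(unsigned long n) { (void)n; }

/* Drive the 60 s dynamic workload: 20 s ramp-up (operation count doubles
 * every tau), 20 s plateau at the peak count, 20 s ramp-down (halved every tau). */
static void run_dynamic_workload(void) {
    const useconds_t tau_us = 500000;   /* tau = 0.5 s                */
    const int steps = 40;               /* 20 s / 0.5 s per phase     */
    unsigned long ops = 1;              /* illustrative initial count */

    for (int i = 0; i < steps; i++) { issue_ops(ops); usleep(tau_us); ops *= 2; }
    ops /= 2;                           /* peak count from phase 1    */
    for (int i = 0; i < steps; i++) { issue_ops(ops); usleep(tau_us); }
    for (int i = 0; i < steps; i++) { issue_ops(ops); usleep(tau_us); if (ops > 1) ops /= 2; }
}
```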
Similarly, we ran our benchmark in 3 modes: without using switchless calls (no_sl), using Intel switchless calls, and using ZC-Switchless (zc). For Intel switchless, we consider two values for the number of switchless worker threads: 2 and 4. For lmbench syscall benchmark, we know empirically that the read and write syscalls are the most frequent ocalls. So we configure Intel's switchless in six (\(3\times 2\)) different configurations: only write as switchless (i-write-\(x\)), only read as switchless (i-read-\(x\)), both read and write ocalls as switchless (i-all-\(x\)). Similarly, these six configurations correspond to possible configurations an SGX developer could have set for their program. We show the throughputs and CPU usages as observed from both the reader and writer threads. We highlight essential observations, and analyze them in our discussions.
#### V-C1 Dynamic benchmark: ZC-Switchless vs. Intel switchless: Answer to Q1 & Q2
Fig. 11 shows the operation throughputs as observed by the reader (bottom) and writer (top) threads respectively during the lifetime of the dynamic benchmark.
**Observation.** The experimental results show that on average, zc is about \(2.3\times\) faster when compared to both reader-i-write-2 and reader-i-write-4, and \(2.1\times\) and \(2.5\times\) faster when compared to writer-i-read-2 and writer-i-read-4 respectively. However, zc is about \(1.6\times\) and \(1.1\times\) slower when compared to properly configured Intel switchless configurations i-all-2 (reader,writer) and i-all-4 (reader, writer) respectively.
**Discussion.** From the perspective of the reader thread, i-write is a misconfiguration as the read calls will never be switchless, and similarly for the writer thread, i-read is a misconfiguration as the write calls will never be switchless. This explains the relatively lower throughputs of these configurations when compared to zc and the other switchless configurations. However, zc has a lower throughput when compared to the better configurations: reader-(i-all,i-read) and writer-(i-all,i-write).
#### V-C2 ZC-Switchless CPU utilisation vs. Intel switchless: Answer to Q3
We compute the CPU usage as explained previously. Fig. 12 shows the average CPU usage as observed by the reader (top) and writer (bottom) threads respectively at the different points during the lifetime of the dynamic benchmark.
**Observation.** Similarly to the throughputs, the CPU usage for the studied configurations increases with time and plateaus at a certain point. The experimental results show that on average, zc CPU usage is about \(1.8\times\) and \(1.6\times\) more when compared to reader-i-write-2 and writer-i-read-2 respectively, but almost equal CPU usage on average when compared to reader-i-write-4, writer-i-read-4, and reader/writer-i-all-2 respectively. However, reader/writer-i-all-4 use about \(1.3\times\) more CPU when compared to zc.
Fig. 11: Read (a/b) and write (c/d) throughput for zc _vs._ 2/4 Intel switchless worker threads.
**Discussion.** Similarly to the poor throughputs, we can easily highlight CPU waste for the misconfigurations reader-i-write-4 and writer-i-read-4, as they have a CPU usage similar to zc on average but much poorer performance with regard to the corresponding throughputs. ZC-Switchless prevents this misconfiguration problem. For the better configurations, reader-(i-read, i-all) and writer-(i-write, i-all), Intel performs better than zc, but this usually comes at a higher CPU cost (especially for 4 Intel workers), which is explained by the fact that Intel's switchless mechanism maintains the maximum number of workers when there are pending switchless requests.
_Take-away 8: Poorly configured Intel switchless systems lead to a waste of CPU resources._ ZC-Switchless _obviates the performance penalty from misconfigured Intel switchless systems, while minimising CPU waste._
### _Performance of improved memcpy_
_Answer to Q4._ To evaluate the performance of our improved memcpy implementation, we ran a benchmark similar to that in §IV-F, which issues 100,000 write system calls from within the enclave, with various sizes of aligned and unaligned buffers ranging from 512 B to 32 KB. We ran this benchmark in two modes: using the default memcpy implementation of the SDK (vanilla-memcpy) and using our improved memcpy implementation (zc-memcpy).
As shown in Fig. 13, our revised memcpy implementation achieves a speedup, in the case of larger buffers, of up to \(3.6\times\) for aligned buffers and \(15.1\times\) for unaligned buffers. Noteworthy, these speedups will benefit both regular and switchless ocalls, as well as any application using the Intel SDK's memcpy implementation inside enclaves.
**Impact on inter-enclave communication.** Recent work [14] presents a quantitative study on the performance of real-world serverless applications protected in SGX enclaves. A performance test of zc-memcpy in this work showed a \(7\%-15\%\) speedup for inter-enclave SSL transfers in the context of their benchmarks, which confirms the efficiency of zc-memcpy for copy-intensive operations.
## VI Related work
We classify related work into three categories, as detailed next.
**(1) SGX benchmarking and auto-tuning tools.** In [18], authors use stochastic optimisation to enhance the performance of applications hardened with Intel SGX. [17] proposes a framework for benchmarking SGX-enabled CPUs, micro-code revisions, SDK versions, and extensions to mitigate enclave side-channel attacks. These tools do not provide dynamic configuration of the switchless mechanisms.
**(2) SGX performance improvement.** Weichbrodt et al. [32] propose a collection of tools for high-level dynamic performance analysis of SGX enclave applications, as well as recommendations on how to improve enclave code and runtime performance, _e.g._, by batching calls, or moving function implementations in/out of the enclave to minimise enclave transitions. _Intel VTune Profiler_ [31] allows profiling enclave applications to locate performance bottlenecks, while Dingding et al. [13] provide a framework to improve enclave creation and destruction performance. ZC-Switchless focuses on improving enclave performance efficiently via the switchless call mechanism.
**(3) SGX transitions optimizations.** Previous work [21, 4, 29, 33] circumvents expensive SGX context switches by leveraging threads in and out of the enclave which communicate via shared memory, an approach also implemented by Intel [8].
Tian et al. [28] propose a switchless worker thread scheduling algorithm aimed at maximising worker efficiency so as to maximise performance speedup. ZC-Switchless on the other hand, leverages a scheduling approach aimed at minimising CPU waste while improving application performance relative to a non-switchless system.
## VII Conclusion and Future Work
In this paper, we highlight the limitations of Intel's switchless call implementation via experimental analysis. To mitigate the issues raised, we propose ZC-Switchless, an efficient and configless technique to drive the execution of
Fig. 12: Read(top)/Write(bottom) CPU usage % for zc vs. 2/4 Intel switchless worker threads.
Fig. 13: Throughput for ocalls of the write system call to /dev/null (100,000 operations, average over 10 executions) for aligned and unaligned buffers.
SGX switchless calls. ZC-Switchless leverages an in-application scheduler that dynamically configures an optimal number of worker threads which minimises the waste of CPU resources and obviates the performance penalty from misconfigured switchless calls. Our evaluation with a varied set of benchmarks shows that ZC-Switchless provides good performance while minimising CPU waste.
We will extend ZC-Switchless by integrating it with profiling tools (see §VI), to offer deployers an additional monitoring knob over SGX-enabled systems. Further, while the performance issues with memcpy were unexpected, we speculate that similar issues might exist in other routines of the tlibc, to which a more in-depth analysis should be dedicated.
## Acknowledgments
This work was supported by the Swiss National Science Foundation under project PersiST (no. 178822) and the VEDLIoT (Very Efficient Deep Learning in IoT) European project (no. 957197).
|
2301.12690 | High-Density behavior of symmetry energy and speed of sound in the dense
matter within an effective chiral model | With an effective chiral model, we investigate how the mesonic cross
couplings $\sigma-\rho$ and $\omega-\rho$ affect the density content of the
symmetry energy and its higher-order slope parameters. Earlier mentioned
cross-couplings are crucial to controlling the density content of symmetry
energy. For this purpose, we did a case study for different values of the
symmetry energy $J_1$, defined at density 0.1 fm$^{-3}$ in the range (23.4 -
25.2) for a fixed value of the slope of the symmetry energy $L_0 = 60$ MeV at
saturation density and investigate its effect on the higher-order coefficients
and their influence on the underlying equation of state. We found that the
model with $J_1= 24.6$ MeV is more favorable with the pure neutron matter (PNM)
constraints obtained from $\chi$EFT calculations. In addition, we show that all
of our models predict a monotonically increasing speed of sound up to four
times the saturation density. The speed of sound decreases/saturates above that
point and approaches the conformal limit approximately $\sqrt{1/3}~c$ at the
center of the maximum mass star. | Prashant Thakur, N. K. Patra, T. K. Jha, Tuhin Malik | 2023-01-30T06:44:48Z | http://arxiv.org/abs/2301.12690v1 | High-Density behavior of symmetry energy and speed of sound in the dense matter within an effective chiral model
###### Abstract
With an effective chiral model, we investigate how the mesonic cross couplings \(\sigma-\rho\) and \(\omega-\rho\) affect the density content of the symmetry energy and its higher-order slope parameters. Earlier mentioned cross-couplings are crucial to controlling the density content of symmetry energy. For this purpose, we did a case study for different values of the symmetry energy \(J_{1}\), defined at density 0.1 fm\({}^{-3}\) in the range (23.4 - 25.2) for a fixed value of the slope of the symmetry energy \(L_{0}=60\) MeV at saturation density and investigate its effect on the higher-order coefficients and their influence on the underlying equation of state. We found that the model with \(J_{1}=24.6\) MeV is more favorable with the pure neutron matter (PNM) constraints obtained from \(\chi\)EFT calculations. In addition, we show that all of our models predict a monotonically increasing speed of sound up to four times the saturation density. The speed of sound decreases/saturates above that point and approaches the conformal limit approximately \(\sqrt{1/3}\)\(c\) at the center of the maximum mass star.
## 1 Introduction
Constraining the equation of state (EOS) of dense nuclear matter, supported by astrophysical observations [1; 2; 3; 4; 5; 6] and terrestrial experiments [7; 8; 9; 10; 11; 12; 13; 14; 15], has remained one of the primary focuses of dense matter studies for decades. A dense matter EOS can consist of several phases or compositions, including hyperons, quarks, superconducting matter, and color-superconducting matter. The density dependence of the symmetry energy plays an important role in regulating the particle content in the underlying EOS, particularly in studies of highly asymmetric matter such as that speculated to exist in the core of neutron stars. It is to be noted that the symmetry energy slope parameters can be equivalently expressed as a series expansion in matter density. Here, in addition to \(J_{0}\), the symmetry energy defined at nuclear saturation density (\(\rho_{0}\sim 0.153\) fm\({}^{-3}\)), its slope parameter \(L_{0}\), the curvature parameter \(K_{\rm sym,0}\), and the skewness parameter \(Q_{\rm sym,0}\), as well as other higher-order terms, may dictate the behavior of a particular EOS at higher densities. Based on the current knowledge of nuclear masses and giant dipole polarizability, the symmetry energy has been constrained to \(J_{0}\simeq 32.25\pm 2.5\) MeV [16; 17; 18; 19] and its slope parameter to \(L_{0}\simeq 58.9\pm 16\) MeV [4; 20; 21; 22; 23; 24] at nuclear saturation density. However, there are very few known theoretical constraints on \(K_{\rm sym,0}\) and \(Q_{\rm sym,0}\) [24; 25; 26], and therefore one of our objectives here is to analyze their interdependence with a model based on microphysics. Numerous relativistic mean-field (RMF) models [27; 28; 29; 30; 31] are known to have been successfully applied to extract finite nuclear properties across the periodic table as well as in applications to nuclear matter. Broadly, in these theories, nuclear interactions are expressed in terms of the scalar meson \(\sigma\), the vector meson \(\omega\) and the iso-vector meson \(\rho\) and their cross-couplings, with the respective fields treated in the mean-field approximation where quantum fluctuations are ignored. The formalism is all the more valid when the density or the source terms are large [32; 33; 34; 28].
Analogously, models based on chiral symmetry, introduced by Gell-Mann & Levy [35] were subsequently applied to nuclear matter studies [36]; however, had limited applications in the finite nuclear domain [37; 38]. One of the major drawbacks of the model is that at short distances or high-momentum transfer, the interactions become weaker [39]. The vacuum jumps to a chirally restored abnormal vacuum (Lee-Wick vacuum) [36; 40]. The problem [41] may be overcome by including logarithmic terms of the scalar
field in chiral potentials [31; 42; 43; 44; 45], which prevent the normal vacuum from collapsing and may help in describing finite nuclear properties [46; 47; 48]. Subsequent inclusion of the dynamically generated mass of the vector meson in the model [49; 50] led to an unrealistically high nuclear incompressibility (\(K\)), which was later taken care of by introducing higher-order terms of the scalar meson field; the resulting model was applied to nuclear matter studies at both low [51] and high densities and to neutron stars [52; 53; 54]. Lately, in order to improve the density content of the symmetry energy, mesonic cross-couplings were incorporated [55] and applied to study magnetized neutron stars [56]. Moreover, higher-order interactions in the chiral fields are desirable, as they are known to mimic three-body forces, which may play a vital role in dense matter studies.
In the present work, we focus on the isospin part of the nuclear interaction and analyze its dependence on the in-medium nuclear interactions and the resulting equation of state (EOS), particularly on the density dependence of the nuclear symmetry energy [7]. Although the nuclear symmetry energy at normal matter density is fairly well known, it is poorly known, or known only with large uncertainties, at supra-normal densities [57]. Our primary objective is to study the behavior of the different-order slope parameters of the nuclear symmetry energy up to larger densities. For this, we employ the effective chiral model with mesonic cross interactions such as \(\sigma-\rho\) and \(\omega-\rho\). There are three parameters in our model regulating the density content of the symmetry energy. We systematically study the correlations and the effect of those parameters on the different slope parameters of the nuclear symmetry energy.
Several authors have shown that the speed of sound exhibits a maximum at an energy density of approximately 500 MeV fm\({}^{-3}\), or about three times the saturation density, when physics-agnostic EOSs are conditioned on perturbative QCD calculations at high densities [58; 59]. A similar feature, however, does not occur when only astrophysical constraints are applied. Accordingly, we also investigate the behavior of the speed of sound up to higher densities.
The paper is organized as follows. In Section 2, we present the details of the hadronic model. The results of our calculations are discussed in Section 3. Section 4 contains the summary and conclusions of the present work.
## 2 The equation of state
The effective chiral lagrangian is as given in Eq. (1), where the hadronic degrees of freedom are the pseudo-scalar meson \(\pi\), the scalar meson \(\sigma\), the vector meson \(\omega\) and the isovector \(\rho-\)meson [55].
\[\mathcal{L} = \bar{\psi}_{B}\Big{[}\left(i\gamma_{\mu}\partial^{\mu}-g_{ \omega}\gamma_{\mu}\omega^{\mu}-\frac{1}{2}g_{\rho}\rho_{\mu}\cdot\tau\gamma^{\mu} \right)-g_{\sigma}(\sigma+i\gamma_{5}\tau\cdot\pi)\Big{]}\psi_{B} \tag{1}\] \[+\frac{1}{2}\left(\partial_{\mu}\pi\cdot\partial^{\mu}\pi+\partial_{ \mu}\sigma\partial^{\mu}\sigma\right)-\frac{\lambda}{4}\left(x^{2}-x_{0}^{2} \right)^{2}-\frac{\lambda b}{6m^{2}}(x^{2}-x_{0}^{2})^{3}\] \[-\frac{\lambda c}{8m^{4}}(x^{2}-x_{0}^{2})^{4}-\frac{1}{4}F_{\mu \nu}F^{\mu\nu}+\frac{1}{2}g_{\omega}^{2}x^{2}\left(\omega_{\mu}\omega^{\mu} \right)-\frac{1}{4}\mathbf{R}_{\mu\nu}\cdot\mathbf{R}^{\mu\nu}\] \[+\frac{1}{2}{m^{\prime}_{\rho}}^{2}\rho_{\mu}\cdot\rho^{\mu}+ \eta_{1}\left(\frac{1}{2}g_{\rho}^{2}x^{2}\rho_{\mu}\cdot\rho^{\mu}\right)+ \eta_{2}\left(\frac{1}{2}g_{\rho}^{2}\rho_{\mu}\cdot\rho^{\mu}\omega_{\mu} \omega^{\mu}\right)\]
In the Lagrangian above, \(\psi_{B}\) is the nucleon iso-spin doublet interacting with the mesons. The \(\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\) and \(\frac{1}{4}\mathbf{R}_{\mu\nu}\cdot\mathbf{R}^{\mu\nu}\) are the kinetic terms for \(\omega\) and \(\rho\), respectively, where \(F_{\mu\nu}=\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}\) and \(\mathbf{R}_{\mu\nu}=\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}\). The coupling strengths of the higher-order scalar field terms are \(b\) and \(c\), respectively, and \(\gamma^{\mu}\) and \(\tau\) are the Dirac and Pauli matrices, respectively. The interaction terms and higher-order terms are given in terms of the chiral invariant field \(x^{2}=(\pi^{2}+\sigma^{2})\). The last two terms contain the mesonic cross-couplings between \(\rho\)-\(\sigma\) and \(\rho\)-\(\omega\), whose coupling strengths are \(\eta_{1}\) and \(\eta_{2}\), respectively. The vacuum expectation value of the scalar field is \(x_{0}\), and the nucleon mass (\(m\)), the scalar meson mass (\(m_{\sigma}\)), and the vector meson mass (\(m_{\omega}\)) are related to \(x_{0}\) through
\[m=g_{\sigma}x_{0},\ \ m_{\sigma}=\sqrt{2\lambda}x_{0},\ \ m_{\omega}=g_{\omega}x_{0}\, \tag{2}\]
where \(\lambda=\frac{(m_{\sigma}^{2}-m_{\pi}^{2})}{2f_{\pi}^{2}}\) and \(f_{\pi}\) is the pion decay constant. In the mean-field treatment, we ignore the pion field and also set \(m_{\pi}=0\). The derivation of the equations of motion of the meson fields and of the EOS (\(\epsilon\) & \(p\)) for the present model can be found in Ref. [55]. In terms of \(Y=x/x_{0}=m^{*}/m\) and the corresponding couplings \(C_{\sigma}\equiv g_{\sigma}^{2}/m_{\sigma}^{2}\) for the scalar, \(C_{\omega}\equiv g_{\omega}^{2}/m_{\omega}^{2}\) for the vector and \(C_{\rho}\equiv g_{\rho}^{2}/m_{\rho}^{2}\) for the isovector meson, the energy density (\(\epsilon\)) and pressure (\(p\)) of the present model for a given baryon density can be calculated as,
\[\epsilon = \frac{1}{\pi^{2}}\sum_{i=n,p}\int_{0}^{k_{F}^{i}}k^{2}\sqrt{k^{ 2}+{m^{*}}^{2}}dk+\frac{m^{2}}{8C_{\sigma}}(1-Y^{2})^{2} \tag{3}\] \[-\frac{b}{12C_{\sigma}C_{\omega}}(1-Y^{2})^{3}+\frac{c}{16m^{2}C_ {\sigma}C_{\omega}^{2}}(1-Y^{2})^{4}+\frac{1}{2}m_{\omega}^{2}\omega_{0}^{2}Y^ {2}\] \[+\frac{1}{2}m_{\rho}^{2}\Big{[}1-\eta_{1}(1-Y^{2})(C_{\rho}/C_{ \omega})+3\eta_{2}C_{\rho}\omega_{0}^{2}\Big{]}(\rho_{3}^{0})^{2},\] \[p = \frac{1}{3\pi^{2}}\sum_{i=n,p}\int_{0}^{k_{F}^{i}}\frac{k^{4}}{ \sqrt{k^{2}+{m^{*}}^{2}}}dk-\frac{m^{2}}{8C_{\sigma}}(1-Y^{2})^{2}\] (4) \[+\frac{b}{12C_{\sigma}C_{\omega}}(1-Y^{2})^{3}-\frac{c}{16m^{2}C_ {\sigma}C_{\omega}^{2}}(1-Y^{2})^{4}+\frac{1}{2}m_{\omega}^{2}\omega_{0}^{2}Y^ {2}\] \[+\frac{1}{2}m_{\rho}^{2}\Big{[}1-\eta_{1}(1-Y^{2})(C_{\rho}/C_{ \omega})+\eta_{2}C_{\rho}\omega_{0}^{2}\Big{]}(\rho_{3}^{0})^{2}.\]
In the equation above, \(k_{F}^{i}\) is the fermi momentum of a nucleon and \(\gamma\) is the spin degeneracy factor (\(\gamma=4\) for Symmetric Nuclear Matter (SNM)). The parameters \(C_{\sigma}\) and \(C_{\omega}\) can be obtained by solving the field equations self-consistently tuned to satisfy standard nuclear matter saturation properties. The detailed calculation of the same can be found in our previous work, where it was evident that the couplings
are very sensitive to the value of the effective mass [52]. Accordingly, the symmetry energy \(S(\rho)\) in the present model using cross interactions (Eq. (1)) is given as,
\[S(\rho) = \frac{k_{F}^{2}}{6\sqrt{k_{F}^{2}+m^{*2}}}+\frac{C_{\rho}k_{F}^{3}} {12\pi^{2}(m_{\rho}^{*}/m_{\rho})^{2}}+\frac{\eta_{2}C_{\rho}^{2}\omega_{0}^{2}k _{F}^{3}}{6\pi^{2}(m_{\rho}^{*}/m_{\rho})^{4}} \tag{5}\] \[-\frac{2\eta_{2}C_{\rho}^{2}C_{\omega}k_{F}^{9}}{27\pi^{6}m_{ \omega}^{2}Y^{4}(m_{\rho}^{*}/m_{\rho})^{4}},\]
where, \({m_{\rho}^{*}}^{2}=m_{\rho}^{2}\left[1-\eta_{1}(1-Y^{2})(C_{\rho}/C_{\omega})+ \eta_{2}C_{\rho}\omega_{0}^{2}\right]\) and \(k_{F}=(3\pi^{2}\rho/2)^{1/3}\). The three coupling parameters \(C_{\rho}\), \(\eta_{1}\) and \(\eta_{2}\) are fixed numerically for given values of symmetry energy at 0.1 fm\({}^{-3}\)\(J_{1}\), symmetry energy \(J_{0}\) and its slope parameter \(L_{0}\) at saturation density. In general, the symmetry energy can be expressed in Taylor series expansion around saturation density (\(\rho_{0}\)) as [60]
\[S(\rho) =J_{0}+L_{0}(\frac{\rho-\rho_{0}}{3\rho_{0}})+\frac{1}{2}K_{\rm sym,0}(\frac{\rho-\rho_{0}}{3\rho_{0}})^{2} \tag{6}\] \[+\frac{1}{6}Q_{\rm sym,0}(\frac{\rho-\rho_{0}}{3\rho_{0}})^{3}+ \frac{1}{24}Z_{\rm sym,0}(\frac{\rho-\rho_{0}}{3\rho_{0}})^{4}\] \[+\mathcal{O}(\frac{\rho-\rho_{0}}{3\rho_{0}})^{5},\]
where \(J_{0}\) is the symmetry energy coefficient, its slope parameter \(L_{0}\), symmetry energy curvature parameter \(K_{\rm sym,0}\) and the third order density derivatives of symmetry energy \(Q_{\rm sym,0}\), all defined at \(\rho_{0}\)[24]. The density dependence of all those parameters can be defined as,
\[L =3\rho_{0}\frac{\partial S(\rho)}{\partial\rho}, \tag{7}\] \[K_{\rm sym} =9\rho_{0}^{2}\frac{\partial^{2}S(\rho)}{\partial\rho^{2}},\] (8) \[Q_{\rm sym} =27\rho_{0}^{3}\frac{\partial^{3}S(\rho)}{\partial\rho^{3}},\] (9) \[Z_{\rm sym} =81\rho_{0}^{4}\frac{\partial^{4}S(\rho)}{\partial\rho^{4}}. \tag{10}\]
The nuclear incompressibility of asymmetric matter can be expanded in terms of the isospin asymmetry \(\delta=\frac{(\rho_{n}-\rho_{p})}{\rho}\) at \(\rho_{0}\) as \(K(\delta)=K+K_{\tau}\delta^{2}+\mathcal{O}(\delta^{4})\), where \(K_{\tau}\) is given by [61]
\[K_{\tau}=K_{sym}-6L_{0}-\frac{Q_{0}L_{0}}{K}, \tag{11}\]
where \(K\) is the nuclear matter incompressibility (\(K=9\rho^{2}\frac{\partial^{2}(\epsilon/\rho)}{\partial\rho^{2}}\big|_{\rho_{0}}\)) and \(Q_{0}\) is the skewness parameter (\(Q_{0}=27\rho^{3}\frac{\partial^{3}(\epsilon/\rho)}{\partial\rho^{3}}\big|_{\rho_{0}}\)).
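For orientation, Eq. (11) can be inverted to read off the skewness \(Q_{0}\) implied by a given \(K_{\tau}\); as a rough consistency check (assuming the tabulated values listed later in Tables 1 and 2 are exact), taking the model-1 entry with \(J_{0}=31.5\) MeV (\(K_{\rm sym,0}=-153.74\) MeV, \(K_{\tau}=-427.04\) MeV) together with \(K=231\) MeV and \(L_{0}=60\) MeV gives

\[Q_{0}=\frac{K\left(K_{\rm sym,0}-6L_{0}-K_{\tau}\right)}{L_{0}}=\frac{231\times\left(-153.74-360+427.04\right)}{60}\ {\rm MeV}\approx-334\ {\rm MeV}.\]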
## 3 Results and Discussion
In this section, we present the results for the density dependence of the symmetry energy and its slope and curvature parameters, obtained for different cross-coupling parameters. For the present investigation, the EOS of symmetric nuclear matter is kept unchanged by fixing the parameters \(C_{\sigma}\), \(C_{\omega}\), \(B\) and \(C\) at \(\rho_{0}\) as obtained in Ref. [55]; the corresponding nuclear saturation properties, such as the binding energy, nuclear incompressibility, and nucleon effective mass, are listed in Table 1. The values are well within the acceptable ranges prescribed in the literature. However, the effective mass \(m^{*}/m=0.87\) is relatively larger than in some of the well-known relativistic mean-field models, such as NL3 or IUFSU. It is to be noted that the magnitude of the effective mass is very important for reproducing the spin-orbit splittings of finite nuclei; if the effective nucleon mass is larger, the spin-orbit splitting is decreased. Furthermore, the \(\sigma\)-meson mass is also smaller compared to other RMF models. In one of the previous works [52], the correlation between the nucleon effective mass and the corresponding mass of the \(\sigma\)-meson was analyzed. The authors found that the larger the nucleon effective mass in the model, the smaller the scalar meson mass required to satisfy the nuclear matter saturation properties. Another rare feature of the present model is the behavior of the nucleon effective mass with density. In our case, the effective mass goes down with density but increases again above \(\sim 3\rho_{0}\) [52; 53]. This behavior is attributed to the fact that repulsive forces start to dominate as the matter density increases.
For the density dependence of the symmetry energy, we have three parameters, namely \(C_{\rho}\), \(\eta_{1}\), and \(\eta_{2}\). We tune these three parameters to obtain different values of \(J_{1}\) and \(J_{0}\), but a fixed value of the slope \(L_{0}=60\) MeV, which is reasonably close to the central value obtained in different analyses [55]. In Table 2, we present the parameter sets \(C_{\rho}\), \(\eta_{1}\) and \(\eta_{2}\) along with \(K_{\rm sym,0},Q_{\rm sym,0},Z_{\rm sym,0}\) for different \(J_{0}\), \(J_{1}\). The symmetry energy slope parameter \(L_{0}\) for all those models is kept fixed at
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(C_{\sigma}\) & \(C_{\omega}\) & \(B\) & \(C\) & \(E_{0}\) & \(K\) & \(m^{*}/m\) \\ \((fm^{2})\) & \((fm^{2})\) & \((fm^{2})\) & \((fm^{4})\) & (MeV) & (MeV) & \\ \hline
7.221 & 1.670 & -6.435 & 0.001 & -16.0 & 231 & 0.87 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model parameters for symmetric nuclear matter along with the calculated energy per nucleon (\(E_{0}\)), nuclear incompressibility (\(K\)), and effective nucleon mass (\(m^{*}/m\)) at the saturation density \(\rho_{0}\)=0.153 fm\({}^{-3}\). \(C_{\sigma}=g_{\sigma}^{2}/m_{\sigma}^{2}\) and \(C_{\omega}=g_{\omega}^{2}/m_{\omega}^{2}\) are the scalar and vector meson coupling parameters, respectively. The higher-order self-couplings of the scalar field are \(B=b/m^{2}\) and \(C=c/m^{4}\), respectively, where \(m\) is the mass of the nucleon. The masses of the nucleon, \(\omega\) meson and \(\sigma\) meson are 939 MeV, 783 MeV and 444.6 MeV, respectively.
60 MeV. In the following, we present the density behavior of those symmetry energy parameters (Eq.(7\(-\)10) ) for different \(J_{1}\) and \(J_{0}\), and for a fixed value of \(L_{0}\). The value of \(J_{1}\) in our present work lies in the range (23.4\(-\)25.2) MeV obtained in Ref. [16; 21] by analyzing the experimental data on isovector giant resonances.
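As a rough illustration of how the tabulated coefficients act together, truncating the Taylor expansion of \(S(\rho)\) at the \(Z_{\rm sym,0}\) term and inserting the model-3 values with \(J_{0}=32.0\) MeV from Table 2 (with \(L_{0}=60\) MeV) gives, at \(\rho=2\rho_{0}\) where \((\rho-\rho_{0})/3\rho_{0}=1/3\),

\[S(2\rho_{0})\simeq 32.0+\frac{60}{3}-\frac{87.44}{2\cdot 3^{2}}+\frac{302.36}{6\cdot 3^{3}}-\frac{2343.15}{24\cdot 3^{4}}\ {\rm MeV}\approx 47.8\ {\rm MeV},\]

keeping in mind that such a truncated expansion is only indicative at supra-saturation densities, which is precisely why the full density dependence is examined below.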
The pressure for both symmetric nuclear matter (left) and pure neutron matter (right) are plotted in Fig. 1 as a function of density. A comparison with the constraints obtained from heavy-ion collision flow (HIC) experiments and a few well-known theoretical models is shown. It is to be noted that the HIC PNM pressure is calculated theoretically with two extreme parameterizations, the Asy-soft (weakest) and Asy-stiff (strongest) of symmetry energy [7]. Here, we find that our model prediction is in good agreement with the band of the HIC data in the entire range, very similar to other models such as IUFSU [62], BSP [63], and BKA22 [33]. It is to be noted that the NL3 has both a stiff SNM and a PNM content as its nuclear matter incompressibility K is 270 MeV, and the slope of the symmetry energy \(L_{0}\) is 120 MeV at saturation density \(\rho_{0}\). The figure also reflects the same as it is out of the HIC constraints for both SNM and PNM. The models employed in the present study based on different classes of \(J_{1}\) don't show any substantial
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline TPIM & \(J_{1}\) & \(J_{0}\) & \(C_{\rho}\) & \(\eta_{1}\) & \(\eta_{2}\) & \(K_{\rm sym,0}\) & \(Q_{\rm sym,0}\) & \(Z_{\rm sym,0}\) & \(K_{\tau}\) \\ & (MeV) & (MeV) & (fm\({}^{2}\)) & & & (MeV) & (MeV) & (MeV) & (MeV) \\ \hline & & 31.5 & 7.45 & -0.32 & 5.20 & -153.74 & 124.97 & 703.76 & -427.04 \\
1 & 23.4 & 32.0 & 6.78 & -0.15 & 6.99 & -191.31 & 97.22 & 2918.98 & -461.85 \\ & & 32.5 & 6.35 & -0.06 & 6.48 & -192.33 & -37.344 & 3435.22 & -482.22 \\ & & 31.5 & 12.56 & -0.80 & 1.19 & -79.19 & 302.59 & -2209.27 & -348.94 \\
2 & 24.0 & 32.0 & 8.48 & -0.41 & 4.55 & -146.75 & 182.84 & 19.97 & -419.05 \\ & & 32.5 & 7.14 & -0.17 & 6.81 & -195.20 & 125.73 & 2750.89 & -466.51 \\ & & 31.5 & 13.13 & -0.82 & 1.05 & -76.66 & 310.06 & -2624.05 & -346.07 \\
3 & 24.6 & 32.0 & 13.32 & -0.79 & 1.63 & -87.44 & 302.36 & -2343.15 & -353.06 \\ & & 32.5 & 10.23 & -0.55 & 3.24 & -124.97 & 242.90 & -1621.37 & -398.85 \\ & & 33 & 12.75 & -0.68 & 2.54 & -110.01 & 296.53 & -2509.99 & -377.39 \\
4 & 25.2 & 34 & 7.01 & -0.02 & 8.15 & -247.63 & 167.07 & 6205.68 & -520.32 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The slope parameter of the symmetry energy for all the models is kept fixed at \(L_{0}=60\) MeV. The values of the various symmetry energy parameters (\(J_{0}\), \(J_{1}\), \(K_{\rm sym,0}\), \(Q_{\rm sym,0}\), \(Z_{\rm sym,0}\), and \(K_{\tau}\)), as defined in Eqs. (6)-(11), are listed according to the model coupling schemes as discussed in the text.
Figure 1: (Color Online) The pressure is plotted against density for (left) SNM and (right) PNM. The results for the PNM EOS obtained for different classes of our model based on \(J_{1}\) values are displayed and compared with a few RMF models such as NL3, IUFSU, BSP, and BKA22. The shaded regions on both panels are the constraints from Heavy-Ion Collision (HIC) experiments [7].
difference in the pressure. It is interesting to note that all of our models have different symmetry energy behavior, with a sufficiently wide dispersion in the symmetry energy at high density, yet this dispersion is largely washed out in the pressure. This is because many different proton fractions (or symmetry energies) can correspond to a given pressure [64; 65].
Chiral EFT describes the hierarchy of two-, three-, and weaker higher-body forces and provides theoretical uncertainty estimates. Microscopic calculations derived from chiral effective field theory help us to constrain the properties of neutron-rich matter up to nuclear saturation density to a high degree. In Figure 2, we show the energy per particle for PNM for the different classes of our models based on \(J_{1}\) values. We compare with the pure neutron matter energy obtained from \(\chi\)EFT calculations, shown as the yellow hatched area [67]. We also plot this constraint with twice the uncertainty as the yellow-shaded region. As we know, this PNM constraint strongly influences the symmetry energy up to supra-saturation density and is a stringent constraint for most RMF models. For comparison, we also show the PNM energy obtained from a few well-known RMF models, such as NL3, IUFSU, BSP, and BKA22. As we can see, our models for different \(J_{1}\) are in agreement with the \(\chi\)EFT constraints from \(\approx 0.4\,\rho_{0}\) onwards, similar to the others. However, the model with \(J_{1}=24.6\) MeV is the closest to the \(\chi\)EFT PNM constraints among all the models throughout the density range shown. The mesonic cross-couplings, therefore, seem to be instrumental in dictating and regulating the EOS at both high and low densities, particularly the symmetry energy aspects of the matter.
The density content of the symmetry energy as a function of baryon density is plotted in Fig. 3. Additionally, we compare the symmetry energy constraint provided by the isobaric analog states (IAS) [68]. It can be seen from the figure that all of our model's predictions agree with IAS well. We also compare the density-dependent symmetry energy derived from a few RMF models, namely NL3, IUFSU, BSP, and BKA22. All of our models for different values of \(J_{1}\) show softer symmetry energy with a spread above two times saturation density compared to other RMF models shown in
Figure 4: (Color Online) The slope parameter \(L\) (Eq. 7) as a function of density (\(\rho_{b}\)) is shown. Different bands of \(L\) corresponding to different values of \(J_{1}\) are displayed (see Table 2 ).
Figure 3: (Color Online) Symmetry energy (\(S(\rho)\)) as a function of density(\(\rho_{b}\)) is plotted for different classes of our models based on \(J_{1}\) values along with a few selected RMF models such as NL3, IUFSU, BSP, and BKA22. The constraint of symmetry energy from the isobaric analog states (IAS) is also displayed [68].
Figure 2: (Color Online) Energy per neutron as a function of density (\(\rho_{u}\)) is shown for different classes of our models based on \(J_{1}\) values. Few selected RMF models such as NL3, IUFSU, BSP and BKA22 are also compared [66; 67].
the figure. It is to be noted that, for all of our models, the slope of the symmetry energy at saturation is fixed to \(L_{0}=60\) MeV. The values of \(L_{0}\) are 120.65, 49.26, 50, and 78.79 MeV for the models NL3, IUFSU, BSP, and BKA22, respectively. We find that the model with \(J_{1}=25.2\) MeV represents the softest symmetry energy among the models considered.
In Figure 4, we plot the density dependence of the slope parameter \(L\) as a function of baryon density for the models with different \(J_{1}\) values, corresponding to the symmetry energy at saturation density, \(J_{0}\), in the range (\(31.5\,-\,34\)) MeV (see Table 2). The values agree with \(J_{0}\simeq 32.25\pm 2.5\) MeV as obtained from Refs. [16; 17; 18; 19]. The value of \(L_{0}=60\) MeV is fixed for all the cases and is the median value obtained from recent analyses of isospin diffusion and the astrophysical data [4; 24; 73]. Our models have a fixed value of \(L_{0}\) at saturation density, which can be inferred from the plot, where they all converge to the same point at \(\rho_{0}\). However, we find that a model with a lower value of \(L\) below saturation density ends up with higher values above saturation density. In Fig. 5, we plot the \(J_{0}-L_{0}\) phase space for all of our models and compare them with available empirical/experimental constraints. Our model results are comparable with other theoretical models as well as with various terrestrial experimental constraints and astrophysical observations. Compatibility with the overlap region of the \(J_{0}-L_{0}\) phase space is of interest as it represents agreement with a diverse set of available constraints. We find that our model predictions lie in this overlap region, which again speaks in favor of the present model.
In Fig. 6, we plot the density dependence of the symmetry energy curvature parameter \(K_{\rm sym}\) against baryon number density for all of our models corresponding to different symmetry energy (\(J_{1}\)). We also compare with the range of \(K_{\rm sym}\) at saturation density, \(K_{\rm sym,0}\in[-177,71]\) MeV, obtained from RMF within the Bayesian inference approach with the minimal constraint imposed [26; 74]. All of our models show good agreement with the value at saturation density except the model with \(J_{1}=25.2\) MeV. The density dependence of the curvature parameter \(K_{\rm sym}\), in general, shows an increasing trend over density, contrary to the behavior of \(L\).
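The density-dependent coefficients shown in Figs. 4 and 6 (and the higher-order ones discussed below) follow from successive density derivatives of \(S(\rho)\). A minimal numerical sketch of this extraction is given below; it assumes the standard definitions \(L=3\rho\,\partial S/\partial\rho\), \(K_{\rm sym}=9\rho^{2}\,\partial^{2}S/\partial\rho^{2}\), \(Q_{\rm sym}=27\rho^{3}\,\partial^{3}S/\partial\rho^{3}\) and \(Z_{\rm sym}=81\rho^{4}\,\partial^{4}S/\partial\rho^{4}\) (the exact forms of Eqs. 7–10 are not reproduced here), and the \(S(\rho)\) it uses is a hypothetical toy parametrization introduced only for illustration, not one of the chiral-model couplings of the text.

```python
import numpy as np

def sym_energy(rho, J0=32.0, L0=60.0, rho0=0.16):
    """Hypothetical illustrative S(rho) in MeV; it only reproduces
    S(rho0) = J0 and L(rho0) = L0, nothing more."""
    x = rho / rho0
    return J0 * x**(2.0/3.0) + (L0 - 2.0*J0) / 3.0 * (x - 1.0) * x**(2.0/3.0)

def expansion_coefficients(S, rho, h=0.005):
    # Central finite differences for the first four density derivatives of S
    d1 = (S(rho + h) - S(rho - h)) / (2*h)
    d2 = (S(rho + h) - 2*S(rho) + S(rho - h)) / h**2
    d3 = (S(rho + 2*h) - 2*S(rho + h) + 2*S(rho - h) - S(rho - 2*h)) / (2*h**3)
    d4 = (S(rho + 2*h) - 4*S(rho + h) + 6*S(rho) - 4*S(rho - h) + S(rho - 2*h)) / h**4
    L    = 3.0  * rho    * d1   # slope parameter
    Ksym = 9.0  * rho**2 * d2   # curvature parameter
    Qsym = 27.0 * rho**3 * d3   # skewness parameter
    Zsym = 81.0 * rho**4 * d4   # kurtosis parameter
    return L, Ksym, Qsym, Zsym

for rho in (0.08, 0.16, 0.32):  # fm^-3
    print(rho, [round(v, 1) for v in expansion_coefficients(sym_energy, rho)])
```

With a realistic model for \(S(\rho)\) in place of the toy function, the same routine reproduces the kind of density bands plotted in Figs. 4 and 6–8.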
We plot the density dependence of the symmetry energy skewness parameter \(Q_{\rm sym}\) in Figure 7 against baryon number density for all models corresponding to different symmetry energy (\(J_{1}\)). Also, we compare with the range \(Q_{\rm sym,0}\in[-88,1736]\) MeV at saturation density obtained from the microscopic model in the RMF treatment with the minimal constraint imposed [26; 74]. The values of \(Q_{\rm sym,0}\) also agree with the recently obtained range (-607.65 \(-\) 783.21) MeV of Refs. [75; 76]. At saturation density, this parameter may have a wide range, but all of our models fall within the constraints. The density dependence of the skewness parameter \(Q_{\rm sym}\) shows a
Figure 5: (Color Online) The \(J_{0}-L_{0}\) phase space is plotted. Present model calculations have been shown with various other theoretical approaches such as microscopic, the Skyrme, the non-linear Walecka model, and the density-dependent mesonic couplings model (DDM). The different experimental constraints such as heavy-ion collisions (HIC) [69], isobaric analog states (IAS) [70], giant dipole resonances (GDR) [21] those obtained from fitting astrophysical M–R observations[71; 72], electric dipole polarizability [17] and neutron-skin thicknesses of Sn isotopes [73] are also displayed.
Figure 6: (Color Online) The curvature parameter \(K_{\rm sym}\) (Eq. 8) as a function of density (\(\rho_{\rm b}\)) is shown. The results of our models with different bands of \(K_{\rm sym}\) corresponding to symmetry energy (\(J_{1}\)) is displayed.
decreasing trend and becomes saturated at higher densities. However, all the models show nonlinearity around saturation density except for the model with \(J_{1}=24.6\) MeV. The nonlinearity is greater for models with \(J_{1}\) around 25 MeV. This is because we have fixed the value of the slope parameter \(L_{0}\) for all models. The density dependence of the symmetry energy kurtosis parameter \(Z_{\rm sym}\) against scaled baryon number density is shown in Fig. 8 for all models corresponding to different symmetry energy (\(J_{1}\)). We show the band \(Z_{\rm sym,0}\in[-19290,236]\) MeV obtained in Refs. [26; 74]. Except for the model with \(J_{1}=25.2\) MeV, all of our models show good agreement with the value at saturation density. The figure illustrates that the kurtosis parameter \(Z_{\rm sym}\) increases over density and becomes saturated at a higher density above 2 \(\times\rho_{0}\).
As mentioned earlier, the nuclear matter incompressibility \(K\) and the \(K_{\tau}\) are crucial in the determination of a reliable EOS, as they control the stiffness of the symmetric and asymmetric parts of the EOS separately. The parameters \(L_{0}\) and \(K_{\rm sym,0}\) predominantly determine the value of \(K_{\tau}\), which is related to the incompressibility of asymmetric nuclear matter. It is to be noted that various theoretical approaches fail to converge on a particular range of the nuclear incompressibility (\(K\)), and there is also disagreement with the experimental data that is available. For example, in the non-relativistic domain, random phase approximation (RPA) calculations indicate \(K\) in the range (210 - 220) MeV [77; 78], while microscopic Gogny effective interactions limit the value to 231\(\pm\)5 MeV [79]. Recently, a range of (240\(\pm\)20) MeV was reported using various experimental data on isoscalar giant monopole resonances [80]. Taken together, the mean value would come to be \(\sim 230\) MeV. Similarly, the constraints on \(K_{\tau}\) range from \(-840\) to \(-350\) MeV [81; 82]. In Fig. 9, we have shown the range in the \(K\)-\(K_{\tau}\) phase space for our model (marked with a red triangle) along with several other well-known models and experimental constraints. All our models predict the incompressibility (\(K\)) value at 231 MeV, and we have only modified the symmetry energy component, which affects \(K_{\tau}\) only. It can be seen that our model predictions agree with the majority of the theoretical approaches.
We extend our analysis to study the speed of sound of beta-equilibrated charge neutral matter inside the star. The behavior of \(c_{s}\) at such a high density is important because it controls the stiffness of the equation of state. In Fig. 10, we plot the square of the speed of sound as a function of density for all models mentioned in Table 2. We observe that all of our models give a monotonically increasing speed
Figure 8: (Color Online) The higher-order parameter \(Z_{\rm sym}\) (Eq. 10) as a function of density (\(\rho_{b}\)). Different bands of \(Z_{\rm sym}\) corresponding to symmetry energy \(J_{1}\) are displayed.
Figure 7: (Color Online) The skewness parameter \(Q_{\rm sym}\)(Eq.9) is plotted as a function of density (\(\rho_{b}\)). Different bands of \(Q_{\rm sym}\) corresponding to symmetry energy \(J_{1}\) are displayed.
Figure 9: (Color Online) The values of \(K_{\tau}\) and \(K\) of our model compared with different theoretical models [83; 84]. The horizontal and vertical dashed lines represent the empirical ranges of \(K_{\tau}\) and \(K\), respectively.
of sound up to four times saturation density, and above this density it saturates and tends to decrease towards \(\approx\sqrt{1/3}\)c. However, in all of our models, the rate of decrease is minimal (in the third decimal place only). Generally, most microscopic models within the RMF treatment show a monotonically increasing trend that saturates at a value higher than 0.4. For a better comparison, we show the same quantity obtained from other RMF models: FSU2 [85], BSR2 [33], and NL3 [86]. We also compare with the results obtained in Refs. [58; 59], where the authors calculated the possible domain of the speed of sound by imposing astrophysical and QCD constraints. It was shown that a knee-like feature forms at densities of approximately \(3\rho_{0}\), which is the critical density at which QCD softens the equation of state. The speed of sound, which is obtained from the equation of state, exhibits the effects of this property in the form of a peak-like structure. The speed of sound obtained with our parameter sets falls in the overlap region of the Astro and Astro+QCD constraints (mass-radius-tidal deformability). It can be seen from the figure that NL3 and BSR2 do not satisfy the constraints imposed by _astro_+pQCD (shaded light green), whereas they are within the region obtained with _astro_ constraints (shaded light purple) [58; 59]. It is to be noted that all of our models for different values of \(J_{1}\) noticeably agree with those constraints.
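As a complement to Fig. 10, the sound speed can be obtained directly from a tabulated \(\beta\)-equilibrated EOS as \(c_{s}^{2}/c^{2}=dP/d\varepsilon\). The short sketch below assumes a small, purely illustrative pressure–energy-density table (not one of the model EOSs of this work) and a simple finite-difference derivative.

```python
import numpy as np

# Hypothetical tabulated beta-equilibrated EOS: energy density eps and
# pressure P, both in MeV/fm^3; replace with an actual model table.
eps = np.array([150., 300., 450., 600., 750., 900.])
P   = np.array([  4.,  20.,  60., 130., 220., 330.])

# c_s^2 / c^2 = dP/d(eps), estimated on the grid by finite differences
cs2 = np.gradient(P, eps)
print(np.round(cs2, 3))

# Basic stability and causality checks: 0 < cs2 <= 1 everywhere on the table
assert np.all(cs2 > 0) and np.all(cs2 <= 1)
```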
## 4 Conclusions
Studies of dense matter physics are complicated because of incomplete knowledge of nuclear forces. In dense matter studies, chiral symmetry models are promising for capturing three-body forces, which may play a vital role [36; 37; 38]. On the other hand, the particle content in the underlying EOS is significantly influenced by the density-dependent symmetry energy. Several phases or compositions, such as hyperons, quarks, superconducting matter, and color-superconducting condensates, can be suitable candidates for a dense matter EOS. The value of the symmetry energy at saturation density is fairly well constrained by experimental and empirical nuclear physics knowledge; however, it is not well known at higher densities. In this work, we have done a case study employing the existing chiral model [55] to investigate the density-dependent behavior of the different slope parameters of the symmetry energy. We also investigate the behavior of the stiffness parameter of the EOS, the square of the speed of sound, for different values of \(J_{1}\).
In the present work, we bring out the essence of the density behavior of several higher-order symmetry energy parameters in the presence of various mesonic cross-couplings. At both high and low densities, mesonic cross couplings appear to be crucial in determining and controlling the EOS, especially when considering the symmetry energy features of the matter. For models with \(J_{1}\) varying in the range (23.4-25.2) MeV at a fixed value of \(L_{0}=60\) MeV, we found that the model with \(J_{1}=24.6\) MeV performs best against all the empirical constraints on the nuclear symmetry energy shown in this article. The nuclear matter incompressibility \(K\) and the \(K_{\tau}\), due to their independent influence over the symmetric and asymmetric components of the EOS, are essential in determining a reliable EOS.
The behavior of the square of the speed of sound \(c_{s}^{2}\) at such high densities is important because it controls the stiffness of the equation of state. Our model predictions for the square of the speed of sound fall within the overlap region of the Astro and QCD constraints (mass-radius-tidal deformability) obtained in Refs. [58; 59]. We find that up to four times the saturation density the models predict a monotonically increasing sound speed, and thereafter it saturates and tends toward the asymptotic limit \(\sqrt{1/3}\)c.
It is interesting to note that the square of the sound speed at high density is similar in all of our models, while the symmetry energy at high density has a spread and therefore implies an uncertainty in the composition. In order to fully understand this, a model-independent statistical calculation is required in the future, which is beyond the scope of this study.
Figure 10: (Color Online) The speed of sound \(c_{s}^{2}\) (in \(c^{2}\)) as a function of baryon density is shown. The colored lines correspond to a few selected RMF models, such as NL3, BSR2, and FSU2. For comparison, we display the speed-of-sound domain from Ref. [58] obtained with pQCD and astrophysical constraints (shaded light green and light purple regions).
## 5 Acknowledgements
N.K.P. would like to acknowledge the Department of Science and Technology (DST), India, for the support DST/INSPIRE Fellowship/2019/IF190058. |
2307.09658 | Numerical investigations of the orbital dynamics around a synchronous
binary system of asteroids | In this article, equilibrium points and families of periodic orbits in the
vicinity of the collinear equilibrium points of a binary asteroid system are
investigated with respect to the angular velocity of the secondary body, the
mass ratio of the system and the size of the secondary. We assume that the
gravitational fields of the bodies are modeled assuming the primary as a mass
point and the secondary as a rotating mass dipole. This model allows to compute
families of planar and halo periodic orbits that emanate from the equilibrium
points $ L_1 $ and $L_2$. The stability and bifurcations of these families are
analyzed and the results are compared with the results obtained with the
Restricted Three-Body Problem (RTBP). The results provide an overview of the
dynamical behavior in the vicinity of a binary asteroid system. | L. B. T. Santos, Allan Kardec de Almeida Jr, P. A. Sousa-Silva, M. O. Terra, D. M. Sanchez, S. Aljbaae, A. F. B. A. Prado, F Monteiro | 2023-07-18T22:00:23Z | http://arxiv.org/abs/2307.09658v1 | # Numerical Investigations of the Orbital Dynamics Around a Synchronous Binary System of Asteroids
###### Abstract
In this article, equilibrium points and families of periodic orbits in the vicinity of the collinear equilibrium points of a binary asteroid system are investigated with respect to the angular velocity of the secondary body, the mass ratio of the system and the size of the secondary. We assume that the gravitational fields of the bodies are modeled assuming the primary as a mass point and the secondary as a rotating mass dipole. This model allows to compute families of planar and halo periodic orbits that emanate from the equilibrium points \(L_{1}\) and \(L_{2}\). The stability and bifurcations of these families are analyzed and the results are compared with the results obtained with the Restricted Three-Body Problem (RTBP). The results provide an overview of the dynamical behavior in the vicinity of a binary asteroid system.
Key Words: Stability -- Celestial mechanics -- Minor planets, asteroids: general
## 1 Introduction
In recent years, the investigation and analysis of small celestial bodies have become fundamental to deep space exploration. Thus, understanding the dynamical behavior in the vicinity of small bodies is of great interest for the design of exploration missions and also for planetary science.
However, describing how a particle behaves around these objects is a challenging subject in astrodynamics, mainly due to the combination of the rapid rotation of the asteroids around their axis together with the non-spherical shapes.
In particular, an increasing number of binary asteroid systems has been observed throughout the Solar System and, in particular, among the near-Earth asteroids (NEAs). It is estimated that about 15% of NEAs larger than 0.3 km are binary systems (Pravec et al., 2006; Margot et al., 2015). Most of these binaries are formed by a more massive primary component, usually with nearly spherical shapes, and a small secondary component, generally referred to as satellite (Pravec et al., 2006; Pravec & Harris, 2007; Walsh et al., 2008; Zhang et al., 2020).
There are several types of binary asteroid systems, which have been grouped according to their physical properties (e.g. size, rotation, mass ratio, diameter) (Pravec & Harris, 2007). The characteristics of these groups also suggest different formation mechanisms. As shown by Pravec & Harris (2007), the Type A binary asteroids are composed of small NEAs, Mars crossers (MC), and Main-Belt Asteroids (MBA), with primary components less than 10 km in diameter and with a component size ratio (\(D_{s}/D_{p}\)) less than 0.6. The Type B, in turn, consists of small asteroids with nearly equal size components (\(D_{s}/D_{p}\)\(>\) 0.7) and with primary diameters smaller than 20 km. The Types L and W are, respectively, composed of large asteroids (\(D\)\(>\) 20 km) with relatively very small component size ratio (\(D_{s}/D_{p}\)\(<\) 0.2) and of small asteroids (\(D\)\(<\) 20 km) with relatively small satellites (\(D_{s}/D_{p}\)\(<\) 0.7) in wide mutual orbits.
Most Type A binary asteroids are synchronous systems, that is, the rotation period of the secondary component is equal to the orbital period around the center of mass of the system (Pravec et al., 2006, 2016). Numerical simulations revealed that binary systems are likely to undergo a chaotic process of energy dissipation involving tidal forces that allows the system to evolve to a fully synchronous end state (Jacobson & Scheeres, 2011). According to Jacobson & Scheeres (2011), the higher the mass ratio of the binary system, the faster the synchronization can be achieved. This happens because each member of the system exerts tidal forces of comparable magnitude on the other. Thus, as most systems have mass ratios less than 0.5, we find in the literature a larger number of systems with only the secondary component locked to the orbital motion (Pravec et al., 2016).
Performing semi-analytical and/or numerical investigations of the orbits and equilibrium solutions around asteroid systems using simplified models can be useful to provide some preliminary understanding of such systems (Werner, 1994; Liu et al., 2011). Simplified models can be used to approximate the gravitational field to irregularly shaped bodies, requiring less computational effort and generating considerable results in a short period of time. Another advantage of using a simplified model is that we can easily investigate the effects of a given parameter on the dynamics of a spacecraft around asteroids, such as, the distribution of stable periodic orbits (Lan et al., 2017), the stability of the equilibrium points (Zeng et al., 2015; Barbosa Torres dos Santos et al., 2017), as well as the permissible parking regions (Yang et al., 2015; Zeng et al., 2016). In addition, simplified models can be used to support the orbit design (Wang et al., 2017) and feedback control (Yang et al., 2017).
Due to these advantages and their considerable results, several simplified models have been proposed to study the orbital dynamics of a particle in the vicinity of irregular bodies. For example, Riaguas et al. (1999, 2001) analyzed the dynamics of a particle under the gravitational force of an asteroid modeled as a straight segment. Zeng et al. (2016) analyzed the influence of the parameters \(k\) (angular velocity) and \(\mu\) (mass ratio) on the equilibrium solutions using the rotating mass dipole model and observed that there are always 5 equilibrium points when considering the primary bodies as points of mass. Other works have investigated the dynamics around small irregular bodies using a simplified model given by a homogeneous cube (Liu et al., 2011), a simple flat plate (Blesa, 2006), a rotating mass dipole (Zeng et al., 2015; Barbosa Torres dos Santos et al., 2017; dos Santos et al., 2017), the dipole segment model (Zeng et al., 2018), a rotating mass tripole (Lan et al., 2017; dos Santos et al., 2020; Santos et al., 2021), and many others.
In particular, aiming to understand the dynamical environment in the vicinity of irregular bodies, Aljbaae et al. (2020) investigated the dynamics of a spacecraft around the asynchronous equal-mass binary asteroid (90) Antiope; the authors applied the Mascon gravity framework using the shaped polyhedral source (Chanut et al., 2015; Aljbaae et al., 2017) to account for the perturbation due to the polyhedral shape of the components. The perturbations of the solar radiation pressure at the perihelion and aphelion distances of the asteroid from the Sun are also considered in that study. In order to investigate the stability of periodic orbits, Chappaz & Howell (2015) considered the asynchronous binary asteroid system using the triaxial ellipsoid model and observed that the non-spherical shape of the secondary body significantly influences the behavior of the halo orbits around \(L_{1}\) and \(L_{2}\).
As said before, simplified models are useful to provide some preliminary understanding of the motion around binary systems, and the circular restricted three-body problem is suitable and often used to investigate the dynamics around small bodies (de Almeida Junior & Prado, 2022). Furthermore, even landing trajectories have been evaluated using a spherical shape for the gravitational field of the primaries in the circular restricted three-body problem
(Tardivel & Scheeres, 2013; Celik & Sanchez, 2017; Ferrari et al., 2016). Although the orbit-attitude coupled equations of motion for a binary asteroid can be obtained using a more sophisticated model, which takes into consideration a potential for a non-spherical distribution of mass (Scheeres et al., 2021; Wen & Zeng, 2022), they are only essential for very close encounters, such as landing approaches. In this study, the dynamics is investigated for orbits around the binary system of asteroids. Thus, in this contribution, a more simplified model is used, whose results capture the essential parts of the physics of the problem, although its accuracy depends on the parameters of the specific mission. Therefore, we carry out a numerical investigation using the simplified model called the Restricted Synchronous Three-Body Problem, as introduced by Barbosa Torres dos Santos et al. (2017). The practical advantage of using this model is that we can, in a relatively simple way, analyze the influence of the dimension of the secondary body on the dynamics of a spacecraft in the neighborhood of \(M_{2}\).
We focus on the behavior of a particle of negligible mass in the vicinity of a binary system of type A small bodies (NEAs and MBAs). The reason for choosing this class of asteroids is that the NEAs, in particular, are asteroids that pass near the Earth, and most of the systems in this class are synchronous systems. Our aim is to understand how the parameters of the dipole, namely the dimension (\(d\)) and the mass ratio (\(\mu^{*}\)) of the system, influence the stability, period, and bifurcations of the periodic orbits around the equilibrium points. In Section 2 we provide the equations of motion of the restricted synchronous three-body problem. In Section 3, we investigate the influence of the force ratio (\(k\)) on the existence and location of the equilibrium points, keeping the values of \(\mu^{*}\) and \(d\) fixed. Then, in Section 4, we investigate the influence of \(\mu^{*}\) and \(d\) on the periodic orbits (planar and halo) around the equilibrium points \(L_{1}\) and \(L_{2}\), considering \(k\) fixed (\(k=1\)). Finally, in Section 5, we present the final considerations obtained in this article.
## 2 Equations of Motion
Consider that the motion of a particle with negligible mass, \(P(x,\ y,\ z)\), is dominated by the gravitational forces of the primary bodies \(M_{1}\) and \(M_{2}\). As already mentioned, the distance between \(M_{1}\) and \(M_{2}\) is assumed to be \(D=12\) km, which will be the normalization factor in the rest of this work. The larger primary is considered to be a point mass with mass \(m_{1}\) and the secondary is modeled as a rotating mass dipole formed by \(m_{21}\) and \(m_{22}\), as shown in Figure 1.
In canonical units, the sum of the masses of the bodies \(M_{1}\) and \(M_{2}\) is unitary. In this work, for all numerical simulations, we assume that \(m_{1}>m_{21}=m_{22}\) and that the mass ratio is defined by \(\mu^{*}=m_{21}/(m_{1}+m_{21}+m_{22})\). By analogy, \(\mu^{*}=\mu/2\), with \(\mu\) being the usual mass ratio used in the classical restricted three-body problem.
The angular velocity, given by \(\boldsymbol{\omega}=\omega\mathbf{z}\), is aligned with the \(z\)-axis of the system. Here, the unit of time is defined such that the orbital period of
the primary bodies around the center of mass of the system is equal to \(\omega^{-1}\). Because the system is synchronous, the orbital period of \(M_{2}\) around the center of mass is the same as its orbital period around the axis of the dipole.
With respect to the barycentric rotating frame, the masses \(m_{1}\), \(m_{21}\) and \(m_{22}\) are fixed along the \(x\)-axis with coordinates \(x_{1}=-2\mu^{*}\), \(x_{21}=-2\mu^{*}-\frac{d}{2}+1\) and \(x_{22}=-2\mu^{*}+\frac{d}{2}+1\), respectively, where \(d\), given in canonical units, is the distance between \(m_{21}\) and \(m_{22}\).
Using the generalized potential
\[\Omega=\frac{x^{2}+y^{2}}{2}+k\left(\frac{1-2\mu^{*}}{r_{1}}+\frac{\mu^{*}}{r_ {21}}+\frac{\mu^{*}}{r_{22}}\right), \tag{1}\]
we can write the equations of motion of \(P\) in a rotating frame centered on the barycenter of the system (\(M_{1}\)-\(M_{2}\)) as follows:
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\\ \ddot{x}\\ \ddot{y}\\ \ddot{z}\end{bmatrix}=\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\\ 2\dot{y}+D_{x}\Omega\\ -2\dot{x}+D_{y}\Omega\\ D_{z}\Omega\end{bmatrix}, \tag{2}\]
Figure 1: Representative image of the geometric shape of the system considered (out of scale).
with
\[r_{1} =\sqrt{(x-x_{1})^{2}+y^{2}+z^{2}},\] \[r_{21} =\sqrt{(x-x_{21})^{2}+y^{2}+z^{2}},\] \[r_{22} =\sqrt{(x-x_{22})^{2}+y^{2}+z^{2}},\]
where \(D_{x}\Omega\) denotes the partial derivative of \(\Omega\) with respect to \(x\) and the same notation is used for \(y\) and \(z\). The dimensionless parameter \(k\) represents the ratio between gravitational and centrifugal accelerations, \(k=G(M)/(\omega^{*2}D^{3})\), where \(M\) is the total mass of the system in kg, \(\omega^{*}\) is the angular velocity of the \(M_{2}\) in rad/s, \(D\) is the distance, in meters, between \(M_{1}\) and the center of mass of \(M_{2}\) and, finally, \(G=6.67408\times 10^{-11}m^{3}kg^{-1}s^{-2}\)(Zeng et al., 2015; Feng et al., 2016).
The free parameters of the system are \(d\), \(\mu^{*}\) and \(k\), which correspond, respectively, to the size of \(M_{2}\), the mass ratio, and a parameter accounting for the rotation of the asteroid. When \(k\) is equal to 1, the bodies orbit the center of mass of the system without any internal forces in the dipole. On the other hand, when \(k~{}<~{}1\), the dipole is stretching, while it is compressing when \(k~{}>~{}1\). Therefore, depending on the class of the binary system being analyzed, we need to consider the force ratio value (\(k\)). A particular case occurs when \(d\) (the separation of the mass dipole) is equal to zero, causing the bodies of mass \(m_{21}\) and \(m_{22}\) to overlap, becoming a point of mass, with mass ratio \(2\mu^{*}\). The classical Restricted Three-Body Problem corresponds to the particular case \(d=0\) and \(k=1\) (McCuskey, 1963; Szebehely, 1967). Also, when \(d\neq 0\) and \(k=1\), we have the Restricted Synchronous Three-Body Problem (Barbosa Torres dos Santos et al., 2017).
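For reference, a minimal numerical sketch of Eqs. (1)–(2) is given below (Python with NumPy/SciPy). The parameter values \(\mu^{*}=10^{-3}\), \(d=1/12\), \(k=1\) and the sample initial state are illustrative only, and the gradient of \(\Omega\) is written out explicitly rather than taken from the expressions of the next subsection.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, mu=1e-3, d=1.0/12.0, k=1.0):
    """Right-hand side of Eq. (2); s = (x, y, z, xdot, ydot, zdot) and mu plays
    the role of mu* in the text. Default parameter values are illustrative."""
    x, y, z, vx, vy, vz = s
    x1  = -2.0*mu                   # position of m1
    x21 = -2.0*mu - d/2.0 + 1.0     # position of m21
    x22 = -2.0*mu + d/2.0 + 1.0     # position of m22
    r1  = np.sqrt((x - x1 )**2 + y*y + z*z)
    r21 = np.sqrt((x - x21)**2 + y*y + z*z)
    r22 = np.sqrt((x - x22)**2 + y*y + z*z)
    # Gradient of the generalized potential Omega of Eq. (1)
    Ox = x - k*((1 - 2*mu)*(x - x1)/r1**3 + mu*(x - x21)/r21**3 + mu*(x - x22)/r22**3)
    Oy = y - k*y*((1 - 2*mu)/r1**3 + mu/r21**3 + mu/r22**3)
    Oz =   - k*z*((1 - 2*mu)/r1**3 + mu/r21**3 + mu/r22**3)
    return [vx, vy, vz, 2*vy + Ox, -2*vx + Oy, Oz]

# Short propagation of an arbitrary state near the secondary (values illustrative)
s0 = [0.98, 0.0, 0.0, 0.0, 0.01, 0.0]
sol = solve_ivp(rhs, (0.0, 3.0), s0, rtol=1e-10, atol=1e-12)
print(sol.y[:3, -1])
```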
### Equilibrium point and stability analysis
Let \(\mathbf{x}=(x,~{}y,~{}z,~{}\dot{x},~{}\dot{y},~{}\dot{z})\in\mathbb{R}^{6}\) be the state vector of a massless particle and \(f:\mathbb{R}^{6}~{}\rightarrow~{}\mathbb{R}^{6}\) be
\[f(\mathbf{x})=\begin{bmatrix}f_{1}\\ f_{2}\\ f_{3}\\ f_{4}\\ f_{5}\\ f_{6}\end{bmatrix}=\left[\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{z}\\ 2\dot{y}+D_{x}\Omega\\ -2\dot{x}+D_{y}\Omega\\ D_{z}\Omega\end{array}\right]. \tag{3}\]
The equilibrium points \(L_{i}\), \(i=1,2,3,4,5\), are defined as the zeros of \(f(\mathbf{x})\). To determine the linear stability of each equilibrium, one needs to translate the origin to the position of this equilibrium point and linearize the equations of motions around this point. Thus, the linearization over any of these equilibrium points is
\[\dot{\mathbf{x}}=D_{L_{i}}\mathbf{x} \tag{4}\]
where \(D_{L_{i}}\) is the derivative of \(f({\bf x})\) computed at the equilibrium point \(L_{i}\).
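A minimal sketch of this procedure is given below. It restricts the search for the collinear equilibria to the \(x\)-axis, brackets them by inspecting the sign of \(D_{x}\Omega\) for the illustrative values \(\mu^{*}=10^{-3}\), \(d=1/12\), \(k=1\) (the brackets must be adjusted for other parameters), and estimates the Jacobian \(D_{L_{i}}\) by finite differences of the rhs(t, s) function defined in the sketch above rather than from analytic second derivatives.

```python
import numpy as np
from scipy.optimize import brentq

def accel_x(x, mu=1e-3, d=1.0/12.0, k=1.0):
    """x-component of D_x(Omega) restricted to the x-axis (y = z = 0)."""
    x1, x21, x22 = -2*mu, -2*mu - d/2 + 1, -2*mu + d/2 + 1
    g = lambda xc: (x - xc) / abs(x - xc)**3
    return x - k*((1 - 2*mu)*g(x1) + mu*g(x21) + mu*g(x22))

# Brackets picked by inspecting the sign of accel_x for mu* = 1e-3, d = 1/12, k = 1
L1 = brentq(accel_x, 0.85, 0.95)    # between M1 and the near end of the dipole
L2 = brentq(accel_x, 1.05, 1.30)    # beyond M2
L3 = brentq(accel_x, -1.30, -0.50)  # beyond M1
print(L1, L2, L3)

# Linear stability (Eq. 4): eigenvalues of the 6x6 Jacobian of f at an equilibrium,
# estimated here by finite differences of rhs(t, s) from the previous sketch.
def jacobian(xeq, h=1e-7):
    s_eq = np.array([xeq, 0, 0, 0, 0, 0], float)
    J = np.zeros((6, 6))
    for j in range(6):
        dp, dm = s_eq.copy(), s_eq.copy()
        dp[j] += h; dm[j] -= h
        J[:, j] = (np.array(rhs(0.0, dp)) - np.array(rhs(0.0, dm))) / (2*h)
    return J

print(np.linalg.eigvals(jacobian(L1)))  # expected pattern: saddle x centre x centre
```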
To determine the linear stability of the equilibrium points (\(L_{i}\), \(i=1,2,3,4\) and 5), it is necessary to transfer the origin of the coordinate system to the position of the equilibrium points (\(x_{0},y_{0},z_{0}\)) and then linearize the equations of motion around these points, obtaining the results shown below.
\[\ddot{\xi}-2\dot{\eta}=\Omega_{xx}(x_{0},y_{0},z_{0})\,\xi+\Omega_{xy}(x_{0},y_{0},z_{0})\,\eta+\Omega_{xz}(x_{0},y_{0},z_{0})\,\zeta \tag{5}\]

\[\ddot{\eta}+2\dot{\xi}=\Omega_{yx}(x_{0},y_{0},z_{0})\,\xi+\Omega_{yy}(x_{0},y_{0},z_{0})\,\eta+\Omega_{yz}(x_{0},y_{0},z_{0})\,\zeta \tag{6}\]

\[\ddot{\zeta}=\Omega_{zx}(x_{0},y_{0},z_{0})\,\xi+\Omega_{zy}(x_{0},y_{0},z_{0})\,\eta+\Omega_{zz}(x_{0},y_{0},z_{0})\,\zeta \tag{7}\]
where the partial derivatives in (\(x_{0},y_{0},z_{0}\)) mean that the value is calculated at the equilibrium point being analyzed. Partial derivatives are shown in Equations 8 - 13.
\[\Omega_{xx}=1+k\bigg{[}\frac{3(1-2\mu^{*})(x-x_{1})^{2}}{((x-x_{1})^{2}+y^{2}+z^{2})^{5/2}}-\frac{1-2\mu^{*}}{((x-x_{1})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}(x-x_{21})^{2}}{((x-x_{21})^{2}+y^{2}+z^{2})^{5/2}}-\frac{\mu^{*}}{((x-x_{21})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}(x-x_{22})^{2}}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}}-\frac{\mu^{*}}{((x-x_{22})^{2}+y^{2}+z^{2})^{3/2}}\bigg{]}, \tag{8}\]
\[\Omega_{yy}=1+k\bigg{[}\frac{3(1-2\mu^{*})y^{2}}{((x-x_{1})^{2}+ y^{2}+z^{2})^{5/2}}-\frac{1-2\mu^{*}}{((x-x_{1})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}y^{2}}{((x-x_{21})^{2}+y^{2}+z^{2})^{5/2}}-\frac{ \mu^{*}}{((x-x_{21})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}y^{2}}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}}-\frac{ \mu^{*}}{((x-x_{22})^{2}+y^{2}+z^{2})^{3/2}}\bigg{]}, \tag{9}\]
\[\Omega_{zz}=k\bigg{[}\frac{3(1-2\mu^{*})z^{2}}{((x-x_{1})^{2}+y^ {2}+z^{2})^{5/2}}-\frac{1-2\mu^{*}}{((x-x_{1})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}z^{2}}{((x-x_{21})^{2}+y^{2}+z^{2})^{5/2}}-\frac{ \mu^{*}}{((x-x_{21})^{2}+y^{2}+z^{2})^{3/2}}+\\ \frac{3\mu^{*}z^{2}}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}}-\frac{ \mu^{*}}{((x-x_{22})^{2}+y^{2}+z^{2})^{3/2}}\bigg{]}, \tag{10}\]
\[\Omega_{xy}=\Omega_{yx}=k\bigg{[}\frac{3(1-2\mu^{*})(x-x_{1})y}{((x-x_{1})^{2}+y^{2}+z^{2})^{5/2}}+\frac{3\mu^{*}(x-x_{21})y}{((x-x_{21})^{2}+y^{2}+z^{2})^{5/2}}+\\ \frac{3\mu^{*}(x-x_{22})y}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}}\bigg{]}, \tag{11}\]
\[\Omega_{xz}=\Omega_{zx}=k\bigg{[}\frac{3(1-2\mu^{*})(x-x_{1})z}{(( x-x_{1})^{2}+y^{2}+z^{2})^{5/2}}+\frac{3\mu^{*}(x-x_{21})z}{((x-x_{21})^{2}+y^{2}+z ^{2})^{5/2}}+\\ \frac{3\mu^{*}(x-x_{22})z}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}} \bigg{]}, \tag{12}\]
\[\Omega_{yz}=\Omega_{zy}=k\bigg{[}\frac{3(1-2\mu^{*})yz}{((x-x_{1})^{2}+y^{2}+ z^{2})^{5/2}}+\frac{3\mu^{*}yz}{((x-x_{21})^{2}+y^{2}+z^{2})^{5/2}}+\\ \frac{3\mu^{*}yz}{((x-x_{22})^{2}+y^{2}+z^{2})^{5/2}}\bigg{]}. \tag{13}\]
In Equations 5 - 7, \(\xi\), \(\eta\) and \(\zeta\) represent the position of the particle with respect to the equilibrium point. Through numerical analysis, we observed that the equilibrium points exist only in the \(xy\) plane, regardless of the values assigned to \(d\), \(\mu^{*}\) and \(k\). Because the equilibrium points of the rotating mass dipole model lie in the \(xy\) plane, Equation 7 decouples (it does not depend on \(\xi\) and \(\eta\)); therefore, the equation of motion 7 becomes
\[\ddot{\zeta}=-\vartheta\zeta \tag{14}\]
where \(\vartheta\) is a constant that depends on the values assigned to \(d\), \(\mu^{*}\) and \(k\). Equation 14 shows that the motion perpendicular to the \(xy\) plane is periodic with frequency \(\omega=\sqrt{\vartheta}\). Motion in the \(z\) direction is therefore bounded, with
\[\zeta=c_{3}\cos(\sqrt{\vartheta}\,t)+c_{4}\sin(\sqrt{\vartheta}\,t) \tag{15}\]
where \(c_{3}\) and \(c_{4}\) are integration constants.
When the motion is in the \(xy\) plane, the non-trivial characteristic roots of Equations 5 and 6 were obtained in Barbosa Torres dos Santos et al. (2017) (considering \(k=1\)). The linearization around \(L_{1}\) and \(L_{2}\) provides a pair of real eigenvalues (\(saddle\)), corresponding to one-dimensional stable and unstable manifolds, and one pair of imaginary eigenvalues, suggesting a two-dimensional central subspace in the \(xy\) plane, which accounts for an oscillatory behavior around the equilibrium point of the linear system (Howell, 1982; Haapala et al., 2015). Hence, in general, for \(L_{1}\) and \(L_{2}\), the stability type is \(saddle\times center\times center\) for the problem studied here, as it is for the CRTBP considering \(0<\mu\leq 0.5\). The Lyapunov Center Theorem guarantees, for the planar case, the existence of a one-parameter family of periodic orbits emanating from each of the collinear equilibrium points. Thus, for the spatial case, two one-parameter families of periodic orbits around \(L_{1}\) and \(L_{2}\) are expected. It was observed that the nature of the eigenvalues of the collinear equilibrium points is not altered when we vary \(d\), \(\mu^{*}\) and \(k\).
Consider the linearized dynamics around the \(L_{1}\) equilibrium point. We will adopt the coordinates \({\bf x^{\prime}}=(\xi;\,\eta;\,u;\,v)\), where \(u\) and \(v\) are the velocities in the \(x\) and \(y\) directions, respectively, for the physical variables in the linearized planar system. To differentiate, we will use the coordinates \({\bf x_{0}}=(x;\,\,y;\,\dot{x};\,\dot{y})\) for the physical variables in the nonlinear system and, finally, \({\bf y}=(y_{1};\,y_{2};\,y_{3};\,y_{4})\) for the variables in the diagonalized system. We know that if we choose an initial condition anywhere near the equilibrium point, the real components of the eigenvalues (stable and unstable) will dominate the particle's behavior. But instead of specifying an arbitrary initial condition for the system, we want to find an orbit around the equilibrium point \(L_{1}\), for example, with some desired behavior, such as a periodic orbit. This becomes easy if we use the diagonalized system (\({\bf y_{0}}\)) to determine the initial conditions. As we want to minimize the component in the unstable direction of the non-linear path, we must choose the initial conditions that correspond to the harmonic motion of the linear system. Thus, we choose the initial condition in the diagonalized system as \({\bf y_{0}}=(0;\,0;\,y_{3};\!y_{4})\), where the non-zero initial values can be complex numbers and are intended to excite the oscillatory terms. The null terms have the function of nullifying the exponential (stable and unstable) terms. In fact, if we want to obtain real solutions in the \({\bf x^{\prime}}\) coordinates, we must consider \(y_{3}\) and \(y_{4}\) as complex conjugates. Transforming these conditions back to the original coordinates of the linear system, through the transformation \({\bf x_{0}}=T{\bf y_{0}}\), we find the initial conditions in the linearized system \({\bf x^{\prime}_{0}}=(x^{\prime},\,y^{\prime},\,u,\,v)\), where \(T\) is the matrix of eigenvectors of the system matrix \(A\). The Jacobian matrix \(A\) contains the pseudo-potential Hessian, derived from the truncated Taylor series expansion about the reference solution.
Due to the fact that these initial conditions were chosen to nullify the unstable and stable eigenvectors, they provide a harmonic movement in the linear system.
Now that we have the initial conditions for the linear system, we want to find a periodic planar orbit in the nonlinear system.
We can note that the potential function for the system studied here depends only on the distances of the spacecraft from the primary bodies, that is, it has symmetry with respect to the \(x\)-axis. Taking advantage of the fact that the planar orbits are symmetrical with respect to the \(x\)-axis, the initial state vector takes the form \({\bf x_{0}}=[x_{0},\,0,\,0,\,\dot{y}_{0}]^{T}\). These symmetries were used to find symmetric periodic orbits. This is done by determining initial conditions on the \(x\)-axis, where the initial velocity is perpendicular to this axis (\(\dot{y}\)), and then integrating until the path crosses the \(x\)-axis again with the velocity orientation \(\dot{y}_{f}\) opposite to the initial condition. This orbit can be used as an initial guess for Newton's method, where the target state is the one quoted above; that is, the orbit must return to the \(x\)-axis with velocity normal to it. The equations of motion and the State Transition Matrix are integrated numerically until the trajectory crosses the \(x\)-axis again. The final desired condition has the following form \({\bf x_{f}}=[x_{f},\,0,\,0,\,\dot{y}_{f}]^{T}\).
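A minimal single-shooting sketch of this symmetric corrector is given below. It reuses the rhs(t, s) function from the sketch in Section 2, replaces the State-Transition-Matrix entries by a simple finite-difference derivative, assumes \(\dot{y}_{0}>0\) so that the half-period crossing is the next downward crossing of the \(x\)-axis, and its tolerances and step sizes are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cross_y(t, s):
    return s[1]
cross_y.terminal = True
cross_y.direction = -1.0    # next downward crossing of y = 0 (assumes ydot0 > 0)

def half_period_state(x0, vy0, tmax=10.0):
    """Propagate (x0, 0, 0, 0, vy0, 0) until the trajectory recrosses the x-axis."""
    sol = solve_ivp(rhs, (0.0, tmax), [x0, 0, 0, 0, vy0, 0],
                    events=cross_y, rtol=1e-11, atol=1e-13)
    return sol.t_events[0][0], sol.y_events[0][0]

def correct_planar(x0, vy_guess, tol=1e-10, dvy=1e-7, itmax=25):
    """Adjust ydot0 (x0 held fixed) until xdot = 0 at the half-period crossing."""
    vy = vy_guess
    for _ in range(itmax):
        _, sf = half_period_state(x0, vy)
        if abs(sf[3]) < tol:                     # sf[3] is xdot at the crossing
            return vy
        _, sp = half_period_state(x0, vy + dvy)  # finite-difference sensitivity
        vy -= sf[3] / ((sp[3] - sf[3]) / dvy)    # 1D Newton step
    return vy
```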
## 3 Collinear equilibrium points as a function of the ratio between gravitational and centrifugal accelerations
In this section, we analyze the influence of the parameter \(k\) on the position of the collinear equilibrium points, since the influence of \(d\) and \(\mu^{*}\) on the collinear points has already been analyzed in Barbosa Torres dos Santos et al. (2017).
To determine how \(k\) affects the positions of the collinear equilibrium points, we consider \(\mu^{*}~{}=~{}1\times 10^{-3}\) and \(d~{}=~{}1/12\) canonical units.
Figure 2 shows the \(x\) coordinates of \(L_{1}\), \(L_{2}\) and \(L_{3}\) as a function of \(k\). Because they are at both ends of the \(x\) axis, the positions of \(L_{2}\) (right curve) and \(L_{3}\) (left curve) are more affected than the position of \(L_{1}\). Consider that there are three forces acting on the system: (i) the gravitational force of \(M_{1}\); (ii) the gravitational attraction of \(M_{2}\); and (iii) the centrifugal force, which is directly proportional to the angular velocity of the system around the center of mass and to the distance between the equilibrium point and the center of mass of the system. Thus, as the angular velocity of the asteroid system around the center of mass decreases (i.e., as \(k\) becomes larger), it is necessary to increase the distance between \(P\) and the center of mass so that the centrifugal force keeps counterbalancing the gravitational forces from \(M_{1}\) and \(M_{2}\), which remain unchanged. Thus, \(L_{2}\) and \(L_{3}\) move away from the center of mass of the system. Although \(L_{1}\) also moves away from the center of mass of the system, it does so in a more subtle way. This is because, when moving away from the center of mass of the system, \(L_{1}\) approaches \(M_{2}\); since the gravitational attraction of \(M_{2}\) then increases, only a small displacement is needed to restore the balance, which prevents \(L_{1}\) from moving too close to \(M_{2}\).
As shown in Figure 2, the \(x\) coordinates of \(L_{2}\) and \(L_{3}\) tend to \(\pm~{}\infty\), respectively, when \(k~{}\rightarrow~{}\infty\), that is, when the asteroid system ceases to rotate. This implies that \(L_{2}\) and \(L_{3}\) cease to exist when the asteroids are static. On the other hand, the equilibrium point \(L_{1}\) continues to exist when \(k~{}\rightarrow~{}\infty\), due to the balance between the gravitational forces between \(M_{1}\) and \(M_{2}\).
## 4 Periodic orbits around the first and second collinear equilibrium points as a function of the mass parameter and the size of the dipole
Based on previous knowledge about Type A asteroids, we consider that the most massive primary is spherical in shape and with a diameter of 5 km (Pravec & Harris, 2007; Walsh & Jacobson, 2015). Also, knowing that, on average, the mutual orbit of type A binary asteroids has a semi-major axis of about 4.8 primary component radii (Walsh & Jacobson, 2015), we consider that the distance between the bodies is 12 km, which is the normalization factor for the distances. Finally, type A asteroids are known to have moderately sized secondaries, ranging from 4% to 58% of the size of the primary, whose mass
ratio (\(m_{2}/(m_{1}+m_{2})\)) varies from \(6.4\times 10^{-5}\) to \(2.0\times 10^{-1}\). Based on this evidence, we will consider in this analysis secondary-body dimensions from 0 to 2 km, varied in steps of 500 meters, and mass ratios from 1\(\times 10^{-5}\) to 1\(\times 10^{-1}\), varied in factors of 10.
Periodic orbits are of special interest to explore the dynamical behavior of a massless particle in the vicinity of two primary bodies.
The results below were obtained by calculating approximately 3500 orbits from each family, starting from an initial condition with very low amplitude, and continuing the families until the orbits obtained came near the surface of the asteroids. To find symmetric periodic orbits, we consider \(k=1\), that is, the bodies orbit the center of mass of the system without any internal forces.
Each family was calculated for different values of \(\mu^{*}\) and \(d\) to highlight the effect of the mass ratio of the system and of the elongated shape of the secondary body on the dynamical behavior of a space vehicle in the vicinity of the binary system. To analyze the influence of the elongation of the secondary body on the periodic orbits, we determined the periodic orbits considering the values \(d=\) 0; 0.5; 1; 1.5 and 2 km. Also aiming to understand the influence of \(\mu^{*}\) on the periodic orbits, we determine the periodic orbits considering the values \(\mu^{*}\) = \(10^{-5}\); \(10^{-4}\); \(10^{-3}\); \(10^{-2}\) and \(10^{-1}\).
We are interested in the stability of the periodic solutions, which can be determined by analyzing the eigenvalues of the monodromy matrix. Given the symplectic nature of the dynamical system, if \(\lambda\) is a characteristic multiplier, then so are \(1/\lambda\), \(\overline{\lambda}\) and \(1/\overline{\lambda}\). Thus, the periodic solutions investigated have six characteristic multipliers that appear in reciprocal pairs, with two of them being unitary (Meyer & Hall 1992; Bosanac 2016). The other
Figure 2: \(x\)-coordinates of the equilibrium points \(L_{1}\), \(L_{2}\) and \(L_{3}\) for different values of \(k\).
four may be associated with the central subspace or with the stable/unstable subspace. In general, a particular orbit has six characteristic multipliers of the form 1, 1, \(\lambda_{1}\), 1/\(\lambda_{1}\), \(\lambda_{2}\) and 1/\(\lambda_{2}\).
The stability indices offer a useful measure of orbital stability. Following Broucke (1969), the stability index is defined as \(s_{i}\) = \(|\lambda_{i}\) + 1/\(\lambda_{i}|\), \(i=1,2\). A periodic orbit is unstable, with a natural flow out of and into the orbit, if any stability index is greater than 2, that is, if \(s_{i}>2\). On the other hand, a periodic orbit is stable and has no unstable subspace if both indices satisfy \(s_{i}<2\) (Zimovan-Spreen et al. 2020). The magnitude of the stability index is directly related to the arrival/departure flow rate. The higher the value of \(s_{i}\), the more unstable the periodic orbit is, and bifurcations can occur when \(s_{i}=2\).
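A minimal sketch of this computation is shown below. It reuses rhs(t, s) from the earlier sketch and approximates the monodromy matrix by finite differences of the period-\(T\) flow map instead of integrating the variational equations, so it is only a rough stand-in for the STM-based procedure used in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(s0, T):
    """State after one period T, using rhs(t, s) from the earlier sketch."""
    return solve_ivp(rhs, (0.0, T), s0, rtol=1e-11, atol=1e-13).y[:, -1]

def monodromy(s0, T, h=1e-7):
    # Finite-difference stand-in for the state transition matrix over one period;
    # for strongly unstable orbits the variational equations should be used instead.
    M = np.zeros((6, 6))
    for j in range(6):
        dp, dm = np.array(s0, float), np.array(s0, float)
        dp[j] += h; dm[j] -= h
        M[:, j] = (flow(dp, T) - flow(dm, T)) / (2*h)
    return M

def stability_indices(s0, T):
    lam = np.linalg.eigvals(monodromy(s0, T))
    # Six values come in reciprocal pairs; the trivial pair gives s close to 2
    return np.sort(np.abs(lam + 1.0/lam))
```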
Given that the periodic orbits growing from the collinear points inherit the stability properties of \(L_{1}\), \(L_{2}\), and \(L_{3}\), the eigenvalues of the monodromy matrix of these orbits and the corresponding stability indices appear as: (i) a trivial pair of unitary values, resulting in \(s_{0}\) = 2; (ii) a real pair of reciprocals, resulting in \(s_{1}>2\); and (iii) a pair of complex conjugate eigenvalues with unitary absolute value, implying \(s_{2}<2\). Thus, for the subsets of the periodic orbit (PO) families near the equilibria, \(s_{1}\) is related to the stable/unstable subspace (\(\lambda^{W_{s}}\)/ \(\lambda^{W_{u}}\)), while \(s_{2}\) is the stability index corresponding to the pair accounting for the central subspace.
### Planar orbits
Figure 3 shows a family of planar orbits around \(L_{1}\) with \(\mu^{*}=10^{-5}\) and \(d\) = 0. The orbits obtained do not intersect the asteroid although, as seen in Figure 3, as the amplitude increases along the family, the orbits expand from the vicinity of the equilibrium point towards the surface of the secondary body (black asterisk).
In Figure 3, the red orbits indicate where bifurcations occur, that is, when one of the stability indices \(s_{1}\) or \(s_{2}\) reaches the critical value 2. Note in Figure
Figure 3: Planar orbits around of the equilibrium point \(L_{1}\) considering \(\mu^{*}=10^{-5}\) and \(d\) = 0.
3 that, when the second bifurcation occurs, the maximum position reached by the infinitesimal-mass body along the \(x\) component lies beyond the position of the secondary body.
Although many types of bifurcations exist in dynamical systems, only two are of particular interest for the focus of this work: the pitchfork and period-doubling bifurcations.
A family of periodic orbits undergoes a pitchfork bifurcation when the stability of the periodic orbit changes as a parameter evolves, which in our case is the energy constant. During this type of local bifurcation, a pair of (non-trivial) eigenvalues of the monodromy matrix passes through the critical value \(\lambda_{1}=1/\lambda_{1}\) (or \(\lambda_{2}=1/\lambda_{2}\)) = + 1 of the unit circle. Consequently, the stability index passes through \(s_{1}\) (or \(s_{2}\)) = 2 (Bosanac 2016). In addition, when the stability of the periodic orbits changes along a family, an additional family of similar period is formed. This new family of orbits has the same stability as the members of the original family before the bifurcation arose. On the other hand, a period-doubling bifurcation is identified when a pair of non-trivial eigenvalues (\(\lambda_{1,2}\) and \(1/\lambda_{1,2}\), where \(\lambda_{1,2}\) means \(\lambda_{1}\) or \(\lambda_{2}\)) passes through \(\lambda_{1,2}=1/\lambda_{1,2}\) = - 1 of the unit circle. Therefore, it represents a critical value of the stability index, with \(\lambda_{1,2}+1/\lambda_{1,2}=-2\) (Bosanac 2016).
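The sketch below illustrates one simple way to flag such candidate bifurcations along a precomputed family: it checks whether any non-trivial multiplier of the monodromy matrix (approximated as in the previous sketch) sits near +1 or -1. The tolerance and the heuristic removal of the trivial unit pair are assumptions of this sketch, not the procedure of the text.

```python
def flag_bifurcations(family, tol=1e-3):
    """family: list of (initial state, period) along a PO family, ordered by amplitude.
    Flags members whose non-trivial multipliers sit near +1 (tangent/pitchfork
    candidates) or near -1 (period-doubling candidates)."""
    flags = []
    for i, (s0, T) in enumerate(family):
        lam = np.linalg.eigvals(monodromy(s0, T))
        # crude removal of the trivial unit pair: drop the two multipliers closest to +1
        lam = sorted(lam, key=lambda z: abs(z - 1.0))[2:]
        near_p1 = any(abs(z - 1.0) < tol for z in lam)
        near_m1 = any(abs(z + 1.0) < tol for z in lam)
        if near_p1 or near_m1:
            flags.append((i, "+1" if near_p1 else "-1"))
    return flags
```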
When building the families of planar orbits, with \(d~{}=~{}0\) and \(\mu^{*}\) = \(10^{-5}\), we observe that the stability index (\(s_{2}\)) reaches the critical value three times for planar orbits around \(L_{1}\) and \(L_{2}\), as seen in Figure 4, where the horizontal axis displays the minimum \(x\) value along the orbits. The stability index \(s_{1}\) does not reach the critical value for these values of \(\mu^{*}\) and \(d\).
For \(\mu^{*}\) = \(10^{-5}\), the equilibrium point \(L_{1}\) is located at \(x~{}=~{}0.981278\), \(y~{}=~{}0\) and \(z~{}=~{}0\), while \(L_{2}\) is at position \(x~{}=1.01892\), \(y~{}=~{}0\) and \(z~{}=~{}0\). The orbits with smaller amplitudes are close to the equilibrium point (right side of Figure 4) and the first bifurcation occurs for a small amplitude orbit (\(x~{}\approx 0.97583\) for \(L_{1}\) and \(x~{}\approx 1.01577\) for \(L_{2}\)). As we continue the planar family, the stability index \(s_{2}\) shown in Figure 4 continues to increase, reaches a maximum, decreases and reaches the value 2 again, where another bifurcation occurs, with \(x~{}\approx 0.99408\) for \(L_{1}\) and \(x~{}\approx 1.0056\) for \(L_{2}\). As we continue the families of planar orbits around \(L_{1}\) and \(L_{2}\), the stability index decreases, reaches a minimum, increases again, and reaches the critical value 2, where another bifurcation occurs. After the third bifurcation, the stability index further increases and we did not detect additional bifurcations given that, as the orbits get very close to the center of mass of the secondary body, our Newton method loses track of the planar orbits, converging to a completely different family of orbits.
Figures 5 (a), (b) and (c) provide information about the types of the bifurcations that occur along the family of planar orbits. For \(\mu^{*}\) = \(10^{-5}\) and \(d\) = 0, analyzing the path of the characteristic multipliers in Figures 5 (a) and (b), we find that the first bifurcation is a supercritical pitchfork bifurcation, while the second one corresponds to a subcritical pitchfork case. This suggests that new families of periodic orbits appear in those regions when
Figure 4: Stability index (\(s_{2}\)) around \(L_{1}\) (red) and \(L_{2}\) (green) considering \(d~{}=~{}0\) and \(\mu^{*}=10^{-5}\).
the bifurcation occurs (Feng et al., 2016). In fact, after the first bifurcation (low amplitude periodic orbit), it is possible to detect halo orbits, while after the second bifurcation the family of axial orbits appears (Grebow, 2006). Unlike the planar Lyapunov orbits, halo and axial orbits are three-dimensional.
Figure 5 (c) shows the behavior of the eigenvalues at the third bifurcation. The characteristic multipliers start in the imaginary plane and move until they collide on the negative real axis and start to obtain only real values on the negative axis. Thus, the eigenvalues indicate a period-doubling bifurcation.
Figures 6 and 7 provide information about the stability index (considering the values of \(s_{2}\)) around \(L_{1}\) and \(L_{2}\), respectively, when we increase the dipole dimension from 0 meters, that is, the body modeled as a mass point, up to a dimension of 2000 meters. In this analysis we keep the mass ratio constant at the value \(\mu^{*}~{}=~{}10^{-5}\). When we consider the dipole as a point mass body (\(d=0\)), it is possible to observe three bifurcations (the red curve passes through the critical value three times). We can observe that, as we increase the dimension of the secondary, the second bifurcation points in
Figure 5: (a) Behavior of the characteristic multipliers at the first pitchfork bifurcation around \(L_{1}\) and \(L_{2}\). (b) Behavior of the characteristic multipliers at the second pitchfork bifurcation around \(L_{1}\) and \(L_{2}\). (c) Behavior of the characteristic multipliers that leads to the period-doubling bifurcation around \(L_{1}\) and \(L_{2}\). In these cases we consider \(d~{}=~{}0\) and \(\mu^{*}=10^{-5}\).
the planar orbits around \(L_{1}\) and \(L_{2}\) cease to exist because the trajectories collide with the secondary body. This is because, as the dimension of the dipole varies and the planar orbits approach the secondary body, our Newton method loses track of the planar orbits, converging to a completely different family of orbits. Note that the larger the dipole size, the smaller the extent of the planar orbit family found.
### Influence of the mass parameter and the size of the dipole on the planar orbits
Now, we investigate how the planar orbits evolve as a function of the dipole size and mass ratio in canonical units. With the normalization factor being \(D=12000\) meters, the dipole sizes used in our study were \(d=\) 0, 500, 1000, 1500 and 2000 meters.
Figure 8 provides information about the stability index \(s_{1}\) of the planar orbits around \(L_{1}\) as a function of \(d\) and \(\mu^{*}\). In the figure, the color code accounts for the size of the dipole (\(d\)).

First, we investigate the solutions as \(d\) varies and \(\mu^{*}\) is kept constant. Note that, in Figure 8, in general, when the size of the dipole increases, the planar orbits become more unstable. This means that the larger the secondary body, the more unstable the planar orbits are.
Figure 6: Planar orbit stability index around \(L_{1}\) for different values of \(d\).
Figure 8: Stability index (\(s_{1}\)) of the planar orbits around \(L_{1}\) for different values of \(d\) and \(\mu^{*}\).
Figure 7: Planar orbit stability index around \(L_{2}\) for different values of \(d\).
If we consider \(d=0\), which corresponds to the CRTBP, we observe that, as \(\mu^{*}\) increases, the orbits become increasingly unstable. On the other hand, when the elongated form of the secondary body is taken into account, \(s_{1}\) becomes smaller as \(\mu^{*}\) increases, and it only increases again after \(\mu^{*}=10^{-1}\). This information is important for space missions, since a high value in the stability index (\(s_{i}\)) indicates a divergent mode that moves the spacecraft away from the vicinity of the orbit quickly. In general, the stability index is directly related to the space vehicle's orbital maintenance costs and inversely related to the transfer costs. This same analysis was performed around the \(L_{2}\) equilibrium point, where we found similar results.
Next, we analyze the period of the planar orbits in terms of \(d\) and \(\mu^{*}\) around \(L_{1}\). As shown in Figure 9, for low amplitudes, as \(d\) increases with \(\mu^{*}\) kept constant, the period of the planar orbits decreases. This is because the mass distribution of the secondary body allows part of the mass of the asteroid to be closer to the negligible-mass particle, causing the gravitational attraction to become larger, thus increasing the acceleration in the vicinity of the secondary body and decreasing the orbital period. On the other hand, when the \(x\)-amplitude is large, the results can be inverted, as also shown in Figure 9. In general, when the amplitude of the orbit increases, the orbital period becomes longer. Similar results were found in the vicinity of \(L_{2}\).
Considering the family with \(d=0\), when \(\mu^{*}\) increases, the period of the orbits remains similar, except when \(\mu^{*}=10^{-1}\). Conversely, when the elongation of the secondary body is considered, in general, for a given value of \(d\), the larger the mass ratio, the longer the orbital period.
Finally, we analyze the energy of the system in terms of \(d\) and \(\mu^{*}\), as shown in Figure 10.
We find that when \(d\) or \(\mu^{*}\) increases, the energy required to orbit a given equilibrium point decreases. That is, the more elongated the secondary body and the larger the value of \(\mu^{*}\), the less energy is needed to orbit a given
Figure 9: Period of planar orbits around \(L_{1}\) for different values of \(d\) and \(\mu^{*}\).
equilibrium point. This also means that, as the size of the dipole increases or as the mass ratio of the system increases, the bifurcations occur at lower energies. The same analysis performed for \(L_{1}\) can be done for \(L_{2}\).
### Computing halo orbits
Halo orbits are a three-dimensional branch of planar orbits that appear when the planar orbit stability index reaches the critical value \(s_{2}=2\). Figure 11 shows a family of halo orbits around \(L_{1}\) with \(\mu^{*}=10^{-5}\) and \(d=0\). The orbits are in three-dimensional space and as the amplitude increases along the family, the halo orbits expand from the vicinity of the equilibrium point towards the surface of the secondary (black asterisk).
To find the initial conditions of the halo orbit, we keep the coordinate \(x_{0}\) fixed and search for \(z_{0}^{*}\), \(\dot{y}_{0}^{*}\) and \(T/2^{*}\) such that \(\dot{x}^{*}(T/2^{*})\), \(\dot{z}^{*}(T/2^{*})\) and \(y^{*}(T/2^{*})\) are all null. Then, to find the halo orbit, we use as initial guess the position \(x_{0}\), velocity (\(\dot{y}\)) and period (\(T\)) of the planar orbit when the stability index \(s_{2}=2\). Knowing these initial conditions, all that remains is to determine the initial guess of the position on the \(z\) axis, such that we can find the halo orbit. Because the halo and planar orbit are similar (when \(s_{2}=2\)), the position on the \(z\) axis of the halo orbit must have a very small value (almost planar orbit). Thus, in this work, the value of \(z_{0}=0.0001\) canonical unit was used as the initial guess for the position on the axis \(z\). A Newton method for this problem is
\[\mathbf{x}_{n+1}=\mathbf{x}_{n}-[Df(\mathbf{x}_{n})]^{-1}f(\mathbf{x}_{n}) \tag{16}\]
with \(\mathbf{x}=(z,\dot{y},\,T/2)\) and \(\mathbf{x}_{0}=(z_{0},\dot{y}_{0},\,T_{0}/2)\). Here, (\(z_{0}\), \(\dot{y}_{0},\,T_{0}/2\)) is the initial guess of the halo orbit.
The differential is
\[Df(\mathbf{x})=\begin{bmatrix}\phi_{4,3}&\phi_{4,5}&g_{4}(x_{0},0,z(T/2),0,\dot{y}(T/2),0)\\ \phi_{6,3}&\phi_{6,5}&g_{6}(x_{0},0,z(T/2),0,\dot{y}(T/2),0)\\ \phi_{2,3}&\phi_{2,5}&g_{2}(x_{0},0,z(T/2),0,\dot{y}(T/2),0)\end{bmatrix} \tag{17}\]
Figure 10: Jacobi constant of planar orbits around \(L_{1}\) for different values of \(d\) and \(\mu^{*}\).
where \(\phi_{i,j}\) are elements of the monodromy matrix, \(g:U\subset\mathbb{R}^{6}\rightarrow\mathbb{R}^{6}\) is the vector field of the restricted synchronous three-body problem, \(z(T/2)=\phi_{3}(x_{0},0,z,0,\dot{y},0,T/2)\) and \(\dot{y}(T/2)=\phi_{5}(x_{0},0,z,0,\dot{y},0,T/2)\). With this information, we expect that if \(\mathbf{x}_{0}\) is close enough to the halo orbit, then \(\mathbf{x}_{n}\rightarrow\mathbf{x}^{*}\) as \(n\rightarrow\infty\). \(g\) is given by Equation 18.
\[g(x,y,z,\dot{x},\dot{y},\dot{z})\quad=\quad\begin{bmatrix}g_{1}(x,y,z,\dot{x}, \dot{y},\dot{z})\\ g_{2}(x,y,z,\dot{x},\dot{y},\dot{z})\\ g_{3}(x,y,z,\dot{x},\dot{y},\dot{z})\\ g_{4}(x,y,z,\dot{x},\dot{y},\dot{z})\\ g_{5}(x,y,z,\dot{x},\dot{y},\dot{z})\\ g_{6}(x,y,z,\dot{x},\dot{y},\dot{z})\end{bmatrix}\quad=\quad\begin{bmatrix} \quad\quad\quad\dot{x}\\ \quad\quad\dot{y}\\ \quad\quad\dot{z}\\ 2\dot{y}+D_{x}\Omega\\ -2\dot{x}+D_{y}\Omega\\ D_{z}\Omega\end{bmatrix} \tag{18}\]
All the information we need to start Newton's method is shown above.
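As an illustration, the corrector can be organized as in the following minimal Python sketch. The helpers `propagate_with_stm` (assumed to return the state and the \(6\times 6\) state-transition matrix at a given time) and `g` (the vector field of Equation 18) are hypothetical placeholders for the actual numerical integration routines, so the snippet is only a sketch of the procedure described above, not the implementation used in this work.

```python
import numpy as np

def correct_halo(x0, z0, ydot0, half_T, propagate_with_stm, g,
                 tol=1e-12, max_iter=50):
    """Newton corrector of Eqs. (16)-(17).  `propagate_with_stm(state0, t)` is
    assumed to return the state and the 6x6 state-transition matrix Phi at
    time t, and `g(state)` evaluates the vector field of Eq. (18); both are
    hypothetical placeholders for the actual integration routines."""
    v = np.array([z0, ydot0, half_T])                  # unknowns (z, ydot, T/2)
    for _ in range(max_iter):
        state0 = np.array([x0, 0.0, v[0], 0.0, v[1], 0.0])
        state, Phi = propagate_with_stm(state0, v[2])
        # residuals that must vanish at the half period: xdot, zdot and y
        f = np.array([state[3], state[5], state[1]])
        if np.linalg.norm(f) < tol:
            return v                                   # converged: (z0*, ydot0*, T/2*)
        gs = g(state)                                  # vector field at t = T/2
        # Jacobian of Eq. (17): rows -> (xdot, zdot, y), columns -> (z0, ydot0, T/2)
        Df = np.array([[Phi[3, 2], Phi[3, 4], gs[3]],
                       [Phi[5, 2], Phi[5, 4], gs[5]],
                       [Phi[1, 2], Phi[1, 4], gs[1]]])
        v = v - np.linalg.solve(Df, f)                 # Newton step, Eq. (16)
    raise RuntimeError("Newton corrector did not converge")
```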
From the cylinder theorem, it was possible to find a halo orbit family. Having found a halo orbit and noticing that it has exactly two unit eigenvalues, we can use it as a starting point to move along the cylinder. We use the initial conditions of the previous halo orbit as the starting guess for the next halo orbit at a slightly larger value of \(x\) (an \(x\) coordinate closer to the secondary asteroid). If another halo orbit is found there, we iterate the process. In this way it was possible to compute a halo orbit family. The step in the \(x\) coordinate used to determine each halo orbit was \(\Delta x=0.00002\).
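In code, this natural-parameter continuation could look as follows; the sketch reuses the hypothetical `correct_halo`, `propagate_with_stm` and `g` helpers introduced above and only illustrates the stepping logic.

```python
def continue_family(x_start, z0, ydot0, half_T, n_members,
                    propagate_with_stm, g, dx=0.00002):
    """Natural-parameter continuation along the halo family: correct an orbit,
    step the fixed x coordinate by dx (0.00002 in this work) and repeat."""
    family, x0 = [], x_start
    for _ in range(n_members):
        z0, ydot0, half_T = correct_halo(x0, z0, ydot0, half_T,
                                         propagate_with_stm, g)
        family.append((x0, z0, ydot0, half_T))
        x0 += dx                      # move towards the secondary asteroid
    return family
```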
### Halo orbits
Figures 12 and 13 illustrate how the halo orbits appear at the tangent bifurcations of the planar orbits around \(L_{1}\) and \(L_{2}\) when \(\mu^{*}=10^{-5}\) and
Figure 11: Halo orbits around the \(L_{1}\) equilibrium point considering \(\mu^{*}=10^{-5}\) and \(d\) = 0.
\(d=0\). As we built the family of halo orbits, we observed that the amplitude of the orbit increases as the halo orbits move towards the secondary.
For the conditions considered here, the halo orbit appears at \(x\approx 0.98418\) for \(L_{1}\) and at \(x\approx 1.01575\) for \(L_{2}\). Figure 14 shows the path of the characteristic multipliers over the unit circle for the halo orbits around \(L_{1}\) and \(L_{2}\). Initially, the characteristic multipliers move in the direction shown by the purple arrows until they collide on the negative real axis, characterizing a period-doubling bifurcation. After moving slightly along the negative real axis, the
Figure 12: Stability index (\(s_{2}\)) of the planar and halo orbit families around \(L_{1}\) considering \(d=0\) and \(\mu^{*}=10^{-5}\).
Figure 13: Stability index (\(s_{2}\)) of the planar and halo orbit families around \(L_{2}\) considering \(d=0\) and \(\mu^{*}=10^{-5}\).
characteristic multipliers return, moving in the direction of the red arrows, colliding again at \(-1\) and then assuming complex values, characterizing another period-doubling bifurcation.
Figures 15 and 16 provide information about the \(s_{1}\) stability index as a function of \(d\) and \(\mu^{*}\). Note that the smaller the amplitudes of the halo orbits, the larger the value of the stability index \(s_{1}\), when considering fixed \(d\) and \(\mu^{*}\). As the amplitude of the halo orbit increases, the stability index decreases. If we set \(d=0\), we still detect stable halo orbits for small values of \(\mu^{*}\). These orbits were also found by several authors using the Restricted Three-Body Problem and are called Near Rectilinear Halo Orbits (NRHO) (Howell, 1982; Zimovan-Spreen et al., 2020). NRHOs are defined as the subset of the halo orbit family with stability indexes \(s_{i}\) around \(\pm 2\) and with no stability index considerably greater in magnitude than the others.
Figure 14: Behavior of the characteristic multipliers at the period-doubling bifurcation.
Figure 15: Stability index (\(s_{1}\)) of halo orbits around \(L_{1}\) as a function of \(d\) and \(\mu^{*}\).
Figures 17 and 18 provide information about the stability index \(s_{1}\) as the size of the dipole increases from 0 to 2000 meters and the mass ratio is kept constant at \(\mu^{*}=10^{-5}\). The influence of the dimension of the secondary body on the stability of the halo orbits is clear in those plots. Note that, as the size of the secondary increases, the values of \(s_{1}\) become larger in the vicinity of the equilibrium points \(L_{1}\) and \(L_{2}\).
Note that it is unlikely to detect NRHOs around \(L_{1}\) when we take into account the elongated shape of the secondary body and assume small values of \(\mu^{*}\). On the other hand, there are several NRHOs around \(L_{2}\). In this work, we found NRHOs up to \(d=1500\) meters, as shown in Figure 18.
However, as shown in Howell (1982), the stability index also depends on the mass ratio of the system. Considering \(d=0\) and increasing \(\mu^{*}\), the stability index \(s_{1}\) increases. We did not detect any NRHO for values of \(\mu^{*}\geq 10^{-1}\)
Figure 16: Stability index (\(s_{1}\)) of halo orbits around \(L_{2}\) as a function of \(d\) and \(\mu^{*}\).
Figure 17: Halo orbit stability index around \(L_{1}\) for different values of \(d\).
and \(d=0\) around \(L_{1}\). On the other hand, we do find NRHOs for \(\mu^{*}\geq 10^{-1}\) and \(d=0\) around \(L_{2}\). These results are similar to those obtained by Howell (1982). Furthermore, taking into account the elongation of the secondary and assuming large values of \(\mu^{*}\) (\(\mu^{*}\geq 10^{-1}\)), it is possible to find members of stable halo orbit families around \(L_{1}\) and \(L_{2}\). Thus, in the model used in this article, stable periodic orbits in the vicinity of irregular bodies exist, even when the secondary has a non-spherical shape. This agrees with the results obtained by Chappaz & Howell (2015), who found stable orbits around \(L_{1}\) and \(L_{2}\) taking into account the elongated shape of the secondary body and considering \(\mu=0.4\) with the triaxial ellipsoid model.
Now we analyze how the period of the halo orbits around \(L_{1}\) and \(L_{2}\) is affected by \(d\) and \(\mu^{*}\). As \(d\) increases and \(\mu^{*}\) is kept constant, the periods of the halo orbits decrease, as shown in Figures 19 and 20.
This is because, due to the mass distribution of the secondary body, the gravitational attraction on the particle is stronger, causing the acceleration
Figure 19: Period of the halo orbits around \(L_{1}\) as a function of \(d\) and \(\mu^{*}\).
Figure 18: Halo orbit stability index around \(L_{2}\) for different values of \(d\).
to increase and the orbital period to decrease. As the amplitude of the halo orbit increases, its orbital period becomes shorter.
Considering the elongated shape of the asteroid, but keeping \(d\) constant and increasing \(\mu^{*}\), we notice that the period of the halo orbits becomes longer. This is because, as \(\mu^{*}\) increases, the equilibrium points move away from the secondary body, so the halo orbits are further away from the secondary body, which causes the gravitational acceleration to decrease and thus the orbital period of the particle along the orbit to increase.
Figures 21 and 22 provide information on the behavior of the Jacobi constant of the halo orbits as a function of \(d\) and \(\mu^{*}\). Note that when \(d\) or \(\mu^{*}\) increases, the range of values of the Jacobi constant also increases. This is important information for space mission applications. Note that the larger the mass ratio of the system, or the more elongated the secondary body, the less energy is needed for the halo orbits to branch off from the planar orbits.
Figure 21: Jacobi constant of the halo orbits around \(L_{1}\) with respect to \(d\) and \(\mu^{*}\).
Figure 20: Period of the halo orbits around \(L_{2}\) as a function of \(d\) and \(\mu^{*}\).
## 5 Conclusion
In this paper, the general dynamical environment in the vicinity of binary asteroid systems is explored. Based on the physical and orbital parameters of type A asteroids, the positions of the collinear equilibrium points as a function of the angular velocity were computed. We found that the locations of the collinear equilibrium points \(L_{3}\) and \(L_{2}\) are more sensitive to changes in the rotation rate, compared to \(L_{1}\).
Families of planar and halo orbits were computed around these equilibrium points, and we found that the closer the periodic orbits are to the equilibrium point, the more unstable they are.
Numerical evidence shows that the stability of the periodic orbits around the equilibrium points depends on the size of the secondary body and the mass ratio of the system. We observed that, the more elongated the secondary body, the more unstable the planar orbits are. Additionally, we detected unstable and stable halo orbits when \(d\) = 0 and when \(d\ \neq\ 0\).
Finally, we observed that, keeping the mass ratio constant, the more elongated the secondary body, the shorter the orbital periods of planar and halo orbits around the equilibrium points.
Thus, if a spacecraft were to be placed in the vicinity of an equilibrium point, fuel consumption required for orbital maintenance would be higher around more elongated secondary bodies.
## 6 Acknowledgements
The authors wish to express their appreciation for the support provided by: grants 140501/2017-7, 150678/2019-3, 422282/2018-9 and 301338/2016-7 from the National Council for Scientific and Technological Development (CNPq); grants 2016/24561-0, 2016/18418-0, 2018/06966-8 and 2018/07377-6 from Sao Paulo Research Foundation (FAPESP); grant 88887.374148/2019-00 from the National Council for the Improvement of Higher Education (CAPES); grant
Figure 22: Jacobi constant of the halo orbits around \(L_{2}\) with respect to \(d\) and \(\mu^{*}\).
E-26/201.877/2020, from Rio de Janeiro Research Foundation (FAPERJ) and to the National Institute for Space Research (INPE). This publication has been supported by the RUDN University Scientific Projects Grant System, project No 202235-2-000
|
2302.05864 | Intelligent Reflecting Surface Aided Wireless Sensing: Applications and
Design Issues | Intelligent reflecting surface (IRS) is an emerging technology that is able
to significantly improve the performance of wireless communications, by smartly
tuning signal reflections at a large number of passive reflecting elements. On
the other hand, with ubiquitous wireless devices and ambient radio-frequency
signals, wireless sensing has become a promising new application for the
next-generation/6G wireless networks. By synergizing low-cost IRS and fertile
wireless sensing applications, this article proposes a new IRS-aided sensing
paradigm for enhancing the performance of wireless sensing cost-effectively.
First, we provide an overview of wireless sensing applications and the new
opportunities of utilizing IRS for overcoming their performance limitations in
practical scenarios. Next, we discuss IRS-aided sensing schemes based on three
approaches, namely, passive sensing, semi-passive sensing, and active sensing.
We compare their pros and cons in terms of performance, hardware cost and
implementation complexity, and outline their main design issues including IRS
deployment, channel acquisition and reflection design, as well as sensing
algorithms. Finally, numerical results are presented to demonstrate the great
potential of IRS for improving wireless sensing accuracy and the superior
performance of IRS active sensing compared to other schemes. | Xiaodan Shao, Changsheng You, Rui Zhang | 2023-02-12T06:32:58Z | http://arxiv.org/abs/2302.05864v1 | # Intelligent Reflecting Surface Aided Wireless Sensing: Applications and Design Issues
###### Abstract
Intelligent reflecting surface (IRS) is an emerging technology that is able to significantly improve the performance of wireless communications, by smartly tuning signal reflections at a large number of passive reflecting elements. On the other hand, with ubiquitous wireless devices and ambient radio-frequency signals, wireless sensing has become a promising new application for the next-generation/6G wireless networks. By synergizing low-cost IRS and fertile wireless sensing applications, this article proposes a new IRS-aided sensing paradigm for enhancing the performance of wireless sensing cost-effectively. First, we provide an overview of wireless sensing applications and the new opportunities of utilizing IRS for overcoming their performance limitations in practical scenarios. Next, we discuss IRS-aided sensing schemes based on three approaches, namely, passive sensing, semi-passive sensing, and active sensing. We compare their pros and cons in terms of performance, hardware cost and implementation complexity, and outline their main design issues including IRS deployment, channel acquisition and reflection design, as well as sensing algorithms. Finally, numerical results are presented to demonstrate the great potential of IRS for improving wireless sensing accuracy and the superior performance of IRS active sensing compared to other schemes.
## I Introduction
The next-generation or 6G wireless networks will embrace a horizon of new applications such as tactile internet, autonomous transportation, ubiquitous sensing, extended reality and so on, which demand not only higher-quality wireless connectivity but also unprecedentedly higher-accuracy sensing than today's wireless systems [1]. Meanwhile, the widely deployed massive multiple-input multiple-output (MIMO) and wide-band millimeter wave (mmWave) communication technologies have enabled high-resolution wireless signals in both space and time, which can be utilized to achieve high-performance sensing. Motivated by the above, 6G wireless networks are envisioned to share the communication platforms and resources such as base stations (BSs)/access points (APs), antennas, spectrum, waveforms, etc. for achieving the dual functions of communication and sensing cost-efficiently, leading to a new paradigm called _integrated sensing and communication_ (ISAC) [2]. Furthermore, wireless sensing can provide useful environment and device location/channel information, which can also be utilized efficiently for improving the communication performance [3].
Generally speaking, wireless/radio-frequency (RF) sensing refers to detecting the presence or estimating various physical characteristics (e.g., position, orientation, speed, material) of objects in the environment by exploiting radio wave transmission, reflection, diffraction, and/or scattering. Some typical applications for wireless sensing are given as follows: 1) _radar sensing_, where the radar radiates electromagnetic waves to hit the target and the reflected echo signals are received to estimate the target's physical information such as distance, velocity, azimuth, height, etc.; 2) _wireless localization_, where wireless device sends signal to one or more APs/BSs to estimate its location; 3) _simultaneous localization and mapping (SLAM)_, which extends wireless localization by simultaneously estimating the location of the wireless device as well as scatterers/obstacles in its surrounding environment for constructing a map of the unknown environment; 4) _RF imaging_, which generates/reconstructs high-resolution images of environment objects by exploiting RF signals that bounce off the objects and are resolved in different dimensions such as time, space, and frequency [4]; 5) _human-activity recognition_, which detects or recognizes human activities (e.g., gesture, sleep, and breathing) by leveraging the amplitude/phase variations in wireless signals; and 6) _spectrum sensing_, which aims to obtain awareness about the spectrum usage and detect the existence of primary (high-priority) users to enable the communications of secondary (low-priority) users.
However, existing wireless sensing systems face critical challenges in practice, which fundamentally limit their performance in future wireless networks, elaborated as follows.
1. **Non-availability of line-of-sight (LoS) link:** Wireless sensing performance can be significantly deteriorated in unfavorable propagation environment with severe signal blockages, such as radar sensing and human-activity recognition, as shown in Figs. 1(a) and 1(e). In these applications, only the signal that propagates over the LoS links is useful for sensing, while the signals from other non-LoS (NLoS) links cause interferences that can degrade the sensing accuracy.
2. **Lack of multipath propagation:** In contrast, some sensing applications rely on the multipath propagation in the environment, such as wireless localization and SLAM, as shown in Figs. 1(b) and 1(c). Although more BSs/APs/devices can be deployed in the environment to create additional signal paths between the sensing transmitter and receiver, this entails higher cost and may not be practically feasible.
3. **Limited sensing range due to high path loss:** Due to the increased path loss of wireless signals over distance, all sensing applications generally have limited operation
range and/or accuracy. For example, in the case of spectrum sensing as shown in Fig. 1(f), the signal from the primary user may be severely attenuated when arriving at the secondary user, which makes its detection of the primary-user signal against the background noise practically difficult. Although the path loss can be compensated by employing massive MIMO receiver and/or increasing transmit power, these methods incur higher cost and more power consumption, thus may be infeasible when there are limited budgets on hardware and the power supply.
To address the above issues, _intelligent reflecting surface_ (IRS) or its various equivalents such as reconfigurable intelligent surface (RIS), which has been recently proposed to reconfigure the radio propagation environment for boosting the wireless communication capacity [5, 6], can also be exploited to enhance the performance of existing wireless sensing systems. Specifically, IRS is a digitally-controlled metasurface consisting of a large number of passive reflecting elements, each being capable of inducing a controllable amplitude and/or phase change to the incident signal independently [5]. IRS possesses several appealing advantages for practical use. For example, it entails much lower hardware and energy cost as compared to the conventional active relay, since it is free of costly RF chains and amplifiers. Besides, IRS operates in a full-duplex mode without incurring processing delay and self-interference. Thus, integrating IRS into wireless sensing systems provides new opportunities to overcome their aforementioned performance limitations in a cost-effective way, which are further explained as follows.
1. **Create virtual LoS links:** For example, in radar sensing (Fig. 1(a)), when environmental obstacle blocks the direct link between the sensing BS and target, IRS can be installed (e.g., on the facade of a high-rise building) to help establish a virtual LoS link from the BS to the IRS and then to the target for enabling the detection of the echo signal that is reflected back via the same virtual LoS link in the reverse direction [7, 8]. Besides, for human-activity recognition (Fig. 1(e)), IRS can be used as a controllable reflector to create LoS path between the person's hand and a far-apart receiver to improve the recognition accuracy.
2. **Establish controllable multipath propagation:** For instance, in wireless localization (Fig. 1(b)), two (or more) IRSs can be employed as reflectors to create multipath propagation for enabling localization under blockage. For SLAM (Fig. 1(c)), IRS can create a multipath-richer propagation environment to more accurately estimate the user's location as well as the local map [9]. Moreover, for RF imaging (Fig. 1(d)), to successfully image the backside of the object, IRS can be properly deployed and smartly controlled for its reflected beam to illuminate the object's blind spot.
3. **Enable flexible passive beam scanning to compensate path loss:** For example, in spectrum sensing (Fig. 1(f)), to increase the detection accuracy of the spectrum usage for the primary user, IRS beam scanning can be applied by dynamically changing reflected wave's focusing direction for increasing its peak power towards the secondary user, hence boosting the primary-user signal strength received at the secondary user for more accurate detection [10].
The above new applications of IRS for different wireless sensing systems have a great potential to fundamentally improve their performance in practice. However, compared to IRS-aided communication, IRS-aided sensing is still in its infancy and calls for great research efforts to resolve its unique challenges that differ significantly from their communication
Fig. 1: Typical wireless sensing applications and the use of IRS for improving their performance.
counterparts. Specifically, unlike IRS-aided communication which targets for improving data transmission performance, IRS-aided sensing aims to accurately and efficiently detect, estimate, and extract useful information/features of targets subjected to environment noise and interference. Thus, new design issues pertinent to IRS-aided sensing arise, such as system architecture, IRS deployment and reflection design, sensing algorithm design, etc. This article is thus aimed to provide an overview of these important issues as well as propose promising approaches to tackle them in practical wireless sensing systems. For ease of exposition, we mainly focus on IRS-aided radar sensing in the sequel of this article, while similar discussions can be inferred for other wireless sensing applications as they usually face similar design issues and practical performance limitations, as illustrated in the above. The rest of this article is organized as follows. Section II introduces various system architectures and discusses the main design issues for IRS-aided radar sensing. Section III presents numerical results to evaluate the performance of different IRS-aided radar sensing architectures and algorithms. In Section IV, other related research directions are discussed, followed by the conclusions made in Section V.
## II IRS-Aided Radar Sensing: System Architectures and Design Issues
In this section, we present the typical architectures and main design issues for IRS-aided radar sensing, including IRS deployment, channel acquisition, reflection design, and sensing algorithms. Promising approaches to resolve these issues as well as open problems worthy of further investigation are also highlighted.
### _System Architecture_
Depending on whether receive/transmit RF chains are installed at the IRS or not, IRS-aided radar sensing systems can be generally classified into the following three categories: IRS passive sensing, IRS semi-passive sensing, and IRS active sensing. _IRS passive sensing_ is shown in Fig. 2(a), where the IRS consists of a number of passive reflecting elements and a smart controller only, and the controller is responsible for adjusting IRS reflections as well as exchanging low-rate information with the BS/AP. As such, passive IRS can only reflect signals from the BS/AP to illuminate the targets for detection. Although IRS passive sensing enjoys low hardware and energy cost, its sensing performance (e.g., accuracy and range) is practically limited. This is because the signal in IRS passive sensing needs to travel over a triple-reflection link from the BS's transmitter to its receiver, i.e., BS\(\rightarrow\)IRS elements\(\rightarrow\)target\(\rightarrow\)IRS elements\(\rightarrow\)BS. As a result, IRS passive sensing suffers severe product-distance path loss and hence low received signal-to-noise ratio (SNR).
The second category is _IRS semi-passive sensing_, where additional low-cost sensors are installed on the IRS, as shown in Fig. 2(b). Thereby, the IRS can not only reflect signals from the BS, but also receive echo signals from the target directly for radar sensing. Compared with IRS passive sensing, IRS semi-passive sensing experiences less path loss, since its signal goes through a double-reflection link only, i.e., BS\(\rightarrow\)IRS elements\(\rightarrow\)target\(\rightarrow\)IRS sensors. However, if the sensing BS is far from the IRS, the sensing accuracy of IRS semi-passive sensing is still limited by the severe path loss in the BS-IRS link.
To address the above issues, a new type of IRS-aided radar sensing architecture, called _IRS active sensing_, has been recently proposed [11] (see Fig. 2(c)). Specifically, in IRS active sensing, the IRS controller, conventionally responsible for IRS reflection control and information exchange only, also acts as a transmitter (equipped with an RF source) to send probing signals for sensing. In this case, the IRS controller placement is no longer flexible as in IRS passive/semi-passive sensing cases. Instead, it should be placed in front of IRS elements in close proximity, e.g., hanging out perpendicularly from the IRS front plane for the convenience of transmitting signals towards IRS reflecting elements, as illustrated in Fig. 2(c). Moreover, dedicated sensors are also installed on the IRS to receive echo signals from targets. As such, the IRS can actively send and receive radar signals for target localization by itself, without the need of any BSs/APs for signal transmission/reception as in conventional radar sensing systems. Although IRS active sensing generally incurs higher hardware
Fig. 2: Schematics for three IRS-aided radar sensing architectures.
and energy cost than IRS passive/semi-passive sensing, it is expected to achieve superior sensing performance, since the IRS controller\(\rightarrow\)IRS elements link is much shorter than the BS\(\rightarrow\)IRS elements link in IRS passive/semi-passive sensing. Moreover, IRS active sensing can also leverage a direct echo link, i.e., IRS controller\(\rightarrow\)target\(\rightarrow\)IRS sensors, which can be combined constructively at the sensors with the IRS reflected echo link (i.e., IRS controller\(\rightarrow\)IRS elements\(\rightarrow\)target\(\rightarrow\)IRS sensors) by properly tuning the IRS reflection, thus further enhancing the sensing performance. In Table I, we summarize the comparison of the three IRS-aided radar sensing system architectures in different aspects.
### _IRS Deployment_
How to judiciously deploy IRS in different IRS-aided radar sensing systems to achieve the optimal sensing performance is a crucial problem. Specifically, for IRS semi-passive and active sensing, IRS should be deployed at a location with a clear IRS-target LoS path, as the measurements of radar sensing associated with the LoS path are directly related to the target's location. This thus gives rise to a fundamental trade-off between increasing the IRS altitude to enhance the LoS probability and lowering the IRS altitude to reduce the path loss with ground target, as illustrated in Fig. 3(a). On the other hand, for IRS passive sensing, the target's direction-of-arrival (DoA) cannot be resolved if there is only one single LoS path between the BS and IRS. This is because it requires at least two degrees-of-freedom to identify both the complex path gain and the angle parameter for target localization. Therefore, for IRS passive sensing, it is desirable to place the IRS at a location possessing a strong LoS path with the target, as well as an adequate number of NLoS paths with the BS.
Next, we consider the IRS deployment design for the general multi-IRS aided radar sensing system in a more complex radio propagation environment. In Fig. 3(b), we illustrate two typical IRS deployment strategies which are applicable to all the three IRS-aided radar sensing architectures: 1) _centralized IRS deployment_, where all reflecting elements are aggregated to form a single large-size IRS, and 2) _distributed IRS deployment_, where the reflecting elements are divided to form multiple small-size IRSs, each located near one region of interest. The centralized IRS deployment strategy is more advantageous for offering higher beamforming gain; while in contrast, the distributed strategy is more likely to establish LoS paths with the target in high-blockage environment. Hence, IRS placement and elements allocation should be jointly designed to balance the above trade-off for wireless sensing, which deserves future investigation.
Furthermore, in high-mobility scenarios (e.g., highway and railway), the channels between the static BS and in-vehicle users change rapidly due to the environment's random scattering, which makes radar sensing more challenging to implement. In this case, IRS can be properly installed on the surface of high-speed vehicles or along the roadside to aid radar sensing, referred to as the _vehicle-side_ IRS and _road-side_ IRS, respectively, as illustrated in Fig. 3(c). Specifically, for vehicle-side IRS, as the users largely remain quasi-static with the IRS, the IRS can be exploited to track the relatively low-mobility in-vehicle users, while the BS can easily keep track of the large-size IRS even in high mobility. However, IRS needs to be separately deployed in each vehicle for enhancing its radar sensing performance, hence resulting in high hardware costs for vehicle manufacturing. Alternatively, deploying roadside IRSs for vehicle sensing can help exploit the potential multi-IRS cooperative localization gain for improving the radar sensing performance. The design and performance comparison of the above two IRS-aided sensing systems in high-mobility scenarios have not been studied yet, and deserve further investigation.
### _IRS Channel Acquisition_
For IRS-aided radar sensing, channel acquisition refers to obtaining useful channel information for extracting target's physical parameters (e.g., DoA) embedded in the multi-reflection channel. Different from the conventional radar sensing system with only one unknown BS-target channel, the IRS-aided radar sensing system involves not only the unknown channel directly associated with the target (e.g., IRS elements\(\rightarrow\)target), but also other target-unrelated unknown channels (e.g., BS\(\rightarrow\)IRS elements channel in IRS passive/semi-passive sensing, and IRS controller\(\rightarrow\)IRS elements channel in IRS active sensing). As such, the static BS/IRS controller\(\rightarrow\)IRS elements channel can be estimated in an offline phase, followed by an online phase that utilizes such estimated channel information to extract the target's DoA embedded in the IRS elements\(\rightarrow\)target channel (more details will be given in Section II-E).
There are various efficient methods for estimating the BS/IRS controller-IRS elements channel for the three IRS-aided radar sensing architectures. For IRS semi-passive/active sensing, the IRS elements can be switched to the OFF mode, while the installed sensors are activated to receive the signals from the BS/IRS controller for estimating their associated channels to IRS elements. On the other hand, for IRS passive sensing, anchor nodes can be deployed near the IRS for facilitating its associated BS to acquire the BS-IRS channel.
### _IRS Reflection Design_
Different from IRS-aided communication that usually aims to maximize the received SNR/signal-to-interference-plus
noise ratio (SINR) for rate maximization, IRS-aided radar sensing considers the performance metrics directly related to parameter estimation and/or target detection. Specifically, for parameter estimation, Cramer-Rao lower bound (CRLB) or mean squared error (MSE) minimization is usually adopted. Since CRLB is generally a complicated function of the received SNR [3], CRLB minimization may not be equivalently reduced to the more convenient but less accurate SNR maximization (albeit this may hold in some cases of radar sensing without IRS [13]). To balance complexity and performance, for IRS active sensing, the authors in [11] proposed to optimize IRS's reflective beamforming over time by maximizing the average received signal power at IRS sensors. Without any prior information, the resulting optimal IRS reflection design over time was shown to be an omnidirectional beampattern in the angular domain for scanning unknown targets in all possible directions. On the other hand, if given prior information on the region of target location, IRS reflection can be more delicately devised to form a flat and wide beampattern uniformly covering the specific region of interest, thus increasing the received SNR for target localization.
Besides the optimization-based methods, the codebook-based IRS reflection design can also be explored to reduce the design complexity. For DoA estimation, the number of scanning beams formed by IRS should be large enough to cover the sensing area of interest. However, given the sum-/average-power constraint, this method can only support a small transmit power per beam. Alternatively, the hierarchical codebook-based IRS reflection design can be applied to balance the above tradeoff. Specifically, IRS wide beams can be generated in the first phase to scan the entire area to determine the sector where the target locates; then, in the second phase, IRS can steer narrow beams within this sector to further resolve the fine-grained target DoA [7].
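To make the beam-scanning idea concrete, the following Python sketch builds a simple scanning codebook for a one-dimensional IRS, with one phase-shift vector per candidate reflection angle. The uniform linear geometry, the half-wavelength element spacing and the chosen angles are illustrative assumptions rather than parameters taken from this article.

```python
import numpy as np

def irs_scanning_codebook(n_elems, spacing_wl, theta_inc_deg, scan_grid_deg):
    """One reflection (phase-shift) vector per candidate angle for a uniform
    linear IRS.  Far-field LoS geometry is assumed; the spacing and angles
    are illustrative, not values from the article."""
    n = np.arange(n_elems)
    theta_i = np.deg2rad(theta_inc_deg)
    codebook = []
    for theta_r_deg in scan_grid_deg:
        theta_r = np.deg2rad(theta_r_deg)
        # per-element phase aligning the reflected wavefront towards theta_r
        phases = -2.0 * np.pi * spacing_wl * n * (np.sin(theta_i) + np.sin(theta_r))
        codebook.append(np.exp(1j * phases))
    return np.array(codebook)

# Example: a 128-element IRS with half-wavelength spacing scanning -60..60 degrees
codebook = irs_scanning_codebook(128, 0.5, theta_inc_deg=10.0,
                                 scan_grid_deg=np.arange(-60, 61, 1))
```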
Next, for IRS passive/semi-passive sensing, the IRS reflection, in general, needs to be jointly designed with the BS transmit beamforming to optimize the sensing performance, which renders the optimization problem more difficult to solve. To address this issue, the alternating optimization technique can be employed to sub-optimally solve this problem by iteratively optimizing one of the BS transmit beamforming and IRS reflection with the other being fixed [12]. To further reduce the computational complexity, one can design the BS transmit beamformer so that it points towards the IRS controller, while IRS reflections over time can be dynamically tuned to find the target DoA with respect to the IRS controller. This design significantly differs from the conventional MIMO radar, where the BS directly sends orthogonal waveforms to scan the targeted area. In general, such a joint design of BS transmit beamforming and IRS reflection should be superior to the IRS reflection design alone in terms of DoA estimation accuracy, which requires further research, especially for the case without any prior information of the target location.
### _Sensing Algorithm_
Given the IRS reflection or beam scanning design, the BS/IRS sensors receive a sequence of signals reflected by the target over time, which are then used for target detection (i.e., determining the presence or absence of a target) or target parameter estimation. Specifically, for IRS active/semi-passive sensing, the echoes from different targets usually have different directionality at the IRS sensors (i.e., target\(\rightarrow\)IRS sensors angle). Thus, the detection/estimation techniques adopted in conventional MIMO radar are also applicable to IRS active/semi-passive sensing. For instance, the celebrated subspace-based methods, such as the multiple signal classification (MUSIC) algorithm and estimation of signal parameters via rotational invariance techniques (ESPRIT), can be employed to estimate the target DoA with super-resolution.
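For reference, a textbook version of the MUSIC pseudo-spectrum for a uniform linear sensor array is sketched below in Python. It is a generic illustration of the subspace idea, with an assumed half-wavelength sensor spacing, and is not the exact processing chain used in the simulations of this article.

```python
import numpy as np

def music_spectrum(Y, n_targets, spacing_wl=0.5,
                   grid_deg=np.arange(-90.0, 90.5, 0.5)):
    """MUSIC pseudo-spectrum for a uniform linear sensor array.
    Y is an (n_sensors x n_snapshots) matrix of complex received samples;
    peaks of the returned spectrum indicate the target DoAs."""
    n_sensors, n_snap = Y.shape
    R = Y @ Y.conj().T / n_snap                       # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_targets]          # noise subspace
    m = np.arange(n_sensors)
    spectrum = []
    for theta in np.deg2rad(grid_deg):
        a = np.exp(1j * 2.0 * np.pi * spacing_wl * m * np.sin(theta))  # steering vector
        spectrum.append(1.0 / (np.linalg.norm(En.conj().T @ a) ** 2))
    return grid_deg, np.array(spectrum)
```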
However, for IRS passive sensing, the echoes from different targets and then reflected by the IRS hit the BS antennas in the same direction (i.e., IRS\(\rightarrow\)BS angle), thus making the simultaneous detection/estimation of multiple targets more difficult. In this case, the computationally efficient ESPRIT algorithm cannot be directly applied to DoA estimation, since the necessary condition (i.e., the array displacement invariance structure depends directly on the target location) for using it may not hold. Alternatively, the non-subspace-based methods, such as the maximum likelihood estimation
Fig. 3: Different IRS deployment strategies in radar sensing system.
(MLE), can be applied to estimate the target DoA in IRS passive sensing, which, however, incurs prohibitively high complexity. As such, how to design a low-complexity DoA estimation algorithm for IRS passive sensing is a new and challenging problem to solve in future work. For multi-target detection, instead of distinguishing different targets based on their echo signals' directions, one can also detect the transmit signature sequences by establishing a one-to-one mapping between signature sequence and target direction to improve the detection performance [14].
Although deploying multiple IRSs can provide more opportunities to establish LoS paths with targets, collecting the global information at all IRSs for cooperative localization may incur unaffordably high overhead and complexity. One promising approach to tackle this issue is by developing efficient distributed sensing methods, where each IRS acts as a local DoA estimator, and distributed IRSs exchange low-dimensional local information with each other, thus avoiding the need for global data-fusion processing. However, this approach may incur notable performance loss due to incomplete data collection. To further improve the performance and yet maintain low complexity, tensor-based algorithms can be applied to exploit the inherent Khatri-Rao structure of multi-dimensional signals.
## III Numerical Results
In this section, we present numerical results to compare the sensing performance of different IRS-aided radar sensing architectures as well as sensing algorithms. Specifically, we assume that the IRS has \(128\) reflecting elements and \(8\) sensors, while the BS has \(128\) transmit antennas and \(8\) receive antennas. The BS-IRS distance is \(100\) m, and the IRS controller-IRS elements distance is \(0.5\) m. We assume that all channels are LoS. The discrete Fourier transform (DFT)-based IRS passive reflection design proposed in [11] is adopted for all IRS-aided sensing architectures.
In Fig. 4, we compare the IRS beampatterns of different IRS-aided radar sensing architectures. We assume that the target is located at \(30^{\circ}\) horizontally from the IRS. It is observed that the beampatterns of three architectures all correctly focus their mainlobes towards the target direction. In addition, IRS active sensing is observed to achieve a significantly larger received signal power than IRS semi-passive/passive sensing at the target angle. This is because IRS active sensing not only has the minimum path loss, but also exploits both the IRS passive beamforming gain and the direct echo link gain, as discussed in Section II-A. In contrast, IRS semi-passive sensing has a lower received signal power since the signal traveling distance from the transmitter to the IRS is much longer. Moreover, IRS passive sensing has the least received signal power due to the highest path loss.
Next, in Fig. 5, we compare the root mean square error (RMSE) performance of different IRS-aided radar sensing architectures with different sensing algorithms, including MUSIC, ESPRIT, and MLE. Two targets are placed at the horizontal angles of \(60^{\circ}\) and \(65^{\circ}\), respectively. First, it is observed that for either the ESPRIT or MUSIC algorithm, IRS active sensing consistently achieves a much smaller RMSE than IRS passive/semi-passive sensing under different IRS-targets distances. It is worth noting that the targets' DoAs cannot be resolved by IRS passive sensing, since this requires that there are at least two signal paths in the BS-IRS channel, which is not true in this case with LoS channel only. Second, one can observe that MUSIC achieves a smaller RMSE than ESPRIT, while the former generally requires higher computational complexity due to the need of exhaustive search over all possible directions.
## IV Other Extensions
In addition to the above design issues for IRS-aided wireless sensing, there also exist other related and important problems that are open and worth investigating in future work. In the following, we outline such extendable directions to motivate future work.
### _Non-terrestrial IRS Aided Sensing_
Besides terrestrial IRS, the non-terrestrial IRS can also be employed to enable high-accuracy sensing cost-effectively. For
Fig. 4: Beampattern comparison of three IRS-aided radar sensing architectures.
Fig. 5: RMSE versus IRS-targets distance for different IRS-aided radar sensing architectures and sensing algorithms.
instance, a UAV-mounted IRS can be exploited to assist the BS to "see" the terrestrial targets from different angles by creating multiple LoS links to bypass terrestrial obstacles. However, Doppler shift due to UAV mobility may cause ambiguity in the velocity estimation. Therefore, the joint design of UAV position/trajectory, IRS reflection, and terrestrial BS beamforming/sensing is crucial to achieve optimal sensing performance. Besides, for low earth orbit (LEO) satellites enabled target tracking, their high velocity brings the requirement for steerable antennas. One effective approach is to deploy IRSs on the backside of solar panels of LEO satellite for realizing dynamic passive beamforming/beam scanning, thus its ground coverage area can be adaptively adjusted for enabling more efficient terrestrial communication and sensing.
### _Active-IRS-Aided Sensing_
To compensate for the severe product-distance path loss, active IRS has been recently proposed, which is able to achieve simultaneous signal reflection and amplification at moderately higher hardware and energy cost than the conventional passive IRS [15]. Instead of using massive number of passive reflecting elements, a smaller-size active IRS with much less number of active reflecting units can be deployed to effectively amplify the reflected signal, thus leading to enhanced signal power at the target for improving the sensing performance. However, active IRS also introduces non-negligible amplification noise, thus calling for efficient reflection designs to balance the tradeoff between the signal and noise amplification for wireless sensing.
### _IRS for ISAC_
IRS for ISAC is a promising direction, since IRS is able to create virtual LoS links for improving both the communication and sensing performance. In particular, IRS reflection should be carefully designed to balance the communication and sensing performance. One simple approach is by using orthogonal time sharing, where IRS-aided communication and sensing are executed in orthogonal time with carefully designed time allocation. Besides, by properly deploying IRSs in the networks, the IRS reflection can be exploited to simultaneously improve the communication and sensing channel gains, if the two channels are similar or highly correlated. Moreover, how to design an efficient IRS-aided ISAC system without significantly changing the existing protocols in cellular/WiFi systems and achieve the optimum communication-and-sensing performance tradeoff are important directions for future research.
## V Conclusions
In this article, we propose IRS-aided wireless sensing to achieve cost-effective performance enhancement. Specifically, we discuss the new opportunities of applying IRS to overcome the performance limitations of existing wireless sensing systems. Moreover, we present three practical IRS-aided radar sensing architectures and compare their performance and implementation cost/complexity, as well as the pertinent design issues and promising solution approaches. In particular, we show that IRS active sensing can achieve the best sensing performance compared to IRS passive/semi-passive sensing. Open problems and other extendable research directions for IRS-aided wireless sensing are also outlined to inspire future work. This article is expected to provide a useful and effective guide for future research on IRS-aided wireless sensing.
|
2306.06642 | Well-Calibrated Probabilistic Predictive Maintenance using Venn-Abers | When using machine learning for fault detection, a common problem is the fact
that most data sets are very unbalanced, with the minority class (a fault)
being the interesting one. In this paper, we investigate the usage of
Venn-Abers predictors, looking specifically at the effect on the minority class
predictions. A key property of Venn-Abers predictors is that they output
well-calibrated probability intervals. In the experiments, we apply Venn-Abers
calibration to decision trees, random forests and XGBoost models, showing how
both overconfident and underconfident models are corrected. In addition, the
benefit of using the valid probability intervals produced by Venn-Abers for
decision support is demonstrated. When using techniques producing opaque
underlying models, e.g., random forest and XGBoost, each prediction will
consist of not only the label, but also a valid probability interval, where the
width is an indication of the confidence in the estimate. Adding Venn-Abers on
top of a decision tree allows inspection and analysis of the model, to
understand both the underlying relationship, and finding out in which parts of
feature space that the model is accurate and/or confident. | Ulf Johansson, Tuwe Löfström, Cecilia Sönströd | 2023-06-11T10:32:44Z | http://arxiv.org/abs/2306.06642v1 | # Well-Calibrated Probabilistic Predictive Maintenance using Venn-Abers+
###### Abstract
When using machine learning for fault detection, a common problem is the fact that most data sets are very unbalanced, with the minority class (a fault) being the interesting one. In this paper, we investigate the usage of Venn-Abers predictors, looking specifically at the effect on the minority class predictions. A key property of Venn-Abers predictors is that they output well-calibrated probability intervals. In the experiments, we apply Venn-Abers calibration to decision trees, random forests and XGBoost models, showing how both overconfident and underconfident models are corrected. In addition, the benefit of using the valid probability intervals produced by Venn-Abers for decision support is demonstrated. When using techniques producing opaque underlying models, e.g., random forest and XGBoost, each prediction will consist of not only the label, but also a valid probability interval, where the width is an indication of the confidence in the estimate. Adding Venn-Abers on top of a decision tree allows inspection and analysis of the model, to understand both the underlying relationship, and finding out in which parts of feature space that the model is accurate and/or confident.
Keywords:Calibration Predictive Maintenance Venn-Abers
## 1 Introduction
Manufacturers and other companies increasingly collect data from sensors in their factories and products. Predictive maintenance evaluates the condition of machinery and other equipment by performing offline or online condition monitoring. The goal of the approach is to decide when a maintenance activity is most cost-effective but before the equipment loses performance. The intended result is primarily a reduction in unplanned downtime costs because of failure, but also to lower the maintenance costs. Recently, predictive maintenance based on data-driven methods applying machine learning, has become the most effective solution for smart manufacturing and industrial big data, especially for
performing health perception, e.g., fault diagnosis and remaining life assessment [16].
When performing fault detection, most data sets are very unbalanced, and it is the minority class (a fault) that is the interesting one. In fact, it could be argued that precision must be sacrificed for increased recall, i.e., more false alarms is acceptable, as long as most faults are in fact detected. With this in mind, many approaches directly target increasing the number of detected faults, using for instance cost-sensitive learning or different sampling techniques, see e.g., [9].
In this paper, we investigate another approach, where we produce well-calibrated probabilistic predictors. The overall purpose is to provide the user with a correct estimation, on the instance level, of the probability that the instance represents a fault.
For the calibration, we suggest the use of _Venn-Abers_ predictors, which in addition to providing calibrated probabilities, are able to output well-calibrated probability intervals. In that setting, the width of an interval represents the confidence in the probability estimation. In the experiments, we first show the effects of Venn-Abers calibration, using different underlying models and comparing the results to existing alternatives. After that, we look specifically at the added informativeness when returning probability intervals instead of just a single probability estimate, differentiating between opaque and interpretable predictive models.
## 2 Background
In this section, we first discuss probabilistic prediction in general, before briefly describing some well-known calibration techniques and providing an introduction to Venn prediction.
### Probabilistic Prediction
A probabilistic predictor outputs not only the predicted class label, but also a probability distribution over the labels. Ideally, a probabilistic predictor should be _valid_, i.e., the predicted probability distributions should perform well against statistical tests based on subsequent observation of the labels.
In this paper, we focus on _calibration_, i.e., we want:
\[p(c_{j}\mid p^{c_{j}})=p^{c_{j}}, \tag{1}\]
where \(p^{c_{j}}\) is the probability estimate for class \(j\). Specifically, in order to not be misleading, predicted probabilities must be matched by observed accuracy. As an example, if a number of predictions with the probability estimate 0.95 are made, these predictions should be correct in about 95% of the cases. While most predictive models are capable of producing probability estimates, there is no guarantee that these are well-calibrated. In fact, several models like naive Bayes
[10] and decision trees [12, 8] are notorious for being poorly calibrated off-the-shelf. Recent studies show that even models assumed to be generally well-calibrated like modern (i.e., deep) neural networks [5] and traditional neural networks [6] often are not. To rectify this, there exist a number of external methods for calibration, where Platt scaling [11] and isotonic regression [15] are the two most frequently used. Both these techniques, as well as Venn-Abers [13], normally utilize a separate labeled data set for the actual calibration.
### Platt Scaling and Isotonic Regression
Platt scaling [11] fits a sigmoid function to the scores obtained by the model. The function is
\[\hat{p}(c\mid s)=\frac{1}{1+e^{As+B}}, \tag{2}\]
where \(\hat{p}(c\mid s)\) is the probability that an instance belongs to class \(c\), given its score \(s\). The parameters \(A\) and \(B\) are found by a gradient descent search, minimizing the loss function suggested in [11].
Isotonic regression [15] is a non-parametric method, where an _isotonic_, i.e., non-decreasing, step-wise regression function is used for generalized binning. The function is fitted to the model scores, minimizing a mean square error criterion, typically using the pair-adjacent violators algorithm.
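As a concrete illustration, both calibrators can be fitted to held-out calibration scores with a few lines of scikit-learn code. The sketch below uses `LogisticRegression` on the one-dimensional scores as a convenient stand-in for Platt's original sigmoid fit (which also modifies the targets slightly), so it is an approximation of [11] rather than an exact reimplementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

def platt_calibrator(cal_scores, cal_labels):
    """Sigmoid calibrator in the spirit of Eq. (2): a logistic fit on the
    one-dimensional scores (Platt's original method also adjusts the targets,
    so this is an approximation)."""
    lr = LogisticRegression(C=1e6, max_iter=1000)   # weak regularisation (assumption)
    lr.fit(np.asarray(cal_scores).reshape(-1, 1), cal_labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

def isotonic_calibrator(cal_scores, cal_labels):
    """Non-decreasing step-wise calibrator fitted to the calibration scores."""
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(cal_scores, cal_labels)
    return iso.predict
```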
### Venn-Abers Predictors
Venn predictors [14] are multi-probabilistic predictors, i.e., they output \(C\) probability distributions - where \(C\) is the number of classes - with one of them, by construction, guaranteed to be valid, as long as the data is IID. The probabilities produced for a specific class label can be converted into a probability interval for that label, with the size of the interval indicating the confidence in the estimates, for details see e.g., [7].
_Venn-Abers predictors_ are Venn predictors, i.e., they inherit the validity guarantees. Venn-Abers predictors are, however, in the basic form restricted to two-class problems since they assume that the underlying model is a _scoring classifier_. After calibration, which is done by fitting two isotonic regression [15] models to the calibration set, each prediction is associated with a well-calibrated probability interval.
More formally, for each test object \(x_{l+1}\) being predicted, an inductive Venn-Abers predictor is defined in the following way.
1. Let \(\{z_{1},\ldots,z_{l}\}\) be a training set where each instance \(z=(x,y)\) consists of an _object_\(x\) and a _label_\(y\).
2. Train a scoring classifier using a proper training set \(\{z_{1},\ldots,z_{q}\}\) using \(q<l\) instances.
3. Produce prediction scores for the objects in the calibration set, including the test object, \(\{s(x_{q+1}),\ldots,s(x_{l}),s(x_{l+1})\}\).
4. Fit two isotonic calibrators, \(g_{0}\) and \(g_{1}\), using \(\{(s(x_{q+1}),y_{q+1}),\ldots,(s(x_{l}),y_{l}),(s(x_{l+1}),0)\}\), and \(\{(s(x_{q+1}),y_{q+1}),\ldots,(s(x_{l}),y_{l}),(s(x_{l+1}),1)\}\), respectively.
5. Finally, let the probability interval for \(y_{l+1}=1\) be \([g_{0}(s(x_{l+1})),g_{1}(s(x_{l+1}))]\).
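A minimal Python sketch of the inductive Venn-Abers procedure above, for a single test object, could look as follows; it is written for clarity rather than efficiency, and it assumes that the underlying scoring classifier has already been trained on the proper training set.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Probability interval [g0(s), g1(s)] for one test object, following
    steps 1-5 above: the calibration set is augmented with the test score
    labelled 0 and 1 in turn, and an isotonic calibrator is fitted each time."""
    interval = []
    for hypothetical_label in (0, 1):
        scores = np.append(cal_scores, test_score)
        labels = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(scores, labels)
        interval.append(float(iso.predict([test_score])[0]))
    p0, p1 = interval
    return p0, p1
```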
It should be noted that when an inductive Venn-Abers predictor is applied on top of a decision tree, all instances falling in the same leaf will obtain the same prediction score, so after calibration every leaf will contain a specific prediction, consisting of a label and an associated confidence (a probability interval). This very informative model can be inspected and analyzed like any decision tree.
## 3 Method
The data set used in the experiments was introduced in [9] with the purpose of providing the research community with a typical predictive maintenance data set. The data set is presented in detail in the original paper, but will be briefly described here as well. It consists of six input features representing quality (low/medium/high), air temperature in Kelvin, process temperature in Kelvin, rotational speed in rounds per minute (rpm), torque in Newton meter (Nm), and tool wear in minutes. The class labels are failure (minority class) and no failure. Out of 10 000 instances, 339 are failures, making the problem highly imbalanced. Failures could occur for various reasons, represented as three hidden binary features in the data set, i.e., these should not be used for the model learning. If one of these hidden binary features is active, then the instance is labeled as a fault. We have transformed the quality feature into numeric format and of course removed the three hidden features describing the specific kinds of failures, but otherwise kept the data as is.
In this paper, we have used _decision trees_ as the transparent models, since they provide acceptable predictive performance while still being interpretable. Decision trees are able to provide probability estimates for its predictions, given as the proportion of samples of the predicted class in a leaf. The decision tree implementation used is the _DecisionTreeClassifier_ in _scikit-learn_.
As one of the opaque models, we used standard _Random forests_[1], i.e. ensembles of random trees, trained using bagging and restricted to a random subset of the available attributes in each split. When using the _RandomForestClassifier_ in _scikit-learn_, the probability estimates of the forest is the mean predicted class probabilities of all trees, where the class probability of a single tree is the proportion of samples of the predicted class in a leaf.
_Extreme Gradient Boosting (XGBoost)_[2], is a highly scalable implementation of _gradient boosting_[4]. XGBoost includes several improvements over the original gradient boosting algorithm, such as weighted quantile sketch for approximate tree learning as well as enabling parallel and distributed computation of sparse data. The same calculation to get the probability estimate as the _RandomForestClassifier_ is used by _XGBClassifier_ in the _xgboost_ package, implementing the algorithm in Python.
_Logistic Regression_[3] is a regression method using the logit function to estimate the log-odds of a binary event based on a linear combination of one or more attributes. We include logistic regression in the experimentation since it, by construction, should produce well-calibrated probability estimates. The _scikit-learn_ implementation of the algorithm was used with the _max_iter_ parameter set to 500.
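For reference, the underlying models described above could be instantiated as in the sketch below; any hyperparameter not mentioned in the text is left at the library default, so this is only indicative of the setup.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Underlying models as described above; hyperparameters not stated in the text
# are left at the library defaults.
models = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "xgboost": XGBClassifier(),
    "logistic_regression": LogisticRegression(max_iter=500),
}
```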
When obtaining a single probability estimate from the Venn-Abers probability interval \((p_{0},p_{1})\), for comparison with the other approaches, the suggestion in [13] was used:
\[p=\frac{p_{1}}{1-p_{0}+p_{1}} \tag{3}\]
thus producing a regularized value with the estimate moved slightly towards the neutral value 0.5.
A number of metrics are used in the analysis. Predictive performance is analysed using accuracy, area under the ROC curve (AUC), precision and recall.
The quality of the calibration was evaluated using the _expected calibration error_ (ECE). When calculating _ECE_, the probability estimates for class 1 are divided into \(M\), in this study ten, equally sized bins, before taking a weighted average of the absolute differences between the fraction of correct (_foc_) predictions and the mean of the prediction probabilities (_mop_). Here, \(n\) is the size of the data set and \(B_{m}\) represents bin \(m\).
\[\text{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\Big{|}\text{foc}(B_{m})-\text{mop}( B_{m})\Big{|} \tag{4}\]
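A sketch of the ECE computation of Eq. (4) is shown below, assuming equal-width bins over the class-1 probability estimates and reading the fraction of correct predictions in each bin as the fraction of actual class-1 outcomes (the usual reliability-diagram convention); all names are ours.

```python
import numpy as np


def expected_calibration_error(y_true, p1, n_bins=10):
    """ECE as in Eq. (4): weighted average over bins of |foc - mop|."""
    y_true, p1 = np.asarray(y_true), np.asarray(p1)
    # Assign each class-1 probability estimate to one of n_bins equal-width bins.
    bin_ids = np.minimum((p1 * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for m in range(n_bins):
        in_bin = bin_ids == m
        if in_bin.any():
            foc = y_true[in_bin].mean()  # fraction of class-1 outcomes in the bin
            mop = p1[in_bin].mean()      # mean of the probability estimates
            ece += in_bin.mean() * abs(foc - mop)  # weight |B_m| / n
    return ece
```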
For the evaluation, standard 10x10-fold cross-validation was used, so all results reported are averaged over the 100 folds.
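One way to realize the 10x10-fold protocol (a sketch, not the paper's experimental code) is via repeated K-fold cross-validation; the stratification and the 20% calibration fraction below are our own assumptions.

```python
from sklearn.model_selection import RepeatedStratifiedKFold, train_test_split

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for fold, (train_idx, test_idx) in enumerate(cv.split(X, y)):
    X_tr, X_te = X.iloc[train_idx], X.iloc[test_idx]
    y_tr, y_te = y.iloc[train_idx], y.iloc[test_idx]
    # Hold out part of the training fold as a calibration set for
    # Venn-Abers / Platt scaling / isotonic regression.
    X_fit, X_cal, y_fit, y_cal = train_test_split(
        X_tr, y_tr, test_size=0.2, stratify=y_tr, random_state=fold)
    # ... fit on (X_fit, y_fit), calibrate on (X_cal, y_cal),
    #     evaluate on (X_te, y_te); metrics are averaged over the 100 folds.
```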
## 4 Results
Table 1 shows the predictive performance for decision trees, random forests and XGBoost, for Uncalibrated (_Uncal_) models and after calibration using Venn-Abers (_VA_), Platt scaling (_Platt_) and isotonic regression (_IsoReg_). Logistic Regression is also used as comparison. We see that both the random forest and XGBoost, as expected, are more accurate and have higher _AUC_ than decision trees and logistic regression. Overall, calibration has a minor impact on predictive performance, although the _AUC_ decreases slightly for all three calibrated models. The reason is the smaller training set when setting aside instances for calibration.
Looking at precision and recall, the random forest has a precision of almost 0.90 and a recall of approximately 0.62, before calibration. Once calibrated, the precision goes down to between 0.83 and 0.85 and the recall up to between 0.63 and 0.65. This is expected, since random forests are generally underconfident in their probabilistic predictions. For the decision tree, however, the calibration has the opposite effect; the precision increases from 0.77 to between 0.79 and 0.81, while the recall decreases from 0.65 to between 0.57 and 0.60. This is consistent with the fact that decision trees tend to be overconfident in their probability estimates. For XGBoost, calibration reduces recall, similarly to the effect seen on decision trees, but a corresponding increase in precision is not evident. Logistic regression achieves very low performance, especially considering recall, and is only able to predict very few instances as positive. In the targeted scenario, this would not be acceptable, so we exclude logistic regression from further evaluation.
Turning to the calibration results, we present results for the overall calibration, i.e., the probability estimates for the minority class 1 (faults), but we also look specifically at the predictions that are actually for the minority class. As explained above, the most important effect of the calibration would be to improve the probability estimates for the minority class predictions, i.e., having \(p\geq 0.5\) and consequently predicting a failure.
We now look at the reliability plots. Figure 1 shows an overall reliability plot when using decision trees. The decision tree appears to be excellently calibrated out of the box, with an _ECE_ of approximately 0.012. However, the three calibration techniques all yield _ECEs_ between 0.001 and 0.002. Furthermore, when looking at all bins but the leftmost, we can see that the curves representing the calibrated values are much closer to the diagonal, especially for Venn-Abers and isotonic regression. We can also see that the trees are indeed overconfident in their probability estimates, as indicated by the green line being consistently below the dotted line.
Figure 2 shows the corresponding reliability plot when looking only at the minority class predictions. This is a very interesting plot, since it directly shows how calibration affects the minority class prediction, i.e., in the investigated scenario the predictions that would require an action. Here, it is clear that the decision
| **Model** | **Cal** | **Acc** | **AUC** | **Prec** | **Rec** | **#PosPred** |
| --- | --- | --- | --- | --- | --- | --- |
| DT | Uncal | .981 | .904 | .768 | .645 | 2846 |
| DT | VA | .981 | .902 | .790 | .595 | 2551 |
| DT | Platt | .981 | .897 | .810 | .566 | 2369 |
| DT | IsoReg | .981 | .897 | .796 | .588 | 2505 |
| RF | Uncal | .985 | .966 | .895 | .616 | 2334 |
| RF | VA | .984 | .962 | .831 | .648 | 2644 |
| RF | Platt | .984 | .957 | .845 | .630 | 2529 |
| RF | IsoReg | .983 | .959 | .834 | .634 | 2577 |
| XGB | Uncal | .986 | .975 | .858 | .696 | 2752 |
| XGB | VA | .984 | .969 | .842 | .662 | 2664 |
| XGB | Platt | .984 | .932 | .861 | .647 | 2547 |
| XGB | IsoReg | .984 | .967 | .845 | .645 | 2590 |
| LR | Uncal | .970 | .894 | .705 | .196 | 941 |

Table 1: Predictive performance
Figure 1: Reliability plot for all instances. Decision trees
Figure 2: Reliability plot for minority class predictions. Decision trees
tree is very overconfident in these predictions. In fact, after calibration, both Venn-Abers and isotonic regression result in very few predictions with a higher confidence than 0.9. Apart from Platt scaling, which is still slightly overconfident, calibrating the decision tree results in very good calibration without any clear tendency to over- or underconfidence. The _ECE_ drops from approximately 0.09 to between 0.02 and 0.05.
Turning to the random forests, Figure 3 shows the overall reliability plot. Again, the probability estimates are well-calibrated, with an _ECE_ below 0.01. Consequently, calibration has limited effect.
However, when looking at the corresponding reliability plot for the minority class predictions only, in Figure 4, it is seen that the random forest is in fact very underconfident when predicting the minority class. Again, calibration using Venn-Abers and isotonic regression are able to correct this, leading to an _ECE_ of 0.03, compared to the original 0.15. Platt scaling again improves the calibration compared to the underlying model, but to a lesser degree than the other calibrators.
Figure 5 shows the overall reliability for XGBoost. Again, the calibration is excellent, with an _ECE_ below 0.01, so further calibration has a very minor effect. When looking at the corresponding reliability plot for the minority class predictions only, in Figure 6, it is seen that XGBoost is slightly overconfident when predicting the minority class. Even though XGBoost is fairly well calibrated on
Figure 3: Reliability plot for all instances. Random forests
Figure 4: Reliability plot for minority class predictions. Random forests
Figure 5: Reliability plot for all instances. XGBoost
minority class predictions, both Venn-Abers and isotonic regression are able to slightly improve calibration.
Table 2 summarizes the calibration results. ECE is the expected calibration error on all instances, and ECE-1 is the expected calibration error on the minority class predictions only.
All in all, the experimental results show that applying calibration is very successful when looking at the minority class predictions. Interestingly, calibration is able to correct both the underconfident random forests and the overconfident decision trees and XGBoost. Specifically, the probability estimates for the minority class predictions are much closer to the actual accuracy, i.e., they would provide a better decision support.
| **Cal** | **DT** (ECE) | **RF** (ECE) | **XGB** (ECE) | **DT** (ECE-1) | **RF** (ECE-1) | **XGB** (ECE-1) |
| --- | --- | --- | --- | --- | --- | --- |
| Uncal | .012 | .007 | .006 | .086 | .147 | .036 |
| VA | .001 | .003 | .005 | .016 | .026 | .031 |
| Platt | .002 | .003 | .002 | .045 | .044 | .039 |
| IsoReg | .001 | .001 | .002 | .022 | .024 | .032 |

Table 2: Expected Calibration Error - ECE
Figure 6: Reliability plot for minority class predictions. XGBoost
So far we have treated Venn-Abers as a calibration technique, only using the regularized midpoint of the probability interval. However, for decision-making, there are two main reasons why it also makes sense to use the probability intervals themselves: i) they provide valid probability estimates of each class label and ii) the width of the interval conveys insight into the uncertainty of the probability estimates.
When using opaque models, Venn-Abers will provide probability intervals per instance for the positive class, as can be seen in Table 3. The columns are \(y=\) true target, \(\hat{y}=\) prediction from the underlying model, \(p_{1}=\) the probability estimate for the Failure class of the underlying model, \(\tilde{y}=\) the prediction of the Venn-Abers after calibration, and finally \(p_{1}^{L}\) and \(p_{1}^{H}\) represent the probability interval for the positive class.
The examples shown in Table 3 illustrate how the interval of a Venn-Abers predictor provides both a valid probability interval and a level of certainty. The third and fourth instances are interesting examples, where the probabilities are close to 0.5, thus indicating uncertainty about the label. In addition, the intervals are wide, showing that the probability estimate in itself is uncertain.
When Venn-Abers is applied to a decision tree, the leaves will, after calibration, contain valid probability intervals, i.e., the tree can, in addition to providing explanations of the predictions, be inspected and analyzed to increase our understanding of the underlying relationship. Furthermore, by looking at the widths of the probability intervals in the leaves, we can identify different parts of feature space where the model is confident in its estimations and not. Fig. 7 shows a stylized Venn-Abers tree without the split criteria and the actual probability intervals. This tree is also pruned to a maximum depth of 5 for visualization purposes.
The Venn-Abers tree consequently conveys three things:
* The predicted class in the different leaves is given using different colors: blue and orange.
* The probability estimates are shown by colour intensity, with higher intensity representing higher probabilities.
* The confidence in the estimates is represented by the width of the leaves.
The right branch of the tree is more or less strongly dedicated to the prediction of failures, with some leaves being more certain about their probability estimates (smaller intervals and stronger color intensity), whereas other leaves
| \(y\) | \(\hat{y}\) | \(p_{1}\) | \(\tilde{y}\) | \(p_{1}^{L}\) | \(p_{1}^{H}\) |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | .92 | 1 | .97 | 1.0 |
| 1 | 1 | .78 | 1 | .85 | 1.0 |
| 0 | 0 | .47 | 1 (?) | .50 | .70 |
| 0 | 0 | .46 | ? | .41 | .70 |

Table 3: Venn-Abers predictions from a Random Forest
are less certain about the correct class (lower color intensity), or indicate the model being less confident about its probability estimate (wider leaves). Most of the central and left parts of the tree are dedicated to the prediction of no failures with different degrees of probability and confidence, with some leaves having high probability with high confidence and others exhibiting a lot of uncertainty. The white and wide leaves indicate both a probability close to 0.5 and low confidence.
Each leaf also represents a description, in the form of conjunctive conditions defined for the input attributes of the instances in that part of the feature space. Examples of rules describing the three numbered leaves in Fig. 7 are listed in Fig. 8. The interval at the end of each rule is the Venn-Abers interval for the probability. Here, the first rule is very interesting, since it identifies faults with high probability and confidence. In a similar way, Rule 3, where a vast majority of all instances end up, identifies no faults. Rule 2, finally, points out a part of feature space where both the probability estimate and the confidence are rather low. Here, it could be noted that the only difference between this rule and Rule 3 is whether the value of the feature tool wear is higher or lower than 201.5, suggesting that this could be looked into.
Figure 7: A well-calibrated and interpretable Venn tree. Colors represent the target classes with either no failure (orange) or failure (blue). The colour intensity corresponds to the estimated probability, while the width of the leaves gives the sizes of the Venn-Abers probability intervals, indicating how certain the probability estimates are.
## 5 Concluding Remarks
We have in this paper demonstrated how Venn-Abers calibration could be used in a fault detection scenario with highly imbalanced classes. The experimental results show that calibration using either Venn-Abers or the common alternatives Platt scaling and isotonic regression, will lead to significantly improved minority class predictions. Specifically, predictions from both overconfident and underconfident models are corrected.
In the second part, we argued for using Venn-Abers not only for calibration, but actually utilizing the added information present in the valid probability intervals produced. When used on top of an opaque model, each predicted label is complemented with a valid probability interval where the width indicates the confidence in the estimate. Adding Venn-Abers to a decision tree makes it straightforward to inspect and analyze the model in order to understand not only the underlying relationship, but also in which parts of feature space the model is strong and/or confident.
|
2304.08016 | Does stress drop positively or negatively correlate with rupture speed? | Rupture speed Vr and stress drop {\Delta}{\tau} are two key parameters that
can characterize earthquake source and the associated potential for ground
shaking. Despite their importance, a controversy has emerged in recent years
regarding whether there is a positive or negative correlation between
{\Delta}{\tau} and Vr. Here I attempt to reconcile the controversy by
presenting a context-based solution and a physics-based solution. The first
solution calls for attention to the specific context under which Vr and
{\Delta}{\tau} are discussed, as their meanings and estimated values can vary
between different studies. It is noted that a negative correlation between
{\Delta}{\tau} and Vr can result, at least partly, from a tradeoff effect
inherent to certain analysis method. For the second solution, it is shown that
the specific correlation between {\Delta}{\tau} and Vr can depend on the
condition of fracture energy Gc. Constant Gc often favors a positive
correlation, whereas introducing a variability of Gc can lead to a negative
correlation. More efforts are needed to improve the methods for estimating Vr
and {\Delta}{\tau}, and to explore other mechanisms that may explain the
correlation between the two parameters. | Shiqing Xu | 2023-04-17T06:47:50Z | http://arxiv.org/abs/2304.08016v1 | # Does Stress Drop Positively or Negatively Correlate With Rupture Speed?
###### Abstract
Rupture speed \(V_{r}\) and stress drop \(\Delta\tau\) are two key parameters that can characterize earthquake source and the associated potential for ground shaking. Despite their importance, a controversy has emerged in recent years regarding whether there is a positive or negative correlation between \(\Delta\tau\) and \(V_{r}\). Here I attempt to reconcile the controversy by presenting a context-based solution and a physics-based solution. The first solution calls for attention to the specific context under which \(V_{r}\) and \(\Delta\tau\) are discussed, as their meanings and estimated values can vary between different studies. It is noted that a negative correlation between \(\Delta\tau\) and \(V_{r}\) can result, at least partly, from a tradeoff effect inherent to certain analysis method. For the second solution, it is shown that the specific correlation between \(\Delta\tau\) and \(V_{r}\) can depend on the condition of fracture energy \(G_{c}\). Constant \(G_{c}\) often favors a positive correlation, whereas introducing a variability of \(G_{c}\) can lead to a negative correlation. More efforts are needed to improve the methods for estimating \(V_{r}\) and \(\Delta\tau\), and to explore other mechanisms that may explain the correlation between the two parameters.
## Plain Language Summary
Rupture speed describes how fast an earthquake rupture propagates, and stress drop dictates how much strain energy stored in the surrounding media is released by an earthquake. From an energy-based point of view, it may be intuitive to anticipate a positive correlation between stress drop and rupture speed, because larger stress drop would imply more energy supply (converted from strain energy) for rupture propagation. Meanwhile, several recent studies also reveal a negative correlation between stress drop and rupture speed. To reconcile the discrepancy, it is necessary to recognize (1) how rupture speed and stress drop are defined and estimated, and (2) the importance of both energy supply and consumption. Especially, it is shown that reducing the energy consumption required for rupture propagation, known as fracture energy, can lead to a negative correlation between stress drop and rupture speed. Therefore, the detailed correlation between stress drop and rupture speed can depend on whether fracture energy remains invariant. Other mechanisms may also produce a positive or negative correlation between stress drop and rupture speed, and deserve to be explored in the future.
## 1 Introduction
Since the seminal work of Griffith (1921), it is now generally accepted that fracture of brittle materials is described by an energy balance criterion (Broberg, 1999; Freund, 1990). Significant efforts have been made to extend the key concepts in material science focusing on fracture to earthquake science focusing on frictional slip (Andrews, 1976; Ben-Zion, 2001; Burridge, 1973; Das, 2003; Madariaga, 2012; Rice, 1980), after which energy partitioning can be discussed during an earthquake (Kanamori & Rivera, 2006; Rivera & Kanamori, 2005). Recent laboratory experiments, theoretical analyses, and numerical simulations further support the validity of fracture mechanics for describing the behaviors of both slow and fast earthquakes
(Svetlizky & Fineberg, 2014; Kammer et al., 2018; Reches & Fineberg, 2023; Weng & Ampuero, 2022).
One fundamental question in earthquake science is how fast an earthquake rupture can propagate, since this will affect the understanding of earthquake physics and the assessment of seismic hazard. According to fracture mechanics, rupture speed is controlled by the balance between energy release rate and fracture energy (Freund, 1990). The former tells how much available energy is released per unit rupture length, which is a function of stress drop and rupture speed; while the latter describes how much energy must be dissipated in order to advance the rupture front. Following this idea, a variety of rupture phase diagrams have been constructed to connect rupture speed with other source parameters, such as stress drop or a function of it (Andrews, 1976; Liu et al., 2014; Madariaga & Olsen, 2000; Passelegue et al., 2020; Tremborg et al., 2011; Wei et al., 2021; Xu et al., 2015).
While the importance of rupture speed and stress drop has been recognized, how the two parameters correlate with one another is still a subject of debate. Some studies show a positive correlation based on experimental observations (Chen et al., 2021; Passelegue et al., 2013; Svetlizky et al., 2017; Xu et al., 2018), whereas others report a negative correlation based on source inversion of natural earthquakes (e.g., Chounet et al., 2018). In this commentary, I present two solutions for reconciling the discrepancy. The first solution calls for attention to the context under which rupture speed and stress drop are discussed, as their meanings can vary between different studies. The second solution is more physics based and will invoke fracture energy to tune the correlation between stress drop and rupture speed.
## 2 Different contexts for discussing rupture speed and stress drop
While the definitions of rupture speed and stress drop are clear in the literature (Bizzarri, 2011; Kanamori & Rivera, 2006; Noda et al., 2013), to estimate their values from actual observations requires experience and sometimes can be challenging.
In the laboratory, rupture speed \(V_{r}\) is usually estimated by two approaches: (1) counting the travel time of rupture front over some propagation distance, and (2) matching the near-field waveform against a \(V_{r}\)-dependent reference solution. Although the two approaches can often yield similar results, some issues deserve to be mentioned. First, the approach of waveform matching has a low and high sensitivity to \(V_{r}\) when \(V_{r}\) is slow and fast, respectively, owing to the pronounced Lorentz effect only when \(V_{r}\) approaches the limiting speed (Svetlizky & Fineberg, 2014; Svetlizky
et al., 2020). Consequently, one study recommends combining rupture front trajectory and detailed waveform pattern for a robust estimation of rupture properties (Xu et al., 2019). Second, when rupture process is not smooth (Xu et al., 2023) or when rupture evolution has not become spontaneous, e.g., during rupture nucleation (Guerin-Marthe et al., 2019), then the two approaches may not converge. The estimation of stress drop \(\Delta\tau\) in the laboratory also requires experience and caution. Here I don't consider the cases equipped with only macroscopic observations (Baumberger & Caroli, 2006; Leeman et al., 2016; Nielsen et al., 2016), because rupture propagation is not explicitly involved. For other cases where rupture propagation can be resolved, \(\Delta\tau\) is typically estimated during the passage of rupture front (Bayart et al., 2018; Xu et al., 2018), known as dynamic stress drop. The purpose is to minimize undesired effects, such as fault re-rupturing, healing, reflected waves from sample boundaries, and interaction with external apparatus. That said, caution must be taken to study the static stress drop finalized after rupture termination (Ke et al., 2018; Passelegue et al., 2016) or the long-tailed slip-weakening process (Brener & Bouchbinder, 2021; Paglialunga et al., 2022), since undesired effects can be involved if the selected time window is long. Moreover, spatial heterogeneity (Bayart et al., 2018), off-fault measurement (Xu et al., 2019), and intermittent rupture process (Rubino et al., 2022; Xu et al., 2023) can also complicate the estimation of stress drop or slip-weakening curve.
Despite the aforementioned various issues, as long as careful calibrations are made, rupture speed and stress drop can be estimated directly and accurately in the laboratory, whose values are sometimes cross-validated by fracture mechanics (Bayart et al., 2018; Kammer et al., 2018; Svetlizky et al., 2017; Xu et al., 2019).
Except for some cases equipped with near-field stations (Fukuyama & Mikumo, 2007; Fukuyama & Suzuki, 2016), the source properties of natural earthquakes usually cannot be directly measured; instead, they are inferred from remote observations in the context of certain models, and thus are subject to attenuation and model dependence. Although dynamic inversion has been applied to a few cases (Madariaga & Ruiz, 2016), common practice still assumes a kinematic model with prescribed rupture process to invert for earthquake source properties. For large earthquakes, finite fault inversion can be performed to discretize the source region into several patches. Depending on data quality and model parameterization, either a constant (Hartzell & Heaton, 1983; Kikuchi & Kanamori, 1991; Ye et al., 2016) or a variable rupture speed (Ji et al., 2002; Minson et al., 2013) can be inverted. With the inverted slip history, a dislocation model can
be further applied to obtain the evolution of stress (Bouchon, 1997; Ide & Takeo, 1997; Tinti et al., 2005) and the final static stress drop (Okada, 1992), either on each patch or over the entire region (Noda et al., 2013; Shao et al., 2012). For small-to-moderate earthquakes, simple models such as the circular crack or rectangular fault models are preferred (Brune, 1970; Haskell, 1964; Kaneko & Shearer, 2015; Madariaga, 1976; Sato & Hirasawa, 1973), although only some of them explicitly consider rupture propagation (Udias et al., 2014). Another useful approach is to analyze the second moment tensor (McGuire, 2004; Meng et al., 2020). Several studies have already applied simple models to global (Allmann & Shearer, 2009; Chounet et al., 2018) or regional earthquakes (Abercrombie & Rice, 2005; Abercrombie et al., 2017; Yoshida & Kanamori, 2023). In these studies, rupture speed is either assumed or inferred from pre-defined misfit functions, while stress drop is estimated from the corner frequency of source spectrum and the scaling relation related to seismic moment.
Three issues deserve to be mentioned for the kinematic inversion of natural earthquakes. First, the inverted rupture speed does not necessarily reflect the true rupture speed, because there is no cross-validation against fracture mechanics. Second, for most cases, the estimated stress drop depends on the entire rupture process, including fault rupturing, re-rupturing and healing, and thus can deviate from the dynamic stress drop solely produced by one rupture front (Madariaga, 1976; Kaneko & Shearer, 2015; Ke et al., 2022; Song & Dalguer, 2017). Third, there is a tradeoff between stress drop \(\Delta\tau\) and rupture speed \(V_{r}\) through the scaling relation \(\Delta\tau\cdot(V_{r})^{3}\propto M_{0}\), where \(M_{0}\) denotes seismic moment (Kanamori & Rivera, 2004). This can easily render a negative correlation between \(\Delta\tau\) and \(V_{r}\)(Ye et al., 2016).
In summary, the context for discussing rupture speed and stress drop can vary between laboratory and natural earthquakes (Table 1), or between studies on the same earthquake but with different models. Consequently, it is quite possible that the derived correlation between stress drop and rupture speed can also vary. To avoid apple vs. orange comparison, it is better to stick to the same context, and mind the assumptions and limitations of the employed model.
## 3 Physical mechanisms explaining the correlation between stress drop and rupture speed
In this section, I focus on the same context(s) where rupture speed and stress drop can be estimated in a consistent way. Although tradeoff effect and estimation error may still exist, the main purpose here is to seek physical mechanisms for understanding the correlation between stress drop and rupture speed.
In the laboratory, several studies have revealed a positive correlation between stress drop \(\Delta\tau\) and rupture speed \(V_{r}\)(Chen et al., 2021; Okubo & Dieterich, 1984; Passelegue et al., 2013, 2016; Svetlizky et al., 2017; Xu et al., 2018). This can be understood by the balance between dynamic energy release rate \(G_{d}(V_{r},\Delta\tau)=g(V_{r})\cdot G_{s}(\Delta\tau)\) and fracture energy \(G_{c}\)(Freund, 1990). Here, \(g(V_{r})\) is a monotonically decreasing function of \(V_{r}\), while \(G_{s}(\Delta\tau)\) is a functional of \(\Delta\tau\) and known as static energy release rate. Assuming \(G_{c}\) is constant, larger \(\Delta\tau\) in general would indicate larger \(G_{s}(\Delta\tau)\). To still hold a balance between \(G_{d}(V_{r},\Delta\tau)\) and \(G_{c}\), the function \(g(V_{r})\) must decrease, which then implies an increase in \(V_{r}\). While fracture energy \(G_{c}\) does appear constant in some cases once the conditions for loading and fault interface are set (Bayart et al., 2016), there is no particular reason to believe that \(G_{c}\) must always remain constant. Indeed, \(G_{c}\) can vary by a factor of two during the same sequence of earthquakes (Xu et al., 2019), or by two orders of magnitude between the primary and secondary rupture fronts (Kammer & McLaskey, 2019). Taken to the extreme,
| | Laboratory earthquakes (with rupture propagation) | Natural earthquakes (kinematic inversion) |
| --- | --- | --- |
| Near-field observation | yes | sometimes yes but mostly no |
| \(V_{r}\) | directly measured or inferred | inferred or assumed; constant or variable |
| \(\Delta\tau\) | directly measured; mostly dynamic stress drop | indirectly estimated; mostly static stress drop |
| Independent estimations of \(V_{r}\) and \(\Delta\tau\) | yes | usually no |
| \(V_{r}\) and \(\Delta\tau\) cross-validated by fracture mechanics | sometimes yes | no |

Table 1: Different contexts for estimating rupture speed \(V_{r}\) and stress drop \(\Delta\tau\)
one study shows that some secondary slip fronts, corresponding to one type of interface waves, can propagate rapidly with zero \(G_{c}\) and zero \(\Delta\tau\)(Xu et al., 2019b). Apparently, if one relaxes the assumption of constant \(G_{c}\), then there will be room for a negative correlation between \(\Delta\tau\) and \(V_{r}\). This is further demonstrated by a recent experimental work showing that a smooth fault tends to produce subshear ruptures with larger \(\Delta\tau\) and longer recurrence interval, whereas a rough fault can host supershear ruptures with smaller \(\Delta\tau\) and shorter recurrence interval (Xu et al., 2023). It is conceived that \(G_{c}\), which scales with recurrence interval, is smaller on the rough fault.
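As a concrete (and purely illustrative) reading of the energy balance above, consider a mode-III crack of fixed half-length \(L\) with uniform stress drop \(\Delta\tau\) and constant fracture energy \(G_{c}\); the expressions below are standard textbook results (Freund, 1990), quoted up to \(O(1)\) geometric factors, and the algebra is ours rather than taken from any of the cited studies. In this setting,
\[G_{d}=g(V_{r})\cdot G_{s}(\Delta\tau)=G_{c},\qquad g(V_{r})=\sqrt{\frac{1-V_{r}/c_{s}}{1+V_{r}/c_{s}}},\qquad G_{s}=\frac{\pi\,\Delta\tau^{2}L}{2\mu},\]
so that, writing \(\kappa=G_{c}/G_{s}=2\mu G_{c}/(\pi\,\Delta\tau^{2}L)\), the balance gives
\[\frac{V_{r}}{c_{s}}=\frac{1-\kappa^{2}}{1+\kappa^{2}}.\]
For constant \(G_{c}\), increasing \(\Delta\tau\) decreases \(\kappa\) and hence increases \(V_{r}\) (a positive correlation), whereas allowing \(G_{c}\) to vary independently of \(\Delta\tau\) can weaken or even reverse this trend.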
The role of fracture energy may also explain the behaviors of natural earthquakes. Episodic Tremor and Slip (ETS), one form of slow earthquakes (Peng & Gomberg, 2010), has been observed to advance slowly at a speed of \(\sim 10\) km/day, along the strike of the subducting plate in Cascadia (Houston et al., 2011) and southwest Japan (Obara et al., 2012). Occasionally, secondary slip fronts can propagate backward along the strike at a speed of \(\sim 100\) km/day, or back and forth along the dip at a speed of \(\sim 1000\) km/day (Figure 1a) (Ghosh et al., 2010; Houston et al., 2011; Luo & Ampuero, 2017; Nakamoto et al., 2021; Obara et al., 2012). Theoretical analyses suggest that, for a secondary slip front to attain a faster propagation speed \(V_{r}\) than the main one, either slip rate \(\nu_{\rm slip}\)
Figure 1: Schematic diagrams showing secondary slip fronts in a region that has already been ruptured by a main slip front. (a) In the transition zone of a subducting plate, along-strike tremor reversal and along-dip tremor streak can emerge in the wake of a main ETS front. Figure drawn based on figures 3 and 6 in Luo & Ampuero (2017). (b) Under the condition of constant friction (implying zero fracture energy \(G_{c}\)) behind a primary rupture front, secondary slip fronts can propagate at the Rayleigh, S, or P wave speed (denoted by \(C_{\rm R}\), \(C_{\rm S}\), and \(C_{\rm P}\), respectively). Figure drawn based on figure 1 in Dunham et al. (2003).
needs to be increased or peak-to-residual strength drop \(\Delta\tau_{p-r}\) needs to be reduced, according to the scaling relation: \(V_{r}=\alpha\cdot v_{\text{slip}}\cdot\frac{\mu}{\Delta\tau_{p-r}}\), where \(\alpha\) is a geometric factor of order 1 and \(\mu\) is the shear modulus (Ampuero & Rubin, 2008; Rubin & Armbruster, 2013). One possibility for achieving higher \(v_{\text{slip}}\) and/or lower \(\Delta\tau_{p-r}\) is to reduce fracture energy \(G_{c}\)(Hawthorne et al., 2016). This is mechanically feasible, because \(G_{c}\) must be weakened by the main slip front and can only be partially recovered for a short elapse time. Similar feature has also been reported for numerically simulated fast earthquakes, where polarized secondary slip fronts can propagate at around the Rayleigh or body wave speed (\(\sim\) 3-6 km/s for typical crustal rocks) behind a main slip front (Figure 1b) (Dunham et al., 2003; Dunham, 2005). In this case, \(G_{c}\) remains zero after the passage of the main slip front, so that those secondary slip fronts are accompanied with zero stress drop and zero strength drop, despite their fast propagation speeds.
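For orientation only, plugging illustrative values into this scaling relation (our own choices, not estimates from the cited studies): with \(\alpha=1\), \(\mu=30\) GPa, \(v_{\text{slip}}=10^{-7}\) m/s and \(\Delta\tau_{p-r}=30\) kPa,
\[V_{r}=\alpha\cdot v_{\text{slip}}\cdot\frac{\mu}{\Delta\tau_{p-r}}=10^{-7}\,\text{m/s}\times\frac{3\times 10^{10}\,\text{Pa}}{3\times 10^{4}\,\text{Pa}}=0.1\ \text{m/s}\approx 9\ \text{km/day},\]
of the order of the main ETS front speed; lowering \(\Delta\tau_{p-r}\) (or raising \(v_{\text{slip}}\)) by a factor of ten brings this to \(\sim\!90\) km/day, the order of the along-strike tremor reversals.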
Last but not the least, there is another way for explaining a negative correlation between stress drop \(\Delta\tau\) and rupture speed \(V_{r}\). To do so, one needs to extrapolate the classical concept of fracture energy to any dissipative processes (e.g., off-fault damage) that can effectively damp the acceleration of the rupture front (Andrews, 2005; Ben-Zion & Dresen, 2022; Cocco et al., 2023; Gabriel et al., 2013; Nielsen, 2017; Templeton, 2009). Let's consider a scenario where the entire on- and off-fault region is on the verge of failure and then a dynamic rupture is activated along the fault. On one hand, larger (or smaller) \(\Delta\tau\) tends to favor faster (or slower) \(V_{r}\), as already explained earlier. On the other hand, larger (or smaller) \(\Delta\tau\) also tends to induce more (or less) extensive off-fault damage, which in turn can quench (or promote) the further acceleration of the rupture front. A delicate balance between the two competing effects could cause a negative correlation between \(\Delta\tau\) and \(V_{r}\). This is the physical mechanism invoked by Chounet et al. (2018) for understanding their inferred results on global earthquakes. Similarly, a non-monotonic friction law, characterized by rate-weakening at low slip rate but rate-strengthening at high slip rate (Bar-Sinai et al., 2014; Reches & Lockner, 2010; Rubin, 2011; Shibazaki & Iio, 2003), may also cause a negative correlation between \(\Delta\tau\) and \(V_{r}\).
## 4 Conclusions
Contrasting views on the correlation between stress drop \(\Delta\tau\) and rupture speed \(V_{r}\) have been reported in recent years. Two solutions are presented to reconcile the discrepancy: (1) different contexts for discussing \(V_{r}\) and \(\Delta\tau\), and (2) some physical mechanisms. In (1), \(V_{r}\) and \(\Delta\tau\) can have
different meanings in different studies, which may hinder a direct comparison on the derived correlations. Moreover, a negative correlation may reflect a tradeoff effect inherent to the analysis method. In (2), the specific correlation between \(\Delta\tau\) and \(V_{r}\) can depend on fracture energy \(G_{c}\). Constant \(G_{c}\) often favors a positive correlation, whereas variable \(G_{c}\) can lead to a negative correlation. It is hoped that this commentary can help clarify the estimations of \(V_{r}\) and \(\Delta\tau\) under different contexts, and can stimulate future studies to investigate the correlation between the two parameters.
## Acknowledgments
I thank Eiichi Fukuyama and Futoshi Yamashita for guiding me to the field of laboratory earthquakes. I also thank Lingling Ye for fruitful discussion on the observation of natural earthquakes. This work was supported by National Key R&D Program of China 2021YFC3000700 and NSFC grant 42074048.
## Data Availability Statement
No data were used in this study.
|
2301.02483 | On the ergodic theory of the real Rel foliation | Let $\mathcal{H}$ be a stratum of translation surfaces with at least two
singularities, let $m_{\mathcal{H}}$ denote the Masur-Veech measure on
$\mathcal{H}$, and let $Z_0$ be a flow on $(\mathcal{H}, m_{\mathcal{H}})$
obtained by integrating a Rel vector field. We prove that $Z_0$ is mixing of
all orders, and in particular is ergodic. We also characterize the ergodicity
of flows defined by Rel vector field, for more general spaces $(\mathcal{L},
m_{\mathcal{L}})$, where $\mathcal{L} \subset \mathcal{H}$ is an orbit-closure
for the action of $G = \mathrm{SL}_2(\mathbb{R})$ (i.e., an affine invariant
subvariety) and $m_{\mathcal{L}}$ is the natural measure. Our results are
conditional on a forthcoming measure classification result of Brown, Eskin,
Filip and Rodriguez-Hertz. We also prove that the entropy of the action of $Z_0$
on $(\mathcal{L}, m_{\mathcal{L}})$ is zero. | Jon Chaika, Barak Weiss | 2023-01-06T12:42:10Z | http://arxiv.org/abs/2301.02483v2 | # On the ergodic theory of the real Rel foliation
###### Abstract.
Let \(\mathcal{H}\) be a stratum of translation surfaces with at least two singularities, let \(m_{\mathcal{H}}\) denote the Masur-Veech measure on \(\mathcal{H}\), and let \(Z_{0}\) be a flow on \((\mathcal{H},m_{\mathcal{H}})\) obtained by integrating a Rel vector field. We prove that \(Z_{0}\) is mixing of all orders, and in particular is ergodic. We also characterize the ergodicity of flows defined by Rel vector fields, for more general spaces \((\mathcal{L},m_{\mathcal{L}})\), where \(\mathcal{L}\subset\mathcal{H}\) is an orbit-closure for the action of \(G=\mathrm{SL}_{2}(\mathbb{R})\) (i.e., an affine invariant subvariety) and \(m_{\mathcal{L}}\) is the natural measure. These results are conditional on a forthcoming measure classification result of Brown, Eskin, Filip and Rodriguez-Hertz. We also prove that the entropy of \(Z_{0}\) with respect to any of the measures \(m_{\mathcal{L}}\) is zero.
## 1. Introduction
Let \(\mathcal{H}\) be a stratum of area-one translation surfaces and let \(G\stackrel{{\mathrm{def}}}{{=}}\mathrm{SL}_{2}(\mathbb{R})\). There is a \(G\)-invariant finite measure \(m_{\mathcal{H}}\) on \(\mathcal{H}\) known as the _Masur-Veech measure_, and the dynamics of the \(G\)-action on \((\mathcal{H},m_{\mathcal{H}})\) have been intensively studied in recent years and are intimately connected to many problems in geometry and ergodic theory, see e.g. [MaTa, Zo]. Suppose that surfaces in \(\mathcal{H}\) have \(k\) singularities, where \(k\geq 2\). Then there is a \(k-1\)-dimensional foliation of \(\mathcal{H}\), known as the _real Rel foliation_. A precise definition of the foliation and some of its properties will be given below in SS2.2. Loosely speaking, two surfaces are in the same real Rel leaf if one can be obtained from the other by a surgery in which singular points are moved with respect to each other in the horizontal direction, without otherwise changing the geometry of the surface. A natural question, which we address here, is the ergodic properties of this foliation.
As we review in SS2.2, by labeling the singularities and removing a set of leaves of measure zero, we can think of the real Rel leaves as being the orbits of an action of a group \(Z\) on \(\mathcal{H}\), where \(Z\cong\mathbb{R}^{k-1}\), and the restriction of this action to any one-dimensional subgroup of \(Z\) defines a flow. Our first main result is the following.
**Theorem 1.1**.: _Let \(\mathcal{H}\) be a connected component of a stratum \(\mathcal{H}(a_{1},\dots,a_{k})\) with all \(a_{i}>0\) (i.e., no marked points). Let \(m_{\mathcal{H}}\) be the Masur-Veech measure on \(\mathcal{H}\), let \(Z\cong\mathbb{R}^{k-1}\) be the corresponding action given by translation along the leaves of the real Rel foliation, and let \(Z_{0}\subset Z\) be any one-dimensional connected subgroup of \(Z\). Then the \(Z_{0}\)-flow on \((\mathcal{H},m_{\mathcal{H}})\) is mixing of all orders (and in particular, ergodic)._
The definition of mixing of all orders is given in SS3.3. For purposes of this introduction it is enough to note that it implies ergodicity of any nontrivial element. Note that when \(\mathcal{H}\) has marked points, there will be subgroups \(Z_{0}\) which only move the marked points on surfaces without otherwise changing the geometry, and the
conclusion of Theorem 1.1 will not hold. This is the only obstruction to generalizing our results to strata with marked points, see Theorem 8.1.
The proof of Theorem 1.1, as well as most of the other results of this paper, relies crucially on measure-rigidity results of Eskin, Mirzakhani and Mohammadi [EM, EMM], and further forthcoming work extending these results, which we will describe in SS5.
Theorem 1.1 improves on the results of several authors. In those results, ergodicity for the _full Rel foliation_ was studied. The full Rel foliation (also referred to as the 'kernel foliation', 'isoperiodic foliation', or 'absolute period foliation') will also be defined in SS2.2. Its leaves are of dimension \(2(k-1)\), that is, twice the dimension of the real Rel leaves. Loosely speaking, two surfaces are in the same leaf for this foliation if one can be obtained from the other by moving the singularities (without otherwise affecting the geometry of the surface). That is, we relax the hypothesis that points can only be moved horizontally. The first ergodicity results for the full Rel foliation were obtained by McMullen [McM], who proved ergodicity in the two strata \(\mathcal{H}(1,1)\) and \(\mathcal{H}(1,1,1,1)\). Subsequently, Calsamiglia, Deroin and Francaviglia [CDF] proved ergodicity in all principal strata, and Hamenstadt [Ham] reproved their result by a simpler argument. Recently, Winsor [Wi1] proved ergodicity for most of the additional strata, and in [Wi2], showed that there are dense orbits for the \(Z_{0}\)-flow, for any \(Z_{0}\) as in Theorem 1.1. Note that ergodicity for a foliation is implied by ergodicity for any of its subfoliations, and that ergodicity implies the existence of dense leaves, and thus Theorem 1.1 generalizes all of these results. Also note that full Rel is a foliation which is not given by a group action, and the notions of mixing and multiple mixing do not make sense in this case.
The papers [McM, CDF] go beyond ergodicity and obtain classifications of full Rel closed leaves and leaf-closures in their respective settings. We suspect that there is not a reasonable classification of real Rel leaf-closures, indeed it is already known (see [HW]) that there are real Rel trajectories that leave every compact set never to return.
The strata \(\mathcal{H}\) support other interesting measures for which similar questions could be asked. Namely, by work of Eskin, Mirzakhani and Mohammadi [EM, EMM], for any \(q\in\mathcal{H}\), the orbit-closure \(\mathcal{L}\stackrel{{\mathrm{def}}}{{=}}\overline{Gq}\) is the support of a unique smooth \(G\)-invariant measure which we denote by \(m_{\mathcal{L}}\). Let \(Z_{\mathcal{L}}\) be the subgroup of \(Z\) leaving \(\mathcal{L}\) invariant. Then \(Z_{\mathcal{L}}\) also preserves \(m_{\mathcal{L}}\) and for many choices of \(\mathcal{L}\), we have \(\dim Z_{\mathcal{L}}>0\). In these cases, for any closed connected \(Z_{1}\subset Z_{\mathcal{L}}\), there is a _complexification_\(\mathfrak{R}_{1}\), which gives a foliation of \(\mathcal{L}\) whose leaves \(\mathfrak{R}_{1}(q)\) have dimension \(2\dim Z_{1}\) (see SS2.2). The leaves \(\mathfrak{R}_{1}(q)\) have a natural translation structure, and this induces a natural locally finite translation-invariant measure on each leaf. With this terminology we can now state the main result of this paper:
**Theorem 1.2**.: _Let \(\mathcal{L}\) be a \(G\)-orbit closure, and let \(m_{\mathcal{L}},\,Z_{\mathcal{L}},\mathfrak{R}_{\mathcal{L}}\) be as above, where \(\dim Z_{\mathcal{L}}>0\). Let \(z_{0}\) be a nontrivial element of \(Z_{\mathcal{L}}\) and let \(Z_{0}=\operatorname{span}_{\mathbb{R}}(z_{0}).\) Then either_
1. _The action of_ \(Z_{0}\) _on_ \((\mathcal{L},m_{\mathcal{L}})\) _is mixing of all orders (and in particular,_ \(z_{0}\) _acts ergodically); or_
2. _there is an intermediate closed connected subgroup_ \(Z_{1}\) _so that_ \(Z_{0}\subset Z_{1}\subset Z_{\mathcal{L}}\)_, and the complexification_ \(\mathfrak{R}_{1}\) _of_ \(Z_{1}\) _satisfies:_
* _for every_ \(q\in\mathcal{L}\)_, the leaf_ \(\mathfrak{R}_{1}(q)\) _is closed, and_
* _for_ \(m_{\mathcal{L}}\)_-a.e._ \(q\)_,_ \(\mathfrak{R}_{1}(q)\) _is of finite volume with respect to its translation-invariant measure, and_ \(\overline{Z_{0}q}=\mathfrak{R}_{1}(q)\)_._
Thus, in order to establish ergodicity of real Rel subfoliations on \(G\)-orbit-closures, it is enough to rule out Case (2). We will prove Proposition 7.1, which provides a simple way to achieve this, using cylinder circumferences of surfaces in \(\mathcal{L}\). Theorems 1.1 and 8.1 are deduced from Theorem 1.2 using Proposition 7.1.
The following statement is an immediate consequence of Theorem 1.2.
**Corollary 1.3**.: _Let \(\mathcal{L}\) be a \(G\)-orbit-closure, let \(m_{\mathcal{L}},Z_{\mathcal{L}}\) be as above, and let \(Z_{1}\subset Z_{\mathcal{L}}\) be one-dimensional. Suppose that the foliation induced by the complexification \(\mathfrak{R}_{1}\) has a dense leaf. Then the \(Z_{1}\)-flow on \((\mathcal{L},m_{\mathcal{L}})\) is mixing of all orders (and in particular, ergodic)._
The density of certain leaves of the full Rel foliation in \(G\)-orbit-closures of rank one was obtained by Ygouf in [Y]. Using these results we obtain ergodicity of one-dimensional subgroups of the real Rel foliation in many cases. For instance, using [Y, Thm. A & Prop. 5.1] we have:
**Corollary 1.4**.: _The real Rel foliation is mixing of all orders (and in particular, ergodic) in any eigenform locus in \(\mathcal{H}(1,1)\) with a non-square discriminant._
Recall that in [Wi2] Winsor proved the existence of dense real Rel leaves, and dense leaves for one-dimensional flows \(Z_{0}\), in all strata. Using these results in conjunction with Corollary 1.3, one can obtain an alternative proof of Theorem 1.1 that avoids the use of Proposition 7.1.
We also consider the entropy of real-Rel flows, and show the following:
**Theorem 1.5**.: _Let \(\mathcal{L},m_{\mathcal{L}},Z_{\mathcal{L}},z_{0}\) be as in the statement of Theorem 1.2. Then the entropy of the action of \(\operatorname{Rel}_{z_{0}}\) on the measure space \((\mathcal{L},m_{\mathcal{L}})\) is zero._
Using the geodesic flow one easily shows that \(\operatorname{Rel}_{z_{0}}\) is conjugate to \(\operatorname{Rel}_{tz_{0}}\) for any \(t>0\), and from this it follows that the entropy is either zero or infinite. However, the Rel flow is not continuous, and we could not find a simple way to rule out infinite entropy. Our proof gives a more general result -- see Theorem 9.1. However, the argument fails for \(Z_{0}\)-invariant measures for which the backward time geodesic flow diverges almost surely, and thus we do not settle the question of whether the topological entropy of real Rel flows is zero.
### Outline
In SS2 we give background material on translation surfaces, their moduli spaces, and the Rel foliation. In SS3 we use standard facts about joinings to build a measure \(\theta\) on the product of two strata (see (3.1)), depending on a real Rel flow \(Z_{0}\), such that if \(\theta\) is the product measure, then \(Z_{0}\) is ergodic. In SS3.3 we discuss a technique of Mozes that makes it possible to upgrade ergodicity to mixing of all orders. In SS4 we show that \(\theta\) is ergodic for the diagonal action of the upper triangular group \(P\subset G\) on the product of two strata. In SS5 we state a far-reaching measure rigidity result of Brown, Eskin, Filip and Rodriguez-Hertz for \(P\)-ergodic measures on products of two strata. In SS6 we use this measure rigidity result, as well as prior results for the action on one stratum due to Wright, in order to characterize the situations in which \(\theta\) is not a product measure, thus proving
Theorem 1.2. Proposition 7.1 is proved in SS7, and we check its conditions to deduce Theorems 1.1 and 8.1 in SS8. We prove Theorem 1.5 in SS9.
### Acknowledgements
We are very grateful to Alex Eskin for crucial contributions to this project. We also thank Simion Filip, Curt McMullen and Alex Wright for useful discussions, and acknowledge the support of ISF grant 2919/19, BSF grant 2016256, NSFC-ISF grant 3739/21, a Warnock chair, a Simons Fellowship, and NSF grants DMS-1452762 and DMS-2055354.
## 2. Preliminaries about translation surfaces
### Strata, period coordinates
In this section we collect standard facts about translation surfaces, and fix our notation. For more details, we refer the reader to [20, Wr1, BSW]. Below we briefly summarize the treatment in [2].
Let \(S\) be a compact oriented surface of genus \(g\), \(\Sigma=\{\xi_{1},\ldots,\xi_{k}\}\subset S\) a finite set, \(a_{1},\ldots,a_{k}\) non-negative integers with \(\sum a_{i}=2g-2\), and \(\mathcal{H}=\mathcal{H}(a_{1},\ldots,a_{k})\) the corresponding stratum of unit-area translation surfaces. We let \(\mathcal{H}_{\rm m}=\mathcal{H}_{\rm m}(a_{1},\ldots,a_{k})\) denote the stratum of unit-area marked translation surfaces and \(\pi:\mathcal{H}_{\rm m}\to\mathcal{H}\) the forgetful mapping. Our convention is that singular points are labeled, or equivalently, \(\mathcal{H}=\mathcal{H}_{\rm m}/\operatorname{Mod}(S,\Sigma)\), where \(\operatorname{Mod}(S,\Sigma)\) is the group of isotopy classes of orientation-preserving homeomorphisms of \(S\) fixing \(\Sigma\), up to an isotopy fixing \(\Sigma\).
There is an \(\mathbb{R}_{>0}\)-action that dilates the atlas of a translation surface by \(c\in\mathbb{R}_{>0}\). For a stratum \(\mathcal{H}\) and marked stratum \(\mathcal{H}_{\rm m}\), we denote the collection of surfaces of arbitrary area, obtained by applying such dilations, by \(\bar{\mathcal{H}},\,\bar{\mathcal{H}}_{\rm m}\). The marked stratum \(\bar{\mathcal{H}}_{\rm m}\) is a linear manifold modeled on the vector space \(H^{1}(S,\Sigma;\mathbb{R}^{2})\). It has a developing map \(\operatorname{dev}:\bar{\mathcal{H}}_{\rm m}\to H^{1}(S,\Sigma;\mathbb{R}^{2})\), sending an element of \(\bar{\mathcal{H}}_{\rm m}\) represented by \(f:S\to M\), where \(M\) is a translation surface, to \(f^{*}(\operatorname{hol}(M,\cdot))\), where for an oriented path \(\gamma\) in \(M\) which is either closed or has endpoints at singularities, \(\operatorname{hol}(M,\gamma)=\begin{pmatrix}\int_{\gamma}dx\\ \int_{\gamma}dy\end{pmatrix}\), and \(dx,dy\) are the \(1\)-forms on \(M\) inherited from the plane. Furthermore, there is an open cover \(\{\mathcal{U}_{\tau}\}\) of \(\mathcal{H}_{\rm m}\), indexed by triangulations \(\tau\) of \(S\) with triangles whose vertices are in \(\Sigma\), and maps \(\operatorname{dev}|_{\mathcal{U}_{\tau}}:\mathcal{U}_{\tau}\to H^{1}(S,\Sigma ;\mathbb{R}^{2})\), which are homeomorphisms onto their image, and such that the transition maps on overlaps for this atlas are restrictions of linear automorphisms of \(H^{1}(S,\Sigma;\mathbb{R}^{2})\).
This atlas of charts \(\{(\mathcal{U}_{\tau},\operatorname{dev}|_{\mathcal{U}_{\tau}})\}\) is known as _period coordinates_. Since each \(\mathcal{U}_{\tau}\) is identified via period coordinates with an open subset of the vector space \(H^{1}(S,\Sigma;\mathbb{R}^{2})\), the tangent space at each \(\mathcal{U}_{\tau}\) is identified canonically with \(H^{1}(S,\Sigma;\mathbb{R}^{2})\), and thus the tangent bundle of \(\mathcal{H}_{\rm m}\) is locally constant. A sub-bundle of the tangent bundle is called _locally constant_ or _flat_ if it is constant in the charts afforded by period coordinates. The \(\operatorname{Mod}(S,\Sigma)\)-action on \(\mathcal{H}_{\rm m}\) is properly discontinuous, and hence \(\mathcal{H}\) is an orbifold, and the map \(\pi:\mathcal{H}_{\rm m}\to\mathcal{H}\) is an orbifold covering map.
The group \(G\) acts on translation surfaces in \(\mathcal{H}\) by modifying planar charts, and acts on \(H^{1}(S,\Sigma;\mathbb{R}^{2})\) via its action on \(\mathbb{R}^{2}\), thus inducing a \(G\)-action on \(\mathcal{H}_{\rm m}\). The \(G\)-action commutes with the \(\operatorname{Mod}(S,\Sigma)\)-action, and thus the map \(\pi\) is \(G\)-equivariant for these actions. The \(G\)-action on \(\mathcal{H}_{\rm m}\) is free, since \(\operatorname{dev}(g\mathbf{q})\neq\operatorname{dev}(\mathbf{q})\) for any
nontrivial \(g\in G\). We will use the following subgroups of \(G\):
\[g_{t}=\begin{pmatrix}e^{t}&0\\ 0&e^{-t}\end{pmatrix},\quad u_{s}= \begin{pmatrix}1&s\\ 0&1\end{pmatrix}\] \[U=\{u_{s}:s\in\mathbb{R}\},\quad P= \left\{\begin{pmatrix}a&b\\ 0&a^{-1}\end{pmatrix}:a>0,\ b\in\mathbb{R}\right\}.\]
### Rel foliation and real Rel foliation
We define and list some important properties of the Rel foliation, the real Rel foliation, and the corresponding action on the space of surfaces without horizontal saddle connections. See [MW, BSW] for more details. See also [Zo, McM], and references therein.
We have a canonical splitting \(\mathbb{R}^{2}=\mathbb{R}\oplus\mathbb{R}\) and we write \(\mathbb{R}^{2}=\mathbb{R}_{\rm x}\oplus\mathbb{R}_{\rm y}\) to distinguish the two summands in this splitting. There is a corresponding splitting
\[H^{1}(S,\Sigma;\mathbb{R}^{2})=H^{1}(S,\Sigma;\mathbb{R}_{\rm x})\oplus H^{1} (S,\Sigma;\mathbb{R}_{\rm y}). \tag{2.1}\]
We also have a canonical restriction map \(\operatorname{Res}:H^{1}(S,\Sigma;\mathbb{R}^{2})\to H^{1}(S;\mathbb{R}^{2})\) (given by restricting a cochain to absolute periods). Since \(\operatorname{Res}\) is topologically defined, its kernel \(\ker(\operatorname{Res})\) is \(\operatorname{Mod}(S,\Sigma)\)-invariant. Moreover, from our convention that singular points are marked, the \(\operatorname{Mod}(S,\Sigma)\)-action on \(\ker(\operatorname{Res})\) is trivial.
Let
\[\mathfrak{R}\stackrel{{\rm def}}{{=}}\ker(\operatorname{Res}) \quad\text{and}\quad Z\stackrel{{\rm def}}{{=}}\mathfrak{R}\cap H ^{1}(S,\Sigma;\mathbb{R}_{\rm x}). \tag{2.2}\]
Since \(H^{1}(S,\Sigma;\mathbb{R}_{\rm x})\) and \(H^{1}(S,\Sigma;\mathbb{R}_{\rm y})\) are naturally identified with each other via their identification with \(H^{1}(S,\Sigma;\mathbb{R})\), for each \(Z_{1}\subset Z\) we can define the space \(\mathfrak{R}_{1}\) spanned by the two copies of \(Z_{1}\) in \(H^{1}(S,\Sigma;\mathbb{R}_{\rm x})\) and \(H^{1}(S,\Sigma;\mathbb{R}_{\rm y})\) respectively. The space \(\mathfrak{R}_{1}\) is the _complexification_ of \(Z_{1}\). This terminology arises from viewing \(H^{1}(S,\Sigma;\mathbb{R}^{2})\) as \(H^{1}(S,\Sigma;\mathbb{C})\), a vector space over \(\mathbb{C}\), viewing \(H^{1}(S,\Sigma;\mathbb{R}_{\rm x})\) and \(H^{1}(S,\Sigma;\mathbb{R}_{\rm y})\) as the real and imaginary subspace of this complex vector space. With this viewpoint, \(\mathfrak{R}_{1}\) is the \(\mathbb{C}\)-span of \(Z_{1}\).
For any subspace \(Z_{1}\subset\mathfrak{R}\), we can foliate the vector space \(H^{1}(S,\Sigma;\mathbb{R}^{2})\) by affine subspaces parallel to \(Z_{1}\). Pulling back this foliation using the period coordinate charts gives rise to a foliation of \(\bar{\mathcal{H}}_{\rm m}\). Since monodromy acts trivially on \(\mathfrak{R}\), this foliation descends to a well-defined foliation on \(\bar{\mathcal{H}}\). It is known (see e.g. [BSW, Prop. 4.1]) that the area of a surface is constant on leaves of the Rel foliation, and thus the Rel foliation and any of its subfoliations descend to a foliation of \(\mathcal{H}\). The foliation corresponding to \(\mathfrak{R}\) (respectively, to \(Z\)) is known as the _Rel foliation_ (respectively, the _real Rel foliation_).
Because the \(\operatorname{Mod}(S,\Sigma)\)-monodromy action fixes all points of \(\mathfrak{R}\), the leaves of the Rel foliation, and any of its sub-foliations, acquire a translation structure. In particular, they are equipped with a natural measure.
For any \(v\in Z\) we have a constant vector field, well-defined on \(\mathcal{H}_{\rm m}\) and on \(\mathcal{H}\), everywhere equal to \(v\). Integrating this vector field we get a partially defined _real Rel flow (corresponding to \(v\))_ \((t,q)\mapsto\operatorname{Rel}_{tv}(q)\); the flow may not be defined for all time due to a possible 'collision of zeroes'. For every \(q\in\mathcal{H}\) it is defined for \(t\in I_{q}\), where the _domain of definition_ \(I_{q}=I_{q}(v)\) is an open subset of \(\mathbb{R}\) which contains \(0\). The sets \(I_{q}(v)\) are explicitly described in [BSW, Thm. 6.1]. Let \(\hat{\mathcal{H}}\) denote the set of surfaces in \(\mathcal{H}\) with no horizontal saddle connections. Then \(I_{q}=\mathbb{R}\) for all \(q\in\hat{\mathcal{H}}\).
If \(q\in\mathcal{H}\), \(s\in\mathbb{R}\) and \(\tau\in I_{q}\) then
\[\tau\in I_{u_{s}q}\quad\text{and}\quad\text{Rel}_{\tau v}(u_{s}q)=u_{s}\text{Rel }_{\tau v}(q).\]
Similarly, if \(q\in\mathcal{H}\), \(t\in\mathbb{R}\) and \(\tau\in I_{q}\) then
\[\tau^{\prime}\mathop{=}^{\text{def}}e^{t}\tau\in I_{g_{t}q}\quad\text{and} \quad\text{Rel}_{\tau^{\prime}v}(g_{t}q)=g_{t}\text{Rel}_{\tau v}(q). \tag{2.3}\]
In particular, since \(P\) preserves \(\hat{\mathcal{H}}\) and \(P=\{g_{t}u_{s}:t,s\in\mathbb{R}\}\), there is an action of \(P\ltimes Z\) on \(\hat{\mathcal{H}}\), given by \((p,z).q=p\text{Rel}_{z}(q)\).
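As a simple illustration of how (2.3) is used (cf. the remarks following Theorem 1.5 above), note that for \(v\in Z\), \(t>0\), \(s=\log t\) and any \(q\in\hat{\mathcal{H}}\), applying (2.3) with the surface \(g_{-s}q\) and \(\tau=1\) gives
\[\operatorname{Rel}_{tv}(q)=\operatorname{Rel}_{e^{s}v}(g_{s}g_{-s}q)=g_{s}\operatorname{Rel}_{v}(g_{-s}q),\]
so on \(\hat{\mathcal{H}}\) the maps \(\operatorname{Rel}_{tv}\) and \(\operatorname{Rel}_{v}\) are conjugate via the geodesic flow: \(\operatorname{Rel}_{tv}=g_{s}\circ\operatorname{Rel}_{v}\circ g_{s}^{-1}\). Since \(g_{s}\) preserves the \(G\)-invariant measures considered below, the two maps share their ergodic-theoretic properties. (This short computation is ours, spelling out the conjugation mentioned after Theorem 1.5.)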
## 3. Preliminaries from ergodic theory
### Ergodic decomposition
We will use the notation \(\mathbb{G}\circlearrowright(X,\mu)\) to indicate that \(\mathbb{G}\) is a locally compact second countable group, \((X,\mathcal{B})\) is a standard Borel space, and \(\mu\) is a probability measure on \(\mathcal{B}\) preserved by the \(\mathbb{G}\)-action. We say that \(\mathbb{G}\circlearrowright(Y,\nu)\) is a _factor_ of \((X,\mu)\) if there is a measurable \(\mathbb{G}\)-invariant conull subset \(X_{0}\subset X\), and a measurable map \(T:X_{0}\to Y\) such that \(T\circ g=g\circ T\) for all \(g\in\mathbb{G}\), and \(\nu=T_{*}\mu\). In this situation we refer to \(T\) as the factor map. Given a factor map, there is a (unique up to nullsets) measure disintegration \(\mu=\int\mu_{y}\,d\nu(y)\), for a Borel mapping \(y\mapsto\mu_{y}\) from \(Y\) to the space of Borel probability measures on \(X\), such that \(\mu_{y}(T^{-1}(y))=1\) for \(\nu\)-a.e. \(y\). Equivalently we can write \(\mu=\int_{x}\mu_{x}^{\prime}\,d\mu(x)\), where \(\mu_{x}^{\prime}\mathop{=}^{\text{def}}\mu_{T(x)}\). For a closed subgroup \(H\subset\mathbb{G}\), we say that \(\mu\) is \(H\)-ergodic if any invariant set is null or conull. We have the following well-known ergodic decomposition theorem:
**Proposition 3.1**.: _Suppose \(\mathbb{G}\circlearrowright(X,\mu)\), and \(H\) is a closed subgroup of \(\mathbb{G}\). Then there is a factor of \(H\circlearrowright(X,\mu)\), called the space of ergodic components and denoted by \(X/\!\!/H\), with the following properties:_
1. _For_ \(\nu\)_-a.e._ \(y\in X/\!\!/H\)_,_ \(\mu_{y}\) _is_ \(H\)_-invariant and_ \(H\)_-ergodic._
2. \(H\) _acts trivially on_ \(X/\!\!/H\)_._
3. \(H\circlearrowright(X,\mu)\) _is ergodic if and only if_ \(X/\!\!/H=\{pt.\}\)_._
4. _The properties (i)-(iii) uniquely determine the factor_ \(X/\!\!/H\) _up to measurable isomorphism._
5. _If_ \(H\lhd\mathbb{G}\) _then_ \(\mathbb{G}\circlearrowright(X/\!\!/H,\nu)\)_._
Proof.: For (i) and (ii) see [Va, Thm. 4.4] (in the notation of [Va], these assertions follow from the fact that \(\beta\) is a map on points and is \(H\)-invariant). Assertion (iii) is immediate from the definitions, and (iv) follows from [Va, Lemma 4.4]. For (v), one can argue using the uniqueness property (iv), and the fact that the image of an \(H\)-invariant ergodic measure under any element \(g\in\mathbb{G}\) is also \(H\)-invariant and ergodic.
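Explicitly, the fact used for (v) follows from normality: if \(\mu_{0}\) is an \(H\)-invariant measure on \(X\), \(g\in\mathbb{G}\) and \(h\in H\), then \(g^{-1}hg\in H\) and
\[h_{*}(g_{*}\mu_{0})=(hg)_{*}\mu_{0}=\left(g\,(g^{-1}hg)\right)_{*}\mu_{0}=g_{*}\left((g^{-1}hg)_{*}\mu_{0}\right)=g_{*}\mu_{0},\]
so \(g_{*}\mu_{0}\) is again \(H\)-invariant, and the same conjugation argument shows that \(H\)-ergodicity is preserved.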
**Remark 3.2**.: _An action is called prime if it has no factors (besides the action itself, and the trivial action on a point). The construction above shows that if \(H\lhd\mathbb{G}\), \(\mathbb{G}^{\prime}\) is a subgroup of \(\mathbb{G}\) so that \(\mathbb{G}^{\prime}\circlearrowright(X,\mu)\) is prime and \(H\circlearrowright(X,\mu)\) is not isomorphic to the trivial action, then \(H\circlearrowright(X,\mu)\) is ergodic. This is not the approach we will take for proving Theorem 1.1._
### Joinings
We recall some well-known facts about joinings, see [dlR] and references therein. Let \(\mathbb{G}\circlearrowright(X_{i},\mu_{i})\) for \(i=1,2\). A _joining_ is a measure \(\theta\) on \(X_{1}\times X_{2}\), invariant under the diagonal action of \(\mathbb{G}\), such that \(\pi_{i*}\theta=\mu_{i}\). A _self-joining_ is a joining in case \(X_{1}=X_{2}\). If \((X_{i},\mu_{i})\to(Z,\nu)\) is a joint factor then the _relatively independent joining over \(Z\)_ is the joining \(\int_{Z}(\mu_{1})_{z}\times(\mu_{2})_{z}\,d\nu(z)\), where \(\mu_{i}=\int_{Z}(\mu_{i})_{z}\,d\nu(z)\) is the disintegration of \(\mu_{i}\). In case \(X_{1}=X_{2}=X\), and \(Z=X/\!\!/H\) is the space of ergodic components of the action of \(H\) on \((X,\mu)\) as in Proposition 3.1, we obtain the _relatively independent self-joining over \(X/\!\!/H\)_. This joining satisfies:
**Proposition 3.3**.: _The following are equivalent:_
* \(H\circlearrowright(X,\mu)\) _is ergodic._
* _The relatively independent self-joining over_ \(X/\!\!/H\) _is_ \(\mu\times\mu\)_._
We note two properties of this self-joining. We fix a topology on \(X\) which generates the \(\sigma\)-algebra, and denote by \(\operatorname{supp}\mu\) the topological support of \(\mu\), i.e., the smallest closed set of full measure.
**Proposition 3.4**.: _Let \(\theta\) be the measure on \(X\times X\) which is the relatively independent self-joining over \(X/\!\!/H\), for some \(H\), and let \(T:X\to X/\!\!/H\) be the factor map. Then the following hold:_
* _We have_ (3.1) \[\theta=\int_{X}\mu_{T(x)}\times\mu_{T(x)}\,d\mu(x).\]
* _The set_ \(\big{\{}x\in X:x\notin\operatorname{supp}\mu_{T(x)}\big{\}}\) _is of_ \(\mu\)_-measure zero._
* _If_ \(X=\operatorname{supp}\mu\) _then_ \(\operatorname{supp}\theta\) _contains the diagonal_ \(\Delta_{X}\stackrel{{\mathrm{def}}}{{=}}\{(x,x):x\in X\}\)_._
Proof.: Formula (3.1) is immediate from the definition of the relatively independent self-joining over \(X/\!\!/H\). Since each \(\mu^{\prime}_{x}=\mu_{T(x)}\) is \(H\)-invariant and ergodic, and \(\mu^{\prime}_{x}(T^{-1}(T(x)))=1\), the set \(\{x\in X:x\notin\operatorname{supp}\mu^{\prime}_{x}\}\) is a nullset. From this, and from (3.1) we obtain the last assertion.
### Ergodicity, mixing, and mixing of all orders
For \(\mathbb{G}\circlearrowright(X,\mu)\), let \(L^{2}_{0}(\mu)\) denote the Hilbert space of \(L^{2}\)-functions on \((X,\mu)\) of integral zero, and let \(k\geq 2\). The action is called _\(k\)-mixing_ if for any \(f_{1},\dots,f_{k}\in L^{2}_{0}(\mu)\) and for any \(k-1\) sequences \(\left(g^{(i)}_{n}\right)_{n\in\mathbb{N}}\in\mathbb{G},\ i=1,\dots,k-1\), for which all of the sequences
\[\left(g^{(i)}_{n}\right)_{n\in\mathbb{N}}\ \ (1\leq i\leq k-1)\quad\text{ and } \left(g^{(i)}_{n}(g^{(j)}_{n})^{-1}\right)_{n\in\mathbb{N}}\ \ (1\leq i<j\leq k-1)\]
eventually leave every compact subset of \(\mathbb{G}\), we have
\[\int_{X}f_{1}\left(g^{(1)}_{n}x\right)\cdots f_{k-1}\left(g^{(k-1)}_{n}x \right)f_{k}(x)\,d\mu(x)\stackrel{{ n\to\infty}}{{\longrightarrow}}\prod_{i=1}^{k} \int_{X}f_{i}\,d\mu.\]
We say that the action is _mixing_ if it is \(2\)-mixing, and _mixing of all orders_ if it is mixing of order \(k\) for any \(k\geq 2\). It is easy to check that mixing implies ergodicity of any unbounded subgroup of \(\mathbb{G}\). We have the following:
**Proposition 3.5**.: _Let \(Z_{0}\cong\mathbb{R}\) and let \(\{g_{t}\}\) be a one-parameter group acting on \(Z_{0}\) by dilations, i.e., for all \(v\in Z_{0}\) and \(t\in\mathbb{R}\) we have \(g_{t}v=e^{\lambda t}v\) for some \(\lambda\neq 0\)._
_Let \(F=\{g_{t}\}\ltimes Z_{0}\), and suppose \(F\circlearrowright(X,\mu)\). The following are equivalent:_
1. _the restricted flow_ \(Z_{0}\circlearrowright(X,\mu)\) _is ergodic;_
2. _the restricted flow_ \(Z_{0}\circlearrowright(X,\mu)\) _is mixing of all orders;_
3. _the restricted flow_ \(Z_{0}\circlearrowright(X,\mu)\) _is mixing;_
4. _any nontrivial element of_ \(Z_{0}\) _acts ergodically._
**Remark 3.6**.: _The group \(F\) appearing in Proposition 3.5 is isomorphic as a Lie group to the subgroup \(P\) of upper triangular matrices in \(G\), but in our application we will use it for the group generated by a one-parameter real Rel flow \(Z_{0}\) and the diagonal flow \(\{g_{t}\}\)._
Proof.: Clearly (b) \(\implies\) (c) \(\implies\) (d) \(\implies\) (a). We assume that the \(Z_{0}\)-flow is ergodic. To see that it is mixing, it is enough by [P, Chap. 2, Prop. 5.9] to prove that it has countable Lebesgue spectrum, and for this, use [KT, Prop. 1.23 & Prop. 2.2]. The proof of mixing of all orders follows verbatim from an argument of Mozes [Mo], for mixing actions of Lie groups which are 'Ad-proper'. Since our group \(F\) is not Ad-proper, we cannot cite [Mo] directly, so we sketch the proof. For notational convenience we deduce 3-fold mixing from mixing (the proof that '\(k\)-fold mixing \(\implies\)\(k+1\)-fold mixing', for \(k\geq 3\), is identical but requires more cumbersome notation).
We use additive notation in the group \(Z_{0}\), and denote the action of \(Z_{0}\) on \(X\) by \((z,x)\mapsto z.x\). Let \(\left(b_{n}\right)_{n\in\mathbb{N}}\) and \(\left(c_{n}\right)_{n\in\mathbb{N}}\) be sequences in \(Z_{0}\) such that each of the sequences \(\left(b_{n}\right)_{n\in\mathbb{N}},\,\left(c_{n}\right)_{n\in\mathbb{N}},\, \left(b_{n}+c_{n}\right)_{n\in\mathbb{N}}\) eventually leaves every compact set, and let \(f_{1},f_{2},f_{3}\) be in \(L_{0}^{2}(\mu)\). We need to prove that
\[\int_{X}f_{1}(x)f_{2}(b_{n}.x)f_{3}((b_{n}+c_{n}).x)\,d\mu(x)\stackrel{{ n\to\infty}}{{\longrightarrow}}\int_{X}f_{1}\,d\mu\,\int_{X}f_{2}\,d\mu\, \int_{X}f_{3}\,d\mu.\]
For each \(n\), define a measure \(\mu_{n}\) on \(X^{3}\stackrel{{\rm def}}{{=}}X\times X\times X\) by
\[\int_{X^{3}}f\,d\mu_{n}\stackrel{{\rm def}}{{=}}\int_{X}f(x,b_{n }.x,(b_{n}+c_{n}).x)\,d\mu(x),\ \ \forall f\in C_{c}(X^{3}).\]
That is, \(\mu_{n}\) is the pushforward of the diagonal measure on \(X^{3}\) by the triple \((0,b_{n},b_{n}+c_{n})\). It is easy to see that 3-mixing is equivalent to the fact that the weak-* limit of \(\mu_{n}\) is the measure \(\mu^{3}\stackrel{{\rm def}}{{=}}\mu\times\mu\times\mu\). The group \(F^{3}\stackrel{{\rm def}}{{=}}F\times F\times F\) acts on \(X^{3}\) by acting separately on each component, and as in [Mo], since \(Z_{0}\) is mixing, it suffices to show that any measure \(\nu\) on \(X^{3}\) which is a weak-* limit of a subsequence of \(\left(\mu_{n}\right)_{n\in\mathbb{N}}\), is invariant under \((0,u,v)\in\mathbb{R}^{3}\subset F^{3}\), for some \((u,v)\in\mathbb{R}^{2}\smallsetminus(0,0)\). We claim that for any \(s\in\mathbb{R}\) the measure \(\mu_{n}\) is invariant under
\[h_{n}(s)\stackrel{{\rm def}}{{=}}\left(g_{s},b_{n}\cdot g_{s} \cdot(-b_{n}),(b_{n}+c_{n})\cdot g_{s}\cdot(-b_{n}-c_{n})\right),\]
where the multiplication is in the group \(F^{3}\). Indeed, since \(\mu\) is \(\{g_{s}\}\)-invariant,
\[\int_{X^{3}}f\,d\mu_{n}=\int_{X}f\left(g_{s}x,b_{n}.(g_{s}x),(b_{n}+c_{n}).(g_{ s}x)\right)\,d\mu(x),\]
and
\[h_{n}(s)\cdot({\rm id}_{F},b_{n},b_{n}+c_{n})=(g_{s},b_{n}\cdot g_{s},(b_{n}+c _{n})\cdot g_{s}).\]
That is, applying \(h_{n}(s)\) changes one description of \(\mu_{n}\) to another.
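Explicitly, since \(\{g_{t}\}\) acts on \(Z_{0}\) by the dilation \(v\mapsto e^{\lambda t}v\), conjugation in \(F\) satisfies \(g_{s}\cdot b\cdot g_{-s}=e^{\lambda s}b\) for \(b\in Z_{0}\), and therefore
\[b\cdot g_{s}\cdot(-b)=b\cdot\left(g_{s}\cdot(-b)\cdot g_{-s}\right)\cdot g_{s}=\left(b-e^{\lambda s}b\right)\cdot g_{s}=\left((1-e^{\lambda s})b\right)\cdot g_{s};\]
the same computation with \(b\) replaced by \(b_{n}\) or by \(b_{n}+c_{n}\) gives the formula for \(h_{n}(s)\) displayed below.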
We embed \(F\) as a multiplicative group of matrices in \(\operatorname{GL}_{2}(\mathbb{R})\) and let \(d_{F}\) be the metric on \(F\) induced by some norm on \(\operatorname{GL}_{2}(\mathbb{R})\). By a straightforward computation we have
\[h_{n}(s)=\left(g_{s},\ (1-e^{\lambda s})b_{n}\cdot g_{s},\ (1-e^{\lambda s})(b_{n }+c_{n})\cdot g_{s}\right),\]
and \(d_{F}(\operatorname{id}_{F},h_{n}(s))\) is a continuous function of \(s\) which goes to \(0\) as \(s\to 0\) and, for any fixed \(s>0\), increases to infinity as \(n\to\infty\). Therefore we can choose \(s_{n}\to 0\) so that \(d_{F}(\operatorname{id}_{F},h_{n}(s_{n}))=1\) for all large enough \(n\). As in [Mo], \(\nu\) is invariant under some subsequential limit of \(h_{n}(s_{n})\) which is of the form \((0,u,v)\) for some \((u,v)\in\mathbb{R}^{2}\smallsetminus(0,0)\). This establishes our sufficient condition.
## 4. The relatively independent self-joining for a Rel flow
Recall that \(\hat{\mathcal{L}}\subset\mathcal{L}\) is the set of surfaces without horizontal saddle connections, and this is a \(P\)-invariant set of full measure with respect to \(m_{\mathcal{L}}\). We can combine the product action of \(Z_{\mathcal{L}}\times Z_{\mathcal{L}}\) on \(\hat{\mathcal{L}}\times\hat{\mathcal{L}}\) with the diagonal action of \(P\) to obtain an action of the semi-direct product \(P\ltimes(Z_{\mathcal{L}}\times Z_{\mathcal{L}})\) on \(\hat{\mathcal{L}}\times\hat{\mathcal{L}}\). Since \(\hat{\mathcal{L}}\subset\mathcal{L}\) is of full measure, and the arguments of this section involve passing to sets of full measure, in the remainder of this section we will ignore the distinction between \(\mathcal{L}\) and \(\hat{\mathcal{L}}\).
**Proposition 4.1**.: _Let \(Z\subset Z_{\mathcal{L}}\) be a closed connected subgroup. If \(\theta\) is an invariant probability measure for an action of the semidirect product \(P\ltimes(Z\times Z)\) on \(\mathcal{L}\times\mathcal{L}\), then any \(f\in L^{2}(\theta)\) which is \(\{g_{t}\}\)-invariant is also \(Z\times Z\)-invariant._
Proof.: For any \(z\in Z\times Z\), \(g_{t}zg_{-t}\to_{t\to-\infty}0\). So the claim follows from the Mautner phenomenon, see e.g. [EW, Prop 11.18].
**Proposition 4.2**.: _Let \((\mathcal{L},m_{\mathcal{L}})\) be a \(G\)-orbit-closure with a fully supported \(P\)-invariant ergodic measure, let \(Z\subset Z_{\mathcal{L}}\) be a connected closed subgroup, and let \(\theta\) on \(\mathcal{L}\times\mathcal{L}\) be the relatively independent joining over \(\mathcal{L}/\!\!/Z\). Then \(\theta\) is \(P\)-invariant and \(\{g_{t}\}\)-ergodic (and hence \(P\)-ergodic). Also \(\Delta_{\mathcal{L}}\subset\operatorname{supp}\theta\)._
As we will see in §5, under the conditions of the Proposition, \(m_{\mathcal{L}}\) is the so-called 'flat measure' on \(\mathcal{L}\).
Proof.: Let \(\pi:\mathcal{L}\times\mathcal{L}\to\mathcal{L}\) be the projection onto the first factor, and let \(\nu=\pi_{*}\theta\). For each \(x\in\mathcal{L}\), let \(\Omega_{x}\overset{\text{def}}{=}\pi^{-1}(x)=\{x\}\times\mathcal{L}\) be the fiber, and let \(\theta_{x}\) be the fiber measure appearing in the disintegration \(\theta=\int_{\mathcal{L}}\theta_{x}\,d\nu(x)\). Then \(Z\) acts on \(\Omega_{x}\) via the second factor in \(Z\times Z\), and \(\theta_{x}\) is \(Z\)-invariant and ergodic by the definition of the ergodic decomposition.
It follows from Proposition 3.1(v) that \(\theta\) is \(P\)-invariant. To prove ergodicity, let \(f\in L^{2}(\mathcal{L}\times\mathcal{L},\theta)\) be a \(P\)-invariant function. By Proposition 4.1, \(f\) is \(Z\times Z\)-invariant. For each \(x\in\mathcal{L}\), let \(f_{x}\overset{\text{def}}{=}f|_{\Omega_{x}}\). There is \(\mathcal{L}_{0}\subset\mathcal{L}\) such that \(m_{\mathcal{L}}(\mathcal{L}_{0})=1\) and for every \(x\in\mathcal{L}_{0}\), \(f_{x}\) belongs to \(L^{2}(\Omega_{x},\theta_{x})\) and is \(Z\)-invariant. Hence, by ergodicity, there is \(\bar{f}:\mathcal{L}_{0}\to\mathbb{R}\) such that for every \(x\in\mathcal{L}_{0}\), \(\bar{f}(x)\) is the \(\theta_{x}\)-almost-sure value of \(f_{x}\). Since \(f\) is \(P\)-invariant for the diagonal action of \(P\), \(\bar{f}\) is \(P\)-invariant for the action of \(P\) on \(\mathcal{L}\). By ergodicity of \(P\circlearrowright(\mathcal{L},m_{\mathcal{L}})\), \(\bar{f}\) is \(\nu\)-a.e. constant, and thus \(f\) is \(\theta\)-a.e. constant.
The last assertion follows from Proposition 3.4.
## 5. An upgraded magic wand theorem
The celebrated 'magic wand' Theorem of Eskin and Mirzakhani [EM], and ensuing work of Eskin, Mirzakhani and Mohammadi [EMM], classified \(P\)- and \(G\)-invariant probability measures and orbit-closures on strata of translation surfaces. These results can be summarized as follows (see [EM, Defs. 1.1 & 1.2, Thms. 1.4 & 1.5]):
**Theorem 5.1**.: _Let \(\mathcal{H},\,\mathcal{H}_{\mathrm{m}},\,\bar{\mathcal{H}},\,\bar{\mathcal{H} }_{\mathrm{m}}\) be as in SS2.1. Any \(P\)-invariant ergodic probability measure \(m\) has the following properties:_
1. _It is_ \(G\)_-invariant._
2. _There is a complex-affine manifold_ \(\mathcal{N}\) _and a proper immersion_ \(\varphi:\mathcal{N}\to\bar{\mathcal{H}}\) _such that_ \[\mathcal{L}\operatorname*{\stackrel{{\mathrm{def}}}{{=}}} \operatorname{supp}m=\mathcal{H}\cap\varphi(\mathcal{N}).\]
3. _There is an open_ \(G\)_-invariant subset_ \(U\subset\bar{\mathcal{H}}\) _satisfying_ \(m(U)=1\)_, and for any_ \(x\in U\cap\mathcal{L}\) _there is an open set_ \(V\) _containing_ \(x\) _such that_ \(V\) _is evenly covered by_ \(\mathcal{V}\subset\mathcal{H}_{\mathrm{m}}\) _under the map_ \(\pi:\bar{\mathcal{H}}_{\mathrm{m}}\to\bar{\mathcal{H}}\)_, and_ \(\psi\operatorname*{\stackrel{{\mathrm{def}}}{{=}}}\operatorname {dev}\circ(\pi|_{\mathcal{V}})^{-1}\circ\varphi\) _coincides on its domain with a_ \(\mathbb{C}\)_-linear map, with real coefficients._
4. _The subspace_ \(W\operatorname*{\stackrel{{\mathrm{def}}}{{=}}}\operatorname{Im}(\psi)\) _is symplectic, and the measure_ \(m\) _is obtained via the cone construction from the Lebesgue measure on_ \(W\)_._
5. _The complement_ \(\mathcal{L}\smallsetminus U\) _is a finite union of supports of measures satisfying properties (i)-(iv), for which the manifolds_ \(\mathcal{N}^{\prime}\) _appearing in (ii) satisfy_ \(\dim\mathcal{N}^{\prime}<\dim\mathcal{N}\)_._
_Any orbit-closure for the \(P\)-action is a set \(\mathcal{L}\) as above._
We will refer to \(\mathcal{L}\) as an _orbit-closure_ and to \(m=m_{\mathcal{L}}\) as a _flat measure_ on \(\mathcal{L}\). Orbit-closures are referred to as _affine invariant manifolds_ and also as _invariant subvarieties_. The use of an evenly covered neighborhood in item (iii) is a standard approach for defining period coordinates (see e.g. [MS]). We refer to [Wr1] for a survey containing more information on orbit-closures.
In a forthcoming work of Brown, Eskin, Filip and Rodriguez-Hertz, the same conclusion is obtained for the diagonal actions of \(P\) and \(G\) on a product of strata \(\mathcal{H}\times\mathcal{H}^{\prime}\). Namely, the following is shown:
**Theorem 5.2**.: _Let \(\mathcal{H},\mathcal{H}^{\prime}\) be strata of translation surfaces, and let \(P\) and \(G\) act on \(\mathcal{H}\times\mathcal{H}^{\prime}\) via their diagonal embeddings in \(G\times G\). Then all of the conclusions of Theorem 5.1 hold for this action (with \(\bar{\mathcal{H}}\times\bar{\mathcal{H}}^{\prime}\) replacing \(\bar{\mathcal{H}}\))._
## 6. Proof of main result
Using Theorem 5.2 and further work of Wright [Wr2], we can prove our main result.
Proof of Theorem 1.2.: Let \(Z_{0}=\operatorname{span}_{\mathbb{R}}(z_{0})\) be a one-dimensional connected real Rel subgroup. Assume that (1) fails, so that the action of \(Z_{0}\) on \((\mathcal{L},m_{\mathcal{L}})\) is not mixing of all orders. Then, by Proposition 3.5 it is not ergodic. Let \(\theta\) be the relatively independent self-joining over \(\mathcal{L}/\!\!/Z_{0}\). Applying Propositions 3.3 and 3.4 we have that \(\theta\neq m_{\mathcal{L}}\times m_{\mathcal{L}}\) and \(\Delta_{\mathcal{L}}\subset\operatorname{supp}\theta\). Applying Proposition 4.2 and Theorem 5.2, we have that there is a \(G\)-invariant open subset \(U\) of full \(\theta\)-measure
such that \(U\cap\operatorname{supp}\theta\) is the isomorphic image of an affine complex-linear manifold whose dimension is strictly smaller than \(2\dim\bar{\mathcal{H}}\), and \(\theta\) is obtained from Lebesgue measure on this complex-linear manifold by the cone construction.
We claim that the set
\[U_{1}\operatorname{\overset{\mathrm{def}}{=}}\{q\in\mathcal{H}:(q,q)\in U\}\]
is of full measure for \((\pi_{1})_{*}\theta\), where \(\pi_{1}:\mathcal{L}\times\mathcal{L}\to\mathcal{L}\) is the projection onto the first factor. Indeed, the measure \(\theta\) is invariant under \(Z_{0}\times\{\operatorname{Id}\}\), and hence so is its support. Since \(Z_{0}\) acts by homeomorphisms where defined, and using property (v) in Theorems 5.1 and 5.2, we have that the set \(U\) is also \(Z_{0}\times\{\operatorname{Id}\}\)-invariant. Thus, for any \(Z_{0}\)-ergodic measure, the set \(U\) is either null or conull. Thus if \(q\notin U_{1}\) and \(q\) is generic for the measure \(\mu_{T(q)}\) appearing in (3.1), then \(\mu_{T(q),q}\) assigns measure zero to \(U\), where \(\mu_{T(q),q}\) is the measure on \(\operatorname{supp}\theta\) defined by \(\mu_{T(q),q}(A)=\mu_{T(q)}(\{q^{\prime}:(q^{\prime},q)\in A\})\). If this were to happen for a positive measure set of \(q\) it would follow from (3.1) and the fact that \(\mu_{T(q)}\times\mu_{T(q)}=\int\mu_{T(q^{\prime}),q^{\prime}}\,d\mu_{T(q)}(q^{\prime})\) that \(U\) does not have full measure for \(\theta\).
For \(q\in U_{1}\), let \(N_{q}\) denote the connected component of \(U\cap\pi_{1}^{-1}(q)\cap\operatorname{supp}\theta\) containing \((q,q)\). Since the fibers \(\pi_{1}^{-1}(q)\) are also affine submanifolds of \(\mathcal{L}\times\mathcal{L}\), we have that the \(N_{q}\) are affine submanifolds contained in \(\pi_{1}^{-1}(q)\cong\mathcal{L}\), so we can identify them with invariant submanifolds in \(\mathcal{L}\) (which we continue to denote by \(N_{q}\)). With this notation we have \(q\in N_{q}\).
The mapping \(q\mapsto T(N_{q})\) is locally constant; that is, letting \(V\subset\bar{\mathcal{H}}\) and \(\mathcal{V}\subset\bar{\mathcal{H}}_{\mathrm{m}}\) be open sets such that \(\pi|_{\mathcal{V}}:\mathcal{V}\to V\) is a homeomorphism and \(q\in V\), the map \(q\mapsto\operatorname{dev}\circ\pi|_{\mathcal{V}}^{-1}(q)\) sends a neighborhood of \(q\) in \(N_{q}\) to an affine subspace \(W_{q}\) of \(H^{1}(S,\Sigma;\mathbb{R}^{2})\), and the corresponding linear spaces \(W_{q}-W_{q}\) are the same for all \(q\in V\). Since \(m_{\mathcal{L}}\times m_{\mathcal{L}}\) is the unique \(P\)-invariant ergodic measure on \(\mathcal{L}\times\mathcal{L}\) of full support, we have \(\dim N_{q}<\dim\mathcal{L}\) for every \(q\in U_{1}\).
Let \(\bar{N}_{q}\) denote the set of surfaces (not necessarily of area one) which are obtained by rescaling surfaces in \(N_{q}\), and let
\[\mathfrak{N}_{q}\operatorname{\overset{\mathrm{def}}{=}}T_{q}(\bar{N}_{q})\]
(the tangent space to \(\bar{N}_{q}\) at \(q\), thought of as a subset of the tangent space \(T_{q}(\bar{\mathcal{L}})\)). The assignment \(q\mapsto\mathfrak{N}_{q}\) defines a proper flat sub-bundle of the tangent bundle \(T(\bar{\mathcal{L}})\). Flat sub-bundles of \(T(\bar{\mathcal{L}})\) were classified in [Wr2]. According to [Wr2, Thm. 5.1], \(\mathfrak{N}_{q}\subset\mathfrak{R}_{\mathcal{L}}\) for each \(q\), and \(\mathfrak{N}_{q}\) is a complex linear subspace which is locally constant. Since \(\mathfrak{R}_{\mathcal{L}}\) is acted on trivially by monodromy, we in fact have that \(\mathfrak{N}_{q}\) is independent of \(q\), and we denote it by \(\mathfrak{R}\). The leaves \(\mathfrak{R}(q)\) are contained in \(\bar{N}_{q}\) for each \(q\), and of the same dimension. That is, \(\mathfrak{R}(q)\) is the connected component of \(\bar{N}_{q}\) containing \(q\). Since \(\operatorname{Rel}\) deformations do not affect the area of the surface, we see that \(\bar{N}_{q}=N_{q}\). In particular \(\mathfrak{R}(q)\) is closed for each \(q\).
By Proposition 3.4, for a.e. \(q\), \(N_{q}\) is the support of the ergodic component \((m_{\mathcal{L}})_{q}\), and in particular
\[(m_{\mathcal{L}})_{q}(N_{q})<\infty,\quad\text{for a.e. }q.\]
Since \(Z_{0}\) acts ergodically with respect to \((m_{\mathcal{L}})_{q}\), we have that almost surely \(N_{q}=\mathfrak{R}(q)\). Since the measure \((m_{\mathcal{L}})_{q}\) is affine in charts, it is a scalar multiple of the translation-invariant measure on \(\mathfrak{R}(q)\), and thus the volume \(V_{q}\) of \(\mathfrak{R}(q)\) (with respect to its translation-invariant measure) is almost surely finite. It is clear that the function \(q\mapsto V_{q}\) is \(U\)-invariant, and by ergodicity, it is constant almost surely.
**Remark 6.1**.: _We note that the above argument works under much weaker conclusions than those given in Theorem 5.2. Indeed, in the first step of the argument, Theorem 5.2 was used simply to extract a \(G\)-invariant assignment \(q\mapsto N_{q}\), where \(N_{q}\) is a subspace of \(T_{q}(\mathcal{L})\), which is proper if \(\theta\) is not the product joining. A fundamental fact about such \(G\)-invariant assignments is that they are very restricted - besides [Wr2], see [EFW] and [Fi]. In particular, [Fi] gives strong restrictions on assignments that are only assumed to be defined almost everywhere and measurable._
## 7. A topological condition for Rel ergodicity
Let \(Z_{0}\subset Z\) be a subspace. We say that a translation surface \(x\) is \(Z_{0}\)_-stably periodic_ if it can be presented as a finite union of horizontal cylinders and the \(Z_{0}\)-orbit of \(x\) is well defined. Recall that a _horizontal separatrix_ is a horizontal leaf whose closure contains at least one singularity, and it is a _horizontal saddle connection_ if its closure contains two singularities. Then the condition of being \(Z_{0}\)-stably periodic is equivalent to requiring that all horizontal separatrices starting at singular points are on horizontal saddle connections, and \(Z_{0}\) preserves the holonomy of every horizontal saddle connection on \(x\). In case \(Z=Z_{0}\) is the full real Rel group, we say that \(x\) is _fully stably periodic._ This is equivalent to saying that all horizontal separatrices starting at singular points are on saddle connections, and all horizontal saddle connections start and end at the same singularity. In particular, for any cylinder \(C\) on a fully stably periodic surface, each boundary component of \(C\) is made of saddle connections starting and ending at the same singular point \(\xi\); we say that the boundary component _only sees singularity \(\xi\)_. For more information on the real Rel action on surfaces which are horizontally completely periodic, see [HW, §6.1].
**Proposition 7.1**.: _Suppose \(x\) is a surface which is \(Z_{0}\)-stably periodic, and \(v\in Z_{0}\) moves two singularities \(p\) and \(q\) with respect to each other. Suppose that \(x\) contains two cylinders \(C_{1}\) and \(C_{2}\) that both only see singularity \(p\) on one boundary component and only see singularity \(q\) on another boundary component. Finally suppose the circumferences \(c_{1},c_{2}\) of these cylinders satisfy \(\frac{c_{1}}{c_{2}}\notin\mathbb{Q}\). Then Case (2) of Theorem 1.1 does not hold for \(x\)._
Proof.: Since \(\frac{c_{1}}{c_{2}}\notin\mathbb{Q}\), the trajectory \(\{\operatorname{Rel}_{tv}(x):t\in\mathbb{R}\}\) is not closed; let \(\mathcal{L}\) denote its closure. We claim that the tangent space to \(\mathcal{L}\) is not contained in \(Z\). Let \(\sigma_{1}\) denote a saddle connection from \(p\) to \(q\) in \(C_{1}\) and let \(\sigma_{2}\) denote a saddle connection from \(q\) to \(p\) in \(C_{2}\). Let \(\sigma\) be the concatenation. Then \(\sigma\) represents an absolute homology class because it goes from \(p\) back to \(p\), and it is nontrivial because the vertical component of its holonomy on \(x\) is nonzero. If we consider the restriction of the rel-action to \(C_{1}\cup C_{2}\) then it only affects the twist parameters, which form a 2-dimensional space. This space can be generated by the horizontal holonomy of \(\sigma_{1}\) and the horizontal holonomy of \(\sigma_{2}\). Since \(\frac{c_{1}}{c_{2}}\notin\mathbb{Q}\), this restricted action does not give a closed orbit. So the tangent space to \(\mathcal{L}\) contains directions which continuously affect the holonomy of \(\sigma\). Since \(\sigma\) is an absolute period, we see that the tangent space to \(\mathcal{L}\) is not contained in \(Z\).
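To see explicitly that the restricted orbit is not closed, note that \(\operatorname{Rel}_{tv}\) changes the horizontal holonomies of \(\sigma_{1}\) and \(\sigma_{2}\) at rates \(a\) and \(-a\) respectively, where \(a\neq 0\) is the value of \(v\) on any path from \(p\) to \(q\) (this value does not depend on the path since \(v\) vanishes on absolute homology). In the twist torus \((\mathbb{R}/c_{1}\mathbb{Z})\times(\mathbb{R}/c_{2}\mathbb{Z})\) the trajectory is therefore a straight line which, after rescaling both factors to \(\mathbb{R}/\mathbb{Z}\), has slope \(\pm c_{1}/c_{2}\notin\mathbb{Q}\), and such a line is never closed.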
## 8. Checking the condition for strata
Let \(\mathcal{H}=\mathcal{H}(a_{1},\ldots,a_{k})\) and for \(i,j\in\{1,\ldots,k\}\), let \(\xi_{i},\xi_{j}\) be the corresponding singular points of a surface in \(\mathcal{H}\). Let \(z\in\mathfrak{R}\) be a Rel cohomology class. We say that \(z\)_moves \(\xi_{i},\xi_{j}\) with respect to each other_ if for some (equivalently, every) \(\alpha\in H_{1}(S,\Sigma)\) represented by a path starting at \(\xi_{i}\) and ending at \(\xi_{j}\), we have \(z(\alpha)\neq 0\). Below when we discuss a stratum \(\mathcal{H}(a_{1},\ldots,a_{k})\) we allow \(a_{i}=0\), that is we allow marked points. We call points with cone angle \(2\pi\) (that is, with \(a=0\)) _removable singularities_, and otherwise we call them _non-removable_. The following result, which clearly implies Theorem 1.1, allows strata with removable singularities.
**Theorem 8.1**.: _Let \(\mathcal{H}\) be a connected component of a stratum \(\mathcal{H}(a_{1},\ldots,a_{k})\). Let \(m_{\mathcal{H}}\) be the Masur-Veech measure on \(\mathcal{H}\), let \(Z\) be the corresponding real Rel foliation, and let \(Z_{0}\subset Z\) be a one-dimensional connected subgroup of \(Z\). Suppose that there are \(1\leq i<j\leq k\) with corresponding singular points \(\xi_{i},\xi_{j}\), such that \(a_{i}>0,\,a_{j}>0\) and such that some element of \(Z_{0}\) moves \(\xi_{i},\xi_{j}\) with respect to each other. Then the \(Z_{0}\)-flow on \((\mathcal{H},m_{\mathcal{H}})\) is mixing of all orders (and in particular, ergodic)._
Clearly, Theorem 8.1 follows from Theorem 1.2, Proposition 7.1, and the following result.
**Proposition 8.2**.: _Let \(\mathcal{H}\subset\mathcal{H}(a_{1},\ldots,a_{k})\) be a connected component of a stratum of translation surfaces with at least two non-removable singular points. If \(p\neq q\) is any pair of non-removable singularities then there exists \(M\in\mathcal{H}\), which has cylinders \(C_{1},C_{2}\) with circumferences \(c_{1},c_{2}\) so that_
1. \(M\) _is fully stably periodic._
2. \(\frac{c_{1}}{c_{2}}\notin\mathbb{Q}\)_._
3. _Both_ \(C_{1}\) _and_ \(C_{2}\) _only see singularity_ \(p\) _on one boundary component and only see singularity_ \(q\) _on the other boundary component._
For the proof of Proposition 8.2 we will also need the following:
**Proposition 8.3**.: _Let \(\mathcal{H}=\mathcal{H}(a_{1},\ldots,a_{k})\) be a stratum of translation surfaces with at least two singular points (that is \(k\geq 2\)). If \(p\neq q\) is any pair of distinct singularities (possibly removable), then there exists \(M\in\mathcal{H}\), so that \(M\) is fully stably periodic and there exists a cylinder on \(M\) that only sees singularity \(p\) on one boundary component, and only sees singularity \(q\) on the other boundary component._
Propositions 8.2 and 8.3 will both be proved by induction, after some preparations.
**Lemma 8.4** (The basic surgery - gluing in a torus).: _Let \(\mathcal{H}=\mathcal{H}(b_{1},\ldots,b_{\ell})\) be a stratum of translation surfaces, and let \(M\in\mathcal{H}\), with singularities labeled by \(\xi_{1},\ldots,\xi_{\ell}\), so that the order of \(\xi_{i}\) is \(b_{i}\). Suppose \(M\) has a horizontal cylinder \(C\), with circumference \(c\), where one boundary component is made of saddle connections that begin and end at \(\xi_{i}\), and the other is made of saddle connections that begin and end at \(\xi_{j}\), where \(b_{i}\geq 0\) and \(b_{j}\geq 0\) (so that \(\xi_{i},\xi_{j}\) might be removable). Then for all \(w>0\) there exists \(M^{\prime}\in\mathcal{H}(b_{1},\ldots,b_{i}+1,\ldots,b_{j}+1,\ldots,b_{\ell})\), with singularities labeled \(\xi^{\prime}_{1},\ldots,\xi^{\prime}_{\ell}\), which has two horizontal cylinders \(C^{\prime}_{1}\), \(C^{\prime}_{2}\), where \(C^{\prime}_{1}\) has circumference \(c+w\) and \(C^{\prime}_{2}\) has circumference \(w\). The complements \(M\smallsetminus C\) and \(M^{\prime}\smallsetminus(C^{\prime}_{1}\cup C^{\prime}_{2})\) are isometric, by an isometry mapping \(\xi^{\prime}_{i}\) to \(\xi_{i}\) for all \(i\). The cylinders \(C^{\prime}_{1}\) and \(C^{\prime}_{2}\)
only see singularity \(\xi^{\prime}_{i}\) on one boundary component, and \(\xi^{\prime}_{j}\) on another. Moreover, if \(M\) is fully stably periodic then so is \(M^{\prime}\)._
Proof.: It will be easier to follow the proof while consulting Figures 1 (before) and 2 (after). We describe a polygonal presentation for \(M^{\prime}\), starting from one for \(M\). Fix a polygonal presentation of \(M\) in which the cylinder \(C\) is represented by a parallelogram \(P\) (in Figure 1, the large rectangle in the center of the presentation), with two horizontal sides of length \(c\), non-horizontal sides identified to each other, and the singular points \(\xi_{i}\), \(\xi_{j}\) on adjacent corners of \(P\). Thus the non-horizontal sides of \(P\) represent a saddle connection \(\sigma\) on \(M\) connecting \(\xi_{i}\) to \(\xi_{j}\). We consider the two non-horizontal sides of \(P\) as distinct and label them by \(\sigma_{1},\sigma_{2}\). Let \(P^{\prime}\) be a parallelogram with sides parallel to those of \(P\), where the
horizontal sides have length \(w\) and the nonhorizontal sides are longer than the ones on \(P\) (in Figure 2, \(P^{\prime}\) is to the right of \(P\)).

Figure 1. The surface \(M\) has a cylinder of circumference \(c\), and its boundary components see only the singularities \(\xi_{i}\) and \(\xi_{j}\) (denoted by \(\circ\) and \(\bullet\)). The edges not labeled by \(\triangle\) are connected to \(M\smallsetminus C\).
Label the two horizontal sides of \(P^{\prime}\) by \(h^{\prime}_{1}\) and \(h^{\prime}_{2}\), and identify them by a translation. Partition the non-horizontal sides of \(P^{\prime}\) into two segments. The segments \(\sigma^{\prime}_{1},\sigma^{\prime}_{2}\) are parallel to each other and have the same length as \(\sigma_{1},\sigma_{2}\), and start at a corner of \(P\). The segments \(\gamma^{\prime}_{1},\gamma^{\prime}_{2}\) comprise the remainder of the non-horizontal sides of \(P^{\prime}\) (and in particular, have the same length). Identify \(\gamma^{\prime}_{1}\) to \(\gamma^{\prime}_{2}\) by a translation, and identify \(\sigma^{\prime}_{1},\sigma^{\prime}_{2}\) to \(\sigma_{1},\sigma_{2}\) by a translation so that each \(\sigma^{\prime}_{i}\) is attached to the \(\sigma_{j}\) with the opposite orientation. Let \(M^{\prime}\) be the translation surface corresponding to this presentation. It is clear that \(M^{\prime}\) has the required properties.
Proof of Proposition 8.3.: The proof is by induction on \(\sum a_{i}\).
**Base of induction:** The base case is the stratum \(\mathcal{H}(a_{1},0^{s})\), that is, one singular point (removable or non-removable) of order \(a_{1}\), and some number \(s\geq 1\) of removable singular points. In this case we take a surface in \(\mathcal{H}(a_{1})\) which is made of one horizontal cylinder. We label the singular point by \(\xi_{1}\) and place additional removable singular points \(\xi_{2},\dots,\xi_{s+1}\) in the interior of the cylinder, at different heights (so that the resulting surface has no horizontal saddle connections between distinct singularities) and so that \(\xi_{i}\) and \(\xi_{j}\) are on opposite sides of a cylinder.
**Inductive step:** Suppose \(\mathcal{H}^{\prime}=\mathcal{H}(a_{1},\dots,a_{k})\) is our stratum, where at least two of the singularities are non-removable. Let \(p^{\prime},q^{\prime}\) be labels of singular points for surfaces in \(\mathcal{H}^{\prime}\), corresponding to indices \(i\neq j\). To simplify notation assume \(i=1,j=2\). There are three cases to consider: \(a_{i}=a_{j}=0\), or exactly one of \(a_{i},a_{j}\) is positive, or both are positive.
If \(a_{i}=a_{j}=0\) then by assumption \(k\geq 4\). We take a cylinder \(C\) on a fully stably periodic surface \(M\) in \(\mathcal{H}=\mathcal{H}(a_{1},\dots,\hat{a}_{i},\dots,\hat{a}_{j},\dots,a_{k})\). The notation \(\hat{a}_{i}\) means that the symbol should be ignored; that is, we work on a stratum of the same genus with \(k-2\geq 2\) singular points obtained by removing two removable singular points. We place two singular points marked \(p^{\prime},q^{\prime}\) in the interior of \(C\) at different heights. If \(a_{i}>0\) and \(a_{j}=0\), we take a fully stably periodic surface \(M\) in \(\mathcal{H}(a_{1},\dots,a_{i}-1,\dots,\hat{a}_{j},\dots,a_{k})\), find a cylinder \(C\) on \(M\) whose boundary component is made of saddle connections starting and ending at \(\xi_{i}\), and place a marked point labeled \(\xi_{j}\) in the interior of \(C\). If \(a_{i}\) and \(a_{j}\) are both positive we use the induction hypothesis to find a surface \(M\in\mathcal{H}(a_{1},\dots,a_{i}-1,\dots,a_{j}-1,\dots,a_{k})\) with a cylinder whose boundary components see \(\xi_{i}\) and \(\xi_{j}\), and we perform the surgery of Lemma 8.4 on this cylinder.
**Lemma 8.5** (Two surgeries involving genus two surfaces).: _Let \(\mathcal{H}=\mathcal{H}(b_{1},\dots,b_{k})\) be a stratum of translation surfaces and let \(M\in\mathcal{H}\) have a horizontal cylinder \(C\), with circumference \(c\). Let \(p\) and \(q\) be singular points with order \(b_{i},b_{j}\) respectively, such that one boundary component of \(C\) only sees singularity \(p\) and the other only sees singularity \(q\). Then for any \(w_{1},w_{2}>0\) there exists \(M^{\prime}\in\mathcal{H}^{\prime}=\mathcal{H}(b_{1},\dots,b_{i}+2,\dots,b_{j}+2,\dots,b_{k})\) which has three cylinders \(C_{1},C_{2},C_{3}\) with circumferences \(c+w_{1}+w_{2},w_{1}\) and \(w_{2}\) respectively. The complements \(M\smallsetminus C\) and \(M^{\prime}\smallsetminus(C_{1}\cup C_{2}\cup C_{3})\) are isometric by an isometry preserving the labels of singular points, and \(C_{1},C_{2},C_{3}\) all have one boundary component that sees only \(p\), and another that sees only \(q\). Thus, if \(M\) is fully stably periodic so is \(M^{\prime}\). Moreover, if the \(b_{i}\) are all even, so that
\(\mathcal{H}^{\prime}\) has even and odd spin components, we can choose \(M^{\prime}\) to be in either the even or odd connected component._
Proof.: Once again we encourage the reader to consult Figures 3 and 4.
In Lemma 8.4 we made a slit in \(M\), running through \(P\) from top to bottom, and glued in a torus with a slit. In this case we make an identical slit, this time gluing in a genus two surface with a slit. This surface is presented in Figures 3 and 4 as made up of three rectangles. It is straightforward to check that \(M^{\prime}\in\mathcal{H}^{\prime}\) and that it has cylinders satisfying the desired properties. It remains to check the final assertion about the parity of the spin structure.
Figure 3. First option for \(M^{\prime}\) in Lemma 8.5. Attaching the subsurface on the right increases the genus by \(2\). Unlabeled edges are attached to \(M^{\prime}\smallsetminus(C_{1}\cup C_{2}\cup C_{3})\).

Figure 4. Second option for \(M^{\prime}\), with a different spin.
Recall from [KZ, eqn. (4)] that where defined, the spin structure of a surface \(M\) of genus \(g\) can be computed as follows. Let \(\alpha_{i},\beta_{j}\) (where \(1\leq i,j\leq g\)) be a symplectic basis for \(H_{1}(M)\), realized explicitly as smooth curves on \(M\). This means that all of these curves are disjoint, except for \(\alpha_{i}\) and \(\beta_{i}\) which intersect once. For each curve \(\gamma\), let \(\operatorname{ind}(\gamma)\) be the turning index, that is the total number of circles made by the tangent vector to \(\gamma\), as one goes around \(\gamma\). The parity of \(M\) is then the parity of the integer \(\sum_{i=1}^{g}(1+\operatorname{ind}(\alpha_{i}))(1+\operatorname{ind}(\beta_ {i}))\). It is shown in [KZ] that this number is well-defined (independent of the choice of the symplectic basis) when all the singular points have even order.
Suppose \(M\) has genus \(g\) and is equipped with a symplectic basis. Since any non-separating simple closed curve can be completed to a symplectic basis, we can assume that \(\alpha_{1}\) is the core curve of \(C\), and the other curves in the basis do not intersect the saddle connection from \(p\) to \(q\) passing through \(C\). We construct a symplectic basis for \(M^{\prime}\) in both cases, by modifying \(\alpha_{1}\), keeping \(\alpha_{2},\dots,\alpha_{g},\beta_{1},\dots,\beta_{g}\), and adding new curves \(\alpha_{g+1},\alpha_{g+2},\beta_{g+1},\beta_{g+2}\). The modified curves are shown in Figures 5, 6, and the reader can easily check that these new curves still form a symplectic basis, and that these two choices add two numbers of different parities to the spin structure.
Note that in Proposition 8.2 we care about all connected components of strata. We need to record some information about the classification of connected components of strata, due to Kontsevich and Zorich. A translation surface is _hyperelliptic_ if it admits an involution which acts on absolute homology as \(-\mathrm{Id}\) (see [FM] or [KZ, §2.1] for more details). A connected component of a stratum is _hyperelliptic_ if all surfaces in the component are hyperelliptic.
**Proposition 8.6** ([KZ], Theorems 1 & 5 and Corollary 5 of Appendix B).: _Let \(\mathcal{H}(a_{1},\dots,a_{k})\) be a stratum with \(a_{i}>0\) for all \(i\). The following holds:_
* \(\mathcal{H}\) _has three connected components in the following cases:_
* \(k=1,a_{1}=2g-2,g\geq 4\)_._
* \(k=2\)_,_ \(a_{1}=a_{2}=g-1\)_,_ \(g\geq 5\) _is odd. One is hyperelliptic, and the two non-hyperelliptic strata are distinguished by the spin invariant._

Figure 5. Modifying the symplectic basis. Gluings as in Figure 3.
* \(\mathcal{H}\) _has two connected components in the following cases:_
* _All of the_ \(a_{i}\) _are even,_ \(g\geq 4\)_, and either_ \(k\geq 3\) _or_ \(a_{1}>a_{2}\)_. The components are distinguished by their spin._
* \(a_{1}=a_{2}\) _and_ \(g\) _is either_ \(3\) _or is even. One of the components is hyperelliptic and the other is not. When_ \(g=3\) _the hyperelliptic component is even, and the other one is odd._
* \(\mathcal{H}\) _is connected in all other cases._
Proof of Proposition 8.2.: The proof will be case-by-case. Here are the cases:
1. \(\mathcal{H}(1,1)\).
2. All the \(a_{i}\) are nonzero and \(\mathcal{H}\) is connected.
3. All the \(a_{i}\) are nonzero and \(\mathcal{H}\) has two connected components distinguished by spin.
4. All the \(a_{i}\) are nonzero and \(\mathcal{H}\) has two connected components distinguished by hyperellipticity.
5. All the \(a_{i}\) are nonzero and \(\mathcal{H}\) has three connected components.
6. Some of the \(a_{i}\) are zero.
**Case (i).** There is just one connected component and the desired surface is a \(Z\)-shaped surface, with three horizontal cylinders \(C_{1},C_{2},C_{3}\) of circumferences \(c_{1},c_{1}+c_{3},c_{3}\), where \(C_{1},C_{3}\) are simple. We put all of the removable singular points in the interior of \(C_{3}\), and choose \(c_{1},c_{3}\) so that \(c_{1}/(c_{1}+c_{3})\notin\mathbb{Q}\). It is clear that with these choices the conditions are satisfied.
**Case (ii).** The stratum \(\mathcal{H}\) is connected, and we have at least two singularities of positive order. So we may assume with no loss of generality that they are labelled \(1\) and \(2\). The result follows from Lemma 8.4, applied to a fully stably periodic surface in \(\mathcal{H}(a_{1}-1,a_{2}-1,a_{3},\dots,a_{k})\) with a cylinder whose boundary components see the two relevant singularities (such a surface exists by Proposition 8.3), and taking \(w\notin c\mathbb{Q}\), so that \(w/(c+w)\notin\mathbb{Q}\).
Figure 6. Modifying the symplectic basis, second case. Gluings as in Figure 4. Note the change in the rotation number of \(\beta_{g+2}\).
**Case (iii).** We apply the surgery in Lemma 8.5, with \(w_{1}/w_{2}\notin\mathbb{Q}\). Namely if \(p\) and \(q\) are labelled \(i,j\), we let \(b_{i}=a_{i}-2\), \(b_{j}=a_{j}-2\) and \(b_{\ell}=a_{\ell}\) for \(\ell\neq i,j\).
**Case (iv).** There are two connected components. One is hyperelliptic, one is not. This means that \(a_{1}=a_{2}\) and either \(g=3\) (in which case \(a_{1}=a_{2}=2\)) or \(g\geq 4\) is even (in which case \(a_{1}=a_{2}=g-1\)). In this case we give explicit surfaces, one in each connected component. The first surface (the \(\mathcal{H}(2,2)\) case is shown in Figure 7) is a 'staircase' surface made by gluing \(2g\) rectangles to each other. The rectangles are labelled \((k,B)\) and \((k,T)\) for \(k=1,\ldots,g\). The top (respectively, bottom) of \((k,B)\) is glued to the bottom (resp., top) of \((k,T)\) for \(k=1,\ldots,g\), and the left (resp., right) of \((k,T)\) is glued to the right (resp., left) of \((k+1,B)\) for \(k=1,\ldots,g-1\). The horizontal sides of \((1,B)\) are glued to each other, as are the horizontal sides of \((g,T)\). This surface is hyperelliptic since it has a hyperelliptic involution rotating each rectangle around its midpoint, and this involution swaps the singularities (see [KZ, Remark 3]). The second surface is obtained as follows. We first construct a hyperelliptic surface in \(\mathcal{H}(a_{1}-2,a_{2}-2)\) as in the previous paragraph. Then we perform the surgery described in Lemma 8.5. The resulting surface has a horizontal cylinder intersecting three vertical cylinders, and thus, by [Li, Lemma 2.1], is not hyperelliptic. See Figure 8 for an example in \(\mathcal{H}(2,2)\). In both of these constructions there are no restrictions on the side lengths of the rectangles, and we can easily arrange that two of the circumferences are incommensurable.
Figure 7. A surface in \(\mathcal{H}^{hyp}(2,2)\).
**Case (v).** In this case \(a_{1}=a_{2}=g-1\) for \(g\geq 5\) odd. Applying the argument in Case (iii), we obtain the required surfaces in the odd and even connected components. To obtain the required surface in the hyperelliptic component we use the 'staircase' surface described in Case (iv).
**Case (vi).** Assume with no loss of generality that the removable singularities are labelled \(k^{\prime}+1,\ldots,k\) for some \(k^{\prime}\geq 2\), and let \(\mathcal{H}^{\prime}=\mathcal{H}(a_{1},\ldots,a_{k^{\prime}})\). Note that the singularities \(p\) and \(q\) have labels in \(\{1,\ldots,k^{\prime}\}\). Apply the preceding considerations to obtain a surface \(M\in\mathcal{H}^{\prime}\) with the required cylinders. By examining the proof in all preceding cases one sees that the number of horizontal cylinders on this surface is at least three, that is, there is at least one cylinder \(C_{3}\) which is distinct from the cylinders \(C_{1},C_{2}\), and we modify \(M\) by adding \(k-k^{\prime}\) marked points in general position in the interior of \(C_{3}\), to obtain the desired surface.
## 9. Zero entropy
In this section we prove the following result:
**Theorem 9.1**.: _Let \(\mathcal{H}\) be a stratum for which \(\dim Z>0\), let \(z\in Z\smallsetminus\{0\}\), and let \(\mu\) be a probability measure on \(\mathcal{H}\) such that \(\operatorname{Rel}_{z}(q)\) is defined for \(\mu\)-a.e. \(q\). Assume that \(\mu\) is \(\operatorname{Rel}_{z}\)-invariant and ergodic, and assume in addition that_
\[\text{there is }t_{n}\to\infty\text{ so that }(g_{-t_{n}})_{*}\mu\text{ converges to a probability measure} \tag{9.1}\]
_(in the weak-* topology). Then the entropy of \(\operatorname{Rel}_{z}\) acting on \((\mathcal{H},\mu)\) is zero._
Figure 8. A surface in \(\mathcal{H}^{nonhyp}(2,2)\).
For the proof of Theorem 9.1, we will need an estimate showing that points stay close to each other for times up to \(L\), provided their initial distance is polynomially small (as a function of \(L\)). To make this precise we will use the sup-norm Finsler metric on \(\mathcal{H}\), which was introduced by Avila, Gouezel and Yoccoz [AGY] and whose definition we now recall. For \(q_{0},q_{1}\) belonging to the same connected component of a stratum \(\mathcal{H}\), we write
\[\mathrm{dist}(q_{0},q_{1})=\inf_{\gamma}\int_{0}^{1}\|\gamma^{\prime}(t)\|_{ \gamma(t)}dt, \tag{9.2}\]
where \(\gamma:[0,1]\to\mathcal{H}_{\mathrm{m}}\) ranges over all \(C^{1}\) curves with \(\gamma(0)\in\pi^{-1}(q_{0}),\ \gamma(1)\in\pi^{-1}(q_{1})\), and \(\|\cdot\|_{\mathbf{q}}\) is a pointwise norm on the tangent space to \(\mathcal{H}_{\mathrm{m}}\) at \(\mathbf{q}\), identified via the developing map with \(H^{1}(S,\Sigma;\mathbb{R}^{2})\). Below, balls, diameters of sets, and \(\varepsilon\)-neighborhoods of sets will be defined using this metric. We can now state our estimate.
**Proposition 9.2**.: _Let \(\mathcal{H}\) be a stratum of translation surfaces with at least two singularities, let \(Z\) be its real Rel space, let \(z_{0}\in Z\), and let \(T\) be the map of \(\mathcal{H}\) defined by applying \(\mathrm{Rel}_{z_{0}}\) (where defined). Then for every compact subset \(K\subset\mathcal{H}\), there is \(L_{0}>0\), such that if \(q\in\mathcal{H},\ L\in\mathbb{N},\ L>L_{0},\) satisfy_
\[q\in K\quad\text{and}\ \ g_{-\ell}q\in K,\quad\text{where }\ell\stackrel{{ \mathrm{def}}}{{=}}2\log L, \tag{9.3}\]
_then the maps \(T,\ldots,T^{L}\) are all defined on \(B\left(q,\frac{1}{L^{5}}\right)\), and we have_
\[\max_{j=1,\ldots,L}\mathrm{diam}\left(T^{j}\left(B\left(q,\frac{1}{L^{5}} \right)\right)\right)\to_{L\to\infty}0.\]
We have made no attempt to optimize the power \(5\) in this statement.
Our proof of Proposition 9.2 will use some properties of the sup-norm metric. They are proved in [AGY], see also [AG] and [CSW, §2]. Our notation will follow the one used in [CSW].
**Proposition 9.3**.: _The following hold:_
1. _For all_ \(q_{0},q_{1}\) _and all_ \(t\in\mathbb{R}\)_,_ \(\mathrm{dist}(g_{t}q_{0},g_{t}q_{1})\leq e^{2|t|}\mathrm{dist}(q_{0},q_{1}).\)__
2. _The metric dist is proper; that is, for any fixed basepoint_ \(q_{0}\)_, the map_ \(q\mapsto\mathrm{dist}(q,q_{0})\) _is proper. In particular, the_ \(\varepsilon\)_-neighborhood of a compact set is pre-compact, for any_ \(\varepsilon>0\)_._
3. _The map_ \(\mathbf{q}\mapsto\|\cdot\|_{\mathbf{q}}\) _is continuous, and hence bounded on compact sets. This means that for any compact_ \(K\subset\mathcal{H}_{\mathrm{m}}\) _there is_ \(C>0\) _such that for any_ \(\mathbf{q}_{0},\mathbf{q}_{1}\) _in_ \(K\)_, the norms_ \(\|\cdot\|_{\mathbf{q}_{0}},\|\cdot\|_{\mathbf{q}_{1}}\) _are bi-Lipschitz equivalent with constant_ \(C\)_._
4. _The infimum in (_9.2_) is actually a minimum, that is attained by some curve_ \(\gamma\)_._
With these preparations, we can give the
Proof of Proposition 9.2.: Write \(B\stackrel{{\mathrm{def}}}{{=}}B\left(q,\frac{1}{L^{5}}\right), \ A\stackrel{{\mathrm{def}}}{{=}}g_{-\ell}(B)\) (note that \(A\) and \(B\) both depend on \(L\) and \(q\) but we suppress this from the notation). Let \(K^{\prime}\) be the \(1\)-neighborhood of \(K\), which is a pre-compact subset of \(\mathcal{H}\) by Proposition 9.3(b). Since \(\mathrm{diam}(B)\leq\frac{2}{L^{5}}\), Proposition 9.3(a) implies that
\[\mathrm{diam}(A)\leq\frac{2}{L^{3}}. \tag{9.4}\]
It follows from (9.3) that \(A\cap K\neq\varnothing\) and therefore \(A\subset K^{\prime}\). Since
\[\max_{j=1,\dots,L}\|je^{-\ell}z_{0}\|\leq\frac{1}{L}\|z_{0}\|\to_{L\to\infty}0, \tag{9.5}\]
for all large enough \(L\) (depending on \(K^{\prime}\)) we have that \(\operatorname{Rel}_{je^{-\ell}z_{0}}(q^{\prime})\) is defined for \(q^{\prime}\in K^{\prime}\). Since \(q_{1}^{\prime}\operatorname{\stackrel{{\mathrm{def}}}{{=}}} \operatorname{Rel}_{je^{-\ell}z_{0}}\circ g_{-\ell}(q_{1})\) is defined for \(q_{1}\in B\), we have from (2.3) that \(T^{j}(q_{1})=\operatorname{Rel}_{jz_{0}}(q_{1})=g_{\ell}(q_{1}^{\prime})\) is also defined. This proves that the maps \(T,T^{2},\dots,T^{L}\) are all defined on \(B\).
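Explicitly, we applied (2.3) with \(q=g_{-\ell}(q_{1})\), \(t=\ell\), \(v=z_{0}\) and \(\tau=je^{-\ell}\), so that \(\tau^{\prime}=e^{\ell}\tau=j\) and
\[\operatorname{Rel}_{jz_{0}}(q_{1})=g_{\ell}\operatorname{Rel}_{je^{-\ell}z_{0}}\left(g_{-\ell}(q_{1})\right)=g_{\ell}(q_{1}^{\prime}).\]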
Furthermore, this computation shows that \(T^{j}(B)=g_{\ell}(\operatorname{Rel}_{je^{-\ell}z_{0}}(A))\), and so by Proposition 9.3(a), it suffices to show that
\[L^{2}\cdot\operatorname{diam}\left(\operatorname{Rel}_{je^{-\ell}z_{0}}(A) \right)\to_{L\to\infty}0.\]
Taking into account (9.4) and (9.5), it suffices to show that for any compact \(K^{\prime}\) there are positive \(\varepsilon,C\) such that for any \(q_{0},q_{1}\in K^{\prime}\) with \(\operatorname{dist}(q_{0},q_{1})<\varepsilon\), and any \(z\in Z\) with \(\|z\|<\varepsilon\), we have
\[\operatorname{dist}(\operatorname{Rel}_{z}(q_{0}),\operatorname{Rel}_{z}(q_{1}))\leq C\operatorname{dist}(q_{0},q_{1}). \tag{9.6}\]
Informally, this is a uniform local Lipschitz estimate for the family of maps defined by small elements of \(Z\).
To see (9.6), let \(\varepsilon_{1}\) be small enough so that for any \(q\in K^{\prime}\), the ball \(B(q,2\varepsilon_{1})\) is contained in a neighborhood which is evenly covered by the map \(\pi:\mathcal{H}_{\mathrm{m}}\to\mathcal{H}\), and let \(C\) be a bound as in Proposition 9.3(c), corresponding to the compact set which is the \(2\varepsilon_{1}\)-neighborhood of \(K^{\prime}\). Let \(\varepsilon<\varepsilon_{1}\) so that for any \(z\in Z\) with \(\|z\|<\varepsilon\) and any \(q\in\mathcal{H}\), \(\operatorname{dist}(q,\operatorname{Rel}_{z}(q))<\varepsilon_{1}\). If \(\operatorname{dist}(q_{0},q_{1})<\varepsilon\) then the path \(\gamma\) realizing their distance (see Proposition 9.3(d)) is contained in a connected component \(\mathcal{V}\) of \(\pi^{-1}\left(B(q_{0},\varepsilon_{1})\right)\). Let
\[\bar{\gamma}:[0,1]\to H^{1}(S,\Sigma;\mathbb{R}^{2}),\quad\bar{\gamma}(t) \operatorname{\stackrel{{\mathrm{def}}}{{=}}}\operatorname{dev}( \gamma(t))-\operatorname{dev}(\gamma(0)),\]
let
\[\gamma_{1}\operatorname{\stackrel{{\mathrm{def}}}{{=}}} \operatorname{Rel}_{z}\circ\gamma,\]
and analogously define
\[\bar{\gamma}_{1}:[0,1]\to H^{1}(S,\Sigma;\mathbb{R}^{2}),\quad\bar{\gamma}_{1} (t)\operatorname{\stackrel{{\mathrm{def}}}{{=}}}\operatorname{dev} (\gamma_{1}(t))-\operatorname{dev}(\gamma_{1}(0)).\]
By choice of \(\varepsilon\) and \(\varepsilon_{1}\), the curve \(\gamma_{1}\) also has its image in \(\mathcal{V}\). Since \(\operatorname{Rel}_{z}\) is expressed by \(\operatorname{dev}|_{\mathcal{V}}\) as a translation map, the curves \(\bar{\gamma},\bar{\gamma}_{1}\) are identical maps. When computing \(\operatorname{dist}(\operatorname{Rel}_{z}(q_{0}),\operatorname{Rel}_{z}(q_{1}))\) via (9.2), an upper bound is given by computing the path integral along the curve \(\gamma_{1}\). We compare this path integral along \(\gamma_{1}\) with the path integral along \(\gamma\) giving \(\operatorname{dist}(q_{0},q_{1})\). In these two integrals, the tangent vectors \(\gamma^{\prime}(t),\gamma_{1}^{\prime}(t)\) are identical elements of \(H^{1}(S,\Sigma;\mathbb{R}^{2})\) for all \(t\), but the norms are evaluated using different basepoints. Since these basepoints are all in the \(2\varepsilon_{1}\)-neighborhood of \(K^{\prime}\), by choice of \(C\), we have \(\|\gamma_{1}^{\prime}(t)\|_{\gamma_{1}(t)}\leq C\|\gamma^{\prime}(t)\|_{\gamma(t)}\) for all \(t\). This implies (9.6).
We now list a few additional results we will need. The first is the following weak Besicovitch-type covering Lemma, for balls of equal size.
**Proposition 9.4**.: _For any compact \(K\subset\mathcal{H}\) there are positive \(N,r_{0}\) so that for any \(r\in(0,r_{0})\), for any \(G\subset K\) the collection \(\mathcal{C}\operatorname{\stackrel{{\mathrm{def}}}{{=}}}\{B(q,r) :q\in G\}\) contains \(N\) finite
subcollections \(\mathcal{F}_{1},\ldots,\mathcal{F}_{N}\) satisfying \(G\subset\bigcup_{i=1}^{N}\bigcup\mathcal{F}_{i},\) and each collection \(\mathcal{F}_{i}\) consists of disjoint balls._
Proof.: The argument is standard, we sketch it for lack of a suitable reference.
We first claim that given a compact \(K\), there are \(N\) and \(r_{0}>0\) so that for any \(r\in(0,r_{0})\), the largest \(r\)-separated subset of any ball of radius \(2r\) centered in \(K\) has cardinality at most \(N\). Indeed, this property is true for Euclidean space by a simple volume argument, and is invariant under bi-Lipschitz maps. Thus the claim holds by Proposition 9.3(c).
We now inductively choose the \(\mathcal{F}_{i}\). Let \(\mathcal{F}_{1}\) be a maximal collection of disjoint balls of radius \(r\) with centers in \(G\). For \(i\geq 2\), suppose \(\mathcal{F}_{1},\ldots,\mathcal{F}_{i-1}\) have been chosen, let \(G_{i}\stackrel{{\mathrm{def}}}{{=}}G\smallsetminus\bigcup_{j<i}\bigcup\mathcal{F}_{j}\) be the set of centers not yet covered, and let \(\mathcal{F}_{i}\) be a maximal collection of disjoint balls of radius \(r\) with centers in \(G_{i}\). Suppose some \(q\in G\) is not covered by \(\bigcup_{i=1}^{N}\bigcup\mathcal{F}_{i}\). Then \(q\in G_{i}\) for every \(i\leq N\), so by maximality of \(\mathcal{F}_{i}\), the ball \(B(q,r)\) intersects some ball \(B(q_{i},r)\in\mathcal{F}_{i}\). The points \(q,q_{1},\ldots,q_{N}\) all lie in \(B(q,2r)\) and are \(r\)-separated: \(\operatorname{dist}(q,q_{i})\geq r\) because \(q\) is not covered, and \(\operatorname{dist}(q_{i},q_{j})\geq r\) for \(i<j\) because \(q_{j}\in G_{j}\) is not covered by \(\mathcal{F}_{i}\). This produces an \(r\)-separated subset of \(B(q,2r)\) with \(N+1\) elements, contradicting the first claim. Thus \(G\subset\bigcup_{i=1}^{N}\bigcup\mathcal{F}_{i}\), as required.
For items (1) and (2) see e.g. [Gl, Thms. 14.35 & 15.12] or [ELW, Chaps. 2 & 3]. Item (3) is left as an exercise (see [ELW, Pf. of Thm. 2.2]).
Proof of Theorem 9.1.: We assume that the entropy \(h=h_{\mu}(T)\) satisfies \(h>0\), and we will derive a contradiction. Using Proposition 9.6(3), we choose a partition \(\mathcal{P}=\{P_{i}\}_{i=1}^{k}\) so that \(\mu(\partial P_{i})=0\) for each \(i\) and \(h_{\mu}(T,\mathcal{P})>\frac{h}{2}\). Choose \(K\) compact so that \(\mu(K)>\frac{3}{4}\) and
\[\limsup_{t\to\infty}\mu\left(g_{t}(K)\right)>\frac{3}{4}. \tag{9.8}\]
A compact set with this property exists by the nondivergence assumption (9.1). Let \(N\) and \(r_{0}\) be as in Proposition 9.4, for this choice of \(K\). Using the Shannon-McMillan-Breiman theorem, let \(L_{0}\) be large enough so that for all \(L>L_{0}\), the set
\[W\stackrel{{\mathrm{def}}}{{=}}\left\{q:\frac{-\log(\mu(P_{L}( q)))}{L}\leq\frac{h}{2}\right\}\]
satisfies
\[\mu(W)<\frac{1}{6N}. \tag{9.9}\]
Our goal will be to choose some \(L>L_{0}\) for which we have a contradiction to (9.9).
Below we will simplify notation by writing \(B\left(q,\frac{1}{L^{5}}\right)\) as \(B_{q,L}\) or simply as \(B\). Let
\[G_{L}\stackrel{{\mathrm{def}}}{{=}}\left\{q:\mu\left(B_{q,L} \cap W\right)>\frac{\mu(B_{q,L})}{2}\right\}.\]
We will show below that
\[\text{there are arbitrarily large $L$ for which $\mu(K\cap G_{L})>\frac{1}{3}$.} \tag{9.10}\]
We first explain why (9.10) leads to a contradiction with (9.9). Let \(L>L_{0}\) be large enough so that \(\operatorname{diam}(B)\leq\frac{2}{L^{5}}<r_{0}\) and \(\mu(K\cap G_{L})>\frac{1}{3}\), and let
\[\mathcal{C}\stackrel{{\mathrm{def}}}{{=}}\left\{B_{q,L}:q\in G_ {L}\right\}.\]
By Proposition 9.4, there is a subcollection \(\mathcal{F}\subset\mathcal{C}\), consisting of disjoint balls, so that
\[\mu\left(K\cap G_{L}\cap\bigcup\mathcal{F}\right)\geq\frac{1}{N}\,\mu(K\cap G _{L})>\frac{1}{3N}.\]
Then we have
\[\mu(W)\geq\mu\left(W\cap\bigcup\mathcal{F}\right)=\sum_{B\in\mathcal{F}}\mu( W\cap B)>\sum_{B\in\mathcal{F}}\frac{\mu(B)}{2}\geq\frac{\mu\left(\bigcup \mathcal{F}\right)}{2}\geq\frac{1}{6N},\]
where the equality follows from the disjointness of \(\mathcal{F}\) and the strict inequality follows from the definitions of \(G_{L}\) and \(\mathcal{C}.\) This gives the desired contradiction to (9.9).
It remains to show (9.10). Choose \(\varepsilon>0\) so that
\[21\,\varepsilon\,\log(k)<\frac{h}{2}. \tag{9.11}\]
Given any \(L_{1}\), let \(L>L_{1}\) and let \(X_{0}=X_{0}(L)\subset K\) be such that \(\mu(X_{0})\geq\frac{1}{2}\), and such that for any \(q\in X_{0}\) we have (9.3). Such \(L\) and \(X_{0}\) exist by (9.8). Using Lemma 9.5 we can take \(L\) large enough so that
\[\mu\left(X_{1}\right)>\frac{99}{100},\quad\text{ where }X_{1}\stackrel{{ \mathrm{def}}}{{=}}\left\{q:\mu\left(B_{q,L}\right)>k^{- \varepsilon L}\right\}, \tag{9.12}\]
and by making \(L\) even larger we can assume that
\[k^{-10\varepsilon L}<\frac{1}{2}. \tag{9.13}\]
Now choose \(r>0\) so that
\[\mu(V)<\varepsilon,\quad\text{where }\,V\stackrel{{\mathrm{def}}}{{=}}\left\{y:\operatorname{dist}\left(y,\bigcup_{i=1}^{k}\partial P_{i}\right)<r\right\}. \tag{9.14}\]
This is possible because \(\mu\left(\bigcup_{i}\partial P_{i}\right)=0.\)
We claim that
\[\mu(X_{2})>\frac{2}{5},\quad\text{ where }X_{2}\stackrel{{\mathrm{def}}}{{=}}\left\{q\in X_{0}:|\{0\leq i\leq L:T^{i}q\in V\}|<10\varepsilon L\right\}. \tag{9.15}\]
To see this, define
\[E\stackrel{{\mathrm{def}}}{{=}}\{q:|\{0\leq i\leq L:T^{i}q\in V\}|\geq 10\varepsilon L\},\]
and let \(\mathbf{1}_{V}\) denote the indicator function of \(V\). Using (9.14), and since \(\mu\) is \(T\)-invariant,
\[\varepsilon L>L\mu(V)=\sum_{i=1}^{L}\int\mathbf{1}_{V}\left(T^{i}q\right)d\mu \geq 10\varepsilon L\mu(E).\]
Dividing through by \(10L\varepsilon\) we have \(\mu(E)<\frac{1}{10}\), giving (9.15).
Let
\[\beta\stackrel{{\mathrm{def}}}{{=}}k^{-20\varepsilon L},\quad\text{ and write }\quad\mathcal{P}^{(L)}\stackrel{{\mathrm{def}}}{{=}}\bigvee_{i=1}^{L}T^{-i}(\mathcal{P}).\]
For each \(q\) we let \(\mathcal{P}^{(L,B)}\) be the elements of \(\mathcal{P}^{(L)}\) which intersect \(B=B_{q,L}\), and partition \(\mathcal{P}^{(L,B)}\) into two subcollections defined by
\[\mathcal{P}^{(q,L)}_{\text{\rm big}}\stackrel{{\mathrm{def}}}{{=}}\left\{P\in\mathcal{P}^{(L,B)}:\mu(P)\geq\beta\mu(B)\right\}\quad\text{and}\quad\mathcal{P}^{(q,L)}_{\text{\rm small}}\stackrel{{\mathrm{def}}}{{=}}\mathcal{P}^{(L,B)}\smallsetminus\mathcal{P}^{(q,L)}_{\text{\rm big}}.\]
We claim that if \(q\in X_{2}\) then
\[\mu\left(B\cap\bigcup\mathcal{P}^{(q,L)}_{\text{\rm big}}\right)>\left(1-k^{-10\varepsilon L}\right)\,\mu\left(B\right)\stackrel{(9.13)}{>}\frac{\mu(B)}{2}. \tag{9.16}\]
To see this, we note that for \(q\) satisfying the conclusion of Proposition 9.2, the cardinality of \(\mathcal{P}^{(L,B)}\) is at most \(k^{|\{0\leq i\leq L\,:\,T^{i}q\in V\}|}\). Indeed, for such \(q\), whenever \(T^{i}q\notin V\), \(T^{i}\left(B\right)\) is contained in one of the \(P_{i}\) (and for the other \(i\), i.e. those with \(T^{i}q\in V\), we use the obvious bound that \(T^{i}\left(B\right)\) could intersect all of the \(P_{i}\)). For \(q\in X_{2}\) we also have that \(\beta^{-1/2}\geq k^{|\{0\leq i\leq L\,:\,T^{i}q\in V\}|}\), and this implies that
\[\mu\left(B\cap\bigcup\mathcal{P}^{(q,L)}_{\text{\rm small}}\right)<\beta^{-1/ 2}\beta\mu(B)=k^{-10\varepsilon L}\mu(B),\]
and this proves (9.16).
If \(q\in X_{1}\cap X_{2}\) and \(q^{\prime}\in B_{q,L}\cap\bigcup\mathcal{P}^{(q,L)}_{\text{\rm big}}\) then we have
\[\mu(P_{L}(q^{\prime}))\geq\beta\mu(B_{q,L})\geq k^{-21\varepsilon L},\]
and this implies via (9.11) that \(q^{\prime}\in W.\) This and (9.16) show that \(X_{1}\cap X_{2}\subset G_{L}\). Since \(X_{2}\subset X_{0}\subset K\), we thus have
\[\mu(K\cap G_{L})\geq\mu(X_{1}\cap X_{2})\geq\frac{2}{5}-\frac{1}{100}>\frac{1}{3}, \tag{9.17}\]
and we have shown (9.10).
Proof of Theorem 1.5.: Denote by \(T\) the map defined by \(\operatorname{Rel}_{z_{0}}\) (where defined). Since \(m_{\mathcal{L}}\) is \(G\)-invariant, it is rotation invariant, and thus \(m_{\mathcal{L}}\)-a.e. \(q\) has no horizontal saddle connections. In particular for such \(q\), \(\operatorname{Rel}_{z}(q)\) is defined for all \(z\in Z\).
Assume first that \(m_{\mathcal{L}}\) is ergodic. By \(G\)-invariance of \(m_{\mathcal{L}}\) we have (9.1), so the hypotheses of Theorem 9.1 are satisfied for \(\mu=m_{\mathcal{L}}\). Now suppose \(m_{\mathcal{L}}\) is not ergodic, and let \(\mu=\int_{\operatorname{Prob}(X)^{T}}\nu\ d\theta\) be the ergodic decomposition of \(\mu\), where \(\theta\) is a probability measure on \(\operatorname{Prob}(X)^{T}\) such that \(\theta\)-a.e. \(\nu\) is ergodic for \(T\). By Proposition 9.6(2), it suffices to show that the entropy of \(\nu\) is zero for \(\theta\)-a.e. \(\nu\), and thus we only need to show that assumption (9.1) holds for \(\theta\)-a.e. \(\nu\). This follows from the \(g_{t}\)-invariance of \(m_{\mathcal{L}}\). Indeed, by invariance and regularity of \(m_{\mathcal{L}}\), for any \(\varepsilon>0\) there exists a compact \(K\), so that for all \(t\), \(m_{\mathcal{L}}(g_{t}(K))=m_{\mathcal{L}}(K)>1-\varepsilon^{2}\). Thus for every \(t\),
\[\theta(\{\nu:g_{-t*}\nu(K)\geq 1-\varepsilon\})\geq 1-\varepsilon.\]
Thus for any \(\varepsilon>0\) there is \(K\) so that the set of \(\nu\) for which \((g_{-t_{i}})_{*}\nu(K)\geq 1-\varepsilon\) for a sequence \(t_{i}\to\infty\), has \(\theta\)-measure at least \(1-\varepsilon\). Since \(\varepsilon\) was arbitrary, we have (9.1) for \(\theta\)-a.e. \(\nu\).
|
2303.14498 | Visual-Tactile Sensing for In-Hand Object Reconstruction | Tactile sensing is one of the modalities humans rely on heavily to perceive
the world. Working with vision, this modality refines local geometry structure,
measures deformation at the contact area, and indicates the hand-object contact
state.
With the availability of open-source tactile sensors such as DIGIT, research
on visual-tactile learning is becoming more accessible and reproducible.
Leveraging this tactile sensor, we propose a novel visual-tactile in-hand
object reconstruction framework \textbf{VTacO}, and extend it to
\textbf{VTacOH} for hand-object reconstruction. Since our method can support
both rigid and deformable object reconstruction, no existing benchmarks are
proper for the goal. We propose a simulation environment, VT-Sim, which
supports generating hand-object interaction for both rigid and deformable
objects. With VT-Sim, we generate a large-scale training dataset and evaluate
our method on it. Extensive experiments demonstrate that our proposed method
can outperform the previous baseline methods qualitatively and quantitatively.
Finally, we directly apply our model trained in simulation to various
real-world test cases, which display qualitative results.
Codes, models, simulation environment, and datasets are available at
\url{https://sites.google.com/view/vtaco/}. | Wenqiang Xu, Zhenjun Yu, Han Xue, Ruolin Ye, Siqiong Yao, Cewu Lu | 2023-03-25T15:16:31Z | http://arxiv.org/abs/2303.14498v1 | # Visual-Tactile Sensing for In-Hand Object Reconstruction
###### Abstract
Tactile sensing is one of the modalities humans rely on heavily to perceive the world. Working with vision, this modality refines local geometry structure, measures deformation at the contact area, and indicates the hand-object contact state. With the availability of open-source tactile sensors such as DIGIT, research on visual-tactile learning is becoming more accessible and reproducible. Leveraging this tactile sensor, we propose a novel visual-tactile in-hand object reconstruction framework **VTacO**, and extend it to **VTacOH** for hand-object reconstruction. Since our method can support both rigid and deformable object reconstruction, no existing benchmarks are proper for the goal. We propose a simulation environment, VT-Sim, which supports generating hand-object interaction for both rigid and deformable objects. With VT-Sim, we generate a large-scale training dataset and evaluate our method on it. Extensive experiments demonstrate that our proposed method can outperform the previous baseline methods qualitatively and quantitatively. Finally, we directly apply our model trained in simulation to various real-world test cases, which display qualitative results. Codes, models, simulation environment, and datasets are available at [https://sites.google.com/view/vtaco/](https://sites.google.com/view/vtaco/).
## 1 Introduction
Human beings have a sense of object geometry by seeing and touching, especially when the object is in manipulation and undergoes a large portion of occlusion, where visual information is not enough for the details of object geometry. In such cases, vision-based tactile sensing is a good supplement as a way of proximal perception. In the past, few vision-based tactile sensors were commercially available or open-source, so the visual-tactile sensing techniques could not be widely studied. Previous works [27, 34] on in-hand object reconstruction either studied rigid objects or were limited to simple objects with simple deformation.
Vision-based tactile sensors [6, 15, 30, 33] can produce colorful tactile images indicating local geometry and deformation in the contact areas. In this work, we mainly work with DIGIT [15] as it is open-source for manufacture and its sensing modality is easier to reproduce. With tactile images, we propose a novel **V**isual-**T**actile in-hand **O**bject reconstruction framework named **VTacO**. VTacO reconstructs the object geometry with the input of a partial point cloud observation and several tactile images. The tactile and object features are extracted by neural networks and fused in the Winding Number Field (WNF) [13], and the object shape is extracted by the Marching Cubes algorithm [17]. WNF can represent the object shape with open and thin structures. The poses of tactile sensors can be determined either by markers attached or by hand kinematics. By default, VTacO assumes the tactile sensor poses can be obtained independently, but we also discuss how to obtain the tactile sensor poses alongside the object with hand pose estimation. The corresponding method is named **VTacOH**.
With tactile information, we can enhance pure visual information from three aspects: (1) **Local geometry refinement**. We use tactile sensing as proximal perception to complement details of local geometry. (2) **Deformation at contact area**. Objects, even those we consider rigid, can undergo considerable deformation given external forces exerted by hand. (3) **Hand-object contact state**. Tactile sensors indicate whether the hand is in contact with the object's surface. To demonstrate such merits, we conduct the object reconstruction tasks in both rigid and non-rigid settings.
Since obtaining the ground truth of object deformation in the real world is hard, we first synthesize the training data from a simulator. DIGIT has an official simulation implementation, TACTO [29]. However, it is based on pybullet [5], which has limited ability to simulate deformable objects. Thus, we implement a tactile simulation environment **VT-Sim** in Unity. In VT-Sim, we generate hand poses with Grasplt! [18], and simulate the deformation around the contact area with an XPBD-based method. In the simulation, we can easily obtain depth image, tactile image, DIGIT pose, and object WNF as training samples for both rigid and non-rigid objects.
To evaluate the method, we compare the proposed visual-tactile models with its visual-only setting, and the previous baseline 3D Shape Reconstruction from Vision and Touch (3DVT) [23]. Extensive experiments show that our method can achieve both quantitative and qualitative improvements on baseline methods. Besides, since we make the tactile features fused with winding number prediction, we can procedurally gain finer geometry reconstruction results by incrementally contacting different areas of objects. It can be useful for robotics applications [8, 22]. Then, we directly apply the model trained with synthesis data to the real world. It shows great generalization ability. We summarize our contributions as follows:
* A visual-tactile learning framework to reconstruct an object when it is being manipulated. We provide the object-only version VTacO, and the hand-object version VTacOH.
* A simulation environment, VT-Sim, which can generate training samples. We also validate the generalization ability of the models trained on the simulated data to the real-world data.
## 2 Related Works
Vision-based Tactile SensorsTactile sensors have been studied by robotics and material communities for decades. As a result, different approaches to mimic human tactile sensing are developed. Among them, the most relevant approaches are the vision-based tactile sensors [6, 15, 30, 33]. These methods typically use a gel to contact the object, and a camera sensor to observe the deformation through the layer. If the gel is calibrated, the deformation corresponds with a force so that we can obtain the contact force by observing the tactile images. Among these sensors, DIGIT [15] is the one with the structure most suitable for the human hand. The fact that it is attached to fingers rather than wrapping around makes it aligned with human fingertip structures.
Vision-based tactile sensors, by nature, have the merits of high-density, multi-DOF force sensing and a data format of image, which is suitable for neural network processing. These features make them beneficial for the computer vision community. However, it could be hard to re-implement most tactile sensors without open-source circuit design and drive programs. These essential components are necessary for the development of visual-tactile research. As an open-source hardware platform, DIGIT [15] takes the first step toward addressing this bottleneck problem.
Visual-Tactile Sensing for Object ReconstructionVisual information usually provides global and coarse shape information in object reconstruction with visual-tactile sensing, while tactile information refines local geometry. [23, 25, 31].
Among these works, [23] is most relevant to our method. It jointly processes visual and tactile images to estimate global and local shapes. However, it is limited to watertight objects due to its SDF-based shape modeling approach. Besides this limitation, previous works [8, 25, 31] are also limited to static object settings. To our knowledge, none of the previous works considers hand-object interaction and thus ignores the deformation during manipulation.
In-hand Object ReconstructionEarlier works usually consider interacting with rigid [19] or articulated objects
[28]. Most of them require prior knowledge about the objects. [27] studied jointly tracking a deformable object and hand during interaction, with required knowledge about object model and texture.
With the development of deep learning recently, the learning algorithms [10, 4, 11, 26] now have the potential to generalize to unseen objects. Among them, InteractionFusion [34] which also studies the deformation during manipulation is the most relevant one to our method. However, it adds a requirement for tracking the object from its rest state to model contact deformation.
## 3 Method
### Overview
We consider the problem of 3D reconstruction of in-hand objects from a visual-tactile perspective, with which we propose **VTacO**, a novel deep learning-based visual-tactile learning framework.
In this setting, we assume the visual input is a 3D point cloud \(\mathcal{P}\) of an in-hand object converted from a depth image. The tactile input \(\mathcal{T}=\{T_{i}\}_{i=1}^{N}\) contains \(N\) tuples of tactile readings obtained from the sensors, where \(T_{i}=\{(I_{i},\mathbf{p}_{i})\}\), \(I_{i}\) is the RGB tactile image, and \(\mathbf{p}_{i}\) is the 6DOF sensor pose.
The object shape is modeled by the winding number field (WNF), a variant of the signed distance field (SDF). Thus, we can also learn the object representation similar to the SDF-based pipelines [20]. The point cloud \(\mathcal{P}\in\mathbb{R}^{N_{p}\times 3}\) is converted to a global feature \(\mathcal{F}_{p}\in\mathbb{R}^{d_{p}}\) through a feature extractor \(\mathbf{f}_{p}(\cdot)\), where \(N_{p}\) is point number and \(d_{p}\) is the feature dimension. The tactile image \(I_{i}\) is forwarded to another feature extractor \(\mathbf{f}_{t}(\cdot)\) to obtain local feature \(\mathcal{F}_{t}\in\mathbb{R}^{d_{t}}\), where \(d_{t}\) is the feature dimension. Then, we sample \(M\) positions \(\mathcal{X}=\{\mathbf{X}_{j}=(x_{j},y_{j},z_{j})\}_{j=1}^{M}\) in the 3D space, and fuse \(\mathcal{F}_{p}\) to each position, \(\mathcal{F}_{t}\) to the near positions according to the sensor pose \(\mathbf{p}\). The final feature will be passed to a point-wise decoder to predict the winding number value, which will be used for extracting the shape by Marching Cubes algorithm. The details will be discussed in Sec. 3.2
Considering the DIGIT sensor does not provide the sensor pose \(\mathbf{p}\), we provide two different options to obtain \(\mathbf{p}\) in the real world, namely by attaching markers or inferring from hand kinematics. For the latter option, we propose a pipeline **VTacOH** to jointly estimate the object shape, hand, and sensor pose. It will be discussed in Sec. 3.3.2.
An overview of our approach is illustrated in Fig 3.
### Visual-Tactile Object Reconstruction, **VTacO**
**Winding Number Field, WNF** We use WNF, proposed in [13], to model the object; it can implicitly represent a given surface of an object. Unlike SDF, which models space occupancy, WNF tracks the solid angles subtended by the surface. Generally, given a Lipschitz surface \(S\) and a reference point \(p\in\mathbb{R}^{3}\), the winding number equals 1 if \(p\) is inside \(S\) and 0 if \(p\) is outside. The winding number can also be used to model open, non-manifold surfaces and thus can capture fine local geometry such as thin structures, holes, etc. More details on WNF can be found in [13]. The difference between WNF and SDF is illustrated in Fig. 2. As with SDF, the explicit surface can be extracted from WNF with the Marching Cubes algorithm.
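To make the WNF representation concrete, the following is a minimal numpy sketch of the generalized winding number of a single query point with respect to a triangle mesh, obtained by summing signed solid angles (Van Oosterom-Strackee formula) over all faces. The function name and the brute-force loop are illustrative only; the ground-truth WNF in this work is generated with the geometry-processing library igl.

```python
import numpy as np

def winding_number(query, vertices, faces):
    """Generalized winding number of `query` (3,) w.r.t. a triangle mesh.

    Sums the signed solid angles subtended by each triangle and divides
    by 4*pi. For a closed surface this is ~1 inside and ~0 outside; for
    open surfaces intermediate values (~0.5 near an opening) appear.
    """
    # Shift triangle vertices so the query point sits at the origin.
    tri = vertices[faces] - query          # (F, 3, 3)
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    la = np.linalg.norm(a, axis=1)
    lb = np.linalg.norm(b, axis=1)
    lc = np.linalg.norm(c, axis=1)
    # Numerator: scalar triple product a . (b x c).
    num = np.einsum('ij,ij->i', a, np.cross(b, c))
    # Denominator of the half-angle tangent (Van Oosterom & Strackee).
    den = (la * lb * lc
           + np.einsum('ij,ij->i', a, b) * lc
           + np.einsum('ij,ij->i', b, c) * la
           + np.einsum('ij,ij->i', c, a) * lb)
    omega = 2.0 * np.arctan2(num, den)     # signed solid angle per face
    return omega.sum() / (4.0 * np.pi)
```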
**Visual Feature Encoder.** Given the input point cloud \(\mathcal{P}\), we adopt a 3D UNet-based volume encoder as the feature extractor \(\mathbf{f}_{p}(\cdot)\). After six fully connected layers, the point cloud features go through a 3D-UNet [3] with four layers of both downsampling and upsampling, and we acquire the global feature \(\mathcal{F}_{p}\in\mathbb{R}^{d_{p}}\) for every position in the space. We discuss different visual feature encoder choices in Sec. 5.5.
**Tactile Feature Encoder.** We extract the feature \(\mathcal{F}_{t_{i}}\in\mathbb{R}^{d_{t}}\) from the tactile images \(I_{i}\) with \(\mathbf{f}_{t}(\cdot)\). Though we can use different backbones for \(\mathbf{f}_{t}(\cdot)\), in practice, we adopt ResNet-18 [12].
**Position Sampling & Feature Fusion.** Then, we sample \(M\) positions in 3D space to query the WNF state later through the decoder. We first try to find the positions relevant to tactile observations.
For each sensor \(i\), we assume its sensor pose \(\mathbf{p}_{i}\) is known individually. As we are aware that the tactile sensor like DIGIT itself cannot measure its pose in the real world, we will discuss the solution later (Sec. 3.3). Then, we estimate the depth of every pixel in each sensor's observation \(I_{i}\) using a simple 2D UNet \(U_{I}(\cdot)\) with 3-layer CNN as an encoder. The relationship between the tactile pixel and the depth is associated with the gel deformation, which can be considered as a simple linear mapping. The physics behind it will be discussed in supplementary materials. Thus, the estimated depth represents contact geometry. We transform it from tactile sensor space to visual sensor space. In this way, we have \(M_{1}\) points in 3D space which are relevant to
Figure 2: Object in **WNF** and **SDF** representation. For an object, the winding number is near 1 on the body, near 0 in the off-body region, and around 0.5 near an opening, while SDF considers only the inside (\(<0\)) and outside (\(>0\)).
tactile images.
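As a sketch of this projection step, the snippet below back-projects a predicted tactile depth map into 3D points and maps them into the visual-camera frame using the known sensor pose. The pinhole intrinsics matrix `K` and the zero-depth threshold for untouched pixels are simplifying assumptions rather than the calibrated mapping used in practice.

```python
import numpy as np

def tactile_depth_to_points(depth, K, R, t):
    """Back-project a predicted tactile depth map and map it to the
    visual-camera/world frame.

    depth : (H, W) depth per tactile pixel (from the 2D UNet)
    K     : (3, 3) assumed pinhole intrinsics of the tactile camera
    R, t  : sensor pose (rotation matrix, translation) in the target frame
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T            # normalized camera rays
    pts_cam = rays * depth.reshape(-1, 1)      # points in the sensor frame
    valid = depth.reshape(-1) > 0              # keep touched pixels only
    return pts_cam[valid] @ R.T + t            # points in the target frame
```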
Then, we uniformly sample \(M_{2}\) points in the 3D space, together with the \(M_{1}\) positions, giving us \(M=M_{1}+M_{2}\) positions to query.
For each position in \(M_{1}\), we fuse the location \(X_{j}\) with visual feature \(\mathcal{F}_{p}\), and tactile feature \(\mathcal{F}_{t}\). And in \(M_{2}\), we substitute the tactile feature with all-0 vectors. We discuss different fusion strategies in Sec. 5.5. After fusing, we have position-wise feature \(\mathcal{F}\in\mathbb{R}^{d}\), where \(d\) is the dimension of the fused features.
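A minimal PyTorch-style sketch of the "addition" fusion described above is shown below; the tensor names are hypothetical, and query positions without a nearby tactile reading simply receive an all-zero local feature.

```python
import torch

def fuse_features(xyz, f_global, f_local=None):
    """Position-wise fusion used before the WNF decoder.

    xyz      : (M, 3) query positions
    f_global : (M, d) visual feature interpolated at each position
    f_local  : (M, d) tactile feature for near-sensor queries, or None
               (queries without tactile support get an all-zero vector)
    Returns (M, 3 + d) fused features ("addition" strategy).
    """
    if f_local is None:
        f_local = torch.zeros_like(f_global)
    return torch.cat([xyz, f_global + f_local], dim=-1)
```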
**WNF Decoder.** Finally, we use a 5-layer MLP \(f_{wnf}(\cdot)\) as a decoder to predict the winding number \(w\) for each query, with \(w=f_{wnf}(\mathcal{F})\).
**Loss Function** We train the whole pipeline in two parts.
First, we train \(U_{I}\) with pixel-wise \(L_{1}\) loss. We can obtain the ground truth of the depth images for tactile sensors in simulation, but cannot obtain them in the real world. Thus, when dealing with real-world images, we directly apply the model learned in the simulation.
Then, we jointly train \(\mathbf{f}_{p}\), \(\mathbf{f}_{t}\), \(\mathbf{f}_{wnf}\) with \(L_{1}\) loss to supervise the winding number.
\[\mathcal{L}_{w}=|w-w^{*}|. \tag{1}\]
### Sensor Pose Estimation
As aforementioned, the DIGIT sensor cannot measure its pose in the real world. To address this, we provide two options: attaching markers and inferring from hand kinematics.
#### 3.3.1 Estimating Sensor Pose By Markers
Following the practice in pose estimation, we print ArUco tags [9] and attach them to fingertips. Then, we estimate the fingertip poses by estimating the pose of tags. We adopt OpenCV [1] to implement the pose estimation. It is a common practice, thus we leave the details in the supplementary materials.
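For completeness, a minimal sketch of such marker-based pose estimation is given below, assuming the classic `cv2.aruco` interface from opencv-contrib; the tag dictionary, marker side length, and calibration inputs are placeholders rather than the values used in our experiments.

```python
import cv2
import numpy as np

def fingertip_pose_from_aruco(img, K, dist, marker_len=0.01):
    """Estimate the 6DOF pose of an ArUco tag attached to a fingertip.

    K, dist    : camera intrinsics and distortion coefficients
    marker_len : printed tag side length in meters (placeholder value)
    Returns (R, t) of the first detected tag in the camera frame, or None.
    """
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(img, aruco_dict)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len, K, dist)
    R, _ = cv2.Rodrigues(rvecs[0])        # rotation vector -> matrix
    return R, tvecs[0].reshape(3)
```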
#### 3.3.2 Estimating Sensor Pose With Hand Kinematics
Since the input \(\mathcal{P}\) contains a view of the in-hand object, it allows us to reconstruct the hand alongside the object. We can infer the poses of tactile sensors based on the prior that they are attached to the fingertips. We can infer the poses with hand kinematics. To jointly learn visual-tactile hand-object reconstruction, we introduce **VTacOH**.
Hand-Sensor Pose Estimation.To reconstruct the hand, we adopt the parametric MANO model [21] which represents the hand shape with \(\beta\in\mathbb{R}^{10}\), and the pose with
Figure 4: Estimating the fingertips’ pose from the attached ArUco markers.
Figure 3: **VTacO** and **VTacOH**. The input is a set of point clouds and tactile images obtained by tactile sensors. We first compute the global point cloud information with a volume encoder and generate a local feature with a tactile feature encoder. For the position sampling, in **VTacO**, we predict the local point cloud with a U-Net from the tactile signals and project them to the visual camera space. And in **VTacOH** we reconstruct the hand by encoding MANO parameters and query positions around the local point clouds generated from the sensors on hand tips. For \(M\) sampled positions we first fuse the global and local features, then we estimate the winding number with the WNF decoder of several MLPs. For object reconstruction, we conduct marching cubes algorithm.
\(\theta\in\mathbb{R}^{51}\), which contains the 6DOF wrist joint position and rotation and axis-angle rotation for 15 joints. Based on the MANO model, we regard a tactile sensor as a fixed extension adhered to the tip. The relative pose between the sensors and tips should remain constant. By recording the local transformation between the sensors and the tip positions, we can easily obtain the sensor pose after having the positions of the tips \(\hat{J}_{h}^{tip}\in\mathbb{R}^{5\times 3}\) through a differentiable MANO layer with \(\theta\) and \(\beta\).
Suppose the hand tip positions \(\hat{\mathbf{J}}_{h}^{tip}\) are predicted; then, given the translation \(\mathbf{t}_{h}\in\mathbb{R}^{5\times 3}\) from the hand tips to the DIGIT sensors, we obtain the predicted DIGIT positions \(\hat{\mathbf{p}}=\hat{\mathbf{J}}_{h}^{tip}+\mathbf{t}_{h}\).
**Tactile Position Sampling with Hand Prediction.** It is known that hand pose prediction can be inaccurate due to the error in neural network regression, which also influences the accuracy of the tactile sensor poses. Therefore, the tactile position sampling mentioned in Sec. 3.2 cannot be directly applied. Here, we present an easy but effective way to handle such circumstances. For every inferred DIGIT position \(\hat{\mathbf{p}}_{i}\), we simply query the sphere centered in the local point clouds generated from the tactile sensors with a radius \(r=0.1\). The feature fusion remains the same.
**Loss Function** In VTacOH, we will add additional supervision for hand reconstruction. It follows the common practice of MANO hand estimation by constraining the prediction of axis-angle of the hand joints with \(L_{2}\) loss and we obtain:
\[\mathcal{L}_{hand}=||\theta-\theta^{*}||_{2}^{2} \tag{2}\]
## 4 VT-Sim
We adopt a sim-to-real transfer paradigm to learn the visual-tactile learning task since obtaining information about object deformation in the real world is hard. And recently, many simulation environments have proved to be effective on softbody object simulation with Unity [7, 32].
We present **VT-Sim**, a simulation environment based on Unity to produce training samples of hand-object interaction with ground truths of WNF, visual depth images, tactile signals, and sensor poses effectively.
The dataset synthesis process is divided into two parts: **Hand-pose Acquisition** and **Interaction Simulation**.
**Hand-pose Acquisition** We use GraspIt! [18] and a 20-DOF hand model from GraspIt! assets to generate the grasp poses for the selected object models, and save the configuration of hand and objects after the grasp search is over. Since the MANO hand is slightly different from the hand we used to obtain the poses, we use a simple retargeting approach: We calculate the tip positions from the saved configuration of the 20-DOF hand model by forward kinematics. Then we align the tips between the 20-DOF hand and the tips of the MANO hand with sensors and apply inverse kinematics to obtain the poses.
**Interaction Simulation** After obtaining the poses of MANO hands, sensors, and objects, we replay the configurations in Unity with collision. For the rigid object, we add the Rigidbody property and a mesh collider to it. For the soft body object, we model them with Obi [24], an XPBD-based physics engine. When we move the MANO hand with sensors to the retargeted pose, the hand sensor will collide with the object and forms the grasp. If the object is soft around the contact area, the contact region will be deformed.
**Recording Setup** We transplant the implementation of the DIGIT camera from TACTO [29] to obtain tactile images. Besides, we set visual cameras on 8 different viewpoints to generate RGB-D images. The point clouds can be converted from the RGB-D images with camera poses. We utilize the library igl [14] to generate the winding number field from the object meshes.
Finally, through one VT-Sim step, we obtain a full data point for training, which includes input point cloud \(\mathcal{P}\), tactile signals \(\mathcal{T}=\{T_{i}\}_{i=1}^{N}\) for every tactile sensor, hand poses \(\theta^{*}\) and winding number \(w(x)\) for every \(x\in\mathbb{R}^{3}\).
## 5 Experiments
### Implementation
We set \(N_{p}=3000\) as the input point cloud \(\mathcal{P}\) converted from the depth image, and randomly sample 100,000 positions
Figure 5: **VT-sim** example. The DIGIT sensors are attached to the finger tips. When grasping the object, the sensor gel applies forces and collides with the surface mesh, and the sensor camera captures images. The above demonstrates the RGB-D images from a certain view and the difference between tactile images with or without the light in the sensor.
in the space with 20,000 on the surface of the object, and subsample to \(M=2048\) for the WNF prediction. An RGB tactile image and depth obtained in VT-Sim have the size of \(300\times 200\). During the training, we set the dimension of global and local features \(d_{p}=d_{t}=32\). We set the resolution of voxelization from the visual feature encoder \(D=64\).
We first train the depth and DIGIT pose prediction part with batch size 12 and \(LR=1e-4\) with Adam optimizer for 100 epochs and use the pre-trained parameters for VTacO training. For the training of VTacO, we set the batch size as 6 and \(LR=2e-4\) with Adam optimizer to train on the whole simulated dataset for 100 epochs, and finetune on every type of object with batch size 4 and \(LR=1e-4\) for 50 to 100 epochs according to different types. For the VTacOH model, we train the hand reconstruction in 100 epochs with batch size 16 and \(LR=1e-4\) with Adam optimizer, then apply the model parameters on feature embedding and train the whole VTacOH with the same hyperparameters and process as VTacO. The overall training time for VTacO and VTacOH on an RTX TITAN XP is around 24 hours.
### Datasets
We use two object model benchmarks which provide full model meshes and texture for our reconstruction task. **YCB Objects Models** YCB benchmark [2] contains 77 object models, with 600 RGBD images from different angles of cameras and their calibration information. For each example model, it includes textured meshes generated with both Poisson reconstruction and volumetric range image integration, a point cloud of the object generated by all the viewpoints. For our experiments, we select 6 object models in the class **Box** for generating a simulated dataset.
**AKB-48 Objects Models** AKB-48 benchmark [16] is a large-scale articulated object knowledge base that consists of 2,037 real-world 3D articulated object models of 48 categories. For every object, it provides textured models in both whole and part meshes, and a full point cloud from all viewpoints. In both simulated and real experiments, we select 5 to 10 objects in the following 4 categories: **Bottle**, **Foldingrack**, **Lock**, **Scissor**.
**Simulated Dataset** With VT-Sim, we can generate adequate datasets based on two object model benchmarks. We consider the categories **Bottle** and **Box** as deformable objects with stiffness 0.2 and 0.3, and others as rigid. For every object, we first generate 100 hand poses from **Handpose Acquisition** in VT-Sim, and select 10 to 40 poses that can generate at least one successful touch from five fingertips' Digits. With 8 different views from the RGB-D camera in Unity, we generate 8 point cloud sets from the depth image for every successful grasp. The overall dataset contains more than 10000 data points from 5 categories of object models, with every category, varying from 1000 to 3000. We split the dataset into 7500 for training, 1500 for testing, and 1000 for validation and visualization.
### Metrics
We adopt several metrics to evaluate our method quantitatively: (1) Intersection over Union (IoU) over object volumes; (2) Chamfer Distance (CD); and (3) Earth Mover's Distance (EMD) between the predicted mesh vertices and the ground truth.
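As a reference for how these metrics can be computed, the snippet below evaluates a simple symmetric Chamfer Distance between two sampled point sets with SciPy nearest-neighbour queries; the squaring and averaging convention is an assumption and may differ from the one used for the reported numbers.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between point sets (N, 3) and (M, 3),
    here the mean squared nearest-neighbour distance in both directions.
    """
    d_pg = cKDTree(gt).query(pred)[0]      # pred -> gt nearest distances
    d_gp = cKDTree(pred).query(gt)[0]      # gt -> pred nearest distances
    return np.mean(d_pg ** 2) + np.mean(d_gp ** 2)
```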
### Results
We conduct plenty of experiments to show the capability of VTacO. With the above metrics, we compare our results with the visual-tactile baseline 3D Shape Reconstruction from Vision and Touch (denoted "3DVT") [23]. Meanwhile, since "3DVT" didn't study the reconstruction of deformable objects, we only compare the 3 non-deformable categories: **Foldingrack**, **Lock**, **Scissor**. We also report the object reconstruction result for VTacOH setting. Due to the accurate hand pose estimation, the results of VTacOH are only slightly worse than the VTacO where the sensor poses are the ground truth. The results are shown in Tab. 1. We can see that our method outperforms others in most categories in the metrics. Qualitative results are demonstrated in Fig. 6. As illustrated in both quantitative and qualitative results, VTacO has better reconstruction results. We speculate that our outperformance is first due to the point-wise approach we conducted, and WNF being a continuous function concerning positions in the space. More importantly, the Feature fusion method can better demonstrate the contact area and the subtle texture of the surfaces. In Sec. 5.5 we will elaborate more to demonstrate the significant effects after we introduce the tactile sensing, and to prove the effectiveness of our choice of encoder and feature fusion strategy.
The subtle surface texture has been discovered through DIGIT sensors, and the category **Lock** also shows better performance in the upper area after we touch and detect its upper mesh.
**Procedural Tactile Embedding.** Our method allows us to procedurally gather the tactile embeddings and predict better reconstruction results without retraining the network. As shown in Fig. 1, after gradually touching the object several times, the reconstruction result becomes better and better. For rigid objects, the textures and structures become clearer with more grasps, such as the gap structure of the folding rack. For deformable objects such as **Bottles**, the object geometry changes due to the deformation caused by the contact. Thanks to the contact geometry prediction, we can first recover the deformed geometry to its original shape and then reason about the structures to be completed. To demonstrate that the deformable regions are indeed reconstructed, we also illustrate the unrecovered results in the supplementary materials. Such a property can also be used for reconstructing plastic objects with sustained deformation.
**Visual Feature Encoder Choices.** We mainly compare two types of Visual Feature Encoder: **Multi-Plane Encoder** and **Volume Encoder**. **Multi-Plane Encoder** mainly project the input point cloud onto three orthogonal planes (i.e., plane \(xy\), \(xz\), or \(yz\)), and by using 2D UNet to encode the plane features and adding three features together, we obtain the multi-plane feature \(\mathcal{F}_{p}^{plane}\in\mathbb{R}^{d_{p}}\) as our global feature for sample positions.
The quantitative results in Tab. 2 show that the Volume Encoder performs better in most metrics across the categories. This is mainly because the global feature encodes 3D information and is fused with the local feature point-wise, and the voxelization process suits the prediction of WNF, which encodes the position of each voxel relative to the surface of the object.
**Fusion Strategies.** We conduct experiments on two simple strategies for fusing the global and local features: **concatenation** and **addition**. With concatenation, the final fused feature dimension is \(d=3+64\), while \(d=3+32\) for the addition operation. Results in Tab. 3 show that **addition** performs better overall than concatenation. We speculate that, since we assign all-0 local feature vectors to the sample positions that are not near a sensor, addition can correctly combine the two kinds of features, while the fused feature after concatenation would contain too many zeros, making it hard for the decoder to predict the WNF.
### Real-world Experiment
We validate the performance of the learned VTacO on real-world data. We select 3 unseen **Box** and 3 unseen **Bottle** objects to demonstrate deformable object reconstruction. We also select 3 unseen **Lock** objects to demonstrate the
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c} \hline \hline Metrics & Method & bottle & box & foldingrack & lock & scissor & mean \\ \hline \multirow{4}{*}{IoU} & 3D Vision and Touch & * & * & 0.746 & 0.468 & 0.691 & 0.622 \\ & Ours (Vision only) & 0.884 & 0.934 & **0.837** & 0.877 & 0.763 & 0.857 \\ & Ours (VTacO) & **0.887** & **0.950** & 0.782 & **0.916** & **0.777** & **0.860** \\ & Ours (VTacOH) & 0.886 & 0.947 & 0.765 & 0.911 & 0.772 & 0.855 \\ \hline \multirow{4}{*}{CD} & 3D Vision and Touch & * & * & **0.242** & 2.631 & 3.206 & 1.268 \\ & Ours (Vision only) & 1.109 & 0.459 & 1.579 & 1.140 & 7.549 & 1.472 \\ & Ours (VTacO) & **0.936** & **0.305** & 1.360 & **0.932** & **0.894** & **0.798** \\ & Ours (VTacOH) & 0.948 & 0.312 & 1.432 & 0.955 & 0.945 & 0.916 \\ \hline \multirow{4}{*}{EMD} & 3D Vision and Touch & * & * & 0.052 & 0.175 & 0.081 & 0.090 \\ & Ours (Vision only) & **0.054** & 0.028 & **0.026** & 0.083 & 0.110 & 0.051 \\ & Ours (VTacO) & 0.059 & **0.018** & 0.028 & **0.012** & **0.052** & **0.028** \\ & Ours (VTacOH) & 0.056 & 0.024 & 0.027 & 0.035 & 0.068 & 0.042 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results on YCB and AKB Objects. This table illustrates a numerical comparison among 3DVT, pure vision methods, and our approach VTacO. We measure IoU (The larger the better), Chamfer Distance (\(\times 100\), the smaller the better), and Earth Mover’s Distance (The smaller the better). CD and EMD are measured at 2048 points. * indicates that 3DVT has no clear settings for deformable objects: Bottle and Box.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Metrics & Encoder & bottle & box & foldingrack \\ \hline \multirow{2}{*}{IoU} & Multi-Plane & 0.851 & 0.945 & 0.767 \\ & Volume & **0.887** & **0.950** & **0.782** \\ \hline \multirow{2}{*}{CD} & Multi-Plane & 1.296 & 0.732 & **0.940** \\ & Volume & **0.936** & **0.305** & 1.360 \\ \hline \multirow{2}{*}{EMD} & Multi-Plane & **0.052** & 0.030 & 0.029 \\ & Volume & 0.059 & **0.018** & **0.028** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Measured metrics with **Multi-plane** and **Volume encoder**. The Volume encoder performs better in most categories.
procedural rigid reconstruction on real-world data. Note that the selected objects are not used for training in simulation. We adopt an Intel RealSense L515 camera mounted on a fixed tripod to capture the depth observation. In the real world, we attach two DIGIT sensors to the hand and detect the sensor poses from markers attached to the tips. Since we observe noise and shadows in real sensor readings, we augment the tactile images in simulation to match the real RGB distribution by adding noise and adjusting contrast. We manually crop the point cloud around the in-hand object and use it as input. In Fig. 7, we illustrate examples for the **Box** and **Bottle** categories; results for the **Lock** category are shown in the supplementary materials.
## 6 Conclusion and Future Works
In this work, we propose a novel visual-tactile in-hand object reconstruction framework, VTacO, and its extended version, VTacOH. Our proposed frameworks can reconstruct both rigid and non-rigid objects. The visual-tactile fusion strategy allows the framework to reconstruct the object geometry details incrementally. In the future, we are interested to leverage such features in robotics experiments to better estimate the object geometry in manipulation.
## Acknowledgement
This work was supported by the National Key R&D Program of China (No.2021ZD0110704), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi Zhi Institute, and Shanghai Science and Technology Commission (21511101200).
Figure 6: Qualitative example on **3DVT**, **Pure vision method**, **VTacO** and **VTacOH**. The upper demonstration shows our good performance on deformable objects. The results lower compare the visualization with different methods. We can clearly see that our method has a better qualitative reconstruction effect. In comparison with pure vision-based approaches and VTacO, we prove that the introduction of tactile signals improves the results significantly.
Figure 7: Real world examples. The deformation of the box and bottle has been better reconstructed with the introduction of tactile sensors in comparison with the pure-vision strategy.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Metrics & Fusion & bottle \\ \hline \multirow{2}{*}{IoU} & Addition & **0.9518** \\ & Concat & 0.9339 \\ \hline \multirow{2}{*}{CD} & Addition & 0.305 \\ & Concat & **0.114** \\ \hline \multirow{2}{*}{EMD} & Addition & **0.018** \\ & Concat & 0.028 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative results with different fusion strategies. |
2305.10570 | Numerical simulations of atmospheric quantum channels | Atmospheric turbulence is one of the lead disturbance factors for free-space
quantum communication. The quantum states of light in such channels are
affected by fluctuating losses characterized by the probability distribution of
transmittance (PDT). We obtain the PDT for different horizontal links via
numerical simulations of light transmission through the atmosphere. The results
are compared with analytical models: the truncated log-normal distribution, the
beam-wandering model, the elliptic-beam approximation, and the model based on
the law of total probability. Their applicability is shown to be strongly
dependent on the receiver aperture radius. We introduce an empirical model
based on the Beta distribution, which is in good agreement with numerical
simulations for a wide range of channel parameters. However, there are still
scenarios where none of the above analytical models fits the numerically
simulated data. The numerical simulation is then used to analyze the
transmission of quadrature-squeezed light through free-space channels. | M. Klen, A. A. Semenov | 2023-05-17T21:08:51Z | http://arxiv.org/abs/2305.10570v2 | # Numerical simulations of atmospheric quantum channels
###### Abstract
Atmospheric turbulence is one of the lead disturbance factors for free-space quantum communication. The quantum states of light in such channels are affected by fluctuating losses characterized by the probability distributions of transmittance (PDT). We obtain the PDT for different scenarios via numerical simulations of light transmission through the atmosphere. The results are compared with analytical models: truncated log-normal distribution, beam-wandering model, elliptic-beam approximation, and the model based on the law of total probability. Their applicability is shown to be strongly dependent on the receiver aperture radius. We introduce an empirical model based on the beta distribution, which is in good agreement with numerical simulations for a wide range of channel parameters. However, there are still scenarios where none of the above analytical models fits the numerically-simulated data. The numerical simulation is then used to analyze the transmission of quadrature-squeezed light through free-space channels.
## I Introduction
Quantum communication is a rapidly developing field with a wide range of applications. One of its main advantages is the ability to establish secure links between remote parties, for a review see Refs. [1; 2; 3; 4]. It can also be used in many other applications, such as quantum digital signature [5], connecting quantum devices with quantum teleportation [6; 7], or entanglement swapping protocols [8], to name just a few.
Optical radiation fields serve as natural carriers of quantum information for communication channels. Quantum light can be distributed through optical fibers or free space. The latter offers several practical advantages, including the ability to establish satellite-mediated global quantum communication, communication through hard-to-access regions, and communication with moving parties. Various implementations of free-space channels have been reported in literature for horizontal (see, e.g., Refs. [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]) and vertical (see, e.g., Refs. [20; 21; 22]) links, including experiments involving satellites [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
Atmospheric turbulence is one of the major obstacles for free-space quantum communication. Its effect on the optical fields is described by methods [37; 38; 39] developed in classical atmospheric optics. The corresponding techniques differ depending on the degrees of freedom of light and the types of measurements involved in the protocol. For example, protocols dealing with optical angular momentum [40; 41; 42] require a proper description of the spatial structure of the modes [43; 44; 45; 46; 47]. Another type of protocols and experiments involving photocounting [48; 49], polarization analysis [9; 12; 50; 51], homodyne detection [10; 52; 53; 54; 11; 54], and so on, demand a proper description of the fluctuating losses in atmospheric channels. In this case, the light mode at the transmitter can be prepared, for example, as a pulsed Gaussian beam. For discrete-variable protocols, using propagation of single photons in each direction and/or measurements with long detection times, fluctuations of losses (scintillation) can be neglected [55; 56; 57].
As it has been discussed in Ref. [58], fluctuating losses can be described by the quantum-state input-output relation,
\[P_{\rm out}\left(\alpha\right)=\int\limits_{0}^{1}{\rm d}\,\eta\frac{1}{\eta} P_{\rm in}\left(\frac{\alpha}{\sqrt{\eta}}\right)\mathcal{P}(\eta). \tag{1}\]
Here \(P_{\rm out}(\alpha)\) and \(P_{\rm in}(\alpha)\) are the Glauber-Sudarshan P functions [59; 60] at channel output and input, respectively, \(\eta\) is the channel transmittance, and \(\mathcal{P}(\eta)\) is the probability distribution of transmittance (PDT). Obviously, the PDT is the main characteristic of quantum channels. Equation (1) has similar form in both quantum and classical optics and thus the PDT can be obtained from purely classical considerations.
The mentioned approach has been successfully applied to analyze quantum-state transfer through free-space channels, see Ref. [61] for a brief summary. For example, quadrature squeezing [53; 62], Bell nonlocality [50; 51], Gaussian entanglement [63; 64], and higher-order nonclassical and entanglement phenomena [65; 66] have been analyzed in the context of their transfer through atmospheric channels. Also the input-output relation (1) has been applied to analyze quantum communication protocols such as the decoy state protocol [67; 68], continuous variable quantum key distribution protocols [52; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79], continuous-variable quantum teleportation [80; 81; 82], etc.
A rigorous description of free-space quantum channels requires knowledge of the PDT. Random fluctuations of the refractive index due to atmospheric turbulence lead to random wandering of the beam centroid, fluctuations of the beam-spot shape, etc. These effects in turn lead to fluctuations in the intensity of the beam passing through the receiver aperture.
Several analytical models of the PDT have been proposed. One of them focuses on the effect of beam wandering [62]. Furthermore, the elliptic-beam model [83] involves beam-spot distortions approximated by Gaussian elliptic beams with randomly-oriented semi-axes. In
particular, this model shows that beam-spot distortions can be effectively described by the truncated log-normal distribution considered earlier in Refs. [48; 49]. Assuming that beam wandering and broadening are statistically independent, the law of total probability can be employed to derive the PDT [68].
All analytical models have a common drawback: they are applicable for limited values of channel parameters such as turbulence strength, propagation distance, receiver aperture size, and beam parameters. It is usually difficult to determine which model to use for given values of the channel parameters. In this paper, we address this problem by providing numerical simulations of the PDT and comparing the results with analytical models. We use the sparse-spectrum formulation [84; 85; 86] of the phase-screen method [87; 88; 89]. We also numerically check statistical properties of beams after passing through the atmosphere: Gaussianity of the distribution of the beam-centroid position, correlation between beam deflection and beam shape, etc. These properties are directly related to the applicability of analytical models.
It is important to note that the phase-screen method cannot be considered a substitute for experimental investigations, since it involves approximations. Nevertheless, it is free of assumptions about the beam shape and its statistical characteristics made in the analytical models of the PDT. Thus, our results can be useful to determine their applicability. Numerically simulated data can also be used directly to analyze quantum light transmission in atmospheric channels for a variety of scenarios. Taking the quadrature squeezing as an instance, we demonstrate how numerical analysis can be employed to study the transmission of nonclassical phenomena through free-space channels.
The rest of the paper is organized as follows. In Sec. II we introduce preliminary information on the propagation of Gaussian beams through the turbulent atmosphere, the sparse-spectrum formulation of the phase-screen method, and analytical models. An empirical analytical model based on the beta distribution, which shows good agreement with the numerically-simulated data, is introduced in Sec. III. Numerical simulations of atmospheric channels and comparison of the obtained numerically-simulated data with analytical models are considered in Sec. IV. In Sec. V the statistical properties of beams after passing through the turbulent atmosphere are analyzed. An application of the discussed method to a description of the transmission of quadrature squeezing through free-space channels is considered in Sec. VI. Summary and concluding remarks are given in Sec. VII. In Supplemental Material [90] one can find numerically-simulated data for different channels, an interactive visualization tool, and the corresponding Python 3 codes.
## II Preliminaries
In this section, we briefly introduce preliminary results needed for our consideration. Firstly, we discuss the fluctuating channel transmittance \(\eta\). Secondly, we briefly review the sparse-spectrum formulation of the phase-screen method used for our numerical simulations. Finally, we review the existing analytical models of the PDT.
### Channel transmittance
The distribution of a light beam through the turbulent atmosphere along the \(z\)-axis can be described by the paraxial equation [38] for the field amplitude \(u(\mathbf{r};z)\),
\[2ik\frac{\partial u(\mathbf{r};z)}{\partial z}+\Delta_{\mathbf{r}}u(\mathbf{r} ;z)+2k^{2}\delta n(\mathbf{r},z)u(\mathbf{r};z)=0, \tag{2}\]
where \(k\) is the wave number, \(\delta n(\mathbf{r},z)\) is a fluctuating part of the index of refraction of air, \(\mathbf{r}=\left(x\;\;y\right)^{T}\) is the vector of transverse coordinates. The Gaussian beams are defined by the boundary conditions of this equation,
\[u(\mathbf{r};0)=\sqrt{\frac{2}{\pi W_{0}^{2}}}\exp\Bigl{[}-\frac{\mathbf{r}^{ 2}}{W_{0}^{2}}-\frac{ik}{2F_{0}}\mathbf{r}^{2}\Bigr{]}. \tag{3}\]
Here \(W_{0}\) and \(F_{0}\) are the beam spot radius and the wavefront radius at the transmitter, respectively. For our purposes we will use two types of initial conditions. The first corresponds to the case of \(F_{0}=+\infty\). In the absence of atmosphere, this beam is collimated, i.e. it is minimally divergent, for a propagation distance much shorter than the Rayleigh length, \(z\ll z_{\mathrm{R}}=kW_{0}^{2}/2\). The second corresponds to \(F_{0}=z_{\mathrm{ap}}\), where \(z_{\mathrm{ap}}\) is the channel length. Such a beam is considered to be geometrically focused, cf. Ref. [91], because in the absence of atmosphere it has a minimum beam spot radius in the plane of the receiver aperture, \(z=z_{\mathrm{ap}}\), given \(W_{0}\) and \(z_{\mathrm{ap}}\).
Random fluctuations of the index of refraction in the case of isotropic homogeneous turbulence are described by the correlation function, the Markovian approximation for which reads
\[\langle\delta n(\mathbf{r}_{1};z_{1})\delta n(\mathbf{r}_{2};z_{2 })\rangle\] \[\qquad=\int_{\mathbb{R}^{2}}d^{2}\mathbf{\kappa}\Phi_{n}(\mathbf{\kappa}; \kappa_{z}\!\!=\!\!0)e^{i\mathbf{\kappa}\cdot(\mathbf{r}_{1}-\mathbf{r}_{2})} \delta(z_{1}-z_{2}), \tag{4}\]
where \(\Phi_{n}(\mathbf{\kappa};\kappa_{z})\) is the turbulence spectrum. We will use it in the modified von Karman-Tatarskii form given by
\[\Phi_{n}(\mathbf{\kappa};\kappa_{z})=\frac{0.033C_{n}^{2}\exp\left[-\left(\frac{ \kappa\ell}{2\pi}\right)^{2}\right]}{\left[\kappa^{2}+L_{o}^{-2}\right]^{11/6}}. \tag{5}\]
Here \(C_{n}^{2}\) is the index-of-refraction structure constant characterizing the local strength of the turbulence, \(\ell_{o}\) and
\(L_{o}\) are inner and outer scales of turbulence corresponding to minimum and maximum sizes of the turbulence eddies, respectively, and \(\kappa=\sqrt{\mathbf{\kappa}^{2}+\kappa_{z}^{2}}\).
The beam intensity is defined as \(I(\mathbf{r},z)=|u(\mathbf{r};z)|^{2}\). As discussed in Refs. [68, 62, 63], the transmittance is given by integration of the field intensity over the aperture opening \(\mathcal{A}\) in the receiver aperture plane \(z=z_{\rm ap}\),
\[\eta=\int_{\mathcal{A}}d^{2}\mathbf{r}I(\mathbf{r};z_{\rm ap}). \tag{6}\]
Fluctuations of \(\delta n(\mathbf{r};z)\) lead to fluctuations of \(I(\mathbf{r};z)\) and consequently to randomization of \(\eta\). Moreover, this consideration does not take into account deterministic losses caused by absorption and scattering, which should be included in the final results.
Next, we consider the second- and fourth-order correlation functions given by
\[\Gamma_{2}(\mathbf{r};z)=\left\langle I(\mathbf{r};z)\right\rangle, \tag{7}\]
and
\[\Gamma_{4}(\mathbf{r}_{1},\mathbf{r}_{2};z)=\left\langle I(\mathbf{r}_{1};z)I (\mathbf{r}_{2};z)\right\rangle, \tag{8}\]
respectively. These functions can be used to obtain various statistical properties of the light beam passing through the turbulent atmosphere. For instance, the first two moments of the channel transmittance can be evaluated as
\[\left\langle\eta\right\rangle=\int_{\mathcal{A}}d^{2}\mathbf{r}\Gamma_{2}( \mathbf{r};z) \tag{9}\]
and
\[\left\langle\eta^{2}\right\rangle=\int_{\mathcal{A}}d^{2}\mathbf{r}_{1}\int_ {\mathcal{A}}d^{2}\mathbf{r}_{2}\Gamma_{4}(\mathbf{r}_{1},\mathbf{r}_{2};z). \tag{10}\]
Other applications of these functions needed for analytical models of the PDT will be discussed Sec. II.3.
### Sparse-spectrum model for phase-screen method
According to Eq.(6), in order to sample the channel transmittance \(\eta\), one must first sample the field amplitude \(u(\mathbf{r};z)\). This can be accomplished using the sparse-spectrum model for the phase-screen method [84, 85, 86]. The method relies on the fact that given the field amplitude \(u(\mathbf{r},z_{m-1})\) at the point \(z=z_{m-1}\) one can find an approximate solution to the paraxial equation (2) at the point \(z=z_{m}>z_{m-1}\) with an accuracy up to the second order of \(l=z_{m}-z_{m-1}\), cf. Ref. [87],
\[u(\mathbf{r};z_{m})\approx e^{-\frac{il}{4k}\Delta\star}e^{-i\phi(\mathbf{r}) }e^{-\frac{il}{4k}\Delta\star}u(\mathbf{r};z_{m-1}). \tag{11}\]
Here
\[\phi(\mathbf{r})=k\int\limits_{z_{m-1}}^{z_{m}}dz\,\delta n(\mathbf{r},z) \tag{12}\]
is referred to as the phase screen. As follows from Eq. (4), the correlation function of the latter is
\[\left\langle\phi(\mathbf{r}_{1})\phi(\mathbf{r}_{2})\right\rangle=\int_{ \mathbb{R}^{2}}d^{2}\mathbf{\kappa}\Phi_{\phi}(\mathbf{\kappa})e^{i\mathbf{\kappa}\cdot( \mathbf{r}_{1}-\mathbf{r}_{2})}, \tag{13}\]
where
\[\Phi_{\phi}(\mathbf{\kappa})=2\pi lk^{2}\Phi_{n}(\mathbf{\kappa};0) \tag{14}\]
is the power spectral density of the phase screens. Equation (11) implies that propagation between the points \(z_{m-1}\) and \(z_{m}\) is reduced to vacuum propagation up to the midpoint of this interval, followed by the phase increment by \(\phi(\mathbf{r})\), and finally vacuum propagation again.
The phase screen method can be outlined as follows. Firstly, the propagation distance is divided into \(M\) slabs \([z_{m-1},z_{m}]\), where \(m=1\dots M\), \(z_{0}=0\) and \(z_{M}=z_{\rm ap}\). Secondly, the phase screens are sampled at the midpoints of these slabs. Finally, the vacuum propagation of the field amplitude \(u(\mathbf{r},z)\) is simulated successively between these points and the \(\mathbf{r}\)-dependent phase is incremented on them.
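A minimal numpy sketch of a single slab of this procedure, following the sign conventions of Eq. (11) and implementing the vacuum half-steps with FFTs, is given below; grid size, sampling, and boundary handling are left unspecified.

```python
import numpy as np

def split_step(u, phase_screen, l, k, dx):
    """One slab of the split-step propagation, Eq. (11): vacuum propagation
    over l/2, phase increment by phi(r), vacuum propagation over l/2.

    u            : (N, N) complex field amplitude at z_{m-1}
    phase_screen : (N, N) sampled phi(r) at the slab midpoint
    l            : slab thickness, k : wave number, dx : grid spacing
    """
    n = u.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing='ij')
    # Spectral representation of exp(-i l Delta / (4k)): Delta -> -|kappa|^2.
    half = np.exp(1j * l * (KX**2 + KY**2) / (4 * k))
    u = np.fft.ifft2(half * np.fft.fft2(u))
    u = u * np.exp(-1j * phase_screen)     # thin phase screen, Eq. (11)
    u = np.fft.ifft2(half * np.fft.fft2(u))
    return u
```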
The vacuum propagation can be simulated using the Fourier transform of the field amplitude. In the sparse-spectrum model [84, 85, 86], the phase screens are expanded into \(N\) harmonics, such that
\[\phi(\mathbf{r})\approx\mathrm{Re}\sum_{n=1}^{N}a_{n}e^{i\mathbf{\kappa}_{n}\cdot \mathbf{r}}. \tag{15}\]
The sampled values of \(\phi(\mathbf{r})\) are obtained via sampling the expansion coefficients \(a_{n}\) and the random vectors \(\mathbf{\kappa}_{n}\). Their statistics should satisfy Eq. (13). This can be achieved if two conditions are fulfilled. First, the coefficients \(a_{n}\) are normally distributed with
\[\left\langle a_{n}\right\rangle=0,\left\langle a_{n}a_{m}\right\rangle=0,\left \langle a_{n}a_{m}^{*}\right\rangle=s_{n}\delta_{nm}. \tag{16}\]
Second, the values of \(s_{n}\) and the random vectors \(\mathbf{\kappa}_{n}\) are chosen such that
\[\sum_{n=1}^{N}s_{n}p_{n}(\mathbf{\kappa})=2\Phi_{\phi}(\mathbf{\kappa}), \tag{17}\]
where \(p_{n}(\mathbf{\kappa}_{n})\) are the probability distributions for the random vectors \(\mathbf{\kappa}_{n}\).
Let us split the domain of \(\mathbf{\kappa}\) into \(N\) rings, \(K_{n-1}<\kappa<K_{n}\), where
\[K_{n}=K_{\rm min}\exp\left[\frac{n}{N}\ln\left(\frac{K_{\rm max}}{K_{\rm min}} \right)\right], \tag{18}\]
\(n=0\dots N\) and \(K_{\rm min}\) and \(K_{\rm max}\) are the inner and outer bounds of the spectral domain. We define \(p_{n}(\mathbf{\kappa}_{n})\) such that it has zero values outside the corresponding ring. Within each ring, the phase is assumed to be homogeneously distributed. In this case, Eq. (17) is reduced to the form
\[s_{n}p_{n}(\mathbf{\kappa}_{n})=2\Phi_{\phi}(\mathbf{\kappa}_{n}). \tag{19}\]
Integrating this expression inside the ring, we get
\[s_{n}=2\pi\int\limits_{K_{n-1}}^{K_{n}}d\kappa\kappa\Phi_{\phi}(\kappa). \tag{20}\]
The probability distributions for the vectors \(\mathbf{\kappa}_{n}\) are given by \(p_{n}(\mathbf{\kappa}_{n})=2s_{n}^{-1}\Phi_{\phi}(\mathbf{\kappa}_{n})\). We approximate this distribution by sampling the absolute value of \(\mathbf{\kappa}_{n}\) as
\[\kappa_{n}=\sqrt{K_{n-1}^{2}+\xi_{n}\left(K_{n}^{2}-K_{n-1}^{2}\right)}, \tag{21}\]
where \(\xi_{n}\in[0,1]\) is a uniformly distributed random variable, and by independently sampling the polar angle of \(\mathbf{\kappa}_{n}\) as a uniformly distributed random variable in the domain \([0,2\pi]\).
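A schematic implementation of this sampling procedure might look as follows; it is a sketch rather than the simulation code. Here `X`, `Y` are the coordinate grids of the screen, `phi_spec` stands for the radial power spectral density \(\Phi_{\phi}(\kappa)\) of Eq. (14), and `rng` is a NumPy random generator; all these names are assumptions of the example.

```python
import numpy as np
from scipy.integrate import quad

def sample_phase_screen(X, Y, phi_spec, N, K_min, K_max, rng):
    """Sparse-spectrum phase screen, Eqs. (15)-(21)."""
    # Ring bounds, Eq. (18)
    K = K_min * np.exp(np.arange(N + 1) / N * np.log(K_max / K_min))
    phi = np.zeros_like(X)
    for n in range(1, N + 1):
        # Ring weight, Eq. (20)
        s_n = 2 * np.pi * quad(lambda q: q * phi_spec(q), K[n - 1], K[n])[0]
        # Circular-Gaussian coefficient with <|a_n|^2> = s_n, Eq. (16)
        a_n = np.sqrt(s_n / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
        # Radial wave number inside the ring, Eq. (21), and a uniform polar angle
        kappa = np.sqrt(K[n - 1]**2 + rng.uniform() * (K[n]**2 - K[n - 1]**2))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        # Accumulate the harmonic, Eq. (15)
        phi += np.real(a_n * np.exp(1j * kappa * (X * np.cos(theta) + Y * np.sin(theta))))
    return phi
```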
### Analytical and semi-analytical models
#### ii.3.1 Truncated log-normal model
The log-normal distribution is a widely used model that describes light passing through atmospheric channels [48; 49; 13]. In the domain of its applicability, this distribution should be truncated, since the channel efficiency cannot exceed the value of one. The normalized truncated log-normal PDT is given by
\[\mathcal{P}(\eta;\mu,\sigma)=\frac{1}{\mathcal{F}(1)}\frac{1}{\sqrt{2\pi}\eta \sigma}\exp\Bigl{[}-\frac{(\ln\eta+\mu)^{2}}{2\sigma^{2}}\Bigr{]}, \tag{22}\]
where \(\mathcal{F}(x)\) is the cumulative distribution function of the log-normal distribution. The parameters \(\mu\) and \(\sigma\) are related to the first and second moments of the transmittance, cf. Eqs. (9) and (10), respectively, as
\[\mu=\mu\left(\langle\eta\rangle,\langle\eta^{2}\rangle\right)\approx-\ln\left[\frac{\langle\eta\rangle^{2}}{\sqrt{\langle\eta^{2}\rangle}}\right], \tag{23}\]
\[\sigma^{2}=\sigma^{2}\left(\langle\eta\rangle,\langle\eta^{2}\rangle\right)\approx\ln\left[\frac{\langle\eta^{2}\rangle}{\langle\eta\rangle^{2}}\right]. \tag{24}\]
Thus, the truncated log-normal model is completely characterized by two moments of the transmittance, i.e. by the information contained in the field correlation functions (7) and (8).
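A minimal sketch of this two-moment parameterization, using SciPy's log-normal distribution with scale \(e^{-\mu}\), is given below. The function name and the assumption \(0<\eta\leq 1\) are choices of the example, not of the original model.

```python
import numpy as np
from scipy.stats import lognorm

def truncated_lognormal_pdt(eta, mean_eta, mean_eta2):
    """Truncated log-normal PDT, Eqs. (22)-(24), from the first two moments."""
    sigma = np.sqrt(np.log(mean_eta2 / mean_eta**2))        # Eq. (24)
    mu = -np.log(mean_eta**2 / np.sqrt(mean_eta2))          # Eq. (23)
    # lognorm(s=sigma, scale=exp(-mu)): density of eta with ln(eta) ~ N(-mu, sigma^2)
    dist = lognorm(s=sigma, scale=np.exp(-mu))
    norm = dist.cdf(1.0)                                    # truncation factor F(1)
    return dist.pdf(eta) / norm
```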
#### ii.3.2 Beam-wandering model
Here we briefly sketch the beam-wandering model considered in Ref. [62]. In this model it is assumed that the beam intensity at the receiver-aperture plane has a Gaussian shape, but \(W_{0}\) is replaced with the short-term beam-spot radius \(W_{\text{ST}}\),
\[W_{\text{ST}}^{2}=W_{\text{LT}}^{2}-4\sigma_{\text{bw}}^{2}. \tag{25}\]
Here \(W_{\text{LT}}\), defined as
\[W_{\text{LT}}^{2}=4\int_{\mathbb{R}^{2}}d\mathbf{r}x^{2}\Gamma_{2}(\mathbf{r} ;z), \tag{26}\]
is the long-term beam-spot radius, and
\[\sigma_{\text{bw}}^{2}=\int_{\mathbb{R}^{4}}d\mathbf{r}_{1}d\mathbf{r}_{2}x_{ 1}x_{2}\Gamma_{4}(\mathbf{r}_{1},\mathbf{r}_{2};z) \tag{27}\]
is the variance of a beam-centroid coordinate. Assuming that the beam-centroid position is normally distributed, an analytical form for the PDT is derived in Ref. [62], see also Appendix A. The basic assumption behind this model is that atmospheric turbulence results mostly in beam wandering.
#### ii.3.3 Elliptic-beam model
The main idea behind the elliptic-beam model [83] is to approximate the beam intensity by the elliptic Gaussian form
\[I(\mathbf{r};z_{\text{ap}})=\frac{2}{\pi\sqrt{\det\mathbf{S}}}\exp\Bigl{[}-2( \mathbf{r}-\mathbf{r}_{0})^{\text{T}}\mathbf{S}^{-1}(\mathbf{r}-\mathbf{r}_{0 })\Bigr{]}. \tag{28}\]
Here
\[\mathbf{r}_{0}=\int_{\mathbb{R}^{2}}d^{2}\mathbf{r}\,\mathbf{r}\,I(\mathbf{r} ;z_{\text{ap}}) \tag{29}\]
and
\[\mathbf{S}=4\int_{\mathbb{R}^{2}}d^{2}\mathbf{r}\left[(\mathbf{r}-\mathbf{r}_ {0})(\mathbf{r}-\mathbf{r}_{0})^{\text{T}}\right]I(\mathbf{r};z_{\text{ap}}) \tag{30}\]
are the beam-centroid position and the spot-shape matrix, respectively. As it has been discussed in Ref. [83], all parameters of the elliptic-beam model can be approximately related to the field correlation functions (7) and (8); see also Appendix B.
In this paper, along with the analytical approximation considered in Ref. [83], we will use a semi-analytical technique to obtain the elliptic-beam PDT. This method is free from assumptions about the statistical characteristics of the elliptic-beam model. In particular, this technique involves the following steps: (i) the values of the field intensity \(I(\mathbf{r};z_{\text{ap}})\) are sampled using the sparse-spectrum model for the phase-screen method; (ii) the sampled values of \(\mathbf{r}_{0}\) and \(\mathbf{S}\) are obtained using Eqs. (29) and (30); (iii) these values are successively substituted into Eqs. (28) and (6) to obtain the sampled values of the transmittance \(\eta\); (iv) the elliptic-beam PDT is reconstructed from the sampled values of \(\eta\). This semi-analytical approach is not considered a practical tool for obtaining the PDT. Rather, it is used solely to assess the general applicability of the elliptic-beam model, without relying on analytical approximations of the model parameters.
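Steps (ii) and (iii) of this procedure can be sketched as follows, assuming the sampled intensity \(I(\mathbf{r};z_{\text{ap}})\) is given on a square grid with spacing `dx` and is normalized to unit total power; the function names and the aperture centered at the origin are assumptions of this illustration.

```python
import numpy as np

def beam_moments(I, X, Y, dx):
    """Beam-centroid position and spot-shape matrix, Eqs. (29)-(30)."""
    w = I * dx**2                                   # intensity weights on the grid
    x0, y0 = np.sum(X * w), np.sum(Y * w)           # Eq. (29)
    dX, dY = X - x0, Y - y0
    S = 4 * np.array([[np.sum(dX * dX * w), np.sum(dX * dY * w)],
                      [np.sum(dX * dY * w), np.sum(dY * dY * w)]])  # Eq. (30)
    return np.array([x0, y0]), S

def elliptic_transmittance(r0, S, X, Y, dx, R_ap):
    """Transmittance of the elliptic-Gaussian beam (28) through an aperture of radius R_ap."""
    d = np.stack([X - r0[0], Y - r0[1]], axis=-1)
    Sinv = np.linalg.inv(S)
    qform = np.einsum("...i,ij,...j->...", d, Sinv, d)
    I_ell = 2 / (np.pi * np.sqrt(np.linalg.det(S))) * np.exp(-2 * qform)  # Eq. (28)
    mask = X**2 + Y**2 <= R_ap**2                   # circular aperture, as in Eq. (6)
    return np.sum(I_ell[mask]) * dx**2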
#### ii.3.4 The model based on the law of total probability
As discussed in Ref. [83], the beam-spot distortion leads to the log-normal shape of the PDT. At the same time, beam wandering leads to the log-negative Weibull distribution [62]. The model based on the law of total probability [68] employs these observations and combines them into a single PDT that accounts for two effects simultaneously. The corresponding function is given by
\[\mathcal{P}(\eta)=\int_{\mathbb{R}^{2}}d^{2}\mathbf{r}_{0}\mathcal{P}(\eta|\mathbf{r}_ {0})\rho(\mathbf{r}_{0}). \tag{31}\]
Here \(\rho(\mathbf{r}_{0})\) is the Gaussian probability distribution for a beam-centroid coordinate with variance (27). The function
\[\mathcal{P}(\eta|\mathbf{r}_{0})=\mathcal{P}(\eta;\mu_{r_{0}},\sigma_{r_{0}}) \tag{32}\]
is the truncated log-normal PDT, cf. Eq. (22), where \(\mu_{r_{0}}=\mu\left(\langle\eta\rangle_{r_{0}},\langle\eta^{2}\rangle_{r_{0}}\right)\) and \(\sigma_{r_{0}}=\sigma\left(\langle\eta\rangle_{r_{0}},\langle\eta^{2}\rangle_ {r_{0}}\right)\) are obtained from Eqs. (23) and (24), respectively. The conditional moments \(\langle\eta\rangle_{r_{0}}\) and \(\langle\eta^{2}\rangle_{r_{0}}\) are defined similarly to (9) and (10), respectively. However, in this case the field correlation functions are replaced by \(\Gamma_{2}^{(c)}(\mathbf{r};z)=\left\langle I^{(c)}(\mathbf{r};z)\right\rangle\) and \(\Gamma_{4}^{(c)}(\mathbf{r}_{1},\mathbf{r}_{2};z)=\left\langle I^{(c)}( \mathbf{r}_{1};z)I^{(c)}(\mathbf{r}_{2};z)\right\rangle\), where
\[I^{(c)}(\mathbf{r};z)=I(\mathbf{r}+\mathbf{r}_{0};z) \tag{33}\]
is the beam intensity function with the origin of the coordinate system placed at the position \(\mathbf{r}_{0}\) of the beam centroid.
There are two ways to obtain the conditional moments \(\langle\eta\rangle_{r_{0}}\) and \(\langle\eta^{2}\rangle_{r_{0}}\). The first requires direct integration of the field correlation functions \(\Gamma_{2}^{(c)}(\mathbf{r};z)\) and \(\Gamma_{4}^{(c)}(\mathbf{r}_{1},\mathbf{r}_{2};z)\). This leads to an involved integration of the analytic expressions for these functions, which in turn are often expressed in terms of multiple integrals. However, given the numerically-simulated data, these moments can be evaluated with standard numerical integration. The PDT obtained with such a semi-analytical technique can be used for a general analysis of the model applicability. The second possibility is the analytical approximation based on the assumption of weak beam wandering, see Ref. [68] and Appendix C. An analysis of the corresponding PDT will give us information about the suitability of this approximation.
## III Beta-distribution model
In this section we introduce an empirical model based on the beta distribution [92]. The choice of such an ansatz is motivated by several reasons. Firstly, this distribution is naturally defined in the domain \(\eta\in[0,1]\). Secondly, it can be parameterized by the two moments \(\langle\eta\rangle\) and \(\langle\eta^{2}\rangle\). Finally, this distribution has a highly variable shape depending on the parameter values. We will show that the beta distribution model fits the results of numerical simulations better than other analytical models for a wide range of channel parameters.
The PDT for the beta-distribution model reads
\[\mathcal{P}(\eta;a,b)=\frac{1}{B(a,b)}\eta^{a-1}(1-\eta)^{b-1}. \tag{34}\]
Here \(B(a,b)\) is the beta function. The parameters \(a\) and \(b\) are given by
\[a =a\left(\langle\eta\rangle,\langle\eta^{2}\rangle\right)=\frac{ \langle\eta\rangle^{2}-\langle\eta\rangle\langle\eta^{2}\rangle}{\langle\eta^{2}\rangle-\langle\eta\rangle^{2}}, \tag{35}\] \[b =b\left(\langle\eta\rangle,\langle\eta^{2}\rangle\right)=a\left( \langle\eta\rangle,\langle\eta^{2}\rangle\right)\left(\frac{1}{\langle\eta \rangle}-1\right). \tag{36}\]
These expressions parameterize the PDT in terms of \(\langle\eta\rangle\) and \(\langle\eta^{2}\rangle\).
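A short sketch of this method-of-moments parameterization and of evaluating Eq. (34) with SciPy is shown below; the function name and the numerical values in the usage example are purely illustrative.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_pdt_params(mean_eta, mean_eta2):
    """Beta-PDT parameters from the first two transmittance moments, Eqs. (35)-(36)."""
    var = mean_eta2 - mean_eta**2
    a = (mean_eta**2 - mean_eta * mean_eta2) / var   # = <eta>(<eta>(1-<eta>)/var - 1)
    b = a * (1.0 / mean_eta - 1.0)
    return a, b

# Usage example with illustrative moments: evaluate the PDT of Eq. (34) at eta = 0.35
a, b = beta_pdt_params(0.4, 0.2)
pdt_value = beta_dist(a, b).pdf(0.35)
```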
In many practical scenarios, the beta PDT can be considered as a reasonable alternative to the truncated log-normal distribution. This implies that one can reformulate the model based on the law of total probability such that \(\mathcal{P}(\eta|\mathbf{r}_{0})\) in Eq. (31) is replaced with the beta PDT as
\[\mathcal{P}(\eta|\mathbf{r}_{0})=\mathcal{P}(\eta;a_{r_{0}},b_{r_{0}}), \tag{37}\]
where \(a_{r_{0}}=a\left(\langle\eta\rangle_{r_{0}},\langle\eta^{2}\rangle_{r_{0}}\right)\) and \(b_{r_{0}}=b\left(\langle\eta\rangle_{r_{0}},\langle\eta^{2}\rangle_{r_{0}}\right)\). In this case, the conditional moments \(\langle\eta\rangle_{r_{0}}\) and \(\langle\eta^{2}\rangle_{r_{0}}\) are defined in the same way as it is discussed in Sec. II.3.4.
## IV Numerical simulations
In this section we will simulate the PDTs for various channels and compare the results with the analytical models considered in Sec. II.3 and Sec. III. We will focus on three channels classified by the value of the Rytov parameter \(\sigma_{R}^{2}=1.23C_{n}^{2}k^{7/6}z_{\rm ap}^{11/6}\), see Ref. [91]. This parameter characterizes the overall impact of optical turbulence on the entire channel, taking into account both the local turbulence strength characterized by \(C_{n}^{2}\) and the channel length \(z_{\rm ap}\). We will consider three scenarios: weak impact of optical turbulence with \(\sigma_{\rm R}^{2}<1\), moderate impact of optical turbulence with \(\sigma_{\rm R}^{2}=1\ldots 10\), and strong impact of optical turbulence with \(\sigma_{\rm R}^{2}>10\), as shown in Table 1. The numerically-simulated data, an interactive tool for their visualization, and the corresponding code can be accessed in Supplemental Material [90]. It should be noted that our simulations do not include deterministic losses caused by absorption and scattering. For practical applications of our results, these losses should be additionally taken into account. In a typical scenario, they can be considered on the order of \(0.1\) dB/km, see Ref. [93].
In order to quantify the differences between the analytical/semi-analytical and numerical PDTs, we will use the Kolmogorov-Smirnov (KS) statistic (see, e.g.,
Ref. [94]), given by
\[D_{M}=\sup_{\eta}\left|F_{M}(\eta)-F(\eta)\right|. \tag{38}\]
Here \(F(\eta)=\int_{0}^{\eta}d\eta^{\prime}\mathcal{P}(\eta^{\prime})\) is the cumulative probability distribution of transmittance corresponding to the analytical model, \(F_{M}(\eta)=M^{-1}\sum_{i=1}^{M}\theta(\eta-\eta_{i})\) is the empirical probability distribution obtained from \(M\) sampled transmittances \(\eta_{i}\), and \(\theta(\eta)\) is the Heaviside step function. If an analytical or semi-analytical model presumes samples of transmittance, \(F(\eta)\) is replaced by the empirical probability distribution obtained from the corresponding sampled values. The cases \(D_{M}=0\) and \(D_{M}=1\) correspond to complete equivalence and maximum discrepancy, respectively, between the numerically-simulated data and analytical models.
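The KS statistic of Eq. (38) can be sketched as follows for an analytical model with cumulative distribution `cdf`; when the model itself is given by samples, the same comparison can be carried out between two empirical distributions. The function name is an assumption of the example.

```python
import numpy as np

def ks_statistic(samples, cdf):
    """KS statistic of Eq. (38): sup over eta of |F_M(eta) - F(eta)|."""
    eta = np.sort(np.asarray(samples))
    M = eta.size
    F = cdf(eta)
    # The empirical CDF jumps at each sample; compare F against both step levels
    d_plus = np.max(np.arange(1, M + 1) / M - F)
    d_minus = np.max(F - np.arange(0, M) / M)
    return max(d_plus, d_minus)
```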
We have simulated data for three channels with the technique described in Sec. II.2. For the channels with weak and moderate impact of optical turbulence, we use a spatial grid with 512 points along one axis. For the channel with strong impact of optical turbulence, we use a spatial grid with 1024 points. The spatial grid steps are 0.3 mm, 0.4 mm and 1.2 mm for the channels with weak, moderate and strong impact of optical turbulence, respectively. The number of spectral rings is \(N=1024\). The inner and outer bounds of the spectrum are \(K_{\rm min}=1/(15L_{0})\) and \(K_{\rm max}=2/\ell_{0}\), respectively. The number of phase screens is 10 for the channels with weak and moderate impact of optical turbulence and 15 for the channel with strong impact of optical turbulence. The number of samples, \(M\), is 410771, 333766, and 205089 for the channels with weak, moderate, and strong impacts of optical turbulence, respectively.
### Channel with weak impact of optical turbulence
In this section, we consider a 1 km channel with weak local turbulence, as detailed in the second column of Table 1. These parameter values result in a weak impact of optical turbulence over the entire channel. For \(F_{0}=+\infty\), the Rayleigh length is \(z_{\rm R}=kW_{0}^{2}/2\approx 1.553\) km, which is greater than the channel length \(z_{\rm ap}\)=1 km. Therefore, in the absence of the atmosphere, this beam can be approximately considered collimated. When \(F_{0}=z_{\rm ap}\), the beam-spot radius at the receiver aperture plane is minimized in the absence of the atmosphere for a given value of \(W_{0}\) and \(z_{\rm ap}\). Such a beam can be considered geometrically focused, as discussed in Ref. [91].
Firstly, consider the collimated beam. The dependence of the KS statistic on the aperture radius is shown in Fig. 1. Evidently, the beam-wandering and elliptic-beam models show the worst agreement with the numerically-simulated data. This result is counterintuitive since the beam-spot distortion is expected to be small for weak impact of optical turbulence. However, our simulations have shown that the contribution of this effect is still significant compared to beam wandering. The beta-distribution model shows the best agreement with the numerically simulated data for the considered channel. This model has a simple analytical form, cf. Eq. (34), and can be easily used for analytical considerations of various nonclassical effects for quantum light distributed in such channels. An example of the numerically-simulated PDT and different analytical PDTs can be seen in Fig. 2.
\begin{table}
\begin{tabular}{l c c c} Parameter & Weak & Moderate & Strong \\ \hline Rytov parameter \(\sigma_{\rm R}^{2}\) & 0.2 & 1.5 & 15 \\ Structure constant \(C_{n}^{2}\) (m\({}^{-2/3}\)) & \(5\times 10^{-15}\) & \(1.5\times 10^{-14}\) & \(5\times 10^{-15}\) \\ Outer scale \(L_{0}\) (m) & \(10^{3}\) & \(10^{3}\) & \(10^{3}\) \\ Inner scale \(\ell_{0}\) (m) & \(6\times 10^{-3}\) & \(10^{-3}\) & \(10^{-3}\) \\ Channel length \(z_{\rm ap}\) (km) & 1 & 1.6 & 10.2 \\ Beam-spot radius at the transmitter \(W_{0}\) (cm) & & & \\ Wavelength \(\lambda=2\pi/k\) (nm) & 809 & 809 & 808 \\ \end{tabular}
\end{table}
Table 1: Parameter values for channels with weak, moderate, and strong impact of optical turbulence considered in this paper.
Figure 1: The KS statistics, quantifying the differences between the numerically-simulated and analytical PDTs, are shown for the beam-wandering model (W), the elliptic-beam model (E), the truncated log-normal model (L), the model based on the law of total probability with the approximation of weak beam wandering (\(\rm T_{L}\)), and the beta-distribution model (B) for the channel with weak impact of optical turbulence and \(F_{0}=+\infty\).
Secondly, we consider the geometrically focused beam. The corresponding KS statistic as a function of aperture radius is shown in Fig. 3. The best result for small values of the aperture radius is demonstrated by the total probability model based on the beta distribution. In this domain, the approximation of weak beam wandering gives a result very close to the semi-analytical models. For the remaining domain, this approximation no longer works. However, the model based on the law of total probability without the assumption of weak beam wandering still shows the best agreement with the numerically simulated data, as can be seen from the KS statistic for the corresponding semi-analytical model. The elliptic-beam model, while showing good agreement with the numerically simulated data in the global minimum domain for the KS statistic, has a small discrepancy between the semi-analytical and analytical models. The best choice among the analytical models for large values of the aperture radius is given by the beta distribution model.
The numerically simulated and analytical PDTs for the geometrically focused beam are shown in Fig. 4. It is interesting to note that the shape of the numerically-simulated PDT is similar to the PDT for the elliptic-beam model, but for different values of the parameters. Consequently, the experimental or numerically simulated data can still be fitted by the elliptic-beam and beam-wandering models. However, the estimated values of the channel parameters obtained with such fitting are biased. In general, the modes of the elliptic-beam PDTs are less/greater than the modes of the numerically simulated PDTs for aperture radii less/greater than the value corresponding to the minimum in the KS statistic.
It is worth noting that geometrical focusing significantly improves the channel characteristics under the considered conditions. This means that the effects of diffraction are still more significant than the effects of turbulence for this channel. In particular, the mean transmittance \(\langle\eta\rangle\) is larger for \(F_{0}=z_{\rm ap}\) than for \(F_{0}=+\infty\).
Figure 4: The numerically-simulated (N), beta (B), elliptic-beam (E), truncated log-normal (L), and based on the law of total probability with beta distribution (T\({}_{\rm B}\)) PDTs are shown for the channel with weak impact of optical turbulence and \(F_{0}=z_{\rm ap}\).
Figure 3: The KS statistics, quantifying the differences between the numerically-simulated and analytical PDTs, are shown for the beam-wandering model (W), the elliptic-beam model (E), the truncated log-normal model (L), the model based on the law of total probability with the truncated log-normal (T\({}_{\rm L}\)) and beta (T\({}_{\rm B}\)) distributions, and the beta-distribution model (B) for the channel with weak impact of optical turbulence and \(F_{0}=z_{\rm ap}\). Solid and dashed lines correspond to analytical and semi-analytical models, respectively.
Figure 2: The numerically-simulated (N), beta-distribution (B), beam-wandering (W), and truncated log-normal (L) PDTs are shown for the channel with weak impact of optical turbulence and \(F_{0}=+\infty\).
### Channel with moderate impact of optical turbulence
The channel considered in this section, see the third column in Table 1, is characterized by the same beam parameters as the channel considered in Sec. IV.1. However, its length is 1.6 km. Such a channel has been implemented in the city of Erlangen (Germany), see Refs. [52, 16, 54]. In contrast to the previous case, here we consider stronger local turbulence, resulting in a moderate impact on the light beam. Since the propagation distance is approximately equal to the Rayleigh length, the corresponding beam is significantly divergent for \(F_{0}=+\infty\). Therefore, the geometrically focused beam, \(F_{0}=z_{\text{ap}}\), is the optimal choice for given \(W_{0}\) and \(z_{\text{ap}}\).
The corresponding KS statistics are shown in Fig. 5. The best agreement with the numerically-simulated data is achieved by the model based on the law of total probability with the beta distribution, but without the assumption of weak beam wandering. This fact becomes clear from the KS statistic for the corresponding semi-analytical model. The beta-distribution model shows the best agreement among the analytical models for almost the entire range of aperture radius values. Exceptions are the beam-wandering and elliptic-beam models, which show the best result in a range of aperture radii close to the long-term beam-spot radius \(W_{\text{LT}}\). The latter models may still be capable of fitting numerically-simulated data for a wider range of aperture radii. However, this may lead to a biased estimation of the turbulence parameters.
Examples of the numerically simulated PDTs are shown in Fig. 6 for \(F_{0}=z_{\text{ap}}\) and \(F_{0}=+\infty\). For the considered aperture radius \(R_{\text{ap}}=1.9\) cm, the PDT shapes do not resemble the beam-wandering and/or elliptic-beam PDT. We also consider a scenario with the beam-tracking procedure; see, e.g., Ref. [95]. The idea is to reduce the signal noise caused by beam wandering. A typical setup consists of a beacon laser at the receiver side and a position-sensitive sensor at the transmitter side. The latter is used for fast control of the source to align the beam centroid with the aperture center. The transformation of the PDT due to the partial and complete mitigation of the beam wandering has been considered in Ref. [68]. The PDTs for the perfectly tracked beams are shown in Fig. 6. These distributions describe only the beam-spot distortion. One can see that in this case, contrary to the assumptions in the model based on the law of total probability, these PDTs do not agree with the truncated log-normal distributions. It is also clear that the beam tracking procedure leads to an increase in the PDT modes. This implies that the channel characteristics are significantly improved with beam tracking.
### Channel with strong impact of optical turbulence
Figure 6: The numerically-simulated PDTs for the channel with the moderate impact of optical turbulence are shown. The cases \(F_{0}=z_{\text{ap}}\) and \(F_{0}=+\infty\) are given for the untracked (solid lines) and perfectly tracked (dot-dashed lines) beams.
Figure 5: The KS statistics, quantifying the differences between the numerically-simulated and analytical/semi-analytical PDTs, are shown for the beam-wandering model (W), the elliptic-beam model (E), the truncated log-normal model (L), the model based on the law of total probability with the truncated log-normal (T\({}_{\text{L}}\)) and beta (T\({}_{\text{B}}\)) distributions, and the beta-distribution model (B) for the channel with moderate impact of optical turbulence and \(F_{0}=z_{\text{ap}}\). Solid and dashed lines correspond to analytical and semi-analytical models, respectively.
In this section, we consider a 10.2 km channel with relatively weak strength of local turbulence, see the fourth column in Table 1. The propagation distance plays a crucial role in such a scenario, thus the overall impact of optical turbulence becomes strong. The Rayleigh length for the considered beam-spot radius is close to \(z_{\rm ap}\). This implies that the beam with \(F_{0}=+\infty\) is significantly divergent. Therefore, it is reasonable to use the geometrically focused beam, \(F_{0}=z_{\rm ap}\).
The KS statistics, quantifying the differences between the numerically simulated data and the analytical/semi-analytical models for the considered channel, are shown in Fig. 7. The general conclusions are similar to the case of moderate impact of optical turbulence: the beam-wandering and elliptic-beam models show a good agreement with the numerically simulated data for the aperture radius close to the long-term beam-spot radius \(W_{\rm LT}\); outside this range, the best result is usually given by the beta-distribution model. However, there is an important exception: For small apertures, the best result is given by the truncated log-normal model and the model based on the law of total probability with the truncated log-normal distribution. The KS statistic for the semi-analytical models based on the law of total probability demonstrates the best result for large apertures.
The numerical PDT and the beta PDT for different values of the aperture radius are shown in Fig. 8. Obviously, the PDT skewness changes with increasing aperture--this property is inherent for all considered channels. For small apertures, the PDT mode is left-centered, such that its shape is close to the truncated log-normal distribution or the model based on the law of total probability. This can be explained by a leading role of the beam-spot distortion. For larger apertures, the finite size of the beam becomes important. As a consequence, the PDT resembles the distributions for the beam-wandering and/or elliptic-beam models, which are right-centered. An interesting feature appears in the intermediate stage, see for example \(R_{\rm ap}=14\) cm. In this case, both phenomena, the beam wandering and the beam-spot distortion, contribute concurrently to the transmittance fluctuations. As a result, the corresponding PDT has two modes.
## V Statistical characteristics of beam parameters
The analytical and semi-analytical models considered in the previous sections are based on assumptions about the statistical properties of the beam parameters. Firstly, the beam-wandering model, elliptic-beam model, and the model based on the law of total probability all assume that the beam-centroid position follows a two-dimensional Gaussian distribution. Secondly, the model based on the law of total probability includes the assumption that the beam-centroid position is statistically independent of the beam shape. Finally, the elliptic-beam model supposes a particular shape of the probability distribution for the semi-axes of the ellipse, forming the beam shape in the Gaussian approximation. In this section, we will verify these assumptions through numerical simulations.
Figure 7: The KS statistics are shown for the beam-wandering model (W), the elliptic-beam model (E), the truncated log-normal model (L), the model based on the law of total probability with the truncated log-normal (T\({}_{\rm L}\)) and beta (T\({}_{\rm B}\)) distributions, and the beta-distribution model (B) for the channel with strong impact of optical turbulence and \(F_{0}=z_{\rm ap}\). Solid and dashed lines correspond to analytical and semi-analytical models, respectively.
Figure 8: The numerically-simulated (N), truncated log-normal (L), beta (B), and beam-wandering (W) PDTs for the channel with strong impact of optical turbulence and \(F_{0}=z_{\rm ap}\) are shown for different values of the aperture radius.
### Gaussianity of the beam-centroid position
Here we verify that the beam-centroid position follows a two-dimensional Gaussian distribution. To accomplish this, we calculate the beam centroid position using Eq. (29) for each sampling event and analyze the corresponding statistical data. Note that \(x_{0}\) and \(y_{0}\) are statistically independent. Therefore, it is sufficient to check whether a single coordinate follows a Gaussian distribution.
To characterize non-Gaussianity, we will use skewness and excess kurtosis [96]. The corresponding values for different channels are given in Table 2. The largest discrepancy with the assumption of Gaussian distribution for the beam centroid position is shown by the channel with strong impact of optical turbulence. However, even in this case, the skewness and excess kurtosis are small. This implies that the numerically-simulated data demonstrate a good agreement with Gaussian distributions in all cases. In Fig. 9 we compare numerically-simulated data with the corresponding Gaussian distribution for the case, where we find maximal absolute values of the skewness and excess kurtosis. As can be seen, the deviation from the Gaussian distribution is negligible even for such a scenario.
### Statistical contribution of beam wandering
In this section we analyze the contribution of beam wandering to the PDT shape. This can be characterized by correlations between the beam-deflection distance \(r_{0}=|\mathbf{r}_{0}|\) and the transmittance \(\eta\). Non-zero values of the corresponding Pearson correlation coefficient,
\[S(r_{0},\eta)=\frac{\left\langle\Delta r_{0}\Delta\eta\right\rangle}{\sqrt{ \left\langle\Delta r_{0}^{2}\right\rangle\left\langle\Delta\eta^{2}\right\rangle}}, \tag{39}\]
indicate a contribution of beam wandering to the PDT.
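A minimal sketch of the sample estimator for Eq. (39) is given below; it is equivalent to `np.corrcoef(r0, eta)[0, 1]`, and the function name is an assumption of the example.

```python
import numpy as np

def pearson(r0, eta):
    """Sample estimate of the Pearson correlation coefficient, Eq. (39)."""
    dr, de = r0 - np.mean(r0), eta - np.mean(eta)
    return np.sum(dr * de) / np.sqrt(np.sum(dr**2) * np.sum(de**2))
```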
We analyze each sampling event and find the corresponding values of \(\eta\) and \(r_{0}\) for it by using Eqs. (6) and (29), respectively.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Impact of turbulence & Wave-front radius \(F_{0}\) & Skewness & Excess kurtosis \\ \hline Weak & \(z_{\text{ap}}\) & 0.00074 & -0.00084 \\ & \(+\infty\) & 0.0061 & -0.0031 \\ Moderate & \(z_{\text{ap}}\) & 0.0029 & -0.024 \\ & \(+\infty\) & -0.012 & -0.026 \\ Strong & \(z_{\text{ap}}\) & 0.013 & -0.10 \\ & \(+\infty\) & 0.0046 & -0.088 \\ \hline \end{tabular}
\end{table}
Table 2: The skewness and excess kurtosis of the beam-centroid coordinate \(x_{0}\) in the case of channels with weak, moderate, and strong impact of optical turbulence, as given in Table 1.
Figure 10: The Pearson correlation coefficient (39) is shown as a function of aperture radius in the units of the corresponding long beam-spot width. \(W\), \(M\), and \(S\) correspond to the channels with weak, moderate, and strong impacts of optical turbulence, respectively, see Table 1. The indexes \(+\infty\) and \(z_{\text{ap}}\) indicate the cases with \(F_{0}=+\infty\) and \(F_{0}=z_{\text{ap}}\), correspondingly.
Figure 9: A histogram obtained from the numerically-simulated data, demonstrating the distribution of the beam-centroid coordinate \(x_{0}\) in the case of channel with strong impact of optical turbulence [cf. Table 1] and \(F_{0}=z_{\text{ap}}\), is compared with the corresponding Gaussian distribution (dashed line).
Using this set of sampled pairs, we estimate the Pearson correlation coefficient. The results are shown in Fig. 10 as a function of the aperture radius in the units of the corresponding long beam-spot width, \(R_{\text{ap}}/W_{\text{LT}}\). Obviously, the contribution of beam wandering to the PDT strongly depends on the aperture radius. The maximum anticorrelations appear for \(R_{\text{ap}}\approx 0.5W_{\text{LT}}\) for all channels. Also, the channel with weak impact of optical turbulence and \(F_{0}=+\infty\) shows relatively smaller anticorrelations compared to the other cases.
Strong anticorrelations shown by the Pearson coefficient (39) indicate the need to explicitly account for the contribution of beam wandering, as is done in the elliptic-beam model and in the model based on the law of total probability. This conclusion is also supported by the KS statistics considered in Sec. IV: The model based on the law of total probability--at least in the semi-analytical form--is in a reasonable agreement with the numerically-simulated data in all the cases except for the case of the channel with weak impact of optical turbulence and \(F_{0}=+\infty\).
### Correlations between beam shape and beam deflection
The model based on the law of total probability [68] includes the assumption that the beam shape is independent of the beam deflection. This implies that \(I^{(c)}(\mathbf{r})\) in Eq. (33) is statistically independent of \(\mathbf{r}_{0}\). This assumption will be verified in this section.
A straightforward way to do this is to study the correlations between the beam deflection \(r_{0}\) and the transmittance of the perfectly tracked beam obtained via integration similar to Eq. (6) but for \(I^{(c)}(\mathbf{r})\). For each sampling event, we find the corresponding beam-centroid position \(\mathbf{r}_{0}\) using Eq. (29), place the aperture center there, and calculate the corresponding transmittance \(\eta\) using Eq. (6). The result is the sample set of two random variables, \(r_{0}=|\mathbf{r}_{0}|\) and \(\eta\), which we use to estimate the Pearson correlation coefficient similar to Eq. (39). If it differs from zero, then the assumption of statistical independence between the beam shape and the beam deflection fails.
Values of this coefficient as a function of aperture radius are shown in Fig. 11. In most cases, the transmittance of the perfectly tracked beam and the beam-deflection distance are only weakly anticorrelated, at least in the Gaussian approximation. Relatively strong anticorrelations occur only in the case of the channel with strong impact of optical turbulence and large aperture radii.
We also consider another way to verify statistical independence between the beam-deflection distance and the beam shape, which does not depend on the aperture. We characterize the beam shape using the tangential beam-spot width \(W_{r}\) as depicted in Fig. 12. For each sampling event, we find the beam-centroid position \(\mathbf{r}_{0}=(x_{0},y_{0})\) and the corresponding angle \(\chi\) that defines the direction of the vector \(\mathbf{r}_{0}\) to the \(x\)-axis, such that \(\tan\chi=x_{0}/y_{0}\).
Figure 11: The Pearson correlation coefficient, defined similarly to Eq. (39) but with the transmittance of the perfectly tracked beam, is shown as a function of aperture radius in the units of the corresponding long beam-spot width. Lines are marked such as in Fig. 10.
Figure 12: The tangential beam-spot width \(W_{r}\) is depicted. The beam centroid is deflected to the point \(\mathbf{r}_{0}=(x_{0},y_{0})\). The angle \(\chi\) defines the direction of the vector \(\mathbf{r}_{0}\) to the \(x\)-axis.
\begin{table}
\begin{tabular}{l c c} \hline \hline Impact of turbulence & \(F_{0}=+\infty\) & \(F_{0}=z_{\text{ap}}\) \\ \hline Weak & 0.0090 & 0.037 \\ Moderate & 0.092 & 0.16 \\ Strong & 0.30 & 0.35 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The Pearson correlation coefficient between the beam-deflection distance \(r_{0}\) and the tangential beam-spot width \(W_{r}\), cf. Eq. (42), for the channels with weak, moderate, and strong impact of optical turbulence, as given in Table 1.
In the coordinate frame \((x_{r},y_{r})\), oriented toward the beam centroid,
\[x_{r} =x\cos\chi+y\sin\chi, \tag{40}\] \[y_{r} =-x\sin\chi+y\cos\chi, \tag{41}\]
we calculate \(S_{x_{r}x_{r}}\) similarly to Eq. (30), and the corresponding beam width as \(W_{r}=\sqrt{S_{x_{r}x_{r}}}\). From the set of sampling events, we obtain the Pearson correlation coefficient between \(W_{r}\) and \(r_{0}=(x_{0}^{2}+y_{0}^{2})^{1/2}\),
\[S(r_{0},W_{r})=\frac{\langle\Delta r_{0}\Delta W_{r}\rangle}{\sqrt{\langle \Delta r_{0}^{2}\rangle\langle\Delta W_{r}^{2}\rangle}}, \tag{42}\]
characterizing statistical correlations between the beam centroid and the beam shape. The results for different channels are shown in Table 3. They demonstrate that these statistical correlations can be significant for channels with moderate and strong impact of optical turbulence. However, as seen above, for a wide range of aperture radii this correlation does not significantly affect the transmittance values.
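A minimal sketch of how \(r_{0}\) and the tangential width \(W_{r}\) of Eqs. (40)-(42) can be evaluated for a single sampling event is given below; the sampled intensity is assumed to be normalized to unit total power on the grid, and the function name is illustrative. The Pearson coefficient of Eq. (42) can then be estimated from the resulting arrays of \((r_{0},W_{r})\) pairs.

```python
import numpy as np

def tangential_width(I, X, Y, dx):
    """Return (r0, W_r) for one sampled intensity: rotate the frame so that x_r points
    along the beam-centroid direction and take W_r = sqrt(S_{x_r x_r}), cf. Eq. (30)."""
    w = I * dx**2
    x0, y0 = np.sum(X * w), np.sum(Y * w)      # beam centroid, Eq. (29)
    chi = np.arctan2(x0, y0)                   # tan(chi) = x0 / y0, as in the text
    Xr = X * np.cos(chi) + Y * np.sin(chi)     # rotated coordinate, Eq. (40)
    xr0 = x0 * np.cos(chi) + y0 * np.sin(chi)
    S_xrxr = 4 * np.sum((Xr - xr0)**2 * w)     # diagonal element of Eq. (30) in the rotated frame
    return np.hypot(x0, y0), np.sqrt(S_xrxr)
```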
### Statistical distribution of semi-axes, forming elliptic beams
Let us consider the semi-axes of the ellipse, forming the beam shape in the Gaussian approximation. These quantities can be defined as eigenvalues of the matrix \(\mathbf{S}\), cf. Eq. (30),
\[W_{1,2}^{2}=\frac{1}{2}\left[S_{xx}+S_{yy}\pm\sqrt{(S_{xx}-S_{yy})^{2}+4S_{xy }^{2}}\right],\]
such that \(W_{1}\) is assigned the eigenvalue with the sign "\(+\)" or "-" if \(S_{xy}\geq 0\) or \(S_{xy}\leq 0\), respectively. As it is assumed in the elliptic-beam model [83], the quantities
\[\Theta_{1,2}=\ln\left(W_{1,2}^{2}/W_{0}^{2}\right) \tag{43}\]
are distributed according to a bivariate Gaussian distribution. This assumption is verified in this section.
For each sampling event we numerically calculate the matrix \(\mathbf{S}\) according to Eq. (30) and the corresponding values of \(\Theta_{1}\) and \(\Theta_{2}\). The corresponding coarse-grained scatter plot for the channel with strong impact of optical turbulence and \(F_{0}=z_{\mathrm{ap}}\) is shown in Fig. 13. It is compared with the covariance ellipse, given by the equation
\[\sum_{i,j=1}^{2}\left(\Theta_{i}-\langle\Theta_{i}\rangle\right)\Sigma_{ij}^{ -1}\big{(}\Theta_{j}-\langle\Theta_{j}\rangle\big{)}=4, \tag{44}\]
where \(\Sigma_{ij}=\langle\Delta\Theta_{i}\Delta\Theta_{j}\rangle\) is the covariance matrix for the variables \(\Theta_{1}\) and \(\Theta_{2}\). Obviously, the corresponding distribution differs significantly from the Gaussian distribution, especially due to the pronounced minimum for \(\Theta_{1}=\Theta_{2}\). Another important observation is that the Pearson correlation coefficient for these quantities is small--the variance ellipse almost resembles a circle. Consequently, \(\Theta_{1}\) and \(\Theta_{2}\) can be considered as uncorrelated in the second order.
In order to characterize non-Gaussianity, we first rotate the coordinate system \((\Theta_{1},\Theta_{2})\) to another one, \((\Theta_{\mathrm{(c)}},\Theta_{\mathrm{(s)}})\), as
\[\Theta_{\mathrm{(c)}} =\frac{1}{\sqrt{2}}\left(\Theta_{1}+\Theta_{2}\right), \tag{45}\] \[\Theta_{\mathrm{(s)}} =\frac{1}{\sqrt{2}}\left(\Theta_{1}-\Theta_{2}\right). \tag{46}\]
Since the probability distribution function is symmetrical with respect to the axis \(\Theta_{\mathrm{(c)}}\), the quantities \(\Theta_{\mathrm{(c)}}\) and \(\Theta_{\mathrm{(s)}}\) should be uncorrelated. Second, we calculate the skewness and excess kurtosis for these quantities in order to conclude about non-Gaussianity of the probability distribution for \(\Theta_{1}\) and \(\Theta_{2}\). The results are summarized in Table 4.
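A sketch of this procedure is shown below; it assumes the sampled spot-shape matrices are stacked into an array of shape \((M,2,2)\), and the function name is an assumption of the example.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def theta_statistics(S_samples, W0):
    """Skewness and excess kurtosis of Theta_(c) and Theta_(s), Eqs. (43)-(46)."""
    Sxx, Syy, Sxy = S_samples[:, 0, 0], S_samples[:, 1, 1], S_samples[:, 0, 1]
    root = np.sqrt((Sxx - Syy)**2 + 4 * Sxy**2)
    sgn = np.where(Sxy >= 0, 1.0, -1.0)             # eigenvalue assignment by the sign of S_xy
    W1sq = 0.5 * (Sxx + Syy + sgn * root)
    W2sq = 0.5 * (Sxx + Syy - sgn * root)
    Th1, Th2 = np.log(W1sq / W0**2), np.log(W2sq / W0**2)          # Eq. (43)
    Thc, Ths = (Th1 + Th2) / np.sqrt(2), (Th1 - Th2) / np.sqrt(2)  # Eqs. (45)-(46)
    return {name: (skew(v), kurtosis(v))            # kurtosis() returns excess kurtosis
            for name, v in [("Theta_c", Thc), ("Theta_s", Ths)]}
```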
\begin{table}
\begin{tabular}{c c c c} Impact of & Wave-front & Skewnesses & Excess kurtosises \\ turbulence & radius \(F_{0}\) & for \(\Theta_{\mathrm{(c)}}\) and \(\Theta_{\mathrm{(s)}}\) & for \(\Theta_{\mathrm{(c)}}\) and \(\Theta_{\mathrm{(s)}}\) \\ \hline \multirow{4}{*}{Weak} & \(z_{\mathrm{ap}}\) & \(2.4\times 10^{-1}\) & \(1.8\times 10^{-1}\) \\ & & \(1.4\times 10^{-3}\) & \(-8.4\times 10^{-1}\) \\ & \(+\infty\) & \(-1.0\times 10^{-1}\) & \(3.9\times 10^{-3}\) \\ & & \(1.9\times 10^{-4}\) & \(-1.0\) \\ & \(z_{\mathrm{ap}}\) & \(2.6\times 10^{-1}\) & \(1.2\times 10^{-1}\) \\ Moderate & & \(-2.3\times 10^{-3}\) & \(-8.4\times 10^{-1}\) \\ & \(+\infty\) & \(1.5\times 10^{-1}\) & \(-9.9\times 10^{-2}\) \\ & & \(2.4\times 10^{-3}\) & \(-1.0\) \\ & \(z_{\mathrm{ap}}\) & \(3.5\times 10^{-1}\) & \(2.8\times 10^{-1}\) \\ Strong & & \(2.3\times 10^{-3}\) & \(-7.5\times 10^{-1}\) \\ & \(+\infty\) & \(3.8\times 10^{-1}\) & \(2.2\times 10^{-1}\) \\ & & \(3.6\times 10^{-3}\) & \(-7.6\times 10^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 4: The skewness and excess kurtosis for \(\Theta_{\mathrm{(c)}}\) and \(\Theta_{\mathrm{(s)}}\) in the case of channels with weak, moderate, and strong impacts of optical turbulence, as it is given in Table 1.
Figure 13: The coarse-grained scatter plot for \(\Theta_{1}\) and \(\Theta_{2}\) in the elliptic-beam model for the channel with strong impact of optical turbulence and \(F_{0}=z_{\mathrm{ap}}\). The dashed line corresponds to the covariance ellipse, described by Eq. (44).
## VI Application: Transmission of Quadrature-Squeezed Light
Quadrature squeezing is a fundamental example of nonclassical phenomena exhibited by quantum light when measuring a field quadrature \(\hat{x}=2^{-1/2}(\hat{a}+\hat{a}^{\dagger})\), where \(\hat{a}\) is the field annihilation operator [97; 98; 99]. The quadrature variances can be expressed as
\[\left\langle\Delta\hat{x}^{2}\right\rangle=\frac{1}{2}+\left\langle:\Delta\hat {x}^{2}:\right\rangle, \tag{47}\]
where the term \(1/2\) corresponds to the quadrature variance of the vacuum state, and \(\left\langle:\Delta\hat{x}^{2}:\right\rangle\) represents the normal-ordered quadrature variance. A negative value for \(\left\langle:\Delta\hat{x}^{2}:\right\rangle\) indicates that the quadrature variance is smaller than its vacuum-state value.
A scheme of quadrature measurement with homodyne detection for the light passing through the turbulent atmosphere has been proposed and implemented in Refs. [10; 11]. In this case, the local oscillator is transmitted in an orthogonally-polarized mode but in the same spatial mode as the signal. Since the depolarization effect of the atmosphere is negligible (see, e.g., Ref. [37]), the transmittances of the local oscillator and the signal are strongly correlated. This correlation provides an opportunity to control the value of channel transmittance and postselect the events with high transmittance. Such a scheme has been implemented in Ref. [16] and theoretically analyzed in Refs. [83; 53].
As it has been shown in Ref. [53], the normal-ordered quadrature variance at the receiver site, \(\left\langle:\Delta\hat{x}^{2}:\right\rangle_{\text{out}}\), is related to the normal-ordered quadrature variance at the transmitter site, \(\left\langle:\Delta\hat{x}^{2}:\right\rangle_{\text{in}}\), as
\[\left\langle:\Delta\hat{x}^{2}:\right\rangle_{\text{out}}=\left\langle\eta \right\rangle\left\langle:\Delta\hat{x}^{2}:\right\rangle_{\text{in}}+\left \langle\Delta T^{2}\right\rangle\left\langle\hat{x}\right\rangle_{\text{in}}^ {2}, \tag{48}\]
where \(T=\sqrt{\eta}\) is the transmission coefficient. It is important to note that the constant losses due to absorption and scattering in the atmosphere, as well as the losses of the optical system, should be accounted for in the PDT or in the numerically-simulated data. For the squeezed vacuum state one has \(\left\langle\hat{x}\right\rangle_{\text{in}}=0\) and only the first term in Eq. (48) contributes. The mean transmittance for the postselected signal is given by
\[\left\langle\eta\right\rangle=\frac{1}{\mathcal{F}(\eta_{\text{min}})}\int \limits_{\eta_{\text{min}}}^{1}d\eta\,\eta\,\mathcal{P}\left(\eta\right), \tag{49}\]
where \(\eta_{\text{min}}\) is the minimal value of the postselected efficiency (the postselection threshold) and \(\mathcal{F}(\eta_{\text{min}})=\int_{\eta_{\text{min}}}^{1}d\eta\,\mathcal{P}\left(\eta\right)\) is the exceedance. The same quantity can be evaluated from the numerically-simulated data as
\[\left\langle\eta\right\rangle=\frac{1}{N_{\text{min}}}\sum_{\left\{\eta_{i} \geq\eta_{\text{min}}\right\}}\eta_{i}, \tag{50}\]
where \(\eta_{i}\) are the sampled values of the transmittance and \(N_{\text{min}}\) is the number of events with \(\eta_{i}\geq\eta_{\text{min}}\).
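A minimal sketch of Eq. (50) and of the first term of Eq. (48) for a squeezed-vacuum input is given below. The conversion between dB and the normal-ordered variance assumes the quadrature variance is quoted relative to the vacuum value \(1/2\); this convention, as well as the function names, is an assumption of the example.

```python
import numpy as np

def postselected_mean(eta_samples, eta_min):
    """Postselected mean transmittance, Eq. (50)."""
    eta_samples = np.asarray(eta_samples)
    sel = eta_samples[eta_samples >= eta_min]
    return np.mean(sel) if sel.size else np.nan

def output_squeezing_db(eta_samples, eta_min, squeezing_in_db):
    """Output squeezing for a squeezed-vacuum input, first term of Eq. (48)."""
    var_in_normal = 0.5 * (10 ** (squeezing_in_db / 10) - 1.0)   # <:Δx²:>_in from dB value
    var_out_normal = postselected_mean(eta_samples, eta_min) * var_in_normal
    return 10 * np.log10(1.0 + 2.0 * var_out_normal)             # back to dB relative to vacuum
```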
We consider quadrature squeezing for the channels listed in Table 1. A typical example of the dependence of the squeezing parameter on the postselection threshold \(\eta_{\text{min}}\) is shown in Fig. 14 for the channel with moderate impact of optical turbulence and \(F_{0}=z_{\text{ap}}\). A property common to all the examples considered is that the elliptic-beam and beam-wandering models give a significant discrepancy with the numerically-simulated data in the case of a small postselection threshold \(\eta_{\text{min}}\). This is due to the fact that these models are not bound to the value of the first moment \(\left\langle\eta\right\rangle\). Another approach [17; 83] assumes an estimation of the channel parameters by the method of moments based on the beam-wandering and elliptic-beam PDTs. This leads to good agreement with experimental (numerically-simulated) data. However, this method leads to biased values of the estimated parameters of channels.
Another interesting observation from Fig. 14 is a slight discrepancy between the squeezing parameters obtained from the truncated log-normal distribution and the numerically-simulated data at low postselection thresholds. This discrepancy arises due to the influence of the tails on the first two moments of the transmittance, leading to an error in Eqs. (23) and (24).
Figure 14: The squeezing parameter at the receiver as a function of the postselection threshold \(\eta_{\text{min}}\), evaluated with the numerically-simulated data (N), the elliptic-beam approximation (E), the truncated log-normal distribution (L), and the beta-distribution model (B), is shown for the channel with moderate impact of optical turbulence, cf. Table 1, and \(F_{0}=z_{\text{ap}}\). The initial squeezing at the transmitter site is -3 dB. The constant losses are 0.38 dB. The receiver aperture radius is \(R_{\text{ap}}=0.43W_{\text{LT}}=1.9\) cm.
The semi-analytical models based on the law of total probability also exhibit a small discrepancy due to the correlations between beam shape and beam deflection discussed in Sec. V.3. However, analytical models based on the law of total probability with the approximation of small beam wandering do not exhibit this discrepancy since this approximation assumes a strong constraint on the first two moments of the transmittance.
The beta-distribution model shows the best agreement with the numerically-simulated data. The corresponding PDT is strictly constrained to the first two moments of the transmittance, making this model work well for small values of the postselection threshold. However, this model also gives acceptable results for larger values of \(\eta_{\mathrm{min}}\).
The dependencies of the squeezing parameters on the aperture radius are shown in Fig. 15 for different values of the postselection threshold \(\eta_{\mathrm{min}}\). The behavior of these parameters is generally predictable. For instance, a small value of the aperture radius may require a large value of the postselection threshold to achieve an acceptable value of the squeezing parameter. However, as the aperture radius approaches the long-term beam-spot radius, the postselection becomes less significant.
## VII Conclusions
In conclusion, we have presented numerical simulations of the PDT for atmospheric optical channels. Depending on the Rytov parameter values, we consider scenarios with weak, moderate, and strong impacts of optical turbulence. We perform numerical simulations using the sparse-spectrum model for the phase-screen method and compare our results with existing analytical models, including the truncated log-normal distribution, beam-wandering model, elliptic-beam approximation, and the model based on the law of total probability. Additionally, we compare our results with the empirical model based on the beta distribution presented here and a corresponding modification of the model based on the law of total probability. We have also tested nontrivial statistical assumptions on beam parameters at the receiver that underlie analytical models of the PDT.
An important conclusion drawn from this study is that the applicability of analytical models depends strongly on the aperture radius. At the same time, the turbulence strength has a significantly smaller effect on this applicability. For instance, both the beam-wandering and elliptic-beam models demonstrate good agreement with numerically-simulated data for aperture radii of the order of, or slightly exceeding, the long-term beam-spot radius. In fact, the elliptic-beam model may fit numerically-simulated data well within a wider range of aperture-radius values. However, such a fitting may result in biased estimations of turbulence parameters.
The empirical beta-distribution model presented here, as well as the corresponding model based on the law of total probability, has shown the best agreement with numerically-simulated data over a wide range of aperture radii. The beta-distribution model has a simple analytical form, and its parameters depend solely on the first and second moments of the transmittance. It should be noted that in practical scenarios, determining the range of applicability of the beam-wandering and elliptic-beam models with precision may be difficult. Therefore, the use of the beta-distribution model or the corresponding model based on the law of total probability may still be relevant, even if the aperture radius is expected to fall within this range. Considering this, the beta-distribution model can be regarded as highly applicable for theoretical research on atmospheric quantum channels.
We have applied our results to consider the transfer of quadrature squeezing through atmospheric channels. Proper experimental design allows for monitoring the actual transmittance value and postselecting events with high transmittance. The resulting squeezing depends on the mean transmittance for the postselected data. However, the beam-wandering and elliptic-beam models usually do not accurately approximate \(\langle\eta\rangle\), resulting in a worse agreement with numerically-simulated data for small values of the postselection threshold compared to other models. Therefore, in scenarios where the accuracy of both \(\langle\eta\rangle\) and \(\langle\eta^{2}\rangle\) is crucial, the beta-distribution model proves to be a more reliable solution. It demonstrates a better agreement with the numerically-simulated data, particularly for small minimum values of the postselected transmittance, compared to the beam-wandering and elliptic-beam models.
Figure 15: The squeezing parameter at the receiver as a function of the aperture radius \(R_{\mathrm{ap}}\) is shown for different values of the postselection thresholds \(\eta_{\mathrm{min}}\). The channel parameters, the squeezing at the transmitter and the constant losses are the same as in Fig. 14.
This work was supported by the National Research Foundation of Ukraine through the project 2020.02/0111 "Nonclassical and hybrid correlations of quantum systems under realistic conditions". The authors also thank D. Vasylyev and M. Bohmann for insightful discussions.
## Appendix A Beam-wandering PDT
As it has been shown in Ref. [62], the beam-wandering PDT is given by the log-negative Weibull distribution, i.e.
\[\mathcal{P}(\eta)= \frac{R^{2}}{\sigma_{\rm bw}^{2}\eta\,\vartheta(2/W_{\rm ST})} \left(\ln\frac{\eta_{0}}{\eta}\right)^{\frac{2}{\vartheta(2/W_{\rm ST})}-1}\] \[\times\exp\left[-\frac{R^{2}}{2\sigma_{\rm bw}^{2}}\left(\ln \frac{\eta_{0}}{\eta}\right)^{\frac{2}{\vartheta(2/W_{\rm ST})}}\right], \tag{10}\]
for \(\eta\in[0,\eta_{0}]\) and \(\mathcal{P}(\eta)=0\) otherwise. Here
\[\eta_{0}=1-\exp\left(-2\frac{R_{\rm ap}^{2}}{W_{\rm ST}^{2}}\right) \tag{11}\]
is the maximum transmittance,
\[R(\zeta)=R_{\rm ap}\left[\ln\!\left(2\frac{\eta_{0}}{1-\exp\!\left[-R_{\rm ap}^{2}\zeta^{2}\right]\mathrm{I}_{0}\!\left(R_{\rm ap}^{2}\zeta^{2}\right)}\right)\right]^{-1} \tag{12}\]
is the scale parameter, and
\[\vartheta(\zeta) =2R_{\rm ap}^{2}\zeta^{2}\frac{e^{-R_{\rm ap}^{2}\zeta^{2}}\, \mathrm{I}_{1}\!\left(R_{\rm ap}^{2}\zeta^{2}\right)}{1-\exp\!\left[-R_{\rm ap }^{2}\zeta^{2}\right]\mathrm{I}_{0}\!\left(R_{\rm ap}^{2}\zeta^{2}\right)}\] \[\times\left[\ln\left(2\frac{\eta_{0}}{1-\exp\!\left[-R_{\rm ap}^{ 2}\zeta^{2}\right]\mathrm{I}_{0}\!\left(R_{\rm ap}^{2}\zeta^{2}\right)} \right)\right]^{-1} \tag{13}\]
is the shape parameter. In these equations, \(R_{\rm ap}\) is the aperture radius, \(W_{\rm ST}\) is the short-term beam-spot radius, cf. Eq. (25), \(\sigma_{\rm bw}^{2}\) is the beam-wandering variance, cf. Eq. (27), and \(\zeta=2/W_{\rm ST}\).
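A direct transcription of Eqs. (10)-(13) of this appendix into a numerical routine might look as follows; it is a sketch, not optimized for numerical stability at large \(R_{\rm ap}\zeta\) (where the Bessel functions overflow), and the function name is an assumption of the example.

```python
import numpy as np
from scipy.special import i0, i1

def beam_wandering_pdt(eta, R_ap, W_ST, sigma_bw):
    """Log-negative Weibull (beam-wandering) PDT, transcribed from the equations above."""
    zeta = 2.0 / W_ST
    a2z2 = R_ap**2 * zeta**2
    eta0 = 1.0 - np.exp(-2.0 * R_ap**2 / W_ST**2)            # maximum transmittance
    denom = 1.0 - np.exp(-a2z2) * i0(a2z2)
    ln_term = np.log(2.0 * eta0 / denom)
    R = R_ap / ln_term                                        # scale parameter R(2/W_ST)
    theta = 2.0 * a2z2 * np.exp(-a2z2) * i1(a2z2) / denom / ln_term  # shape parameter
    eta = np.asarray(eta, dtype=float)
    out = np.zeros_like(eta)
    ok = (eta > 0) & (eta <= eta0)                            # PDT vanishes outside [0, eta0]
    L = np.log(eta0 / eta[ok])
    out[ok] = (R**2 / (sigma_bw**2 * eta[ok] * theta) * L**(2.0 / theta - 1.0)
               * np.exp(-R**2 / (2.0 * sigma_bw**2) * L**(2.0 / theta)))
    return out
```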
## Appendix B Analytical approximation for the elliptic-beam model
In order to formulate the analytical approximation used in the elliptic-beam model [83], we first need an expression for the transmittance of the elliptical beam (28) through the aperture of the radius \(R_{\rm ap}\), as a function of the vector \(\mathbf{v}=\left(x_{0}\;\;y_{0}\;\;\Theta_{1}\;\;\Theta_{2}\right)\) and the angle \(\phi\) between the axis \(x\) and the ellipse semi-axis \(W_{1}\). Here \(x_{0}\) and \(y_{0}\) are the beam-centroid coordinates, \(\Theta_{1,2}\) are related to the length of the ellipse semi-axes, \(W_{1,2}\), as given by Eq. (43). This transmittance reads
\[\eta=\widetilde{\eta}_{0}\exp\left\{-\left[\frac{r_{0}}{R\left(\frac{2}{W_{\rm eff }(\phi-\chi)}\right)}\right]^{\vartheta\left(\frac{2}{W_{\rm eff}(\phi-\chi)} \right)}\right\}, \tag{14}\]
where \(r_{0}=|\mathbf{r}_{0}|\) is the beam-deflection distance, the scale parameter \(R(\zeta)\) and the shape parameter \(\vartheta(\zeta)\) are given by Eqs. (12) and (13), respectively,
\[\widetilde{\eta}_{0}=1-\mathrm{I}_{0}\left(R_{\rm ap}^{2}\frac{W_{1}^{2}-W_{2}^{2}}{W_{1}^{2}W_{2}^{2}}\right)e^{-R_{\rm ap}^{2}\frac{W_{1}^{2}+W_{2}^{2}}{W_{1}^{2}W_{2}^{2}}}-2\left[1-e^{-\frac{R_{\rm ap}^{2}}{2}\left(\frac{1}{W_{1}}-\frac{1}{W_{2}}\right)^{2}}\right]\exp\left[-\left\{\frac{R_{\rm ap}\frac{(W_{1}+W_{2})^{2}}{|W_{1}^{2}-W_{2}^{2}|}}{R\left(\frac{1}{W_{1}}-\frac{1}{W_{2}}\right)}\right\}^{\vartheta\left(\frac{1}{W_{1}}-\frac{1}{W_{2}}\right)}\right], \tag{15}\]
is the maximum transmittance for the elliptic beam,
\[W_{\rm eff}^{2}(\phi-\chi)=4R_{\rm ap}^{2}\Big{[}\mathcal{W} \!\left(\!\frac{4R_{\rm ap}^{2}}{W_{1}W_{2}}e^{2R_{\rm ap}^{2}\left(\frac{1}{W_ {1}^{2}}+\frac{1}{W_{2}^{2}}\right)}\right. \tag{16}\] \[\left.\times\left.e^{R_{\rm ap}^{2}\left(\frac{1}{W_{1}^{2}}- \frac{1}{W_{2}^{2}}\right)\cos\!\left(2\phi-2\chi\right)}\right)\right]^{-1}\!,\]
\(\chi\) is the angle that defines the direction of the vector \(\mathbf{r}_{0}\) to the \(x\)-axis, cf. Fig. 12, \(\mathrm{I}_{n}(x)\) is the modified Bessel function, and \(\mathcal{W}\) is the Lambert function.
The elliptic-beam PDT is given by
\[\mathcal{P}\left(\eta\right)= \frac{2}{\pi}\int_{\mathbb{R}^{4}}\mathrm{d}^{4}\,\mathbf{v}\int _{0}^{\pi/2}\!\mathrm{d}\,\phi\,\rho_{G}(\mathbf{v}|\mathbf{\mu},\Sigma)\delta \left[\eta\!-\!\eta\left(\mathbf{v},\phi\right)\right]. \tag{17}\]
Here \(\eta\left(\mathbf{v},\phi\right)\) is the transmittance defined by Eq. (14) as a function of random parameters and \(\rho_{G}(\mathbf{v}|\mathbf{\mu},\Sigma)\) is the Gaussian probability density function of the vector \(\mathbf{v}\) with the mean \(\mathbf{\mu}\) and the covariance matrix \(\Sigma\). This matrix is clearly divided into two diagonal blocks since \(\langle\Delta\mathbf{r}_{0}\Delta\Theta_{1,2}\rangle=0\) due to symmetry. The first block, related to the beam-centroid coordinates \(x_{0}\) and \(y_{0}\), reads \(\sigma_{\rm bw}^{2}\mathbb{I}_{2}\), where \(\mathbb{I}_{2}\) is the \(2\times 2\) unity matrix. The second block consists of the elements
\[\langle\Delta\Theta_{i}\Delta\Theta_{j}\rangle=\ln\left[1+\frac{\langle\Delta W_{i }^{2}\Delta W_{j}^{2}\rangle}{\langle W_{i}^{2}\rangle\langle W_{j}^{2} \rangle}\right]. \tag{18}\]
The first two components of the vector \(\mathbf{\mu}\) are zero if \(\langle\mathbf{r}_{0}\rangle=0\). The other two components are given by
\[\langle\Theta_{i}\rangle=\ln\left[\frac{\langle W_{i}^{2}\rangle}{W_{0}^{2}} \left(1+\frac{\langle(\Delta W_{i}^{2})^{2}\rangle}{\langle W_{i}^{2}\rangle^{2 }}\right)^{-1/2}\right]. \tag{19}\]
The moments of \(W_{i}^{2}\) in Eqs. (18) and (19) are expressed in terms of field correlation functions \(\Gamma_{2}(\mathbf{r};z_{\rm ap})\) and \(\Gamma_{4}(\mathbf{r}_{1},\mathbf{r}_{2};z_{\rm ap})\) as
\[\langle W_{i}^{2}\rangle \!\!= 4\left[\int_{\mathbb{R}^{2}}\mathrm{d}^{2}\,\mathbf{r}\,x^{2} \Gamma_{2}(\mathbf{r};z_{\rm ap})\!-\!\langle x_{0}^{2}\rangle\right], \tag{20}\]
\[\langle\eta\rangle_{r_{0}}=\eta_{0}\exp\left[-8\,\delta_{ij}\langle x _{0}^{2}\rangle^{2}-\langle x_{0}^{2}\rangle\langle W_{i}^{2}\rangle\right], \tag{10}\] \[+\int\limits_{\mathbb{R}^{4}}\mathrm{d}^{4}\,\mathbf{r}\,\left[x_{ 1}^{2}x_{2}^{2}\left(4\delta_{ij}-1\right)-x_{1}^{2}y_{2}^{2}\left(4\delta_{ij }-3\right)\right]\] \[\times\,\Gamma_{4}(\mathbf{r}_{1},\mathbf{r}_{2};z_{\mathrm{ap}} )\,\Big{]}.\]
These moments can be calculated analytically, for instance, using the phase approximation of the Huygens-Kirchhoff method, as demonstrated in Ref. [83]. However, in this manuscript, we calculated these moments from the numerically-simulated data.
## Appendix C Approximation of weak beam-wandering for the model based on the law of total probability
In both variants of the model based on the law of total probability--those based on the truncated log-normal distribution, cf. Sec. II.3.4, and those based on the beta distribution, cf. Sec. III--one needs to evaluate the conditional moments of the transmittance, \(\langle\eta\rangle_{r_{0}}\) and \(\langle\eta^{2}\rangle_{r_{0}}\). In the approximation of weak beam wandering [68] these quantities are evaluated as
\[\langle\eta\rangle_{r_{0}}=\eta_{0}\exp\left[-\left(\frac{r_{0}}{R(2/W_{\mathrm{ ST}})}\right)^{\vartheta(2/W_{\mathrm{ST}})}\right], \tag{11}\]
\[\langle\eta^{2}\rangle_{r_{0}}=h_{0}^{2}\exp\left[-2\left(\frac{r_{0}}{R(2/W_{ \mathrm{ST}})}\right)^{\vartheta(2/W_{\mathrm{ST}})}\right], \tag{12}\]
where the scale parameter \(R(\zeta)\) and the shape parameter \(\vartheta(\zeta)\) are given by Eqs. (11) and (12), respectively. The efficiency \(\eta_{0}\) can be obtained from Eq. (10) if the field correlation function \(\Gamma_{2}(\mathbf{r};z_{\mathrm{ap}})\) is Gaussian. Otherwise, it is obtained as
\[\eta_{0}=\frac{\langle\eta\rangle}{\int\limits_{0}^{\infty}\mathrm{d}\,u\,u\,e ^{-\frac{u^{2}}{2}}\,e^{-\left(\frac{\sigma_{\mathrm{DW}}}{R(2/W_{\mathrm{ST} })}u\right)^{\vartheta(2/W_{\mathrm{ST}})}}}. \tag{13}\]
The parameter \(h_{0}\) is evaluated as
\[h_{0}^{2}=\frac{\langle\eta^{2}\rangle}{\int\limits_{0}^{\infty} \mathrm{d}\,u\,u\,e^{-\frac{u^{2}}{2}}\,e^{-2\left(\frac{\sigma_{\mathrm{DW}} }{R(2/W_{\mathrm{ST}})}u\right)^{\vartheta(2/W_{\mathrm{ST}})}}}. \tag{14}\]
In addition, this approximation ensures that two important statistical moments of transmittance, namely \(\langle\eta\rangle\) and \(\langle\eta^{2}\rangle\), retain their original definitions given by Eqs. (9) and (10), respectively.
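The denominators of Eqs. (13) and (14) are one-dimensional integrals and can be evaluated with standard quadrature. The following sketch assumes that the moments \(\langle\eta\rangle\) and \(\langle\eta^{2}\rangle\), the beam-wandering deviation, and the scale and shape parameters are already known; all argument names are placeholders introduced here.

```python
import numpy as np
from scipy.integrate import quad

def eta0_and_h0(mean_eta, mean_eta2, sigma_bw, R_st, theta_st):
    """Evaluate eta_0 and h_0 from Eqs. (13) and (14); the denominators are
    integrals of u*exp(-u^2/2)*exp(-n*(sigma_bw*u/R_st)**theta_st) with n=1, 2."""
    def integrand(u, n):
        return u * np.exp(-0.5 * u ** 2) * np.exp(-n * (sigma_bw * u / R_st) ** theta_st)

    d1, _ = quad(integrand, 0.0, np.inf, args=(1,))
    d2, _ = quad(integrand, 0.0, np.inf, args=(2,))
    return mean_eta / d1, np.sqrt(mean_eta2 / d2)
```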
|
2306.06124 | Unsupervised clustering of disturbances in power systems via deep
convolutional autoencoders | Power quality (PQ) events are recorded by PQ meters whenever anomalous events
are detected on the power grid. Using neural networks with machine learning can
aid in accurately classifying the recorded waveforms and help power system
engineers diagnose and rectify the root causes of problems. However, many of
the waveforms captured during a disturbance in the power system need to be
labeled for supervised learning, leaving a large number of data recordings for
engineers to process manually or go unseen. This paper presents an autoencoder
and K-means clustering-based unsupervised technique that can be used to cluster
PQ events into categories like sag, interruption, transients, normal, and
harmonic distortion to enable filtering of anomalous waveforms from recurring
or normal waveforms. The method is demonstrated using three-phase,
field-obtained voltage waveforms recorded in a distribution grid. First, a
convolutional autoencoder compresses the input signals into a set of lower
feature dimensions which, after further processing, is passed to the K-means
algorithm to identify data clusters. Using a small, labeled dataset, numerical
labels are then assigned to events based on a cosine similarity analysis.
Finally, the study analyzes the clusters using the t-distributed stochastic
neighbor embedding (t-SNE) visualization tool, demonstrating that the technique
can help investigate a large number of captured events in a quick manner. | Md Maidul Islam, Md Omar Faruque, Joshua Butterfield, Gaurav Singh, Thomas A. Cooke | 2023-06-08T04:41:34Z | http://arxiv.org/abs/2306.06124v1 | # Unsupervised clustering of disturbances in power systems via deep convolutional autoencoders
###### Abstract
Power quality (PQ) events are recorded by PQ meters whenever anomalous events are detected on the power grid. Using neural networks with machine learning can aid in accurately classifying the recorded waveforms and help power system engineers diagnose and rectify the root causes of problems. However, many of the waveforms captured during a disturbance in the power system need to be labeled for supervised learning, leaving a large number of data recordings for engineers to process manually or go unseen. This paper presents an autoencoder and K-means clustering-based unsupervised technique that can be used to cluster PQ events into categories like sag, interruption, transients, normal, and harmonic distortion to enable filtering of anomalous waveforms from recurring or normal waveforms. The method is demonstrated using three-phase, field-obtained voltage waveforms recorded in a distribution grid. First, a convolutional autoencoder compresses the input signals into a set of lower feature dimensions which, after further processing, is passed to the K-means algorithm to identify data clusters. Using a small, labeled dataset, numerical labels are then assigned to events based on a cosine similarity analysis. Finally, the study analyzes the clusters using the t-distributed stochastic neighbor embedding (t-SNE) visualization tool, demonstrating that the technique can help investigate a large number of captured events in a quick manner.
Convolutional autoencoder, data visualization, power quality, unsupervised clustering, waveform clustering.
## I Introduction
Power quality (PQ) events refer to any variation in the voltage, current, or frequency supplied to a specific power system load. Modern-day electronic devices tend to be more sensitive to PQ disturbances than traditional loads, making PQ an important issue in maintaining a reliable power grid [1]. IEEE standard 1159 [2] defines seven broad categories of PQ events, each with several subcategories. The seven main categories are transients, short-duration variation, long-duration variation, voltage imbalance, waveform distortion, voltage fluctuation, and power frequency variation. When an anomalous event occurs, PQ meters may record voltage and current waveforms. To avoid the same event occurring again, it is of interest to utilities to analyze the recorded PQ event to assess its root causes so that corrective action may be taken to prevent more catastrophic damage or restore services faster. Such PQ data recordings are mostly unlabeled, meaning the cause of each recorded event is unknown, and each record includes event or non-event oscillography. As the waveform recordings are often performed periodically, and hundreds of anomalous events may occur within a network over a period of days, it can become overwhelming to identify the waveforms that require further analysis.
With technological advances in power system monitoring devices, power system data with high resolution and sophisticated signal information are now available, enabling researchers to utilize artificial intelligence and machine learning techniques for analyzing PQ events. As such, the use of machine learning algorithms for the analysis of PQ waveforms is an area in which a significant body of work already exists [3]. Authors in [4] proposed a compressed sensing and deep convolutional network-based PQ disturbance classification. An artificial neural network (ANN) and decision tree-based PQ event classification utilizing time and frequency domain features was presented in [5]. Unsupervised methods have also been utilized by researchers focusing on anomaly detection [6, 7] and event localization [8]. Most of these works are either based on synthetic labeled data or PMU data. Very little work has been performed on distribution grid analytics, mainly due to the lack of field-obtained data. Authors in [9] utilized Ward and K-means clustering methods to cluster voltage sags using \(\mu\)PMU recordings. A comparative review of signal processing and AI-based methods for analyzing PQ events in smart grids is presented in [10]. The Electric Power Research Institute (EPRI) is in the process of constructing a PQ event waveform library [11], in which a large number of PQ event waveforms are to be consolidated for use in future research. As most of the waveforms in this library are unlabeled, it is desired to create a data-driven clustering method capable of grouping different event data into clusters and, with the help of a handful of labeled data, identifying the event type of each cluster. The main contributions of this paper are:
* Development of an autoencoder and K-means-based unsupervised algorithm to cluster unlabeled field-obtained waveform data.
* Utilization of cosine similarity between cluster centers and a small number of labeled data to identify different PQ event labels for the clusters.
* An analysis of t-SNE visualization output to interpret the different clusters.
Utilizing these methods presents novelty first by addressing the need to isolate "interesting" events from recurring events.
Over time, this approach can be trained by PQ engineers to automatically prioritize data for further analysis based on the cluster in which the event is classified and whether the event has high similarity to other recorded events. Secondly, this paper helps to address the lack of research performed on field-collected data, as the majority of available sources utilize modeled or synthetic waveform data.
The remainder of the paper is organized as follows: section II presents the autoencoder and K-means-based clustering methodology, section III discusses clustering performances and cosine similarity-based event label identification, and finally, section IV concludes the paper.
## II Unsupervised clustering method
### _PQ event data preprocessing_
The EPRI waveform database consists of voltage and current waveform recordings obtained from different physical locations within a distribution system. These waveforms have been recorded periodically by PQ meters. The data library contains normal and PQ event data without any labels or identifying information attached. In this work, 5112 unlabeled voltage waveform recordings have been used. Of these waveforms, 250 waveforms of different categories have been labeled based on manual observation. Fig. 1 shows some of the recorded events from the manually labeled data.
The recorded waveforms are 16 cycles long, with 256 samples in each cycle. Thus, for three-phase voltage samples, total 4096\(\times\)3 points are recorded in a waveform capture. Some of the data were 8 cycles long, i.e, the swell event example in Fig. 1, and zero padding was performed to ensure that all waveform recordings had the same length. Per-unit scaling was applied to the data, and any data less than 8 cycles in length were removed. It should be noted that only voltage waveforms have been utilized as they better depict some of the PQ events like swell and sag. It is well known that dealing with high-dimensional data is a significant challenge for the application of clustering. Thus, applying dimensionality reduction and performing clustering in feature space instead of data space improves the clustering performance [12]. A deep convolutional autoencoder has been applied to these waveforms to learn the salient representation of data in a lower-dimensional feature space.
### _Feature space formation via deep convolutional autoencoder_
Autoencoders (AE) are a form of unsupervised representation learning in which the neural network learns to reconstruct an input signal at the network output [13]. There are two stages in an AE; in the first stage, the encoder learns to represent the input data in a latent space, and the decoder stage aims at reconstructing the input data from the latent space so that the reconstruction error is minimum. Fig. 2 shows the structure of the convolutional AE. In this work, convolutional AE has been applied to the input waveforms, and the encoder's output has been utilized as the feature space for the clustering algorithm. This method is known as under complete AE. Let, \(f_{W}(.)\) represent the encoder network and \(g_{U}(.)\) the corresponding decoder. The AE aims at minimizing the reconstruction mean squared error (MSE) between input, \(x\), and output:
\[\min_{W,U}\frac{1}{n}\sum_{i=1}^{n}\left\|g_{U}(f_{W}(x_{i}))-x_{i}\right\|_{ 2}^{2} \tag{1}\]
Where, for convolutional AE, the encoder and decoder network is described by:
\[f_{W}(x)= \sigma(x*W+b)\equiv h \tag{2}\] \[g_{U}(h)= \sigma(h*U+b^{\prime}) \tag{3}\]
Here, \(\sigma\) is a nonlinear activation function, h is the feature space, '\(*\)' is the convolution operator, W, U are the weights, and b, b' are biases corresponding to the encoder and decoder network. The AE learns the salient representation of the waveforms in feature space, \(h\), by minimizing the reconstruction error.
### _K-means clustering_
K-means is a well-known clustering algorithm that uses distance-based metrics to identify similar data samples [14]. Let the dataset, \(x=x^{(1)},...,x^{(m)}\), be represented by feature set \(h=h^{(1)},...,h^{(m)}\) in a d-dimensional Euclidean space \(\mathbb{R}^{d}\), in which the goal is to group the data into a few cohesive "clusters". Let \(a=a_{1},a_{2},....,a_{c}\) be the c cluster centers, and \(z=[z_{ik}]_{n\times c}\) where \(z_{ik}\in{0,1}\) indicates if \(x_{i}\in{k_{th}}\) cluster. The k-means algorithm thus tries to minimize the objective function \(J(z,A)=\sum_{i=1}^{n}\sum_{k=1}^{c}\left\|x_{i}-a_{k}\right\|^{2}\). The algorithm is iterated until boundary conditions are met. The cluster centers and memberships are updated respectively as:
\[a_{k}= \frac{\sum_{i=1}^{n}z_{ik}x_{i}}{\sum_{i=1}^{n}z_{ik}} \tag{4}\] \[z_{ik}= \begin{cases}1,&\text{if }\left\|x_{i}-a_{k}\right\|^{2}=\min_{1 \leq k\leq c}\left\|x_{i}-a_{k}\right\|^{2}\\ 0,&\text{otherwise}\end{cases} \tag{5}\]
Fig. 1: Voltage waveform recording from a distribution utility.
Here, \(\left\|x_{i}-a_{k}\right\|^{2}\) is the Euclidean distance between data point \(x_{i}\) and cluster center \(a_{i}\). It should be noted that although K-means is an unsupervised algorithm, it requires a priori information of the number of clusters. The number of clusters can be selected based on expert knowledge of the dataset or from some well-known cluster selection methods, such as the elbow method or silhouette score.
### _Parameter selection and tuning_
In the next part of this work, the preprocessed three-phase voltage data were fed to the AE for feature extraction. The data samples were split in a 70:15:15 ratio into training, validation, and test data. The AE is composed of five convolutional layers followed by five pooling layers, and finally the features are flattened and embedded into 60 features. In the decoder part, a dense layer is followed by three convolutional layers. Fig. 3(a) shows the training and validation loss at each epoch, and Fig. 3(b) shows the reconstruction error for sample test data.
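A sketch of such an undercomplete convolutional AE in Keras is shown below. It follows the layer counts described above (five convolution/pooling stages in the encoder, a 60-dimensional embedding, and a dense layer followed by three convolutional layers in the decoder), but the filter numbers, kernel widths, and upsampling factors are illustrative assumptions rather than the exact configuration used in this work.

```python
from tensorflow.keras import layers, models

def build_autoencoder(input_len=4096, n_phases=3, latent_dim=60):
    inp = layers.Input(shape=(input_len, n_phases))
    x = inp
    for filters in (16, 32, 32, 64, 64):              # five conv + pooling stages
        x = layers.Conv1D(filters, 9, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)                  # length: 4096 -> 128
    x = layers.Flatten()(x)
    code = layers.Dense(latent_dim, name="embedding")(x)   # 60-d feature space

    y = layers.Dense((input_len // 32) * 16, activation="relu")(code)
    y = layers.Reshape((input_len // 32, 16))(y)
    for filters, up in ((32, 4), (16, 4), (n_phases, 2)):  # three conv layers
        y = layers.UpSampling1D(up)(y)
        y = layers.Conv1D(filters, 9, padding="same",
                          activation=None if filters == n_phases else "relu")(y)

    autoencoder = models.Model(inp, y)
    autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction MSE, Eq. (1)
    encoder = models.Model(inp, code)                  # used for feature extraction
    return autoencoder, encoder
```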
The overall clustering performance depends on both the reconstruction error in the AE and the clustering error in the K-means algorithm. While it is easier to embed the data into a higher dimension, K-means algorithm accuracy suffers from the high dimensionality of the feature space data. A grid search method was applied to find the best results, and a feature set of 60 yielded the overall best clustering performance. After encoding the data, principal component analysis (PCA) was applied to keep \(95\%\) variance in the feature set. This further reduces the feature set to 17 features and improves the performance of clustering. Although K-means is an unsupervised algorithm, it requires the cluster number as an input variable. In this paper, within-cluster-sum of squared errors (WSS), also known as 'distortion score', has been used as a metric to find the ideal number of clusters. WSS depicts the variability of samples within each cluster and is defined as:
\[WSS=\sum_{k=1}^{c}\sum_{x_{i}\in k}\left\|x_{i}-a_{k}\right\|^{2} \tag{6}\]
Based on the WSS score for different numbers of clusters (k), a suitable k can be detected by the elbow method. Fig. 4 shows that WSS changes rapidly for low numbers of clusters and slows down for higher k, forming the elbow at k=8. However, the cluster number is a tunable parameter, and if the elbow is not well-defined, different cluster counts near the elbow point should be explored. The K-means algorithm randomly initializes k cluster centroids and updates cluster memberships and centroids based on the distance. The output of the algorithm is a numeric cluster label associated with each data input.
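The remaining pipeline (PCA retaining 95% of the variance, K-means, and the WSS/elbow scan) can be sketched with scikit-learn, where the WSS of Eq. (6) is exposed as the `inertia_` attribute. Here `waveforms` and `encoder` are assumed to be the preprocessed data array and the trained encoder from the sketches above.

```python
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

features = encoder.predict(waveforms)          # 60-d embeddings from the AE
pca = PCA(n_components=0.95)                   # keep 95% of the variance
reduced = pca.fit_transform(features)

wss = {}
for k in range(2, 16):                         # elbow scan over cluster counts
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reduced)
    wss[k] = km.inertia_                       # within-cluster sum of squares, Eq. (6)

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(reduced)
```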
## III Results and discussion
Fig. 5 presents a three-dimensional scatter plot based on the output from the K-means algorithm. PCA was applied to
Fig. 3: Autoencoder learning
Fig. 2: Convolutional autoencoder structure
the feature set to reduce it to three major components to be able to visualize in a 3d plot. The data is segmented into eight clusters represented by numerical labels.
The cluster sizes are imbalanced, which depicts that the frequency of different power events in a distribution system is also imbalanced. Additionally, some different events share some common features; thus, it is highly unlikely to get completely separated clusters. Unlike supervised learning, there is no ideal metric for evaluating cluster performance in an unsupervised clustering method. Silhouette score [15] is often used as a metric to measure the integrity and quality of the clusters. Silhouette coefficient s(i) for a sample i is defined as:
\[s(i)=\frac{b(i)-a(i)}{max(b(i),a(i))} \tag{7}\]
Here, a(i) is the average distance from sample i to the other points in its own cluster, and b(i) is the average distance from sample i to the points in the nearest neighbouring cluster. The coefficient satisfies s(i)\(\in\)[-1,1], where a value close to 1 means high similarity within the cluster and good separation from neighbouring clusters. Fig. 6 presents the silhouette analysis for the clustering performance, with cluster sizes represented by the thickness of each cluster label. The average silhouette score is 0.789, which indicates good cluster integrity and quality.
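Using the PCA-reduced features and cluster labels from the sketch above, the silhouette analysis can be reproduced directly with scikit-learn:

```python
from sklearn.metrics import silhouette_score, silhouette_samples

avg_score = silhouette_score(reduced, labels)   # overall average, cf. the 0.789 reported
s_i = silhouette_samples(reduced, labels)       # per-sample s(i) of Eq. (7)
```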
In addition to creating the clusters, it is highly desirable to identify the events corresponding to each cluster. Some of the labeled data in the repository have been utilized to identify the events. Manual observation of the cluster samples can be used to identify the different PQ events if there is no labeled data. Cosine similarity has been used to identify similarity among the cluster centroids and labeled data. Cosine similarity, \(c_{sim}\) is defined as:
\[c_{sim}=\frac{a_{k}\cdot x_{i}}{\left\|a_{k}\right\|\left\|x_{i}\right\|} \tag{8}\]
Algorithm 1 shows the process of assigning labels to clusters. Cosine similarity with cluster centroids from the K-means algorithm is calculated for each labeled sample. Then mean cosine similarity, \(c_{sim}\) for each event is calculated, and the cluster centroid with maximum \(c_{sim}\) identifies that event label.
```
1:for Cluster centers, \(a_{k}\in\{a_{1},a_{2},...,a_{c}\}\)do
2: similarity, s=[]
3:for\(x\in\{x_{1},...,x_{ik}\}\)do
4: Encode data, \(h_{i}=f_{w}(x_{i})\)
5: Apply PCA
6: Calculate \(c_{sim}\) = cosine similarity(\(a_{k},h_{i}\))
7: Append \(c_{sim}\) to s
8:endfor
9: Compute mean similarity, \(s_{mean}\)
10: Assign event label to \(a_{k}\), for \(max(s_{mean})\)
11:endfor
```
**Algorithm 1** Event assign to clusters via cosine similarity
In the labeled data, there were five types of event samples. Table I shows the average cosine similarity of each cluster center with the labeled data and probable events tagged with numerical labels associated with the clusters. The label assignment is affected by the accuracy and amount of labeled data.
Fig. 4: Cluster number selection by elbow method
Fig. 5: Scatterplot in 3d with K-means
Fig. 6: Silhouette analysis for K-means clustering
Insufficient data for an event can create ambiguous results. Also, there can be a high similarity between several events. In those cases, manual observation of some of the samples in the cluster can be performed to identify the correct associated event. From this analysis, it is evident that there could be some variation within each cluster. For example, there can be events that have both transient and sag characteristics, or there could be events of voltage swell with harmonics.
The t-SNE visualization provides further insights into the clusters. However, as t-SNE does not preserve distance or density, the cluster size, shape, or distances do not represent sample characteristics. Variations within clusters, such as single-phase versus two-phase events (cluster 3), sag with transient, and sustained versus temporary sag, can be observed by investigating the sub-clusters. In addition, some incorrect cluster assignments can be observed, especially in cases where shorter data records have been wrongly clustered as interruptions. A possible way to improve the performance would be to maintain the same length and format of data capturing. Ultimately, this method can be used to filter the massive amount of recorded PQ waveforms so that uninteresting data are removed. By clustering different events and using cosine similarity, it is envisioned that the operator could manually identify a small number of waveforms as "interesting" or "uninteresting," and the algorithm could respond by alerting the operator to the appearance of more "interesting" events on the system.
## IV conclusion
Unsupervised clustering can be a useful tool to visualize and identify data of interest in the context of power system data analysis. In this paper, an autoencoder-based unsupervised waveform clustering method has been explored to group a large number of unlabeled data into different clusters. The selection of cluster number and performance analysis has been discussed. Using some available labeled data, a cosine similarity-based cluster label assigning method has been presented to identify different PQ events. Finally, some visualization methods and cluster analysis have been discussed. The proposed framework would be instrumental in analyzing unlabeled waveform captures and aid power system engineers in taking appropriate actions for maintaining power quality.
|
2303.03682 | Stacking disorder and thermal transport properties of $α$-RuCl$_3$ | $\alpha$-RuCl$_3$, a well-known candidate material for Kitaev quantum spin
liquid, is prone to stacking disorder due to the weak van der Waals bonding
between the honeycomb layers. After a decade of intensive experimental and
theoretical studies, the detailed correlation between stacking degree of
freedom, structure transition, magnetic and thermal transport properties
remains unresolved. In this work, we reveal the effects of a small amount of
stacking disorder inherent even in high quality $\alpha$-RuCl$_3$ crystals.
This small amount of stacking disorder results in the variation of the magnetic
ordering temperature, suppresses the structure transition and thermal
conductivity. Crystals with minimal amount of stacking disorder have a
T$_N>$7.4\,K and exhibit a well-defined structure transition around 140\,K upon
cooling. For those with more stacking faults and a T$_N$ below 7\,K, the
structure transition occurs well below 140\,K upon cooling and is incomplete,
manifested by the diffuse streaks and the coexistence of both high temperature
and low temperature phases down to the lowest measurement temperature. Both
types of crystals exhibit oscillatory field dependent thermal conductivity and
a plateau-like feature in thermal Hall resistivity in the field-induced quantum
spin liquid state. However, $\alpha$-RuCl$_3$ crystals with minimal amount of
stacking disorder have a higher thermal conductivity that pushes the thermal
Hall conductivity to be closer to the half-integer quantized value. These
findings demonstrate a strong correlation between layer stacking, structure
transition, magnetic and thermal transport properties, underscoring the
importance of interlayer coupling in $\alpha$-RuCl$_3$ despite the weak van der
Waals bonding. | Heda Zhang, Michael A McGuire, Andrew F May, Joy Chao, Qiang Zheng, Miaofang Chi, Brian C Sales, David G Mandrus, Stephen E Nagler, Hu Miao, Feng Ye, Jiaqiang Yan | 2023-03-07T06:44:28Z | http://arxiv.org/abs/2303.03682v2 | # Stacking disorder and thermal transport properties of \(\alpha\)-RuCl\({}_{3}\)
###### Abstract
Van der Waals bonded layered materials are susceptible to stacking disorders, which can affect the physical properties of cleavable magnets, for example, \(\alpha\)-RuCl\({}_{3}\). The recently debated thermal transport properties highlight the importance of a careful investigation of stacking issue in \(\alpha\)-RuCl\({}_{3}\) and how it affects the physical properties. In this work, we performed a systematic investigation of the correlation between the layer stacking, structure transition, magnetic and thermal transport properties of different \(\alpha\)-RuCl\({}_{3}\) crystals with T\({}_{N}\) varying from 6 K to 7.6 K by measuring magnetic properties, specific heat, neutron single crystal diffraction, and thermal transport. A small population of stacking disorder suppresses T\({}_{N}\). \(\alpha\)-RuCl\({}_{3}\) crystals with a single T\({}_{N}\) in the range of 7.4 K-7.6 K have minimal amount of stacking disorder and show a well defined structure transition around 140 K upon cooling. For those crystals with T\({}_{N}\) below 7 K, the structure transition occurs well below 140 K upon cooling and is incomplete, manifested by the coexistence of both high temperature and low temperature phases down to the lowest measurement temperature. Diffuse streaks were observed for these crystals but not for those with T\({}_{N}\) above 7.4 K. For both types of crystals, oscillatory field dependent thermal conductivity and plateau-like feature in thermal Hall resistivity were observed in the field-induced disordered state that may be a quantum spin liquid. However, \(\alpha\)-RuCl\({}_{3}\) crystals with with minimal amount of stacking disorder have a higher thermal conductivity that pushes the thermal Hall conductivity to be close to the half-integer quantized value. Our results demonstrate a strong correlation between the layer stacking, structure transition, magnetic and thermal transport properties, and highlight the importance of interlayer coupling in \(\alpha\)-RuCl\({}_{3}\) despite the weak van der Waals bonding.
+
Footnote †: preprint: APS/123-QED
Shortly after the first experimental investigation of \(\alpha\)-RuCl\({}_{3}\) as the candidate material for Kitaev quantum spin liquid[1], it was realized that the stacking sequence of the RuCl\({}_{3}\) honeycomb layers can affect the magnetic properties [2]. This is typical for van der Waals bonded cleavable magnets where nowadays the stacking degree of freedom of the two dimensional structure units has been employed to engineer magnetic ground states [3]. The stacking of RuCl\({}_{3}\) honeycomb layers relative to each other leads to different proposed crystal structures including the monoclinic \(C2/m\), the rhombohedral \(R\bar{3}\), and the trigonal \(P3_{1}12\). At room temperature, most studies reported a \(C2/m\) structure for \(\alpha\)-RuCl\({}_{3}\). Upon cooling, this monoclinic structure becomes unstable and around 150 K \(\alpha\)-RuCl\({}_{3}\) goes through a first order structure transition most likely to the rhombohedral \(R\bar{3}\) structure [4]. In early studies of \(\alpha\)-RuCl\({}_{3}\), more than one magnetic anomaly is typically observed in the temperature range 7-14 K [2; 5; 6; 7; 8]. The 14 K magnetic order was later confirmed to result from the \(C2/m\) type stacking [2; 6; 9]. And it is believed that the mixing of different types of stacking gives rise to those magnetic anomalies in the temperature range 7 K-14 K.
With the intense materials synthesis effort, most \(\alpha\)-RuCl\({}_{3}\) crystals being studied nowadays have only a single magnetic transition around T\({}_{N}\)=7 K. This has been employed as a simple and convenient criterion for a quick check of crystal quality. However, T\({}_{N}\) has been found to vary from 6 K to 8 K [10; 11; 12]. The origin of this discrepancy is still unknown. Also unknown is whether and how this magnetic order is correlated with the first order structure transition at high temperature.
Recently, the intensively debated thermal transport properties of \(\alpha\)-RuCl\({}_{3}\) call for a thorough revisit of the materials issues of this fascinating compound [13]. The half-integer quantized thermal Hall conductance was believed to be one of the fingerprints for Majorana fermions of the fractionalized spin excitations in \(\alpha\)-RuCl\({}_{3}\). However, the thermal Hall conductance in the field-induced disordered or quantum spin liquid state was found to be sample dependent, and the experimental observation of the half-quantized value requires using \(\alpha\)-RuCl\({}_{3}\) crystals with high longitudinal thermal conductivity [14; 15; 16; 17; 18; 19; 20]. The other intriguing experimental observation is the oscillatory features of thermal conductivity as a function of in-plane magnetic field. These oscillatory features were reproduced by different groups, but their origin is under hot debate. While some believed this is an intrinsic character of the magnetic phase with T\({}_{N}\approx\)7 K and attributed the observed oscillations to quantum oscillations of putative charge-neutral fermions [21; 22], others believed that the oscillatory features have an extrinsic origin and
result from a sequence of field-induced magnetic phase transitions in crystals with stacking disorder [23; 24].
All these interesting but debated results and interpretations highlight the importance of understanding the materials issues in \(\alpha\)-RuCl\({}_{3}\). In this work, we performed a careful investigation of the correlation between the layer stacking, structure transition, magnetic and thermal transport properties of different \(\alpha\)-RuCl\({}_{3}\) crystals with T\({}_{N}\) varying from 6 K to 7.6 K. The amount of stacking disorder discussed in this work is far less than that in previous studies [2; 5; 6; 7; 8] and may not induce significant magnetic anomalies at 10-14 K in magnetic and specific heat measurements. Based on the characterizations, we categorize our \(\alpha\)-RuCl\({}_{3}\) crystals into two types. Type-I \(\alpha\)-RuCl\({}_{3}\) single crystals with T\({}_{N}\) above 7.2 K have minimal amount of stacking disorder and show a well defined structure transition around 140 K upon cooling. For type-II crystals with more stacking disorder and a lower T\({}_{N}\), the structure transition occurs below 140 K upon cooling and is incomplete, manifested by the coexistence of both high temperature and low temperature phases and the observation of diffuse streaks by neutron scattering below the structure transition. A small amount of stacking disorder suppresses T\({}_{N}\). For both types of crystals, oscillatory field dependent thermal conductivity and plateau like feature in thermal hall resistivity were observed in the widely believed field-induced quantum spin liquid state. However, \(\alpha\)-RuCl\({}_{3}\) single crystals with T\({}_{N}\) above 7.4 K have a higher thermal conductivity, which pushes the thermal Hall conductivity to be close to the half integer quantized value. Our results demonstrate a strong correlation between the layer stacking, structure transition, magnetic and thermal transport properties and highlight the importance of interlayer coupling in \(\alpha\)-RuCl\({}_{3}\).
## II Experimental details
The \(\alpha\)-RuCl\({}_{3}\) crystals used in this study were grown using two vapor transport techniques: self-selecting vapor growth with a small temperature gradient near the powder, and the conventional vapor transport technique with a large temperature gradient of 250\({}^{\circ}\)C. The former was employed to grow thicker crystals for neutron single crystal diffraction measurements, and the growth details can be found elsewhere [25]. The latter yields millimeter-sized single crystals ideal for measurements of magnetic, thermodynamic, and thermal transport properties, and the growth details are reported previously [21]. High-purity RuCl\({}_{3}\) powder, either synthesized by reacting RuO\({}_{2}\) powder with Al\({}_{2}\)O\({}_{3}\)-KCl salt [26] or purchased from Furuya Metals (Japan), was used in most growths, and the two sources are found to be comparable with respect to disorder control. For the growths with a lower magnetic ordering temperature, a small temperature gradient along the growth ampoule was used, and the two temperatures were set to 1075\({}^{\circ}\)C and 1025\({}^{\circ}\)C. We noticed that crystal quality and properties are sensitive to the total vapor pressure inside the growth ampoule. This seems to be one of the critical factors for a reproducible growth of \(\alpha\)-RuCl\({}_{3}\) with a controlled degree of stacking disorder.
Elemental analysis on cleaved surfaces was performed using the wavelength dispersive (WDS) spectroscopy techniques. WDS measurement was performed using a JEOL JXA-8200X electron microprobe analyzer instrument equipped with five crystal-focusing spectrometers for wavelength dispersive x-ray spectroscopy. Magnetic properties were measured with a Quantum Design (QD) Magnetic Property Measurement System in the temperature range 2.0 K\(\leq\)T\(\leq\)300 K. Specific heat data below 30 K were collected using a QD Physical Property Measurement System (PPMS). The temperature dependence of 0 0 \(l\) reflections was monitored on a flat surface by a PANalytical X'Pert Pro MPD powder x-ray diffractometer using Cu K\({}_{\alpha 1}\) radiation. An Oxford PheniX closed cycle cryostat was used to measure from 300 K to 20 K.
Neutron diffraction experiments were carried out using the single crystal diffractometer CORELLI [27] at SNS to study the crystal structure and the structural phase transition and to search for potential diffuse scattering arising from possible stacking disorder in crystals grown under different conditions. Single crystals of around 60-200 milligrams were mounted inside a closed-cycle refrigerator with a base temperature of 6 K, and the MantidWorkbench package was used for data reduction and analysis. We define the momentum transfer Q in 3\(D\) reciprocal space in \(\AA^{-1}\) as \(Q=Ha^{*}+Kb^{*}+Lc^{*}\), in which \(H\), \(K\) and \(L\) are Miller indices, and \(a^{*}\), \(b^{*}\), \(c^{*}\) are the lattice vectors in reciprocal space.
## III Results
Figure 1 shows the temperature dependence of magnetization, specific heat, and the 005 reflection, illustrating the magnetic ordering and structure transition temperatures for two typical types of single crystals. We define the magnetic ordering temperature as the temperature where Cp(T) peaks. Type-I crystals have T\({}_{N}>\)7.2 K and show sharp anomalies at T\({}_{N}\) in both the temperature dependence of magnetization and the specific heat. Results of one crystal with T\({}_{N}\)=7.6 K (see Fig. 1a) are presented in this work as a representative of type-I crystals. For this crystal, the temperature dependence of the high temperature magnetization (Fig. 1d) and the 005 reflection suggest that the structure transition occurs at about 140 K upon cooling (Fig. 1c) and about 170 K upon warming (Fig. 1e). Below 140 K, the intensity of the peak sitting at 2\(\theta\)=85.15 degrees decreases quickly upon further cooling and disappears around 130 K. Meanwhile, a peak sitting at 2\(\theta\)=85.62 degrees appears and its intensity increases quickly upon cooling and saturates below about 120 K. Step-like features associated with the structure transition are observed in the high temperature magnetization at 140 K and 170 K upon cooling and warming, respectively. Similar features are observed for all \(\alpha\)-RuCl\({}_{3}\) crystals with T\({}_{N}\) above 7.2 K.
Type-II crystals (Fig. 1b) normally have a T\({}_{N}\) below 7 K and show broader transitions and weaker anomalies near T\({}_{N}\) in magnetization and specific heat. Physical properties of one crystal with T\({}_{N}\)=6.5 K are presented in this work as an example of type-II crystals. Above T\({}_{N}\), some weak features were observed in both magnetization and specific heat in (b), in sharp contrast to (a). The structure transition also shows anomalous features distinct from those for type-I crystals. Upon cooling, the structure transition of the crystal with T\({}_{N}\)=6.5 K (see Fig. 1f) occurs around 60 K, and the transition is not complete, which is well illustrated by the coexistence of reflections from both high temperature and low temperature phases down to 20 K, the lowest temperature of our x-ray diffraction measurement. Upon warming from 20 K, the phase coexistence persists until about 170 K, above which only the high temperature monoclinic phase remains (Fig. 1h). Accordingly, a wide loop was observed in the temperature dependence of magnetization (Fig. 1g). It is interesting to note that for both \(\alpha\)-RuCl\({}_{3}\) crystals the structure transition occurs around 170 K upon warming. When screening crystals, we noticed that the structure transition can occur in a wide temperature range of 50 K-140 K when measured upon cooling. For type-II crystals, the phase coexistence is always observed for crystals with T\({}_{N}\) below 7 K. Fig. S1 of the Supporting Materials shows the x-ray diffraction results of two different crystals. Both crystals have T\({}_{N}\)=7.1 K (from specific heat) and the structure transition at 120 K upon cooling. However, one crystal shows a complete structure transition while the other one shows phase coexistence below 120 K. A transition temperature of 120 K seems to be the lowest that one \(\alpha\)-RuCl\({}_{3}\) crystal can have before showing a sluggish first order structure transition and phase coexistence at low temperatures.
Considering the limited penetration depth of x-ray diffraction, we studied the structure of different pieces of pristine \(\alpha\)-RuCl\({}_{3}\) crystals using single crystal neutron diffraction. All crystals studied with neutron diffraction were well characterized by measuring magnetic properties and/or specific heat before exposing to neutron beam. Multiple pieces with mass ranging from 6 mg to 200 mg and T\({}_{N}\) ranging from 6 K to 7.6 K were checked. All crystals crystallize in \(C2/m\) at room temperature but have
Figure 1: (color online) Magnetic order and structure transition in two typical types of \(\alpha\)-RuCl\({}_{3}\) single crystals. As described in the text, type-I crystals have magnetic ordering temperatures above 7.2 K, while type-II crystals normally order magnetically below 7 K. (a,b) Temperature dependence of magnetization and specific heat (Cp) below 20 K. The magnetization was measured in a magnetic field of 1 kOe applied along the zig-zag direction (perpendicular to the Ru-Ru bond). The vertical dashed lines highlight the Neel temperature, T\({}_{N}\), defined as the temperature where Cp peaks. T\({}_{N}\)=7.6 K in (a) is 1.1 K higher than that in (b). (c-e) Temperature dependence of magnetization and 005 reflection at 2\(\theta\) near 84.5 degree in the temperature range 20 K-200 K for the crystal with T\({}_{N}\)=7.6 K, a representative for type-I crystals. The temperature dependence of magnetization in (d) was measured in a magnetic field of 10 kOe applied perpendicular to the honeycomb plane. (f-h) Temperature dependence of magnetization and 005 reflection at 2\(\theta\) near 84.5 degree in the temperature range 20 K-200 K for the crystal with T\({}_{N}\)=6.5 K, an example for type-II crystals.
different degree of mosaic. We are aware of previous reports of successful growth of \(\alpha\)-RuCl\({}_{3}\) crystals with the \(P3_{1}12\) structure [23]. Our neutron results do not rule out the possibility that some small crystals suitable for x-ray single crystal diffraction can have \(P3_{1}12\) type structure. Figures 2(a,b) show the temperature dependence of (116) or (113) reflection to determine the structure transition in both crystals. For the crystal with T\({}_{N}\)=7.6 K, the structure transition occurs around 140 K upon cooling and 170 K upon warming, consistent with those determined from magnetic and x-ray diffraction measurements shown in Fig. 1. For the crystal with T\({}_{N}\)=6.5 K, the structure transition occurs below 80 K upon cooling and 170 K upon warming. The latter temperature agrees well with those determined from magnetic and X-ray diffraction measurements shown in Fig. 1. The slightly different transition temperature when measuring cooling is in line with the fact that this transition temperature can vary in a wide temperature range 50 K-140 K depending on the concentration and detailed distribution of stacking imperfection.
Detailed reciprocal space maps provide a better understanding of the nature of the average and local structure for both \(\alpha\)-RuCl\({}_{3}\) single crystals. Figures 2b and c show the neutron diffraction patterns collected at 200 K (above the structure transition) and 100 K (below the structure transition) for the crystal with T\({}_{N}\)=7.6 K. As reported previously, the high temperature monoclinic \(C2/m\) structure becomes unstable upon cooling and transforms into a rhombohedral \(R\bar{3}\) structure [4]. This can be appreciated by the appearance of additional Bragg peaks along the \(L\) direction (see Fig. 2c). The right half of each panel shows the line cut along \([0,2,L]\), highlighting (1) that this crystal is of high quality with negligible mosaic and (2) the absence of any diffuse features in the temperature range investigated. The correlation length \(\xi\) derived from the full width at half maximum (FWHM) of the \((0,2,0)\) reflection at 200 K using a Lorentzian line shape is \(\sim 450\pm 10\) \(\AA\), which is at the resolution limit of the neutron measurement. The peak broadens slightly at 100 K and suggests a reduction of the correlation length to about \(350\pm 10\) \(\AA\). Figures 2(e)-2(f) show the neutron diffraction patterns at 200 K (above the structure transition) and 10 K (below the structure transition) for the crystal with T\({}_{N}\)=6.5 K. This crystal clearly has a much larger mosaic and, more interestingly, diffuse streaks are evident below the structural transition [Fig. 2(f)]. The correlation length is about \(78\pm 5\) \(\AA\) at 200 K and reduces to \(32\pm 3\) \(\AA\) at 10 K. Both values are much smaller compared with those for the crystal with T\({}_{N}\)=7.6 K.
Figure 2: (color online) Neutron single crystal diffraction found a sharp structure transition for the crystal with T\({}_{N}\)=7.6 K but a sluggish structure transition and diffuse streaks for the crystal with T\({}_{N}\)=6.5 K. (a) The thermal evolution of the \((1,1,6)\) reflection measured in cooling and warming confirms the sharp structural transition in the crystal with T\({}_{N}\)=7.6 K. (b, c) The neutron diffraction patterns at 200 K (in the \(C/2m\) phase) and 100 K (in the \(R\bar{3}\) phase) for the crystal with T\({}_{N}\)=7.6 K. Extra reflections in (c) confirm the symmetry change at low temperatures. (d) The temperature evolution of \((1,1,3)\) reflection measured in cooling and warming of the crystal with T\({}_{N}\)=6.5 K. (e, f) The neutron diffraction patterns at 200 K (above the structure transition) and 10 K (below the structure transition) for the crystal with T\({}_{N}\)=6.5 K. The mosaic in (e) and the diffuse streaks in (f) are highlighted by the line cut shown in the right half of each panel (b,c,e,f). The transformation matrix between the high-\(T\)\(C2/m\) and low-\(T\)\(R\bar{3}\) unit cell are: \(\mathbf{a}_{r}=\mathbf{a}_{m}\), \(\mathbf{b}_{r}=1/2(-\mathbf{a}_{m}+\mathbf{b}_{m})\), \(\mathbf{c}_{r}=\mathbf{a}_{m}+3\mathbf{c}_{m}\), where the subscript “r” and “m” denote the rhombhedral and monoclinic structures.
Despite the apparent difference in the correlation lengths, the local environment surrounding the Ru ions is very similar in both single crystals in the high temperature \(C2/m\) phase. As detailed in Table 1 of the Supporting Materials, those two crystals have the same lattice parameters, Cl-Ru-Cl bond angles, and Ru-Cl bond lengths at 200 K. Given the surprisingly similar crystal structure and the lack of any unidentified phase, the observed diffuse streaks in the crystal with T\({}_{N}\)=6.5 K are attributed to the stacking faults that are prevalent in van der Waals bonded layered materials. As discussed later, the stacking disorder can act as a pinning center preventing a uniform structural phase transition. Thus the structure transition occurs at a lower temperature. The inhomogeneous distribution of stacking disorder might be responsible for the sluggish transition occurring in a wider temperature range.
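The correlation lengths quoted above are obtained from Lorentzian fits to line cuts through the Bragg peaks. The generic sketch below illustrates such a fit; the conversion \(\xi=2/\Gamma\), with \(\Gamma\) the fitted FWHM in \(\AA^{-1}\), is one common convention and is an assumption here, as are the initial-guess choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, amp, q0, fwhm, bkg):
    return amp * (fwhm / 2.0) ** 2 / ((q - q0) ** 2 + (fwhm / 2.0) ** 2) + bkg

def correlation_length(q, intensity):
    """Fit a Lorentzian to a line cut I(q) (q in 1/Angstrom) and convert the
    fitted FWHM to a real-space correlation length, here taken as xi = 2/FWHM."""
    p0 = [intensity.max() - intensity.min(), q[np.argmax(intensity)], 0.01, intensity.min()]
    popt, _ = curve_fit(lorentzian, q, intensity, p0=p0)
    return 2.0 / abs(popt[2])
```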
After warming above the structure transition, both crystals seem to get back to the original pristine state. In Supporting Materials, we show the neutron diffraction patterns of both crystals after warming up to 200 K from below the structure transition. Interestingly, the diffraction pattern and the correlation length are comparable before and after the thermal cycling for both crystals. From other previous neutron experiments, we learned that the mosaic becomes larger after some thermal cyclings. We also monitored possible effects of thermal cycling on the structure transition and magnetic order by measuring the temperature dependence of magnetization and specific heat. As shown in the Supporting Materials, the thermal cycling does not seem to affect the magnetic order after up to 10 cycles.
Figure 3(a) shows the typical temperature dependence of thermal conductivity (\(\kappa_{a}\), thermal current along _a_-axis) of both single crystals below 20 K. The small difference in T\({}_{N}\) can be well resolved. Both samples show a recovery of heat transport upon cooling through T\({}_{N}\) and a peak around 5 K in the temperature dependence of thermal conductivity. This is consistent with previous reports[28; 29; 30; 17; 20]. The crystal with T\({}_{N}\)=7.6 K has a higher thermal conductivity below T\({}_{N}\). This magnitude of thermal conductivity is higher than the threshold value for the observation of half-integer quantized thermal Hall
Figure 3: (color online) Thermal transport measurement results of \(\alpha\)-RuCl\({}_{3}\). (a) Thermal conductivity of \(\alpha\)-RuCl\({}_{3}\) crystal. Heat current is applied along the zig-zag direction (perpendicular to the Ru-Ru bond). (b) Oscillatory features of thermal conductivity. Data collected with both heat current and magnetic field along the zig-zag direction. (c, d) Thermal Hall resistivity (c) and conductivity (d) at 5 K. Both heat current and magnetic field are applied along the zig-zag direction. The horizontal dashed lines in (d) highlight the thermal Hall conductivity relative to the fraction of quantized thermal Hall \(\kappa_{QH}\)/n, where n = 2,4,and 8 are shown.
effect [20].
Figure 3(b) shows the oscillatory features in the field dependence of the longitudinal thermal conductivity (\(\kappa_{a}\)) at 2 K, shown as the scaled derivative (\(\kappa^{-1}\)d\(\kappa\)/dB T\({}^{-1}\)) of thermal conductivity with respect to the applied magnetic field. This scaling procedure brings the oscillatory features of various samples with different thermal conductivities to the same scale and emphasizes the critical fields instead of absolute oscillatory amplitudes. We observed that the oscillatory features, especially those after the system has entered the field-induced disordered or quantum spin liquid phase (above 7.5 T), are nearly identical for both samples. This indicates that the oscillatory features in the field-induced disordered or quantum spin liquid region are dominated by in-plane physics. We also studied the thermal Hall effect of both crystals. Figures 3c and 3d show a plateau region in both the thermal Hall resistivity and the (scaled) conductivity measured at 5 K. The plateau spans the field region in which the field-induced disordered or quantum spin liquid phase dominates (7 T - 9 T). The profile of the thermal Hall signal is better shown by the resistivity. The scaled thermal Hall conductivity is highly sample dependent. We noticed in our sample screening that the plateau value could vary by a factor of 5 for different samples with different thermal conductivities. The crystal with T\({}_{N}\)=7.6 K has less stacking disorder and hence a higher thermal conductivity. This leads to a nearly half-quantized \(\kappa_{xy}\)/T. In contrast, \(\kappa_{xy}\)/T for the crystal with T\({}_{N}\)=6.5 K is further away from the half-quantized value, as illustrated by the data shown in Fig. 3(d). A more detailed study of the anisotropic thermal transport properties and the origin of the oscillatory features will be reported separately.
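The scaled derivative used in Fig. 3(b) can be computed directly from a measured \(\kappa(B)\) sweep; the short sketch below is generic, and the light smoothing applied before differentiation is an assumption rather than the authors' documented procedure.

```python
import numpy as np

def scaled_derivative(B, kappa, window=5):
    """Return kappa^-1 dkappa/dB from a measured kappa(B) sweep; a simple
    moving average suppresses point-to-point noise before differentiating."""
    kernel = np.ones(window) / window
    kappa_s = np.convolve(kappa, kernel, mode="same")
    return np.gradient(kappa_s, B) / kappa_s
```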
## IV Discussions
The detailed physical properties of two \(\alpha\)-RuCl\({}_{3}\) crystals were reported in this work as the representatives of two types of \(\alpha\)-RuCl\({}_{3}\) crystals. Type I crystals normally have T\({}_{N}\) above 7.2 K. Type II crystals normally have T\({}_{N}\) below 7 K. For those crystals with T\({}_{N}\) near 7.1 K, as discussed later, the properties depend on the concentration and detailed distribution of stacking disorder. Specific heat results shown in Fig. 1 and Supporting Materials show that T\({}_{N}\) of our \(\alpha\)-RuCl\({}_{3}\) crystals can vary in the range 6 K-7.6 K. Our WDS measurements confirmed that _nonstoichiometry should not be responsible for the variation of T\({}_{N}\) of different \(\alpha\)-RuCl\({}_{3}\) crystals_. Three crystals with T\({}_{N}\)=6.0 K, 6.8 K, and 7.6 K were studied with WDS to explore possible nonstoichiometry. All three crystals have the same atomic ratio. Then the difference in properties between type-I and type-II crystals may provide some valid insight into the origin of the variation of T\({}_{N}\) in different \(\alpha\)-RuCl\({}_{3}\) crystals.
A careful comparison between type-I and type-II crystals suggests that _stacking disorder leads to a sluggish structure transition and phase coexistence at low temperatures_. Type-II crystals distinguish themselves from type-I crystals by showing a sluggish structure transition most likely originating from the stacking disorder. The larger mosaic (see Fig. 2e) suggests the presence of stacking disorder in type-II crystals. From magnetic, x-ray, and neutron diffraction measurements of different crystals, the structure transition measured upon cooling occurs at a lower temperature with increasing mosaic, and two-phase coexistence is always observed when the structure transition occurs below 120 K. The phase coexistence was observed at 20 K, the lowest temperature of our X-ray diffraction measurements. It will not be a surprise if the phase coexistence persists at even lower temperatures, for example, below T\({}_{N}\). Our x-ray and neutron diffraction studies provide solid evidence for the phase coexistence but cannot tell the detailed concentration and distribution of those two phases or the stacking disorder. However, one would expect that the stacking disorder can act as a pinning center for the sliding of RuCl\({}_{3}\) honeycomb layers during the structural phase transition and lead to more complex stacking disorder upon cooling. This expectation is also consistent with the observation of diffuse streaks for type-II crystals shown in Fig. 2(f). Such diffuse streaks were observed in previous studies and attributed to stacking disorder between RuCl\({}_{3}\) honeycomb layers [6; 10]. The concentration and detailed distribution of stacking disorder in the starting crystals determine the mosaic, the structure transition temperature, and the concentration and distribution of both high temperature and low temperature phases at low temperatures.
Neutron single crystal diffraction confirms that type-I crystals are of high quality manifested by a large correlation length beyond the resolution limit of our neutron measurement. However, this doesn't mean that these crystals are free of stacking disorder. The reduced correlation length observed below the structure transition can arise from the stacking disorder in the as-grown crystals. Previously, a scanning transmission electron microscope (STEM) study observed \(P3_{1}\)12 type stacking in \(\alpha\)-RuCl\({}_{3}\) flakes from one crystal with \(C2/m\) structure determined by neutron single crystal diffraction [31]. This result was reproduced in our recent STEM studies. STEM probes a small region of a thin flake, while neutron diffraction studies the whole crystal in neutron beam averaging over many similar and different regions. The discrepancy between STEM and neutron diffraction indicates the presence of stacking disorder in type-I crystals and the disorder can even exist in local regions in the form of other types of layer stacking with comparable energy[9]. However, the long coherence length suggests the population of stacking disorder in high quality crystals is low.
_The stacking disorder can be responsible for the slightly suppressed T\({}_{N}\)._ The phase coexistence or stacking disorder may affect both the interlayer and intralayer magnetic interactions. The inhomogeneous layer spacing and exchange paths disturb the interlayer magnetic interactions. The intralayer exchange interactions are found to be sensitive to the stacking structure in a theoretical study [32]. At 200 K, neutron single crystal diffraction found the same local structure in both types of \(\alpha\)-RuCl\({}_{3}\) crystals. However, whether the local structure similarity persists at low temperatures is unknown. Despite tremendous efforts, the low temperature structure is not yet determined due to the appearance of diffuse scattering below the structure transition [6; 9]. Solving the low temperature structure is beyond the scope of the current effort. However, we would point out that this is essential for resolving possible local distortions of the RuCl\({}_{6}\) octahedra that determine the relative magnitude of the terms in the magnetic Hamiltonian, and it is also critical for a quantitative understanding of the suppression of T\({}_{N}\). In addition, when solving the low temperature structure, the phase coexistence scenario should be considered depending on the quality of the crystals employed.
The above discussion of stacking disorder suppressed T\({}_{N}\) is not in contradiction to the previous proposal that mechanical deformation induced magnetic anomalies in the temperature range 10-14 K[9; 21]. Mechanical deformation was previously employed to introduce a large concentration of stacking disorder and this can eventually lead to crystals with one single magnetic order at 14 K. The concentration of stacking disorder in both types of crystals studied in this work is much lower and this is especially true to type I crystals. On the other hand, the suppression of T\({}_{N}\) from 7 K and the appearance of weak magnetic anomalies in the temperature range 10-14 K seem to be strongly correlated and may have the same origin. Specific heat data for more samples are presented in the Supporting Materials and the features below and above 8 K are highlighted in separate panels. For those crystals with suppressed T\({}_{N}\) (for example lower than 7 K) or with more than one lambda type anomaly, some weak anomalies can be observed above 8 K. All the observations suggest that a small amount of stacking disorder suppresses the long range magnetic order with T\({}_{N}\) around 7 K and magnetic phases with T\({}_{N}\) above 10 K become observable with increasing population of stacking faults.
_The stacking disorder is also responsible for the sample dependent thermal transport properties._ Upon cooling through T\({}_{N}\), type-I crystals show a stronger recovery of thermal conductivity (see Fig. 3a). The high thermal conductivity and T\({}_{N}\) of type-I crystals indicate the importance of interlayer coupling and interactions. As reported previously [16; 20], a large thermal conductivity is necessary for the observation of half-integer quantized thermal Hall effect. In Fig. 3(d), the crystal with T\({}_{N}\)=7.6 K shows a high thermal Hall conductivity \(\kappa_{xy}\)/T, closer to the half integer quantized value. It is interesting to note that half-integer quantized thermal Hall effect was recently observed for a deformed \(\alpha\)-RuCl\({}_{3}\) crystal with T\({}_{N}\)=14 K which exhibits a high thermal Hall resistivity despite a low thermal conductivity[21]. These two studies suggest two different ways to observe thermal Hall conductivity at half-integer quantized value: using high quality samples with minimal amount of stacking disorder and high thermal conductivity, or using samples with a high thermal Hall resistivity. Obviously, thermal Hall conductivity can be rather sample dependent and the stacking disorder can have a dramatic effect on it. Despite the sample dependent thermal Hall conductivity, both types of crystals have the similar critical fields for oscillatory features in magnetothermal conductivity and also the same field range in which thermal Hall effect and oscillatory features in thermal conductivity are observed.
## Summary
In summary, we study the effects of small amount of stacking disorder on the structure, magnetic and thermal transport properties in \(\alpha\)-RuCl\({}_{3}\) single crystals with T\({}_{N}\) varying from 6.0 K to 7.6 K. The amount of stacking disorder may not be enough to induce magnetic anomalies in the temperature range 10-14 K, but can still have a dramatic effect on the physical properties. The stacking disorder in as-grown crystals can suppress the structure transition temperature (upon cooling), the Neel temperature, and lattice thermal conductivity. The similar oscillatory field dependent thermal conductivity and plateau like feature in thermal Hall resistivity were observed in all \(\alpha\)-RuCl\({}_{3}\) single crystals studied in this work despite the variation in T\({}_{N}\). However, \(\alpha\)-RuCl\({}_{3}\) single crystals with minimal amount of stacking disorder have a higher thermal conductivity, which pushes the thermal Hall conductivity to be close to the half quantized value. Our results demonstrate a strong correlation between the layer stacking, structure transition, magnetic and thermal transport properties and highlight the importance of interlayer coupling in \(\alpha\)-RuCl\({}_{3}\) despite the weak van der Waals bonding. \(\alpha\)-RuCl\({}_{3}\) is one member of a large family of transition metal halides that could host a large variety of physics including topological magnons and chiral phonons. Due to the weak interlayer van der Waals bonding, the presence of stacking disorder may be universal in all these transition metal halides and maybe other layered cleavable magnets. The understanding of this work can be applied to other compounds and help resolve and understand the intrinsic properties of these interesting compounds.
## Acknowledgment
JY thanks Tom Berlijn, Huibo Cao, Hwan Do, and Alan Tennant for helpful discussions. The authors thank
Michael Lance for WDS measurements. HZ, SN, MM, and JY were supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. QZ, DM, AM, HM, and BS were supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. JC and MC were supported by an Early Career project supported by DOE Office of Science FWP ERKCZ55. A portion of this research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for the United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)).
|
2304.08770 | Gene activity fully predicts transcriptional bursting dynamics | Transcription commonly occurs in bursts, with alternating productive (ON) and
quiescent (OFF) periods, governing mRNA production rates. Yet, how
transcription is regulated through bursting dynamics remains unresolved. Here,
we conduct real-time measurements of endogenous transcriptional bursting with
single-mRNA sensitivity. Leveraging the diverse transcriptional activities in
early fly embryos, we uncover stringent relationships between bursting
parameters. Specifically, we find that the durations of ON and OFF periods are
linked. Regardless of the developmental stage or body-axis position, gene
activity levels predict individual alleles' average ON and OFF periods. Lowly
transcribing alleles predominantly modulate OFF periods (burst frequency),
while highly transcribing alleles primarily tune ON periods (burst size). These
relationships persist even under perturbations of cis-regulatory elements or
trans-factors and account for bursting dynamics measured in other species. Our
results suggest a novel mechanistic constraint governing bursting dynamics
rather than a modular control of distinct parameters by distinct regulatory
processes. | Po-Ta Chen, Michal Levo, Benjamin Zoller, Thomas Gregor | 2023-04-18T06:58:46Z | http://arxiv.org/abs/2304.08770v3 | # Common bursting relationships underlie eukaryotic transcription dynamics
###### Abstract
Transcription commonly occurs in bursts resulting from alternating productive (ON) and quiescent (OFF) periods. Yet how transcriptional bursts are regulated to determine spatiotemporal transcriptional activity remains unclear. Here we perform live transcription imaging of key developmental genes in the fly embryo, with single polymerase sensitivity. Quantification of single allele transcription rates and multi-polymerase bursts reveals shared bursting relationships among all genes, across time and space, as well as _cis_- and _trans_-perturbations. We identify the allele's ON-probability as the main determinant of the transcription rate, while changes in the transcription initiation rate are limited. Any given ON-probability determines a specific combination of mean ON and OFF times, preserving a constant characteristic bursting time scale. Our findings point to a convergence of various regulatory processes that predominantly affect the ON-probability, thereby controlling mRNA production rather than mechanism-specific modulation of ON and OFF times. Our results thus motivate and guide new investigations into the mechanisms implementing these bursting rules and governing transcriptional regulation.
+
Footnote †: Correspondence: [email protected]
## Introduction
Eukaryotic transcriptional regulation is an inherently dynamic and stochastic process. Multiple molecular events orchestrate, in space and time, the initiation of productive transcription by individual RNA polymerases (Pol II complexes), leading to the synthesis of nascent RNA [1; 2]. The amount of transcribed mRNA molecules in turn shapes protein production and thereby dictates cellular behavior. Studies across various systems, from yeast to mammalian cells, have revealed that transcription occurs in bursts, namely the release of multiple Pol IIs in what is often referred to as an ON period, followed by a quiescent OFF period [3; 4; 5; 6; 7; 8]. Yet it remains unclear how the kinetic parameters of transcriptional bursting determine mRNA production and govern spatiotemporal transcription dynamics. Is the transcription rate controlled primarily by tuning the durations of ON or OFF periods, the initiation rate (the rate of Pol II release during active periods), or by a combination of these? Furthermore, are distinct bursting parameters controlled by specific regulatory processes, and are distinct bursting strategies underlying temporal versus spatial (tissue-specific) control of transcription?
As a multitude of molecular processes is known to influence transcriptional activity, several studies aimed to uncover links between regulatory determinants and parameters of transcriptional bursting [9; 10; 11]. Transcriptional bursting is commonly described by parameters such as burst size, burst frequency, the kinetic rates governing ON and OFF times as well as transcription initiation [3; 12; 13]. Regulatory determinants, such as transcription factor (TF) binding, _cis_-regulatory elements, nucleosome occupancy, histone modification, and enhancer-promoter interactions were suggested to affect distinct bursting parameters [14; 15; 16; 17; 18; 19; 20; 21; 22]. Yet, it is difficult to integrate these observations and form a unified understanding of transcriptional control via bursting dynamics.
Much of our quantitative knowledge about transcriptional bursting heavily relies on fixed data [3; 14; 23; 24; 25; 26]. Capturing transcriptional bursts in vivo across space and time remains challenging [27; 28; 29; 20]. Adding to this challenge, live measurements need to be quantifiable in absolute units (i.e., mRNA count) to facilitate comparisons between different genes and conditions [30; 31; 32]. Moreover, to understand the entire spectrum of bursting dynamics, there is a need to probe the full dynamic range of a gene's activity [25]. Measurements in an endogenous system where tightly regulated spatiotemporal transcriptional dynamics dictate cell-fate determination can further elucidate the functional consequences of bursts and their relation to different regulatory determinants. The early _Drosophila_ embryo provides a unique system that meets all these requirements [33].
Here we quantify the endogenous transcription dynamics of key developmental genes in living _Drosophila_ embryos. We identify a single control parameter, the instantaneous ON-probability of an allele, as the dominant determinant of transcriptional activity, while the initiation rate is mostly conserved. This finding holds across spatial domains, developmental times, genes, and perturbations of _cis_-regulatory elements and _trans_-regulators. Surprisingly, we find a largely constant ON-OFF switching correlation time of roughly one minute. A corollary of the latter is that mean ON and OFF times are tightly coupled, regardless of the _trans_-environment or the _cis_-regulatory architecture. While perturbations in the upstream regulatory processes lead to dramatic changes in the ON-probability, i.e., spatiotemporal changes in transcription rate, the underlying changes in ON and OFF periods are predicted from wild-type. Instead of a particular perturbation type dictating changes in specific bursting parameters, we observe that, more generally, lowly transcribing alleles are tuned to higher expression levels by increasing burst frequency, while highly transcribing alleles are mostly tuned by increasing burst size. These results imply that for the examined genes, the bursting phenomenon can be quantitatively understood by a few simple rules: two of the transcription parameters are quasi-constant, and all others are determined by the ON-probability. Hence, future investigations necessitate a re-examination of our mechanistic understanding of transcription, focusing on how regulatory processes influence a unique control parameter.
## Results
**Instantaneous single allele transcription rate measurements.** To study the principles that govern transcription dynamics across space and time, we need access to the endogenous bursting kinetics at a single allele level. We designed an approach to obtain such quantitative measurements in living _Drosophila_ embryos [31, 35]. A versatile CRISPR-based scheme is employed to incorporate MS2 cassettes into the gap genes' introns (or 3'UTR) [36]. These form stem-loops in the transcribed nascent RNA that are subsequently bound by fluorescent coat-proteins (Fig. 1A, S1A and Methods) [37, 38]. A custom-built two-photon microscope generates fluorescence images, capturing RNA synthesis at one tagged allele per nucleus with a 68-fold signal improvement over previous studies [31], approaching single-mRNA sensitivity (Fig. S1D-E). An optimized field-of-view yields 10 s interval time-lapses for hundreds of nuclei per embryo, during nuclear cycles (NC) 13 and 14 (Fig. 1A-B; Videos V1-V4), essential for statistical analysis.
To achieve a fully quantitative characterization, we calibrate our measurements to absolute units. We convert the fluorescence signal at the site of transcription into equivalent cytoplasmic mRNA units (C.U.) by matching the mean transcriptional activity to previously calibrated smFISH measurements (Fig. 1B, see Methods) [25, 34]. We find high agreement between smFISH and live measurements, as a single conversion factor adjusts for the difference in fluorescence signal between the two methods, with an average relative error of \(\sim 5\%\) across all gap genes and MS2 insertion sites (Fig. 1C and S1B). This agreement extends to higher moments (Fig. S1C), despite the fully orthogonal nature of these techniques: one being non-invasive genetically but involving fixation, while the other involves gene editing and stem-loop cassette insertions. This strongly suggests that our live approach captures the endogenous situation and provides means to express our dynamic transcription measurements in terms of absolute mRNA counts.
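As a concrete illustration of this calibration step, the single conversion factor can be found by a least-squares match between the live mean-activity profiles and the previously calibrated smFISH profiles. The sketch below is a minimal version of that idea; the array names, the toy data, and the pooling across genes are illustrative assumptions rather than the exact pipeline described in the Methods.

```python
import numpy as np

def fit_conversion_factor(live_profiles, smfish_profiles):
    """Single scalar alpha minimizing ||smfish - alpha * live||^2 over all genes."""
    live = np.concatenate([p.ravel() for p in live_profiles])
    fish = np.concatenate([p.ravel() for p in smfish_profiles])
    alpha = live @ fish / (live @ live)
    # per-gene relative error after applying the one global factor
    rel_err = [np.mean(np.abs(alpha * l - f) / np.maximum(f, 1e-9))
               for l, f in zip(live_profiles, smfish_profiles)]
    return alpha, rel_err

# toy usage: two "genes" with mean activity profiles along the AP axis
rng = np.random.default_rng(0)
fish = [np.abs(rng.normal(10, 3, 40)), np.abs(rng.normal(20, 5, 40))]
live = [f / 3.7 + rng.normal(0, 0.1, 40) for f in fish]  # unknown factor ~3.7
alpha, err = fit_conversion_factor(live, fish)
print(f"conversion factor = {alpha:.2f}, mean relative error = {np.mean(err):.1%}")
```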
Our unique combination of absolute calibration and near single transcript sensitivity (Fig. S1D-E) allows us to reconstruct the underlying single allele transcription initiation events by individual Pol II, namely the events in which Pol II complexes are released onto the gene and engage in productive elongation. To infer these initiation events for each time series, we adopt a Bayesian deconvolution approach that accounts for measurement noise (Fig. S1D and Methods). The convolution kernel models the fluorescent signal resulting from the Pol II elongation process through the stem-loop cassette (with constant and deterministic elongation, Fig. 1D) [20, 27]. For each time series, the approach generates multiple configurations of transcription initiation events (Fig. 1E). Averaging these configurations gives us a time-dependent instantaneous single allele transcription rate \(r(t)\) per time series (Fig. 1F).
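To make the forward model behind this deconvolution concrete, the sketch below builds a single-Pol II kernel (a linear rise while the polymerase transcribes the MS2 cassette, then a plateau at one cytoplasmic unit until it runs off the gene) and sums it over a set of initiation times. The cassette and gene lengths and the toy initiation times are illustrative assumptions; the actual Bayesian sampling of initiation configurations is not reproduced here.

```python
import numpy as np

DT = 10 / 60               # frame interval (min), matching the 10 s sampling
K_ELO = 1.8                # Pol II elongation rate (kb/min), cf. Fig. S2
L_MS2, L_REST = 1.3, 3.0   # illustrative cassette length and remaining gene length (kb)

def pol2_kernel(t):
    """Signal (in C.U.) contributed by one Pol II, t minutes after its initiation."""
    t_loops = L_MS2 / K_ELO              # time to finish transcribing the cassette
    t_exit = (L_MS2 + L_REST) / K_ELO    # time until the polymerase leaves the gene
    sig = np.clip(t / t_loops, 0.0, 1.0) # linear ramp up to full brightness
    sig[t >= t_exit] = 0.0               # transcript released at termination
    return sig

def forward_signal(init_times, t_grid):
    """Convolution of point-like initiation events with the single-Pol II kernel."""
    return sum(pol2_kernel(t_grid - t0) for t0 in init_times)

t = np.arange(0, 20, DT)
rng = np.random.default_rng(1)
init_times = np.concatenate([np.sort(rng.uniform(t0, t0 + 1.0, 8))
                             for t0 in (2.0, 6.0, 12.0)])  # three toy bursts
trace = forward_signal(init_times, t)
print(f"peak activity = {trace.max():.1f} C.U.")
```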
We validate this kernel-based deconvolution approach by performing dual-color tagging of the gene body (a 5' proximal intron and a 3'UTR tag). These measurements support our key assumptions (see Methods) and allow us to extract a Pol II elongation rate of \(K_{\text{elo}}=1.8\pm 0.1\) kb/min, which is in line with previous measurements [31, 39] (Fig. S2). Our inferred transcription rates are thus no longer masked by the Pol II elongation dwell time, unlike the directly measured intensities of transcriptional activity. Transcription rates are thus independent of gene length, enabling the direct comparison between different genes and opening a path to identify common principles underlying transcription dynamics.
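For readers wanting a feel for how an elongation rate can be read off dual-color data, the sketch below estimates the delay between a 5' and a 3' signal from the peak of their cross-correlation and converts it into kb/min. The genomic separation and the synthetic traces are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

DT = 10 / 60          # frame interval (min)
SEPARATION_KB = 4.5   # assumed distance between the 5' and 3' cassettes (kb)

def lag_minutes(sig_5p, sig_3p, dt=DT):
    """Lag (in min) at the peak of the cross-correlation (positive: 3' trails 5')."""
    a = sig_5p - sig_5p.mean()
    b = sig_3p - sig_3p.mean()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-a.size + 1, a.size) * dt
    return lags[np.argmax(xcorr)]

# synthetic check: delay the 5' trace by 2.5 min to mimic the 3' signal
rng = np.random.default_rng(2)
t = np.arange(0, 60, DT)
sig5 = np.maximum(0.0, np.sin(0.3 * t)) + 0.05 * rng.normal(size=t.size)
sig3 = np.roll(sig5, int(round(2.5 / DT)))
print(f"elongation rate = {SEPARATION_KB / lag_minutes(sig5, sig3):.2f} kb/min")
```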
**Single allele transcription rates hint at a universal bursting regime.** Before analyzing the gap genes' transcription dynamics at the single allele level, we sought to determine whether the averages of the deconvolved single allele transcription rates \(r\) recapitulate the well-documented average protein dynamics [40]. We compute a mean transcription rate \(R=\langle r\rangle\) per gene along the anterior-posterior (AP) axis for all time points during NC13 and NC14 (Video V5); averaging occurs over 200-300 nuclei (each contributing one allele) in the same AP and time bin from 10-20 embryos (Fig. 1A and 1B). The extracted mean transcription rate profiles (Fig. 2A and S3A) strongly resemble the ones reconstructed from carefully staged gap gene antibody staining, including the well-documented posterior shifts during NC14. Indeed, with simple assumptions on diffusion and lifetime, the mean transcription rates \(R\) predict protein patterns with minimal post-transcriptional regulation (Fig. S4, Video V6). Thus, in this system, the rules governing transcription rate modulation will largely determine function, i.e., protein synthesis.
While the examined genes exhibit a common range of mean transcription rates R (Video V5), they display distinct spatiotemporal profiles, giving rise to gene-specific protein patterns. Yet when we examine the distributions
of single allele transcription rates, \(r\), underlying a similar mean transcription rate, \(P(r|R)\), we find that these distributions collapse across genes (Fig. 2B). Strikingly, for low- to mid-levels of \(R\), the underlying distributions differ starkly from a constitutive regime, which would result in a Poisson distribution. At these levels, the large amounts of non-transcribing or barely transcribing alleles hint at quiescent OFF periods deviating from a constitutive regime. As the mean transcriptional activity increases, the distributions become more Poissonian, suggesting that \(P_{\text{ON}}\), i.e., the probability of the genes being ON, increases. These observations are consistent with bursting behavior, where the gene alternates between ON and OFF states.
The collapse of our data and the deviation from a constitutive, Poisson regime, are readily observed also when we compute the relationships between \(R\) and the higher moments of the \(P(r|R)\) distributions (Fig. 2C and S3C). Our data approaches the Poissonian regime only on the extreme ends of the \(R\) spectrum, implying that the gap
Figure 1: **Live single-cell transcription rate measurements of endogenous gap genes.** (A) Transcriptional activity of a single gap gene allele measured using a custom-built two-photon microscope in a living fly embryo. A 24xMS2 stem-loop cassette is inserted in the first intron of the gap genes. Constitutively expressed MCP-GFP binds stem-loops formed on nascent transcripts, making transcription sites appear bright above background (green hotspots) and enabling quantification of transcriptional activity at single allele resolution along the anterior-posterior (AP) axis of the embryo. (For the genomic strategy of all gap gene loci see Fig. S1A.) (B) Example single allele transcription time series for the gene _hunchback_ across nuclear cycles NC13 and NC14 (sampled every 10 s) from a single AP bin (width \(\sim 2\%\) embryo egg length \(L\)) at \(x/L=0.435\pm 0.010\). Intrinsically low embryo-to-embryo variability (compared to the total variance in our data, Fig. S1F) facilitated by the _Drosophila_ system allows for pooling alleles from multiple spatially and temporally aligned embryos (\(n=\)10–20). Mean transcriptional activity (black line) obtained from pooling 200–350 alleles. (C) Calibration of transcriptional activity in absolute units performed by matching mean activity profiles in a 5-min-interval during NC13 (gray shade in B) from live (color) and previously calibrated fixed smFISH (black) measurements for all examined gap genes [25]. A global conversion factor (one for all genes) leads to a match within 5% error between live and fixed profiles (Fig. S1B), resulting in our activity unit, i.e., the cytoplasmic unit (C.U.), equivalent to the intensity of a fully elongated transcript [34]. (D) Reconstruction of transcription initiation events from deconvolution of single allele transcription time series. The signal is modeled as a convolution between transcription initiation events and a kernel accounting for the elongation of a single Pol II through the MS2 cassette and the gene body (using an elongation rate \(K_{\text{elo}}=1.8\,\text{kb/min}\), Fig. S2). Bayesian deconvolution is performed by sampling from the posterior distribution of possible configurations of initiation events given the measured activity and measurement noise (Fig. S1D-E). (E) Example deconvolved initiation configuration (gray bars) and corresponding reconstructed signal (red) from a single allele transcription time series (black). (F) Single allele transcription rate (gray) from the same allele as in E (black). The rate is estimated by counting the number of initiation events within \(10\,\text{s}\) intervals for a given sampled configuration, averaged over 1,000 such configurations. The displayed solid line and envelope for the transcription rate (gray) and reconstructed signal (red) correspond to the mean and one standard deviation of the posterior distribution.
Figure 2: **Transcription rates display signatures of a universal bursting regime.** (A) Snapshot of mean transcription rates \(R\) as a function of AP position in early NC14 (\(t=15\) min after mitosis) for different gap genes (color). AP profiles are obtained by averaging the deconvolved single allele transcription rates over \(\sim 200\) nuclei within each AP bin. The black dashed lines correspond to the mean activity (Fig. 1C) of each gap gene at the same position and time normalized by the effective elongation time (see Methods, Fig. S3A and Fig. S4A). Tight agreement of colored and dashed profiles supports the deconvolution approach (error bars are one standard deviation across the means of 10–20 embryos). (B) Distribution \(P(r|R)\) of single allele transcription rates estimated within 1-min-intervals in both NC13 (color dotted lines) and early NC14 (color solid lines). These distributions are computed over all the nuclei from time points and AP bins whose mean transcription rate \(R\) is either in a low \([2.1,3.2]\), mid \([7.5,8.5]\) or high regime \([12.8,13.9]\) (as gray shade in A and C). The various gap gene distributions collapse at all regimes and differ from the Poisson distribution (black dashed line), suggestive of a universal bursting regime. (C) Variance of single allele transcription rates as a function of mean transcription rate \(R\) in both NC13 (square) and early NC14 (circle) (estimated over 1-min-intervals). Note a strong departure of the variance from a constitutive Poisson regime (dashed line, \(\sigma^{2}\sim R\)). All gap genes follow the same trend, suggestive of a common bursting regime. For higher moments see Fig. S3C. Vertical gray bars correspond to low, mid, and high \(R\), as in A. (D) Auto-correlation (AC) functions of single allele transcription rates estimated within 10 s intervals for \(hb\) in early NC14 and averaged across time, and within a given AP bin across alleles. Color code stands for position along the AP axis. The AC functions are normalized by the variance and highlight an uncorrelated and a time-correlated component in the single-cell transcription rate fluctuations. The correlated component is characterized by a magnitude \(\Sigma_{\rm AC}\) and an exponential decay with time scale \(\tau_{\rm AC}\). Such correlated fluctuations are expected to arise in a bursting regime due to ON–OFF gene switching (Fig. S3D). (E) Magnitude \(\Sigma_{\rm AC}\) of the correlated fluctuations in the single allele transcription rate as a function of mean transcription rate \(R\). All gap data (color) collapse, showing a universal trend (dashed line, guide to the eye). The fraction of correlated variability decreases as \(R\) increases, as expected when approaching a constitutive regime of uncorrelated Poisson initiation (Fig. S3E). (F) Correlation time of the correlated fluctuations in the single allele transcription rate as a function of mean transcription rate \(R\). The correlation times are obtained by fitting exponentials to the correlated component of the AC function as in D. Strikingly, the correlation time is mostly conserved across genes and transcription levels. Error bars are bootstrapped 68% confidence intervals.
genes transition all the way from fully OFF (\(P_{\text{ON}}=0\)) to fully ON (\(P_{\text{ON}}=1\)). The gap genes thus provide an opportunity to investigate how an underlying bursting regime can account for a full dynamic range of transcriptional activities.
The dynamic nature of our measurement allows us to examine single allele transcription rates not only via distributions pooled across nuclei but also within individual transcription time series. We find the auto-correlation function of the single allele transcription rates provides further evidence for an underlying common bursting regime (Fig. 2D). An initial sharp drop of magnitude \(1-\Sigma_{\text{AC}}\) at our sampling time scale (\(\sim 10\) s) indicates the presence of uncorrelated noise, consistent with independent Pol II initiation events. This drop is followed by a longer decay of correlated noise at time scale \(\tau_{\text{AC}}\). Such correlated noise is expected in a bursting regime, as the switching between ON and OFF states introduces temporal correlation in transcriptional activity. A theoretical and computational analysis using the two-state model of transcription [12] supports both the interpretation of the auto-correlation functions (Fig. S3D), and our ability to estimate \(\Sigma_{\text{AC}}\) and \(\tau_{\text{AC}}\) properly from the deconvolved rates (Fig. S3E-G).
We find both the magnitude of correlated noise \(\Sigma_{\text{AC}}\) (Fig. 2E) and the correlation time \(\tau_{\text{AC}}\) (Fig. 2F) collapse across AP bins and different genes. Notably, the magnitude \(\Sigma_{\text{AC}}\) is highly constrained and drops at high \(R\), consistent with the behavior of the variance (Fig. 2C). Furthermore, the correlation time \(\tau_{\text{AC}}\) is largely conserved across nuclear cycles, across AP bins, and across genes, confined within the range of 1-2 min and averaging to a value of \(1.37\pm 0.31\) min (Fig. 2F). This surprising invariance of \(\tau_{\text{AC}}\) with respect to \(R\) suggests that a key temporal characteristic of transcription dynamics is highly conserved. Thus, both the static moment and correlation-based analyses point to a common bursting regime, applicable across genes and space, motivating a time-dependent analysis of transcriptional bursts at the level of individual time series.
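The decomposition into an uncorrelated component and an exponentially correlated component can be illustrated with a small amount of code: compute the normalized auto-correlation of each rate trace, average over alleles, and fit \(\Sigma_{\rm AC}\,e^{-\mathrm{lag}/\tau_{\rm AC}}\) to the lags beyond zero. The two-state (telegraph) traces used below and the fitting details are illustrative assumptions, meant only to convey the logic of the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

DT = 10 / 60  # frame interval (min)

def autocorr(x):
    """Normalized auto-correlation of one single-allele rate trace."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    return ac / ac[0]

def fit_correlated_component(ac_mean, dt=DT, max_lag_min=5.0):
    """Fit Sigma_AC * exp(-lag / tau_AC) to the lag > 0 part of the AC function."""
    n = int(max_lag_min / dt)
    lags = np.arange(1, n) * dt
    decay = lambda lag, sigma, tau: sigma * np.exp(-lag / tau)
    (sigma_ac, tau_ac), _ = curve_fit(decay, lags, ac_mean[1:n], p0=(0.5, 1.0))
    return sigma_ac, tau_ac

def telegraph_trace(rng, n=360, k_on=1.0, k_off=1.0, k_init=10.0):
    """Toy two-state trace: Poisson initiation while ON, stochastic ON-OFF switching."""
    state, rates = rng.random() < k_on / (k_on + k_off), []
    for _ in range(n):
        rates.append(rng.poisson(k_init * DT) / DT if state else 0.0)
        if rng.random() < 1 - np.exp(-(k_off if state else k_on) * DT):
            state = not state
    return np.array(rates)

rng = np.random.default_rng(3)
ac = np.mean([autocorr(telegraph_trace(rng)) for _ in range(200)], axis=0)
sigma_ac, tau_ac = fit_correlated_component(ac)
print(f"Sigma_AC = {sigma_ac:.2f}, tau_AC = {tau_ac:.2f} min "
      f"(telegraph expectation: tau = 1/(k_on + k_off) = 0.50 min)")
```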
**Allele ON-probability is the key regulated transcriptional parameter.** To directly quantify individual transcriptional bursts at the single allele level, we take advantage of our deconvolved initiation events and instantaneous transcription rate time series that are unencumbered by the signal-blurring effect of elongation (Fig. 3A). They allow us to identify distinct periods of active transcription, characterized by consecutive initiation events (i.e., multiple Pol IIs released into productive elongation), interpreted as ON periods, followed by quiescent periods, namely OFF periods (Fig. 3A). We define the switch of an allele from an OFF to an ON state when a moving average of the single allele transcription rate exceeds 2 mRNA/min (see Methods). This threshold is consistent with our detection sensitivity of 1-2 mRNAs, and the size of the moving window for averaging is set based on the correlation time scale from the auto-correlation analysis. The main strength of our burst calling routine is its sole reliance on a minimal clustering model; it is therefore devoid of any mechanistic assumptions on the underlying bursts (no explicit mechanistic model is needed, see Methods).
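In code, this burst-calling step amounts to a moving average over the deconvolved rate followed by thresholding, as in the minimal sketch below (the 10 s grid, ~1 min window, and 2 mRNA/min threshold follow the text; the toy trace and helper names are illustrative):

```python
import numpy as np

DT = 10 / 60        # frame interval (min)
WINDOW_MIN = 1.0    # moving-average window, set by the measured correlation time
THRESHOLD = 2.0     # mRNA/min, consistent with the 1-2 mRNA detection sensitivity

def call_bursts(rate, dt=DT, window_min=WINDOW_MIN, threshold=THRESHOLD):
    """Boolean ON/OFF trace from a single-allele transcription rate."""
    w = max(1, int(round(window_min / dt)))
    smoothed = np.convolve(rate, np.ones(w) / w, mode="same")
    return smoothed > threshold

def period_durations(on_trace, dt=DT):
    """Durations (min) of the consecutive ON and OFF periods of one allele."""
    edges = np.flatnonzero(np.diff(on_trace.astype(int))) + 1
    segments = np.split(on_trace, edges)
    durations = np.array([s.size * dt for s in segments])
    is_on = np.array([bool(s[0]) for s in segments])
    return durations[is_on], durations[~is_on]

# toy usage: a trace that alternates between active and quiescent stretches
rng = np.random.default_rng(4)
active = np.sin(0.1 * np.arange(360)) > 0
rate = np.where(active, rng.poisson(8, 360), rng.poisson(0.2, 360)).astype(float)
on = call_bursts(rate)
t_on, t_off = period_durations(on)
print(f"P_ON = {on.mean():.2f}, <T_ON> = {t_on.mean():.1f} min, <T_OFF> = {t_off.mean():.1f} min")
```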
Given a computed bursting profile (demarcated ON and OFF periods) for every single allele, we can now ask how bursting dynamics underlie transcription rates. Specifically, the mean transcription rate at time \(t\), \(R(t)\), can be decomposed into two parameters: the instantaneous probability of an allele being in the ON state \(P_{\text{ON}}(t)\) (i.e., the fraction of ON alleles) and the mean initiation rate in the ON state \(K(t)\). Starting with the gene \(hb\), we thus estimate, for a given AP bin, the time-dependent parameters \(R(t)\) and \(P_{\text{ON}}(t)\) by averaging all (\(\sim 250\)) single allele instantaneous transcription rates and counting the fraction of alleles in the ON state at time \(t\), respectively (Fig. 3B-C). To compute \(K(t)\), we average initiation events restricted to the ON state (as opposed to \(R\), which is averaging initiation events regardless of allele state). We repeat this procedure for each position of the AP-axis to obtain the full spatiotemporal dependence (Fig. 3D-F). We validate our approach for burst calling and the recovery of bursting parameters from transcription time series on simulated data (based on a 2-state model) with an overall median error of 10% (see Methods, Fig. S5 and Fig. S6).
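The decomposition \(R=K\cdot P_{\text{ON}}\) translates into simple averages over the alleles of one AP bin, as sketched below (the array shapes and names are illustrative; `rates` holds the deconvolved single-allele rates and `on` the corresponding ON/OFF calls):

```python
import numpy as np

def mean_parameters(rates, on):
    """Instantaneous R(t), P_ON(t) and K(t) for one AP bin.

    rates : (n_alleles, n_time) deconvolved single-allele rates (mRNA/min)
    on    : (n_alleles, n_time) boolean ON/OFF calls from burst calling
    """
    R = rates.mean(axis=0)              # average over all alleles, ON or OFF
    P_on = on.mean(axis=0)              # fraction of alleles in the ON state
    K = np.nanmean(np.where(on, rates, np.nan), axis=0)  # average restricted to ON
    return R, P_on, K

# consistency check on toy data: by construction R(t) = K(t) * P_ON(t)
rng = np.random.default_rng(5)
on = rng.random((200, 300)) < 0.6
rates = np.where(on, rng.poisson(12, (200, 300)), 0).astype(float)
R, P_on, K = mean_parameters(rates, on)
print(f"max |R - K * P_ON| = {np.nanmax(np.abs(R - K * P_on)):.2e}")
```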
All three parameters vary significantly across both space and time (Fig. 3D-F). However, given that these are related by \(R=K\cdot P_{\text{ON}}\) (Fig. S7A), it is possible that most of the variation in \(R\) stems from changes to either \(K\), or \(P_{\text{ON}}\), or both. When \(R\) is plotted against \(P_{\text{ON}}\), all data points across time and space collapse in a tight monotonically increasing function (Fig. 3G), with \(P_{\text{ON}}\) spanning from 0 to 1 (fully OFF to fully ON), echoing back to the noise analysis above (Fig. 2B-C). Similarly, the initiation rates \(K\) tightly collapse across space and time when plotted against \(P_{\text{ON}}\). However, \(K\) only covers a two-fold change in dynamic range (Fig. 3H), which is marginal compared to \(R\) spanning from 0 to 15 (mRNA/min) (Fig. 3I and S7B-C). This two-fold change is largely due to the existence of two optically unresolved sister chromatids and a modest time dependence of \(K\) throughout the nuclear cycle (Fig. S7D-I, see Methods). With these considerations we estimate the mean Pol II spacing for a single active chromatid at \(303\pm 73\) bp, consistent with the classic Miller spreads with average Pol II spacing of \(330\pm 180\) bp [41].
Overall, we find that \(R\) is tightly controlled by \(P_{\text{ON}}\), while \(K\) is only moderately modulated and has significantly less predictive power over \(R\). These results for \(hb\) suggest that transcriptional activity is mainly controlled through the probability of an allele being in the ON state. Once in the ON state, transcription initiates at a quasi-constant rate.
**Constant switching correlation time restricts ON and OFF periods.** Given the central role of \(P_{\text{ON}}\) in controlling _hb_'s transcription rate, we aim to examine how it decomposes into ON and OFF periods in individual alleles. To this end we compute the mean ON and OFF times, averaged across alleles of the same AP bin, and at a given time (\(T_{\text{ON}}\) and \(T_{\text{OFF}}\), Fig. 4A-C; see Methods). Near steady state, we expect the ON-probability \(P_{\text{ON}}\) to be given by the ratio of \(T_{\text{ON}}\) and \(T_{\text{ON}}+T_{\text{OFF}}\). We verify this relationship (Fig. 4D), showing good agreement (i.e., beyond an initial 7.5 min transient post mitosis, Fig. S7J-K). This is a strong indication of the self-consistency of our general approach for extracting these bursting parameters. Moreover, despite temporal changes in \(T_{\text{ON}}\) and \(T_{\text{OFF}}\)
Figure 3: **Direct estimation of instantaneous mean transcriptional parameters.** (A) A simple clustering procedure on single allele transcription rate determines bursts of transcription. The rate (gray curve) is estimated using a moving window of width \(\sim 1\) min over the deconvolved initiation events (gray vertical bar). A threshold at two mRNA/min (black dashed line) applied on the rate identifies individual bursts (blue curve). (B-C) Heatmaps of deconvolved single allele transcription rates (B) (estimated over 10 s intervals) and of corresponding ON-OFF periods (C) (obtained from burst calling) as a function of time during NC14 for \(N=217\) nuclei expressing _hb_-MS2 at AP position \(x/L=0.43\). Instantaneous mean transcription parameters such as transcription rate \(R\) (B, bottom) and ON-probability \(P_{\text{ON}}\) (C, bottom) are obtained by vertically averaging the heatmaps (top) over all nuclei, respectively. (D-F) _hb_ transcription rate \(R\) (D), ON-probability \(P_{\text{ON}}\) (E), and initiation rate K (F) as a function of time in NC14 for all AP positions (color coded). \(R\) and \(P_{\text{ON}}\) are computed as in B and C, respectively, and \(K\) is obtained by averaging the single allele transcription rate (D) conditioned on the locus being ON (C) over all nuclei in each AP bin. (For tests of burst calling procedure on simulated data see Figure S3.) (G-H) Transcription rate \(R\) (G) and initiation rate \(K\) (H) as a function of \(P_{\text{ON}}\), for all time points and positions, demonstrating a massive data collapse, suggesting that \(P_{\text{ON}}\) is the central regulatory parameter for transcriptional bursting. (I) Transcription rate \(R\) as a function of both controlling parameter \(P_{\text{ON}}\) and \(K\) in log-space. Since \(\log\left(R\right)=\log\left(K\right)+\log\left(P_{\text{ON}}\right)\) by construction, changes in \(P_{\text{ON}}\) determine changes in \(R\) below the dashed line (\(R\sim 8.5\) mRNA/min, corresponding to \(P_{\text{ON}}=0.75\)).
due to developmental regulation, transcriptional bursting in this system seems to operate in a near-steady-state regime.
While \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) change over time and in different AP bins (Fig. 4B-C), when we plot \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) against \(P_{\text{ON}}\) all data points collapse again across time and space onto two tight anti-symmetric relationships (Fig. 4E-F). Various combinations of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) could potentially give rise to any given \(P_{\text{ON}}\), however, here we observe a highly restricted range for these mean durations. Hence a given \(P_{\text{ON}}\) is unequivocally linked to a specific pair of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\), regardless of space and time.
An allele switching dynamically between ON and OFF states will have a correlation time \(T_{\text{C}}\), which determines, on average, the time needed for the single allele transcription rate to become uncorrelated. For such a system, \(T_{\text{C}}\) can be computed directly from the mean ON and OFF times and is defined by \(1/T_{\text{C}}=1/T_{\text{ON}}+1/T_{\text{OFF}}\) (Fig. 4G, see Methods). Surprisingly, for all AP and time bins, \(T_{\text{C}}\) is confined between \(1-1.5\) min and thus largely constant and independent of \(P_{\text{ON}}\). Moreover, \(T_{\text{C}}\) matches quantitatively the correlation time \(\tau_{\text{AC}}\) from the auto-correlation analysis (Fig. 2F), which we found to be independent of the transcription rate. Given that \(T_{\text{C}}\) characterizes allele switching correlations by construction, this match suggests that the nature of \(\tau_{\text{AC}}\) could indeed be related to bursting.
The fact that \(T_{\text{C}}\) seems to remain conserved across space and time restricts the mean ON and OFF times. Indeed, \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) can be expressed as a function of \(P_{\text{ON}}\) and \(T_{\text{C}}\) (via \(T_{\text{ON}}=T_{\text{C}}/(1-P_{\text{ON}})\) and \(T_{\text{OFF}}=T_{\text{C}}/P_{\text{ON}}\), since \(P_{\text{ON}}=T_{\text{ON}}/(T_{\text{ON}}+T_{\text{OFF}})\) near the steady state, cf. Fig. 4D). Thus, the constancy of \(T_{\text{C}}\) mathematically explains the tight anti-symmetric relationships of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) with respect to \(P_{\text{ON}}\) (Fig. 4E-F), so that \(P_{\text{ON}}\) not only governs the mean transcription rate \(R\) but also the entire transcriptional bursting dynamics.
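Both identities follow from the definitions of \(P_{\text{ON}}\) and \(T_{\text{C}}\) alone; a quick symbolic check (using sympy purely for illustration, not as part of the authors' analysis):

```python
import sympy as sp

T_on, T_off = sp.symbols("T_on T_off", positive=True)
P_on = T_on / (T_on + T_off)          # near-steady-state ON-probability
T_c = 1 / (1 / T_on + 1 / T_off)      # switching correlation time

assert sp.simplify(T_c / (1 - P_on) - T_on) == 0
assert sp.simplify(T_c / P_on - T_off) == 0
print("T_ON = T_C / (1 - P_ON) and T_OFF = T_C / P_ON hold identically")
```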
**Common bursting relationships underlie the regulation of all gap genes.** Should we expect these bursting parameter relationships found for _hb_ to generalize to other gap genes or are they gene-specific? The gap genes differ in their _cis_-regulatory elements, namely different numbers and arrangement of enhancers and promoters (Fig. S1A), and different compositions of transcription factors binding sites within each enhancer. Correspondingly, and as discussed above, the gap genes display distinct transcriptional activities along the body axis (Fig. 1C) and across time (Video V8), which, in the case of _hb_ we found to be largely governed by \(P_{\text{ON}}\).
When we apply our deconvolution and burst calling
Figure 4: **Allele ON-probability controls ON and OFF times.** (A) Binarized heatmap from Fig. 3C. Instantaneous mean OFF-time \(T_{\text{OFF}}\) (bottom, gray) and mean ON-time \(T_{\text{ON}}\) (bottom, blue) are obtained by the weighted average of the ON and OFF times over all nuclei (see methods). The weights are given by the inverse of the number of time points within each period. (B-C) Mean OFF-time \(T_{\text{OFF}}\) and mean ON-time \(T_{\text{ON}}\) as a function of time and position (color coded) for _hb_ in NC14. (D) The ratio of \(T_{\text{ON}}\) over the sum of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) versus ON-probability \(P_{\text{ON}}\) for all positions and time points beyond the 7.5 min mark in B and C (near-steady state both quantities are expected to be equal after initial transient, see Fig. S7J-K). Thus, temporal changes in transcriptional parameters must be slow enough to allow relaxation. (E-F) Mean OFF-time \(T_{\text{OFF}}\) (E) and mean ON-time \(T_{\text{ON}}\) (F) as a function of \(P_{\text{ON}}\), for all positions and time points beyond the 7.5 min mark in B and C. (G) Effective switching correlation time \(T_{\text{C}}\) (defined as: \(1/T_{\text{C}}=1/T_{\text{ON}}+1/T_{\text{OFF}}\)) as a function of \(P_{\text{ON}}\), computed using data points in E and F. \(T_{\text{C}}\) is mostly conserved across time points and position and is \(P_{\text{ON}}\) independent.
Figure 5: **Transcriptional parameters collapse for all gap genes.** (A) Kymographs of ON-probability \(P_{\text{ON}}\) for all gap genes as a function of position and time for NC13 and NC14. The spatiotemporal transcriptional pattern of the gap genes arises from a complex regulation of \(P_{\text{ON}}\) (color map). (B-E) Transcriptional parameters collapse for all gap genes across time and position. Transcription rate \(R\) (B), mean OFF- (C) and ON-time (D) (\(T_{\text{OFF}}\) and \(T_{\text{ON}}\), respectively), and switching correlation time \(T_{\text{C}}\) (E) as a function of the ON-probability \(P_{\text{ON}}\). Colored data points represent individual gap genes (same color code as in A, see Fig. S8A-B for _gt_ male data); underlying is the remaining data of all other genes (gray). (F) Density of all data points across space and time (NC13 and NC14) of transcriptional parameters for all gap genes. Color code represents the kernel density estimate of the parameters conditioned on \(P_{\text{ON}}\) and normalized by the maximum density. Putative accessible space (gray shade) for plausible ranges of \(K\) (0.1–30 mRNA/min) and \(T_{\text{C}}\) (0.1–10 min). Due to the constancy of \(T_{\text{C}}\) and the small changes in \(K\) (2-fold at most), \(P_{\text{ON}}\) almost fully determines \(R\) and dictates the combinations of \(T_{\text{OFF}}\) and \(T_{\text{ON}}\). For \(T_{\text{OFF}}\) and \(T_{\text{ON}}\), the dashed lines stand for the 2-state model prediction based on \(T_{\text{C}}\), and the solid lines take the finite recording length into account (see Figure S8D).
procedure (Fig. 3A) to the measured single allele transcription time series of other gap genes (_gt_, _Kr_, and _kni_), we observe that all genes differ substantially in their spatiotemporal \(P_{\text{ON}}\) profiles (Fig. 5A). These differences are expected and reflective of the above-mentioned distinct underlying _cis_ architectures and _trans_ regulators of these genes. Strikingly, despite these differences, they show an almost identical mean transcriptional rate \(R\) to \(P_{\text{ON}}\) relationship (and \(K\) to \(P_{\text{ON}}\) relationship, Fig. S8C), substantiating \(P_{\text{ON}}\) as the governing factor for the transcriptional activity not only across time and space but also across genes (Fig. 5B).
Computing the various bursting parameters for all gap genes and plotting these as a function of \(P_{\text{ON}}\), we find the genes display the same \(P_{\text{ON}}\)-dependent relationships: all genes share common \(T_{\text{OFF}}\) to \(P_{\text{ON}}\) (Fig. 5C) and \(T_{\text{ON}}\) to \(P_{\text{ON}}\) (Fig. 5D) relationships. Thus, when different genes display a specific \(P_{\text{ON}}\) value, possibly at different spatiotemporal coordinates, the underlying \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) periods employed are nonetheless largely similar. This finding can be related to the switching correlation time \(T_{\text{C}}\), which we find to be conserved across genes (Fig. 5E). The average \(T_{\text{C}}\) value across all genes, positions, and times is \(1.25\pm 0.37\) min, very close to the prediction from the single rate auto-correlation analysis (Fig. 2F).
Pooling all our data across all genes, times, locations, and embryos (\(N>10^{6}\) data points) and plotting each of the computed bursting parameters (\(R\), \(K\), \(T_{\text{C}}\), \(T_{\text{OFF}}\) and \(T_{\text{ON}}\)) against \(P_{\text{ON}}\) (Fig. 5F) - including the often-used burst frequency \(F=1/(T_{\text{ON}}+T_{\text{OFF}})\) and burst size \(B=K\cdot T_{\text{ON}}\) (Fig. S8E) - reveals highly constrained relationships. Indeed, all data points occupy only a very small subset of the parameter space (see Methods). Tight functions can be mapped out (black lines) that confirm the exceptional predictive power conferred by \(P_{\text{ON}}\). Separating all the data into three developmental time windows (NC13 & early NC14, mid NC14, and late NC14) shows a further tightening of the relationships in these developmental stages (Fig. S9). This separation thus confirms that part of the dispersion results from slow and moderate changes in \(K\) (and to a lesser extent \(T_{\text{C}}\)) over developmental time (at most 40% reduction of \(K\) and 25% for \(T_{\text{C}}\)).
All measured bursting dynamics seem to adhere to this simple set of rules, across genes, space, and time. The mean transcription rate \(R\) is essentially dictated by \(P_{\text{ON}}\), with a largely constant \(K\). A near-constant switching correlation time \(T_{\text{C}}\) leads to a specific functional relationship for \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) of inverse proportionality. Thus, only one of these two parameters is principally modulated and the other remains quasi-constant. While lowly transcribing alleles are tuned to higher expression levels predominantly by decreasing \(T_{\text{OFF}}\), medium-to-high transcribing alleles are mostly tuned by increasing \(T_{\text{ON}}\). These simple rules contain all information necessary to govern bursting and consequently transcriptional activity in the system.
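Put together, the rules say that once the quasi-constant \(K\) and \(T_{\text{C}}\) are fixed, every bursting parameter follows from \(P_{\text{ON}}\) alone. The sketch below encodes these near-steady-state relationships; the particular numbers for \(K\) and \(T_{\text{C}}\) are representative values suggested by the text and are meant as illustrative placeholders, not fits.

```python
import numpy as np

K_INIT = 15.0   # initiation rate in the ON state (mRNA/min); illustrative, of order the maximal measured R
T_C = 1.25      # switching correlation time (min), conserved across genes

def bursting_parameters(p_on, k=K_INIT, t_c=T_C):
    """All mean bursting parameters implied by the ON-probability alone."""
    p_on = np.asarray(p_on, dtype=float)
    t_on = t_c / (1.0 - p_on)            # mean ON period
    t_off = t_c / p_on                   # mean OFF period
    return {
        "R": k * p_on,                   # mean transcription rate
        "T_ON": t_on,
        "T_OFF": t_off,
        "F": 1.0 / (t_on + t_off),       # burst frequency
        "B": k * t_on,                   # burst size
    }

for p in (0.1, 0.5, 0.9):
    pars = bursting_parameters(p)
    summary = ", ".join(f"{name}={float(val):.2f}" for name, val in pars.items())
    print(f"P_ON={p:.1f}: {summary}")
```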
**The common bursting relationships predict the effects of _cis-_ and _trans-_perturbations.** Diverse regulatory mechanisms have been implicated in the control of transcriptional activity, including _cis_-regulatory elements (e.g., enhancers) and _trans_-factors (e.g., TF repressors). It is often assumed that distinct regulatory mechanisms directly control distinct bursting parameters. Will the established bursting parameter relationships based on wild-type measurements predict bursting dynamics when we perturb regulatory mechanisms?
To address this question, we devised a strategy to perturb the endogenous system in _cis_ and in _trans_. For _cis_-perturbations, we delete a distal _hb_ enhancer from the _hb_ locus that has an MS2-stem loop cassette in the first intron (Fig. 6A). This enhancer removal has a complex effect on _hb_ activity, including increased and decreased transcriptional activity at different times and AP locations (Fig. 6B), consistent with previous observations [42; 43; 44]. Despite the stark deviation in the mutant spatiotemporal transcriptional activity compared to the wild-type, we find that mean transcription rates across space and time are governed by \(P_{\text{ON}}\). Thus, the predictive power of this parameter observed for the wild-type holds for the mutant as well (Fig. 6B). The mutant further adheres to the other bursting relationships identified in the wild-type. In particular, the restrictive \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) to \(P_{\text{ON}}\) relationships hold, as well as the largely conserved switching correlation time \(T_{\text{C}}\) (Fig. 6C).
A second enhancer deletion, namely the removal of the _kni_ distal enhancer, results in a significant reduction in _kni_ activity. The mutant samples have a smaller dynamic range of activity, yet we find a similar data collapse within that range (Fig. S10A-C). Finally, to examine a _trans_-perturbation, we measure _kni_ activity in embryos with a _hb_ null background. _kni_ activity was substantially altered, consistent with earlier studies [45] (Fig. S10D-E). Yet, again, we observe the collapse of the mutant bursting parameters onto the wild-type bursting rules (Fig. 6D).
The consistency of these bursting rules suggests that the wild-type-derived relationships (Fig. 5F) can predict how changes in \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) account for the change in transcriptional activity upon the perturbation. Specifically, to examine the perturbation's effect on transcriptional activity at a given AP bin and time in development, we can consider the pair \(\{P_{\text{ON}}^{\text{wt}},P_{\text{ON}}^{\text{mut}}\}\) at that spatiotemporal position (two examples of such pairs are marked in the kymographs in Fig. 6B). Using the dependencies of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) on \(P_{\text{ON}}\), we predict whether the change in transcriptional activity (i.e., the change from \(P_{\text{ON}}^{\text{wt}}\) to \(P_{\text{ON}}^{\text{mut}}\)) stems predominantly from a change in \(T_{\text{ON}}\) or in \(T_{\text{OFF}}\) (Fig. 6E). For our two example pairs, one is predicted to be mostly governed by \(T_{\text{OFF}}\) ("o" mark), while the other is mostly governed by \(T_{\text{ON}}\) ("\(\star\)" mark). This exercise can be generalized to all \(\{P_{\text{ON}}^{\text{wt}},P_{\text{ON}}^{\text{mut}}\}\) pairs and subsequently verified using the measured \(T_{\text{ON}}\) and \(T_{\text{OFF}}\) values for wild-type and the perturbations (Fig. 6C-D and Fig. S11A-C). We find that over \(\sim 90\%\) of the
Figure 6: **Modulation of ON and OFF times by _cis_- and _trans_-perturbations.** (A) Distal _hb_ enhancer removal. The MS2-stem loops are inserted at the same location in the mutant (enhancer deletion) and wild-type fly lines. (B) Quantification of _hb_ wild-type and mutant (A) phenotypes. Both the transcription rate \(R\) (left) and the \(P_{\mathrm{ON}}\) level (right, kymograph) display significantly different expression patterns in the mutant. The dotted arrow indicates the time point in the kymograph at which rate profiles are depicted. The "o" and "\(\star\)" marks signal two bins with predominant \(T_{\mathrm{OFF}}\) modulation and predominant \(T_{\mathrm{ON}}\) modulation, respectively. (C-D) Transcriptional parameters for _hb_ in the _cis_-mutation (C, cyan) and for _kni_ in the _trans_-mutation (D, light green) collapse on the corresponding wild-type parameters (gray). The _cis_-mutation is the _hb_ distal enhancer removal from A, and the _trans_-mutation is a _hb_ null background compared to wild-type (Fig. S10D-E). Solid black lines correspond to the endogenous bursting rules from Fig. 5F. (E) Example of \(T_{\mathrm{OFF}}\) and \(T_{\mathrm{ON}}\) modulation observed upon _hb_ enhancer removal in B (circle and star, respectively). Depending on the pair of wild-type and mutant \(P_{\mathrm{ON}}\) chosen at the same spatiotemporal location (two examples: circle and star in B), the derived endogenous rules (gray and blue solid lines) predict either a larger fold-change in \(T_{\mathrm{OFF}}\) compared to \(T_{\mathrm{ON}}\) (left, \(T_{\mathrm{OFF}}\)-dominated modulation) or a larger fold-change in \(T_{\mathrm{ON}}\) (right, \(T_{\mathrm{ON}}\)-dominated modulation). Verifying these examples using the estimated \(T_{\mathrm{OFF}}\) and \(T_{\mathrm{ON}}\) in (C) confirms that both types of modulation can be observed for the same perturbation. (F) Verification of predicted \(T_{\mathrm{OFF}}\) and \(T_{\mathrm{ON}}\) modulation for all _hb_ wild-type and mutant \(P_{\mathrm{ON}}\) pairs (defined as in E); for most pairs (\(>85\%\)) the prediction is correct (see Fig. S11A-C). (G) Verification of predicted burst size \(B\) and frequency \(F\) modulation for all _hb_ wt and mutant \(P_{\mathrm{ON}}\) pairs. As in E, \(B\) versus \(F\) modulation can be predicted for each \(P_{\mathrm{ON}}\) pair based on the derived endogenous rules. For most pairs (\(>95\%\)) the prediction is correct (see Fig. S11G-I).
pairs correctly verify the type of modulation predicted by our rules (Fig. 6F). We reached the same conclusions when predicting changes in burst size and burst frequency (Fig. 6G and S11G-I). Importantly, we find that each type of perturbation displays both predominant \(T_{\text{ON}}\) and predominant \(T_{\text{OFF}}\) modulation at different times and positions.
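This verification can be phrased as a small calculation: for any wild-type/mutant pair of ON-probabilities, the wild-type rules (constant \(T_{\text{C}}\)) give the implied fold-changes of \(T_{\text{ON}}\) and \(T_{\text{OFF}}\), and whichever changes more is the predicted dominant mode of modulation. A minimal sketch with illustrative, non-measured example values:

```python
import numpy as np

T_C = 1.25  # conserved switching correlation time (min), assumed unchanged in mutants

def dominant_modulation(p_on_wt, p_on_mut, t_c=T_C):
    """Predict whether a wt -> mutant change in P_ON is mainly a T_ON or a T_OFF effect."""
    t_on = lambda p: t_c / (1.0 - p)
    t_off = lambda p: t_c / p
    fold_on = abs(np.log(t_on(p_on_mut) / t_on(p_on_wt)))
    fold_off = abs(np.log(t_off(p_on_mut) / t_off(p_on_wt)))
    return "T_ON" if fold_on > fold_off else "T_OFF"

# low-P_ON pairs are predicted to be T_OFF (burst-frequency) dominated,
# high-P_ON pairs to be T_ON (burst-size) dominated
print(dominant_modulation(0.15, 0.45))   # -> T_OFF
print(dominant_modulation(0.55, 0.90))   # -> T_ON
```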
The generalization of our bursting rules to _cis_- and _trans_-perturbations has strong consequences. From our examination, the type of perturbed regulatory mechanism can hardly be linked to changes in a specific bursting parameter and vice versa. Indeed, as is the case for wild-type, \(P_{\text{ON}}\) emerges as the main governing parameter of the transcriptional activity also for the mutants, while \(K\) and \(T_{\text{C}}\) remain largely unchanged. \(P_{\text{ON}}\) changes upon a perturbation (i.e., wild-type to mutant) are sufficient to determine the corresponding changes in \(R\), \(T_{\text{ON}}\), and \(T_{\text{OFF}}\). Moreover, the functional form of the relationships implies that the \(P_{\text{ON}}\) regime (low versus high) is crucial in determining which parameter is predominantly affected. Different regulatory mechanisms have a different propensity for \(P_{\text{ON}}\) modulation. Yet, the single-parameter regulation set in place by the identified bursting rules points towards general mechanisms conserving \(K\) and \(T_{\text{C}}\), and linking \(T_{\text{ON}}\) and \(T_{\text{OFF}}\).
## Discussion
While it is appreciated that mRNA production likely occurs in bursts across various systems, quantitatively measuring endogenous, single allele transcriptional bursting in real-time, across a wide range of gene activities, still poses a challenge. This hinders our understanding of how the kinetic parameters of bursting underlie transcriptional dynamics across genes, space, and developmental time. In this study, we devised an approach to perform such measurements in the context of the developing early _Drosophila_ embryo, a system that relies on large changes in transcription rates as a means to govern protein abundance (Fig. S4A). The spatiotemporal transcriptional activity of the examined genes is regulated by a myriad of processes (e.g., repressor and activator binding, chromatin accessibility, PIC formation and pause-release, histone modifications, etc.) [42; 46; 47; 48]. These processes are mediated by gene-specific _cis_-architectures, with distinct combinations of enhancers and promoters, further differing in their internal motif compositions. Surprisingly, despite the complexity of the regulatory processes involved and the differences between the genes, we find highly restricted, unifying properties of the underlying bursting dynamics.
We observed that the mean transcription rate is governed by a single tunable control parameter, the ON-probability (\(P_{\text{ON}}\)), with a near-constant Pol II initiation rate (\(K\)). We further found a conserved time scale (\(T_{\text{C}}\)) of about a minute, over which the allele states, either active (ON) or inactive (OFF), remain correlated. This time constant explicitly links the mean durations of ON and OFF periods that are thus largely determined by the control parameter (\(P_{\text{ON}}\)). \(P_{\text{ON}}\), together with the mostly conserved \(K\) and \(T_{\text{C}}\), fully parameterize the observed bursting dynamics, across genes, space, and developmental time.
While consistent with predictions from our previous measurements on fixed tissues (Fig. S12A) [25], the presented live measurements allow us to go beyond a model-based inference of kinetic parameters from a static snapshot of distributions of nascent transcripts. Our current approach measures the dynamics directly rather than inferring the kinetics indirectly. Therefore, we are relieved from the constraints imposed by a specific mechanistic model and can thus further relax the steady state assumption. Both aspects are often used in the analyses of fixed and live data [49; 3; 6; 25]. Moreover, access to the full dynamics of individual transcription time series (throughout more than 1 hour of development) allows us to estimate time-dependent transcriptional parameters from the underlying bursts, rather than obtaining only a single transcriptional parameter value per condition.
Using synthetic data, we verify the capacity of this procedure to reliably recover a wide range of bursting parameters, including the estimation of \(T_{\text{C}}\) (Fig. S5C-D). The conserved nature of \(T_{\text{C}}\) and its value (i.e., \(1.25\pm 0.35\) min as estimated from individual bursts) quantitatively agree with the auto-correlation analysis (\(\tau_{\text{AC}}=1.37\pm 0.31\) min), an orthogonal approach not involving burst calling. It is intriguing to consider the functional consequences of a constant correlation time \(T_{\text{C}}\) and of the relatively small measured value. \(T_{\text{C}}\) not only sets the time scale of the bursting dynamics, linking the mean ON and OFF times, but it has further implications on transcription noise filtering: a small \(T_{\text{C}}\) value minimizes noise as bursts are easily buffered by long mRNA lifetimes. A small \(T_{\text{C}}\) further allows gene transcription to respond more rapidly to input TF changes, by means of facilitating a fast relaxation to a steady state (Fig. S7J-K).
The identified bursting rules point to a surprising predictive power of \(P_{\text{ON}}\) on the mean ON and OFF times (Fig. 5F). This observation is only possible because our system allows for the quantification of bursting parameters across a large dynamic range of transcriptional activity, with \(P_{\text{ON}}\) values ranging from fully off (0) to fully on (1) for most of our examined genes and conditions. This leads us to uncover a strict relationship between \(T_{\text{OFF}}\) and \(T_{\text{ON}}\) (Fig. 6E and Fig. 7A). It is commonly thought that distinct regulatory processes will alter transcriptional activity by predominant regulation of specific bursting parameters (e.g., the mean ON durations of a burst versus the mean OFF intervals in between bursts). Yet, when we perform a _cis_- _or_ _trans_-perturbation, which substantially alters transcriptional activity, the predominantly modulated bursting parameters (\(T_{\text{OFF}}\) versus \(T_{\text{ON}}\) or burst frequency versus burst size) can be predicted by the wild-type and mutant \(P_{\text{ON}}\) regimes, rather than by the type of the perturbation performed (Fig. 6E-G and
S11).
To further examine the generality of these results, we investigated previously published transcription measurements in the early fly embryo. These measurements include both endogenous genes and synthetic reporters, where transcription was altered by varying BMP signal (a dorsoventral morphogen) [28] or core promoter composition [20]. Strikingly, we find that these datasets collapse on our established bursting rules (Fig. 7A). As suggested in these studies, the first dataset shows mainly effects on \(T_{\mathrm{OFF}}\), while the latter principally changes \(T_{\mathrm{ON}}\). Intriguingly, the two independent datasets cluster in disjoint halves of the full spectrum of \(P_{\mathrm{ON}}\) values captured by our measurements. Our analysis thus raises the possibility that the predominantly changed parameter (\(T_{\mathrm{OFF}}\) versus \(T_{\mathrm{ON}}\)) might not be inherent to the examined regulatory manipulation (e.g., input TF concentrations or core promoter elements), but rather a consequence of the limited expression range of these genes.
The striking manifestation of general rules underlying bursting parameters in the _Drosophila_ embryo, as well as the conserved nature of regulatory processes and the transcription machinery across eukaryotes, naturally leads to the question of whether our rules apply more broadly [50]. Recent measurements of an extensively perturbed yeast gene [21] provide an equivalent set of bursting parameters that faithfully adhere to our transcription rules (Fig. 7A). Although the yeast gene initiation rate
Figure 7: **Decoupling between mechanism and response points to unifying rules.** (A) Scatter plot of the transcriptional parameters as a function of \(P_{\mathrm{ON}}\) (color code same as Fig. 5F). Transcriptional parameters from two other _Drosophila_ studies are largely consistent with the uncovered transcription rules (black lines); one study investigates the response to BMP signaling (yellow circles; Hoppe et al.), the other the effect of core promoter manipulations on a transgene (pink circles; Pimmett et al.). Transcriptional parameters resulting from multiple perturbations performed on the yeast _GAL10_ gene all closely follow the rules (orange circles; Brouwer et al.). Besides the initiation rate \(K\), transcriptional parameters inferred from mammalian promoters (cyan circles; Suter et al.) appear mostly compatible with the rules, suggesting that these may apply beyond _Drosophila_. A minimal description of bursts requires three independent parameters, such as the initiation rate \(K\), the switching correlation time \(T_{\mathrm{C}}\), and the ON-probability \(P_{\mathrm{ON}}\). \(P_{\mathrm{ON}}\) is the main regulated parameter controlling the transcription rate \(R\), while \(K\) only accounts for small changes (at most 2-fold), and the correlation time \(T_{\mathrm{C}}\) sets a conserved bursting time scale. Conservation of both \(K\) and \(T_{\mathrm{C}}\) implies that \(T_{\mathrm{OFF}}\) and \(T_{\mathrm{ON}}\), or, alternatively, the burst size \(B\) and burst frequency \(F\), are fully determined by \(P_{\mathrm{ON}}\). Thus, whether changes in \(R\) are mediated by \(T_{\mathrm{OFF}}\)- versus \(T_{\mathrm{ON}}\)-dominated, or \(B\)- versus \(F\)-dominated modulation only depends on \(P_{\mathrm{ON}}\). (B) While \(K\) and \(T_{\mathrm{C}}\) appear mostly conserved, \(P_{\mathrm{ON}}\) emerges as the key regulatory parameter. It is independent of gene identity (different _cis_-architecture) or the combination of input transcription factor concentrations (that vary along the AP axis), validated by _cis_- and _trans_-perturbations and by data from other studies. Such invariance suggests a mechanistic decoupling, where regulation affects solely \(P_{\mathrm{ON}}\), which in turn determines unequivocally all of the underlying bursting dynamics.
\(K\) appears mostly constant across conditions (similar to the fly genes), calibration of its value was only possible under specific hypotheses leading to a lower bound that needs to be further tested (Methods). Earlier yeast gene data have shown a significantly smaller initiation rate [38; 5]. A comprehensive study allowing multiple gene classes across multiple organisms will be necessary to verify whether our rules hold more generally in eukaryotes.
Mammalian genes are often lowly transcribed [6; 29; 51], potentially exploring a different parameter regime. Indeed, while bursting parameters inferred from measurements of a luminescent reporter protein [6] are restricted to very small \(P_{\text{ON}}\) values (0 to 0.05), they appear to be consistent with our established relationships (Fig. 7A). Again, only the initiation rate \(K\) seems to deviate. Possible deviations in \(K\) across species leave open the possibility that its overall level is set by species-specific rates, such as those related to metabolism [52; 53]. Mammalian time-lapse data with a broader \(P_{\text{ON}}\) range will be necessary to make a stronger parameter comparison possible.
Revisiting previously performed genome-wide studies in mammalian systems shows trends that are possibly compatible with our established bursting relationships. Similar to our fly genes (Fig. 7A), one study found that while burst frequency is predominantly modulated for low expressing genes, burst size is tuned for high expressing ones, independent of the reporter control sequences [13]. Another study using single-cell RNA-seq found a functional dependence of burst frequency and burst size on mean expression that seems compatible with our established rules (for \(P_{\text{ON}}<0.5\)) [51]. However, scRNA-seq parameters are significantly influenced by mRNA loss and long mRNA lifetimes, making a direct mapping between the parameters of these vastly different approaches challenging.
The potentially wider applicability of our bursting relationships to other species calls for a new framework underlying the regulatory processes governing transcription. Instead of specific regulatory processes being inherently linked to specific bursting parameters, the tuning of transcription seems to be funneled through the sole control of \(P_{\text{ON}}\), which all regulatory processes act on (Fig. 7B). Future investigations will have to determine the molecular mechanisms that can implement such a funneling at the level of \(P_{\text{ON}}\) control. How do diverse processes tune \(P_{\text{ON}}\)? How is the constancy of the switching correlation time \(T_{\text{C}}\) implemented molecularly? As for the latter, our work suggests that a highly conserved and general mechanism across eukaryotic genes should be at work. The generality of such a mechanism implies independence from the particularities of a given gene locus and thus could rather be implemented by the environment, including structural considerations of nuclear architecture [54; 55; 56] or the molecular assembly of components of the transcription machinery [57; 58].
Despite the complexity of transcriptional dynamics across species, genes, space, developmental time, and perturbations, our quantitative real-time measurements revealed strict bursting rules that set strong constraints on mechanistic models of transcriptional regulation. Our work also provides indications of the generality of these rules across systems, of their functional implications, and of their molecular underpinning. As is the case with other areas in which organizing principles are increasingly emerging [59; 60; 61; 62; 63], these rules offer new ways to think about complex processes and point to conserved mechanisms at their core.
###### Acknowledgements.
We thank members of the Gregor laboratory for discussions and comments on the manuscript; E. Wieschaus for suggestions at various stages of the project. This work was supported in part by the U.S. National Science Foundation, through the Center for the Physics of Biological Function (PHY-1734030), and by National Institutes of Health Grants R01GM097275, U01DA047730, and U01DK127429. M.L. is the recipient of a Human Frontier Science Program fellowship (LT000852/2016-L), EMBO long-term postdoctoral fellowship (ALTF 1401-2015), and the Rothschild postdoctoral fellowship.
**Introduction to dynamical mean-field theory of generic random neural networks**

Wenxuan Zou, Haiping Huang
## Abstract
**Dynamical mean-field theory is a powerful physics tool used to analyze the typical behavior of neural networks, where neurons can be recurrently connected, or multiple layers of neurons can be stacked. However, it is not easy for beginners to access the essence of this tool and the underlying physics. Here, we give a pedagogical introduction of this method in a particular example of generic random neural networks, where neurons are randomly and fully connected by correlated synapses and therefore the network exhibits rich emergent collective dynamics. We also review related past and recent important works applying this tool. In addition, a physically transparent and alternative method, namely the dynamical cavity method, is also introduced to derive exactly the same results. The numerical implementation of solving the integro-differential mean-field equations is also detailed, with an illustration of exploring the fluctuation dissipation theorem.**
###### Contents
* 1 Introduction
* 2 Generic random neural networks
* 3 Dynamical mean-field theory
* 3.1 Generating functional formalism
* 3.2 Dynamical cavity approach
* 4 Numerical and theoretical analysis
* 4.1 Numerical solution of the DMFT equations
* 4.2 Analysis of fixed point solutions
* 4.3 Fluctuation-dissipation theorem
* 5 Conclusion
* A Novikov's theorem
* B Static cavity method for recurrent dynamics
* C Stability analysis for the ReLU transfer function
* D Derivation of fluctuation-dissipation theorem in equilibrium
## 1 Introduction
Stochastic differential equations (SDEs) have a very wide range of applications in physics [1], biology, neuroscience and machine learning (see many examples in understanding the brain [2]). Recently, as worldwide brain projects are being promoted and artificial intelligence drives the fourth industrial revolution, understanding how cognition arises, both in a natural brain and in an artificial algorithm (like ChatGPT), becomes increasingly important. Better understanding leads to precise predictions, which is impossible without a solid mathematical foundation. Stochastic noise inherent in the neural dynamics can either stem from the algorithm details (e.g., in stochastic gradient descent of a deep learning cost function [3], the noise is anisotropic and changes with the training) or from unreliable synaptic transmission and noisy background inputs [4]. Therefore, SDEs provide a standard tool to model the stochastic dynamics common in such complex systems.
In particular, one can write the SDE into a Langevin dynamics equation, and put the dynamics into the seminal Onsager-Machlup formalism [5], which introduces the concept of action in a path-integral framework [6]. However, the form of the Onsager-Machlup action is not amenable to calculating the quenched disorder average as commonly required in studying typical behaviors of random neural networks. To overcome this drawback, a response field is introduced and thus the correlation (spontaneous fluctuation) as well as the response function can be easily derived (see below). This new field-theoretical approach is called the Martin-Siggia-Rose-De Dominicis-Janssen (MSRDJ) formalism [7, 8, 9]. The MSRDJ formalism has been used to analyze recurrent neural networks, e.g., studying the onset of chaos [10, 11, 12], and to analyze deep neural networks in recent years [13, 14, 15]. We will next provide a detailed explanation of this field-theoretical formalism.
The MSRDJ formalism bears concise mathematics, resulting in the mean-field description of the original high dimensional stochastic dynamics. The same equation can also be derived using a physically more transparent method, namely the dynamical cavity method [16, 17, 18]. In essence, a neuron is added into the system, and the associated impacts on other neurons are self-consistently derived using the linear response approximation. The dynamics of this additional neuron bears the same characteristics as the other neurons, thereby being a representative of the original high dimensional dynamics. In addition, the static version of this method can be used to derive the fixed point solution of the dynamics [16, 18]. By numerically solving the integro-differential equations involving the correlation and response functions, one can further probe the fundamental fluctuation-dissipation relation in equilibrium dynamics, which we shall show in the last section of this tutorial. All technical details are given in the Appendices.
## 2 Generic random neural networks
We consider a random neural network composed of \(N\) fully-connected neurons. The state of each neuron at time \(t\) is characterized by the synaptic current \(x_{i}(t)\), \(i=1,\ldots,N\), which obeys the following non-linear dynamical equation,
\[\frac{dx_{i}(t)}{dt}=-x_{i}(t)+g\sum_{j=1}^{N}J_{ij}\phi_{j}(t)+\sigma\xi_{i}(t), \tag{1}\]
where \(g\) is the coupling strength and \(\phi_{j}(t)=\phi\big{(}x_{j}(t)\big{)}\) is the transfer function that transforms current to firing rate. A Gaussian white-noise \(\xi_{i}(t)\) of zero mean and unit variance \(\big{\langle}\xi_{i}(t)\xi_{j}(t^{\prime})\big{\rangle}=\delta_{ij}\delta(t-t ^{\prime})\) is introduced to model the stochastic nature of neural wiring (e.g., in cortical circuits [2]). The parameter \(\sigma\) serves as the noise strength. Each element \(J_{ij}\) of the connection matrix is drawn from a Gaussian distribution with zero mean and variance \(\overline{J_{ij}^{2}}=1/N\). In fact, \(J_{ij}\) may correlate with \(J_{ji}\) for each pair of neurons. This asymmetric correlation is thus characterized as follows,
\[\overline{J_{ij}J_{ji}}=\frac{\eta}{N}, \tag{2}\]
where \(\eta\in[-1,+1]\) describes the degree of asymmetry. In particular, the connections are fully symmetric when \(\eta=1\), uncorrelated when \(\eta=0\), and fully antisymmetric when \(\eta=-1\)[19]. Besides, we set \(J_{ii}=0\) to remove all self-coupling interactions.
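For concreteness, the following minimal Python sketch simulates Eq. (1) directly, drawing each pair \((J_{ij},J_{ji})\) from a bivariate Gaussian that realizes Eq. (2). The transfer function \(\phi=\tanh\) and all parameter values are illustrative choices of ours, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative (hypothetical) parameter values
N, g, eta, sigma = 1000, 0.2, 0.5, 0.1
dt, T = 0.1, 10.0
steps = int(T / dt)

# draw each pair (J_ij, J_ji), i < j, from a bivariate Gaussian with
# variance 1/N and covariance eta/N, as in Eq. (2); the diagonal is zero
cov = np.array([[1.0, eta], [eta, 1.0]]) / N
iu = np.triu_indices(N, k=1)
pairs = rng.multivariate_normal(np.zeros(2), cov, size=iu[0].size)
J = np.zeros((N, N))
J[iu] = pairs[:, 0]
J[iu[1], iu[0]] = pairs[:, 1]

# Euler-Maruyama integration of Eq. (1) with phi = tanh
x = rng.standard_normal(N)
for t in range(steps):
    x = x + dt * (-x + g * J @ np.tanh(x)) + sigma * np.sqrt(dt) * rng.standard_normal(N)
```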
The dynamical mean-field theory (DMFT) equation of this generic model is easy to acquire when the asymmetric correlation is absent i.e., \(\eta=0\)[20, 10]. In this case, when we consider the limit \(N\to\infty\), the input current each neuron receives via the coupling \(g\sum_{j=1}^{N}J_{ij}\phi_{j}\left(t\right)\) converges to a Gaussian field according to the central limit theorem. Therefore, the complex network dynamics can be simplified to an effective single-neuron dynamics,
\[\dot{x}(t)=-x(t)+\gamma(t)+\sigma\xi(t), \tag{3}\]
where \(\gamma(t)\) is the effective Gaussian noise with covariance \(\langle\gamma(t)\gamma(t^{\prime})\rangle=g^{2}C(t,t^{\prime})+\sigma^{2} \delta(t-t^{\prime})\). The auto-correlation of firing rates \(C(t,t^{\prime})\) is self-consistently defined as
\[C(t,t^{\prime})=\langle\phi(t)\phi(t^{\prime})\rangle, \tag{4}\]
thereby closing the DMFT equation. Note that \(\overline{\,\cdot\,}\) (or \(\mathbb{E}[\cdot]\) in the following) and \(\langle\cdot\rangle\) represent the quenched-disorder average and the thermal average (over different noise trajectories), respectively.
However, when the asymmetric correlation between connections is present, the mean-field description can not be obtained directly from the central limit theorem. This is because in the summation of the afferent currents, a non-negligible correlation between \(J_{ij}\) and \(\phi_{j}(t)\) emerges via \(J_{ji}\) when \(\eta\neq 0\), and thus the central limit theorem breaks down. In the following sections, we introduce two powerful physics tools to tackle this challenge and derive the DMFT equation for generic random neural networks of an arbitrary asymmetric correlation level.
## 3 Dynamical mean-field theory
### Generating functional formalism
We first introduce the generating functional formalism and then unfold the derivation following the standard procedure. The main idea of this method is to recast the original high-dimensional
dynamical equation to the path integral formalism. Here, we consider the MSRDJ path integral. The moment-generating functional with a specific action corresponding to the dynamical equation can be written out explicitly, which is helpful to reduce the dynamics to a low-dimensional mean-field description.
First, we add a perturbation \(j_{i}(t)\) to the original dynamical equation [Eq. (1)],
\[\dot{x}_{i}(t)=-x_{i}(t)+g\sum_{j=1}\!J_{ij}\phi_{j}(t)+j_{i}(t)+\sigma\xi_{i}( t),\quad i=1,\dots,N, \tag{5}\]
which will be useful in the following derivation. Then, we discretize the dynamical equations under the Ito convention [21],
\[x_{i}[t]-x_{i}[t-1]=-x_{i}[t-1]h+g\sum_{j=1}\!J_{ij}\phi_{j}[t-1]h+j_{i}[t-1]h+ \sigma\xi_{i}[t], \tag{6}\]
where \(h\) is a time interval between two consecutive time steps, and \([t]\) indicates the discrete time index. The white noise \(\xi_{i}[t]=\int_{t-h}^{t}\xi_{i}(s)\,\mathrm{d}s\) becomes a Wiener process with the following statistics,
\[\begin{split}\left\langle\xi_{i}[t]\xi_{j}[t^{\prime}]\right\rangle&=\int_{t-h}^{t}\int_{t^{\prime}-h}^{t^{\prime}}\left\langle\xi_{i}(s)\xi_{j}(s^{\prime})\right\rangle\mathrm{d}s\,\mathrm{d}s^{\prime}\\ &=\int_{t-h}^{t}\int_{t^{\prime}-h}^{t^{\prime}}\delta_{ij}\delta(s-s^{\prime})\,\mathrm{d}s\,\mathrm{d}s^{\prime}\\ &=\delta_{ij}\delta_{tt^{\prime}}h,\end{split} \tag{7}\]
where \(\delta_{ij}\) is a Kronecker delta function. Because of the Markovian property, we introduce the joint distribution of the currents \(\{\mathbf{x}(t)\}_{t=1}^{T}\) across time and space by Dirac delta functions,
\[\begin{split} P\left(\{\mathbf{x}(t)\}_{t=1}^{T}\right)& =\int\prod_{i,t}p(\xi_{i}[t])\,\mathrm{d}\xi_{i}[t]\,\delta\!\left(x_{i}[t+1] -x_{i}[t]+x_{i}[t]h\right.\\ &\qquad\qquad\left.-g\sum_{j=1}\!J_{ij}\phi_{j}[t]h-j_{i}[t]h- \sigma\xi_{i}[t]\right),\end{split} \tag{8}\]
where the initial current state \(\mathbf{x}[0]\) can be arbitrarily chosen, which does not influence the derivation, and \(T\) denotes the length of the trajectory.
We next represent these delta functions by their Fourier integral as \(\delta(x)=\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\mathrm{d}\tilde{x}e^{- \tilde{x}x}\),
\[\begin{split} P\left(\{\mathbf{x}(t)\}_{t=1}^{T}\right)& =\prod_{i,t}\!\left(\!\int p\left(\xi_{i}[t]\right)\mathrm{d}\xi_{ i}[t]\int_{-i\infty}^{i\infty}\frac{\mathrm{d}\tilde{x}_{i}[t]}{2\pi i} \exp\!\left[-\tilde{x}_{i}[t]\!\left(x_{i}[t+1]-x_{i}[t]+x_{i}[t]h\right.\right. \\ &\qquad\qquad\left.\left.-g\sum_{j=1}\!J_{ij}\phi_{j}[t]h-j_{i}[t ]h-\sigma\xi_{i}[t]\right)\right]\right)\\ &=\prod_{i,t}\!\left(\!\int_{-i\infty}^{i\infty}\frac{\mathrm{d} \tilde{x}_{i}[t]}{2\pi i}\exp\!\left[-\tilde{x}_{i}[t]\!\left(x_{i}[t+1]-x_{i} [t]+x_{i}[t]h\right.\right.\\ &\qquad\left.\left.-g\sum_{j=1}\!J_{ij}\phi_{j}[t]h-j_{i}[t]h- \frac{\sigma^{2}}{2}\tilde{x}_{i}[t]h\right)\right]\right),\end{split} \tag{9}\]
where Eq. (7) is used to derive the last equality. Hence, we can formally define the moment-generating functional of the stochastic dynamics,
\[Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}] =\prod_{i,t}\left(\int_{-\infty}^{\infty}\mathrm{d}x_{i}[t]\exp\left(\tilde{j}_{i}[t]x_{i}[t]h\right)\right)P\left(\{\mathbf{x}(t)\}_{t=1}^{T}\right) \tag{10}\] \[=\prod_{i,t}\left(\int_{-\infty}^{\infty}\mathrm{d}x_{i}[t]\int_{-i\infty}^{i\infty}\frac{\mathrm{d}\tilde{x}_{i}[t]}{2\pi i}\exp\left(\tilde{j}_{i}[t]x_{i}[t]h+j_{i}[t]\tilde{x}_{i}[t]h\right)\right)\] \[\exp\left\{\left[-h\sum_{i,t}\tilde{x}_{i}[t]\left(\frac{x_{i}[t+1]-x_{i}[t]}{h}+x_{i}[t]-g\sum_{j=1}J_{ij}\phi_{j}[t]-\frac{\sigma^{2}}{2}\tilde{x}_{i}[t]\right)\right]\right\},\]
where \(\tilde{\mathrm{j}}\) and \(\mathrm{j}\) are two types of source fields, whose physical meaning will become clear below. The source \(\mathrm{j}\) could be an external perturbation to which the response is measured by the response field \(\tilde{x}\), which allows one to compute the linear response function by taking the correlation with \(x\) (see below). Taking the continuous limits of \(T\to\infty\) and \(h\to 0\) at the same time, we obtain \(h\sum_{t=0}^{T}f(t)\to\int f(t)\,\mathrm{d}t\), and \(\lim_{h\to 0}\frac{x_{i}[t+1]-x_{i}[t]}{h}=\dot{x}_{i}(t)\). We also introduce the notations \(\prod_{i,t}\mathrm{d}x_{i}[t]\stackrel{h\to 0}{\to}\mathcal{D}\mathbf{x}\) and \(\prod_{i,t}\frac{\mathrm{d}\tilde{x}_{i}[t]}{2\pi i}\stackrel{h\to 0}{\to}\mathcal{D}\tilde{\mathbf{x}}\) for simplicity. Under the continuous limit, the moment generating functional reads,
\[Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}]=\int\mathcal{D}\mathbf{x}\,\mathcal{D}\tilde{\mathbf{x}}\,\exp\left(-S[\mathbf{x},\tilde{\mathbf{x}}|\mathrm{J}]+\sum_{i=1}^{N}\int\tilde{j}_{i}(t)x_{i}(t)\,\mathrm{d}t+\sum_{i=1}^{N}\int j_{i}(t)\tilde{x}_{i}(t)\,\mathrm{d}t\right), \tag{11}\]
where the action of the dynamical equation is naturally introduced as,
\[S[\mathbf{x},\tilde{\mathbf{x}}|\mathrm{J}]=\sum_{i=1}^{N}\int\tilde{x}_{i}(t)\left( \dot{x}_{i}(t)+x_{i}(t)-g\sum_{j=1}^{N}J_{ij}\phi_{j}(t)-\frac{\sigma^{2}}{2} \tilde{x}_{i}(t)\right)dt. \tag{12}\]
It is easy to verify that when \(N\to\infty\), the out-of-equilibrium behavior is independent of the realization of the disorder [22]. We thus focus on the typical behavior of the self-averaging dynamical partition function \(Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}]\). This partition function is simpler compared to its equilibrium counterpart, as the zero-source generating functional is identical to one. In the dynamical setting, taking the average of \(Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}]\) over \(P(\mathrm{J})\) is sufficient to get the thermal and disorder averaged two-point functions, such as correlation and response. This is in contrast to the equilibrium spin glass theory where a replica trick is commonly applied to obtain the disorder average of the free energy function [20]. In fact, computing the average \(\mathbb{E}_{\mathrm{J}}Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}]\) reduces to computing \(\mathbb{E}_{\mathrm{J}}\exp\left(-S[\mathbf{x},\tilde{\mathbf{x}}|\mathrm{J}]\right)\). To proceed, we decompose the connection into symmetric and asymmetric parts [19],
\[J_{ij}=J_{ij}^{s}+kJ_{ij}^{a}, \tag{13}\]
where \(J_{ij}^{s}=J_{ji}^{s}\) and \(J_{ij}^{a}=-J_{ji}^{a}\), both of which follow the centered Gaussian distribution with the same variance,
\[\overline{J_{ij}^{s}J_{ij}^{s}}=\overline{J_{ij}^{a}J_{ij}^{a}}=\frac{1}{N} \frac{1}{1+k^{2}}. \tag{14}\]
Under this decomposition, it is easy to derive that,
\[\overline{J_{ij}J_{ij}}=\frac{1}{N},\qquad\overline{J_{ij}J_{ji}}=\frac{1}{N} \frac{1-k^{2}}{1+k^{2}}, \tag{15}\]
which gives \(k^{2}=(1-\eta)/(1+\eta)\).
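As a quick sanity check on this decomposition, the short numpy sketch below verifies Eq. (15) empirically; the network size and \(\eta\) are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta = 2000, 0.3
k = np.sqrt((1 - eta) / (1 + eta))

# symmetric and antisymmetric parts, both with variance 1/(N(1+k^2)) as in Eq. (14)
A = rng.standard_normal((N, N)) / np.sqrt(2 * N * (1 + k ** 2))
B = rng.standard_normal((N, N)) / np.sqrt(2 * N * (1 + k ** 2))
Js, Ja = A + A.T, B - B.T
J = Js + k * Ja                                  # Eq. (13)

iu = np.triu_indices(N, k=1)
print(N * np.mean(J[iu] ** 2))                   # ~ 1,   i.e. var(J_ij) = 1/N
print(N * np.mean(J[iu] * J.T[iu]))              # ~ eta, i.e. cov(J_ij, J_ji) = eta/N
```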
Now, we can deal with the term involving \(J_{ij}\),
\[\begin{split}&\sum_{i\neq j}\tilde{x}_{i}(t)J_{ij}\phi_{j}(t)\\ =&\sum_{i\neq j}\tilde{x}_{i}(t)\Big{[}J^{s}_{ij}+ kJ^{a}_{ij}\Big{]}\phi_{j}(t)\\ =&\sum_{i<j}\left\{J^{s}_{ij}\big{[}\tilde{x}_{i}(t) \phi_{j}(t)+\tilde{x}_{j}(t)\phi_{i}(t)\big{]}+kJ^{a}_{ij}\big{[}\tilde{x}_{i} (t)\phi_{j}(t)-\tilde{x}_{j}(t)\phi_{i}(t)\big{]}\right\}.\end{split} \tag{16}\]
Then, carrying out the average over \(J^{s}_{ij}\) and \(J^{a}_{ij}\) leads to
\[\begin{split}&\mathbb{E}_{J^{s},J^{a}}\exp\left(g\int\mathrm{d}t\sum_{i<j}\left\{J^{s}_{ij}\big{[}\tilde{x}_{i}(t)\phi_{j}(t)+\tilde{x}_{j}(t)\phi_{i}(t)\big{]}+kJ^{a}_{ij}\big{[}\tilde{x}_{i}(t)\phi_{j}(t)-\tilde{x}_{j}(t)\phi_{i}(t)\big{]}\right\}\right)\\ =&\exp\left(\frac{g^{2}}{2N}\sum_{i\neq j}\int\int\left\{\big{[}\tilde{x}_{i}(t)\phi_{j}(t)\tilde{x}_{i}(t^{\prime})\phi_{j}(t^{\prime})\big{]}+\eta\big{[}\tilde{x}_{i}(t)\phi_{j}(t)\tilde{x}_{j}(t^{\prime})\phi_{i}(t^{\prime})\big{]}\right\}\mathrm{d}t\,\mathrm{d}t^{\prime}\right)\\ \approx&\exp\left(\frac{g^{2}}{2N}\int\int\left[\sum_{i}\tilde{x}_{i}(t)\tilde{x}_{i}(t^{\prime})\sum_{j}\phi_{j}(t)\phi_{j}(t^{\prime})+\eta\sum_{i}\tilde{x}_{i}(t)\phi_{i}(t^{\prime})\sum_{j}\phi_{j}(t)\tilde{x}_{j}(t^{\prime})\right]\mathrm{d}t\,\mathrm{d}t^{\prime}\right).\end{split} \tag{17}\]
Note that we have added back the negligible diagonal term (\(i=j\)) to arrive at the last equality. Then, defining \(Z[\mathrm{j},\tilde{\mathrm{j}}]=\mathbb{E}_{\mathrm{J}}Z[\mathrm{j},\tilde{\mathrm{j}}|\mathrm{J}]\), the averaged moment-generating functional is given by
\[\begin{split} Z[\mathrm{j},\tilde{\mathrm{j}}]&= \int\mathcal{D}\mathbf{x}(t)\mathcal{D}\tilde{\mathbf{x}}\exp\left(-S_{0}[\mathbf{x},\tilde{ \mathbf{x}}]+\frac{\sigma^{2}}{2}\tilde{\mathbf{x}}\cdot\tilde{\mathbf{x}}+\tilde{\mathrm{ j}}\cdot\mathbf{x}+\mathrm{j}\cdot\tilde{\mathbf{x}}\\ &\quad+\frac{g^{2}}{2N}\int\int\left[\sum_{i}\tilde{x}_{i}(t) \tilde{x}_{i}(t^{\prime})\sum_{j}\phi_{j}(t)\phi_{j}(t^{\prime})+\eta\sum_{i} \tilde{x}_{i}(t)\phi_{i}(t^{\prime})\sum_{j}\phi_{j}(t)\tilde{x}_{j}(t^{ \prime})\right]\mathrm{d}t\,\mathrm{d}t^{\prime}\right)\!,\end{split} \tag{18}\]
where \(S_{0}[\mathbf{x},\tilde{\mathbf{x}}]=\tilde{\mathbf{x}}\cdot[\dot{\mathbf{x}}+\mathbf{x}]\) is called the free action. \(\mathbf{f}\cdot\mathbf{g}=\sum_{i=1}^{N}\int f_{i}(t)g_{i}(t)\,\mathrm{d}t\) is introduced for compactness. From Eq. (18), we have to introduce two auxiliary overlaps,
\[\begin{split} Q_{1}(t,t^{\prime})&=\frac{g^{2}}{N} \sum_{j}\phi_{j}(t)\phi_{j}(t^{\prime}),\\ Q_{2}(t,t^{\prime})&=\frac{g^{2}\eta}{N}\sum_{j} \phi_{j}(t)\tilde{x}_{j}(t^{\prime}),\end{split} \tag{19}\]
which converges to (scaled) Gaussian fields due to the central limit theorem when \(N\) is sufficiently large. Thus, we can insert these order parameters into Eq. (18) by the Fourier integral
representation of Dirac delta functions,
\[\delta\left(-\frac{N}{g^{2}}Q_{1}(t,t^{\prime})+\sum_{j}\phi_{j}(t) \phi_{j}(t^{\prime})\right) \tag{20}\] \[= \frac{1}{2\pi}\int\mathcal{D}\hat{Q}_{1}(t,t^{\prime})\exp\left[ \int\int\hat{Q}_{1}(t,t^{\prime})\Bigg{(}-\frac{N}{g^{2}}Q_{1}(t,t^{\prime})+ \sum_{j}\phi_{j}(t)\phi_{j}(t^{\prime})\Bigg{)}\mathrm{d}t\,\mathrm{d}t^{\prime }\right],\] \[\delta\left(-\frac{N}{g^{2}}Q_{2}(t,t^{\prime})+\eta\sum_{j}\phi_ {j}(t)\tilde{x}_{j}(t^{\prime})\right)\] \[= \frac{1}{2\pi}\int\mathcal{D}\hat{Q}_{2}(t,t^{\prime})\exp\left[ \int\int\hat{Q}_{2}(t,t^{\prime})\Bigg{(}-\frac{N}{g^{2}}Q_{2}(t,t^{\prime})+ \eta\sum_{j}\phi_{j}(t)\tilde{x}_{j}(t^{\prime})\Bigg{)}\mathrm{d}t\,\mathrm{d }t^{\prime}\right].\]
Finally, we can re-express the averaged moment-generating functional as
\[Z[\mathrm{j},\tilde{\mathrm{j}}]= \int\int\mathcal{D}\mathcal{X}\mathcal{D}\mathcal{Q}\exp\biggl{(} -\frac{N}{g^{2}}\hat{Q}_{1}\cdot Q_{1}-\frac{N}{g^{2}}\hat{Q}_{2}\cdot Q_{2}-S _{0}[\mathbf{x},\tilde{\mathbf{x}}]+\frac{\sigma^{2}}{2}\tilde{\mathbf{x}}\cdot\tilde{\mathbf{ x}}+\tilde{\mathrm{j}}\cdot\mathbf{x}+\mathrm{j}\cdot\tilde{\mathbf{x}} \tag{21}\] \[\quad+\frac{1}{2}\int\int\sum_{j}\tilde{x}_{j}(t)Q_{1}(t,t^{ \prime})\tilde{x}_{j}(t^{\prime})\mathrm{d}t\,\mathrm{d}t^{\prime}+\frac{1}{2 }\int\int\sum_{j}\tilde{x}_{j}(t)Q_{2}(t,t^{\prime})\phi_{j}(t^{\prime}) \mathrm{d}t\,\mathrm{d}t^{\prime}\] \[\quad+\int\int\sum_{j}\phi_{j}(t)\hat{Q}_{1}(t,t^{\prime})\phi_{ j}(t^{\prime})\mathrm{d}t\,\mathrm{d}t^{\prime}+\eta\int\int\sum_{j}\phi_{j}(t) \hat{Q}_{2}(t,t^{\prime})\tilde{x}_{j}(t^{\prime})\mathrm{d}t\,\mathrm{d}t^{ \prime}\Bigg{)},\]
where \(\mathcal{D}\mathbf{X}\equiv\mathcal{D}\mathbf{x}\mathcal{D}\tilde{\mathbf{x}}\), and \(\mathcal{DQ}\equiv\left(\frac{N}{2\pi g^{2}}\right)^{2}\mathcal{DQ}_{1}(t,t^{ \prime})\mathcal{D}\hat{Q}_{1}(t,t^{\prime})\mathcal{DQ}_{2}(t,t^{\prime}) \mathcal{D}\hat{Q}_{2}(t,t^{\prime})\), and we also introduce new notations,
\[\hat{Q}_{1}\cdot Q_{1} =\int\int\hat{Q}_{1}(t,t^{\prime})Q_{1}(t,t^{\prime})\mathrm{d}t \,\mathrm{d}t^{\prime}, \tag{22}\] \[\hat{Q}_{2}\cdot Q_{2} =\int\int\hat{Q}_{2}(t,t^{\prime})Q_{2}(t,t^{\prime})\mathrm{d}t \,\mathrm{d}t^{\prime}.\]
We can now remark that the averaged moment-generating functional is completely factorized over neurons, which implies that the original complex dynamics with \(N\) interacting neurons is captured by a mean-field one-neuron system subject to a correlated Gaussian noise. More compactly, we recast the averaged moment-generating functional as
\[Z[\mathrm{j},\tilde{\mathrm{j}}]= \int\mathcal{DQ}\exp\left(Nf(Q,\hat{Q},x,\tilde{x})\right), \tag{23}\] \[f(Q,\hat{Q},x,\tilde{x})= -\frac{1}{g^{2}}\hat{Q}_{1}\cdot Q_{1}-\frac{1}{g^{2}}\hat{Q}_{2}\cdot Q_{2}+\log\tilde{Z}[j,\tilde{j}],\] \[\tilde{Z}[j,\tilde{j}]= \int\mathcal{D}X\exp\left(\mathcal{L}\left(Q,\hat{Q},x,\tilde{x}\right)\right),\] \[\mathcal{L}\left(Q,\hat{Q},x,\tilde{x}\right)= -S_{0}[x,\tilde{x}]+\frac{\sigma^{2}}{2}\tilde{x}\cdot\tilde{x}+\tilde{j}\cdot x+j\cdot\tilde{x}+\frac{1}{2}\tilde{x}^{T}Q_{1}\tilde{x}+\frac{1}{2}\tilde{x}^{T}Q_{2}\phi+\phi^{T}\hat{Q}_{1}\phi+\eta\phi^{T}\hat{Q}_{2}\tilde{x},\]
where \(\tilde{Z}[j,\tilde{j}]\) is the effective moment generating functional for the one-neuron system, which will become mathematically clear at the end of the derivation. Thus, \(x,\tilde{x},j,\tilde{j},X\) are the mean-field counterparts of their original meanings in the high dimensional space. Accordingly, we have \(f\cdot g=\int f(t)g(t)\,\mathrm{d}t\). In addition, we define the new notation involving \(\{Q,\hat{Q}\}\) in terms of the quadratic form as \(f^{T}Qg=\int\int f(t)Q(t,t^{\prime})g(t^{\prime})\,\mathrm{d}t\,\mathrm{d}t^{\prime}\). In the limit \(N\to\infty\), we estimate asymptotically the averaged dynamical partition function by applying the Laplace method,
\[Z[j,\tilde{j}]=\int\mathcal{DQ}\exp\left(Nf(Q,\hat{Q},x,\tilde{x})\right) \approx\exp\left(Nf(Q^{\star},\hat{Q}^{\star},x,\tilde{x})\right), \tag{24}\]
where \(\{Q^{\star},\hat{Q}^{\star}\}\) maximizes the dynamical action \(f\). We thus have,
\[\begin{split}\frac{\delta f(Q,\hat{Q},x,\tilde{x})}{\delta\hat{Q}_{1}(t,t^{\prime})}=0\to& Q_{1}^{\star}(t,t^{\prime})=g^{2}\big{\langle}\phi(t)\phi(t^{\prime})\big{\rangle}_{\mathcal{L}},\\ \frac{\delta f(Q,\hat{Q},x,\tilde{x})}{\delta Q_{1}(t,t^{\prime})}=0\to&\hat{Q}_{1}^{\star}(t,t^{\prime})=\frac{g^{2}}{2}\big{\langle}\tilde{x}(t)\tilde{x}(t^{\prime})\big{\rangle}_{\mathcal{L}},\\ \frac{\delta f(Q,\hat{Q},x,\tilde{x})}{\delta\hat{Q}_{2}(t,t^{\prime})}=0\to& Q_{2}^{\star}(t,t^{\prime})=g^{2}\eta\left\langle\phi(t)\tilde{x}(t^{\prime})\right\rangle_{\mathcal{L}},\\ \frac{\delta f(Q,\hat{Q},x,\tilde{x})}{\delta Q_{2}(t,t^{\prime})}=0\to&\hat{Q}_{2}^{\star}(t,t^{\prime})=\frac{g^{2}}{2}\big{\langle}\tilde{x}(t)\phi(t^{\prime})\big{\rangle}_{\mathcal{L}},\end{split} \tag{25}\]
where,
\[\left\langle\mathcal{O}\right\rangle_{\mathcal{L}}=\frac{\int\mathcal{O}(X) \exp[\mathcal{L}(X)]\mathcal{D}X}{\tilde{Z}[j,\tilde{j}]}. \tag{26}\]
This average can be seen as the dynamical mean field measure provided by \(\tilde{Z}[j,\tilde{j}]\) in the one-neuron system, in analogy with the replica analysis in equilibrium spin glass theory [20]. In the following text, we will omit the subscript \(\mathcal{L}\) for simplicity.
Now we come to the physical meaning of the dynamical order parameters. First, it is easy to find that \(Q_{1}^{\star}(t,t^{\prime})\) is related to the auto-correlation function, which is
\[C(t,t^{\prime})=\frac{1}{N}\sum_{i}\langle\phi_{i}(t)\phi_{i}(t^{\prime}) \rangle\to\langle\phi(t)\phi(t^{\prime})\rangle, \tag{27}\]
and \(Q_{1}^{\star}(t,t^{\prime})=g^{2}C(t,t^{\prime})\). Second, \(\hat{Q}_{1}^{\star}(t,t^{\prime})\) will always vanish because \(\frac{\delta^{n}}{\delta j(t_{1})\cdots\delta j(t_{n})}Z[\mathrm{j},0]\big{|}_{\mathrm{j}=0}=0\) [see Eq. (10)]. In other words, these response fields do not propagate. Finally, \(Q_{2}^{\star}(t,t^{\prime})\) and \(\hat{Q}_{2}^{\star}(t,t^{\prime})\) bear exactly the same physical meaning, which relates to the response function,
\[R(t,t^{\prime})=\frac{1}{N}\sum_{i}\frac{\delta\langle\phi_{i}(t)\rangle}{ \delta j_{i}(t^{\prime})}\bigg{|}_{j=0}, \tag{28}\]
where the average is taken under the path probability, i.e.,
\[R(t,t^{\prime})=\frac{1}{N}\sum_{i}\frac{\delta\langle\phi_{i}(t)\rangle}{ \delta j_{i}(t^{\prime})}\bigg{|}_{j=0}=\frac{1}{N}\sum_{i}\langle\phi_{i}(t) \tilde{x}_{i}(t^{\prime})\rangle\to\langle\phi(t)\tilde{x}(t^{\prime})\rangle. \tag{29}\]
This relation bears the similarity with the linear response relation (e.g., susceptibility and fluctuation) in equilibrium statistical physics. Thus, \(Q_{2}^{\star}(t,t^{\prime})=g^{2}\eta R(t,t^{\prime})\) and \(\hat{Q}_{2}^{\star}(t,t^{\prime})=\frac{g^{2}}{2}R(t^{\prime},t)\).
Moreover, the response function \(R(t,t^{\prime})\) will vanish once \(t<t^{\prime}\) because of the causality that perturbations in a later time do not affect the present and past states. In addition, the equal time response function \(R(t,t)\) also vanishes under the Ito convention [21]. It is now clear that the term \(Q_{2}\cdot\hat{Q}_{2}\) vanishes because of the causality and the Ito convention (note that \(R(t,t^{\prime})R(t^{\prime},t)=0\)). Finally, we achieve the final form of the moment generating functional,
\[\begin{split} Z[j,\tilde{j}]&=\prod_{i}^{N}\tilde{ Z}[j,\tilde{j}]=\big{(}\tilde{Z}[j,\tilde{j}]\big{)}^{N},\\ \tilde{Z}[j,\tilde{j}]&\propto\int\mathcal{D}X\exp \left(-S_{0}[x,\tilde{x}]+\tilde{j}\cdot x+j\cdot\tilde{x}+\frac{1}{2}\tilde{x }^{T}\Gamma\tilde{x}+\eta g^{2}\ \tilde{x}^{T}R\phi\right)\\ &=\int\mathcal{D}X\exp\left(-S[x,\tilde{x}]+\tilde{j}\cdot x+j \cdot\tilde{x}\right),\end{split} \tag{30}\]
where \(\Gamma(t,t^{\prime})=g^{2}C(t,t^{\prime})+\sigma^{2}\delta(t-t^{\prime})\), and the effective action (decomposed into free and interaction parts) reads,
\[S[x,\tilde{x}]=\tilde{x}\cdot[\dot{x}+x]-\frac{1}{2}\tilde{x}^{T}\Gamma \tilde{x}-\eta g^{2}\ \tilde{x}^{T}R\phi. \tag{31}\]
The first line of Eq. (30) clearly illustrates that the \(N\)-neuron interacting system reduces to \(N\) factorized one-neuron effective systems. Equation (31) further shows that a dynamical mean-field description of the \(N\)-neuron dynamics exists [from Eq. (12)], i.e.,
\[\dot{x}(t)=-x(t)+\eta g^{2}\int_{0}^{t}R(t,t^{\prime})\phi(t^{\prime})\dd t^{ \prime}+\gamma(t), \tag{32}\]
where \(\langle\gamma(t)\gamma(t^{\prime})\rangle=g^{2}C(t,t^{\prime})+\sigma^{2} \delta(t-t^{\prime})\). When \(\eta=0\), Eq. (32) reduces to Eq. (3). It is interesting that the spatially correlated asymmetric connections between neurons in the \(N\)-neuron system are transformed into accumulated interactions of dynamics history in the one-neuron (mean-field description) system. The underlying physics is more transparent in the cavity framework introduced later.
The MSRDJ formalism allows one to derive integro-differential equations involving response and correlation functions. For example, we could compute the dynamical equations of the mean-field correlation function \(\Delta(t,t^{\prime})=\langle x(t)x(t^{\prime})\rangle\) and response function \(\chi(t,t^{\prime})=\left.\frac{\delta\langle x(t)\rangle}{\delta j(t^{\prime})}\right|_{j=0}\) for the currents. First, we consider two identities,
\[\frac{\delta x(t^{\prime})}{\delta\tilde{x}(t)}=0,\qquad\frac{\delta x(t^{ \prime})}{\delta x(t)}=\delta(t-t^{\prime}). \tag{33}\]
Then, we take the path average over the probability defined by Eq. (30),
\[\begin{split}\left\langle\frac{\delta x(t^{\prime})}{\delta \tilde{x}(t)}\right\rangle&=\int\mathcal{D}X\frac{\delta x(t^{ \prime})}{\delta\tilde{x}(t)}\exp\left(-S[x,\tilde{x}]\right)\\ &=\int\mathcal{D}Xx(t^{\prime})\frac{\delta S[x,\tilde{x}]}{ \delta\tilde{x}(t)}\exp\left(-S[x,\tilde{x}]\right)\\ &=\left\langle x(t^{\prime})\bigg{(}\dot{x}(t)+x(t)-\int\Gamma(t,s)\tilde{x}(s)\dd s-\eta g^{2}\int R(t,s)\phi(s)\dd s\right)\right\rangle=0, \end{split} \tag{34}\]
where the integral by parts is used to derive the second equality. This relation will immediately give rise to,
\[\frac{\partial}{\partial t}\Delta(t,t^{\prime})=-\Delta(t,t^{\prime})+g^{2}\int_{ 0}^{t^{\prime}}\chi(t^{\prime},s)C(t,s)ds+\sigma^{2}\chi(t^{\prime},t)+\eta g^{2 }\int_{0}^{t}R(t,s)\big{\langle}x(t^{\prime})\phi(s)\big{\rangle}\,ds. \tag{35}\]
Similarly, the other identity in Eq. (33) leads to,
\[\frac{\partial}{\partial t}\chi(t,t^{\prime})=-\chi(t,t^{\prime})+\delta(t-t^ {\prime})+\eta g^{2}\int_{t^{\prime}}^{t}R(t,s)R(s,t^{\prime})ds. \tag{36}\]
These integro-differential equations are particularly difficult to solve, e.g., no closed form solutions exist except at \(\eta=0\). In general, a perturbative expansion of the non-linear transfer function may be required [12].
### Dynamical cavity approach
In this section, we introduce the dynamical cavity approach, which is more physically transparent (like its static counterpart [20]). The dynamical cavity approach gives exactly the same DMFT equation that is based on the moment generating functional method. Our starting point is still the \(N\)-neuron stochastic dynamics,
\[\dot{x}_{i}(t)=-x_{i}(t)+g\sum_{j=1}\!J_{ij}\phi_{j}(t)+j_{i}(t)+\sigma\xi_{i} (t),\quad i=1,\ldots,N, \tag{37}\]
where the Gaussian noise \(\xi_{i}(t)\) has the variance \(\big{\langle}\xi_{i}(t)\xi_{j}(t^{\prime})\big{\rangle}=\delta_{ij}\delta(t-t ^{\prime})\). Connections \(J_{ij}\) are drawn from the centered Gaussian distribution with the variance \(\frac{1}{N}\) as well as the covariance \(\overline{J_{ij}J_{ji}}=\frac{\eta}{N}\).
First, we add a new neuron into the original system, such that we have a new synaptic current \(x_{0}(t)\) together with the corresponding connections \((J_{0i},J_{i0})\), for \(i=1,\ldots,N\). As a result, all the neurons in the original system will be affected by this new neuron. We regard this impact as a small perturbation in the large network limit. We can thus apply the linear response theory as follows,
\[\begin{split}\tilde{\phi}_{i}(t)&=\phi_{i}(t)+\sum _{k=1}^{N}\int_{0}^{t}\frac{\delta\phi_{i}(t)}{\delta j_{k}(s)}\bigg{|}_{j=0} \,j_{k}(s)\,\mathrm{d}s\\ &=\phi_{i}(t)+\sum_{k=1}^{N}\int_{0}^{t}R_{ik}(t,s)\,[gJ_{k0} \phi_{0}(s)]\,\mathrm{d}s\,,\end{split} \tag{38}\]
where \(R_{ik}(t,s)=\left.\frac{\delta\phi_{i}(t)}{\delta j_{k}(s)}\right|_{j=0}\) defines the linear response function, and the small perturbation is given by \(j_{k}(s)=gJ_{k0}\phi_{0}(s)\). Then, we can write down the dynamical equation of \(x_{0}(t)\),
\[\begin{split}\dot{x}_{0}(t)&=-x_{0}(t)+g\sum_{j\neq 0 }J_{0j}\tilde{\phi}_{j}(t)+j_{0}(t)+\sigma\xi_{0}(t)\\ &=-x_{0}(t)+g\sum_{j=1}^{N}J_{0j}\bigg{[}\phi_{j}(t)+\sum_{k=1}^{ N}\int_{0}^{t}R_{jk}(t,s)[gJ_{k0}\phi_{0}(s)]\,\mathrm{d}s\bigg{]}+j_{0}(t)+ \sigma\xi_{0}(t)\\ &=-x_{0}(t)+g\sum_{j=1}^{N}J_{0j}\phi_{j}(t)+\sigma\xi_{0}(t)+g^ {2}\int_{0}^{t}\sum_{jk}J_{0j}R_{jk}(t,s)J_{k0}\phi_{0}(s)\,\mathrm{d}s+j_{0}( t),\end{split} \tag{39}\]
where the fourth term captures how the asymmetric correlation affects the current state of the new neuron through the response function. The bare field without the effects of synaptic correlation is separated out as follows,
\[\gamma_{0}(t)=g\sum_{j=1}^{N}\!J_{0j}\phi_{j}(t)+\sigma\xi_{0}(t), \tag{40}\]
which becomes the centered Gaussian field whose variance is given by
\[\langle\gamma_{0}(t)\gamma_{0}(t^{\prime})\rangle=g^{2}C(t,t^{\prime})+\sigma^ {2}\delta(t-t^{\prime}), \tag{41}\]
where \(C(t,t^{\prime})=\frac{1}{N}\sum_{j}\phi_{j}(t)\phi_{j}(t^{\prime})\) is the population averaged auto-correlation function.
The computation of the fourth term in Eq. (39) requires us to estimate \(\sum_{jk}J_{0j}R_{jk}(t,s)J_{k0}\), which is subject to the central limit theorem by construction. We first consider the diagonal part \(\sum_{j}J_{0j}R_{jj}(t,s)J_{j0}\), which will converge asymptotically to its mean because of the negligible variance (of the order \(1/\sqrt{N}\)),
\[\mathbb{E}_{\mathrm{J}}\sum_{j}J_{0j}R_{jj}(t,s)J_{j0}=\frac{\eta}{N}\sum_{j}R_{jj}(t,s). \tag{42}\]
Then, we turn to the non-diagonal part \(\sum_{j\neq k}J_{0j}R_{jk}(t,s)J_{k0}\), whose mean is zero due to \(\overline{J_{0j}J_{k0}}=0\), and we should thus consider the fluctuation given by
\[\mathbb{E}_{\mathrm{J}}\sum_{j\neq k}\sum_{j^{\prime}\neq k^{\prime}}J_{0j}J_{0j^{\prime}}J_{k0}J_{k^{\prime}0}R_{jk}(t,s)R_{j^{\prime}k^{\prime}}(t,s)=\frac{1}{N^{2}}\sum_{j\neq k}\Big{[}R_{jk}(t,s)^{2}+\eta^{2}R_{jk}(t,s)R_{kj}(t,s)\Big{]}, \tag{43}\]

which is of the order \(1/N\), because the off-diagonal response functions are themselves of the order \(1/\sqrt{N}\). The non-diagonal part is therefore negligible in the large network limit,

\[\sum_{j\neq k}J_{0j}R_{jk}(t,s)J_{k0}\xrightarrow{N\to\infty}0. \tag{44}\]

Keeping only the diagonal contribution of Eq. (42), the dynamics of the new neuron reduces to

\[\dot{x}_{0}(t)=-x_{0}(t)+\gamma_{0}(t)+\eta g^{2}\int_{0}^{t}R(t,s)\phi_{0}(s)\,\mathrm{d}s+j_{0}(t), \tag{45}\]

where

\[R(t,s)=\frac{1}{N}\sum_{j}R_{jj}(t,s) \tag{46}\]

is the population averaged response function. Because the newly added neuron is statistically equivalent to any other neuron in the network, the population averaged correlation and response functions can be identified, in the limit \(N\to\infty\), with averages over the effective single-neuron process (i.e., with respect to the noise trajectories and the initial conditions). More precisely,
\[\begin{split} C(t,t^{\prime})&=\frac{1}{N}\sum_{j}\phi_ {j}(t)\phi_{j}(t^{\prime})=\langle\phi(t)\phi(t^{\prime})\rangle,\\ R(t,t^{\prime})&=\frac{1}{N}\sum_{i}\frac{\delta\phi _{i}(t)}{\delta j_{i}(t^{\prime})}\bigg{|}_{j=0}=\left\langle\frac{\delta\phi( t)}{\delta j(t^{\prime})}\right\rangle\bigg{|}_{j=0},\end{split} \tag{47}\]
which leads to the same Eq. (32) that has been previously derived from the moment generating functional. Note that the last equality in the above response function is equivalent to the definition in Eq. (28) [23].
## 4 Numerical and theoretical analysis
### Numerical solution of the DMFT equations
In general, the DMFT equation does not have a closed-form solution (e.g., for \(\eta\neq 0\)). Therefore, we have to solve the equation numerically. In fact, solving the self-consistent DMFT equation [Eq. (32)] is more challenging compared to the counterpart of an equilibrium system. The main reason is that the self-consistent iteration involves time-dependent functions (\(C(t,t^{\prime})\) and \(R(t,t^{\prime})\)) rather than scalar variables like overlaps in spin glass theory. Note that the two-point correlation could relax towards the Edwards-Anderson order parameter in mean-field spin glass models [24]. Following the previous work [16], we give the numerical implementation details of solving the DMFT equations for the generic random neural networks below. Codes are available at the Github link [25].
The iteration scheme works in discrete time steps, and we must set a duration \(T\) (ms) as well as a time interval \(\Delta t\) (ms) for discretization. The time is measured in units of milliseconds. At the beginning of the iteration, we initialize the self-consistent function matrices \(C[t,t^{\prime}]\) and \(R[t,t^{\prime}]\), whose dimensions are both \((T/\Delta t)\times(T/\Delta t)\). In each iteration, we carry out the following steps (a minimal code sketch of the full loop is given after the list):
1. Draw \(M\) samples of noise trajectories \(\{\gamma^{a}[t]\}_{a=1}^{M}\) from the multivariate Gaussian distribution \(\mathcal{N}(0,g^{2}C[t,t^{\prime}]+\sigma^{2}/\Delta t)\), where the emergence of \(\Delta t\) is the result of discretization for the Dirac delta function.
2. For these noise trajectories, run \(M\) corresponding current trajectories independently by a direct discretization, \[x^{a}[t+1]=(1-\Delta t)x^{a}[t]+\gamma^{a}[t]\Delta t+g^{2}\eta\Delta t^{2} \sum_{s=0}^{t}R[t,s]\phi^{a}[s].\] (48)
3. Calculate the self-consistent functions, in which the auto-correlation function \(C[t,t^{\prime}]\) is calculated by \(1/M\sum_{a}\phi_{a}[t]\phi_{a}[t^{\prime}]\) and the response function is calculated by integrating the dynamics Eq. (36), \[\chi^{a}[t+1,t^{\prime}]=(1-\Delta t)\chi^{a}[t,t^{\prime}]+\delta_{t,t^{ \prime}}+\eta g^{2}\Delta t^{2}\sum_{s=t^{\prime}}^{t}R_{\text{iter}}[t,s]R^{ a}[s,t^{\prime}],\] (49) where the superscript \(a\) is the trajectory index, and the response function is computed by \(R[t,t^{\prime}]=\phi^{\prime}(x[t])\chi[t,t^{\prime}]\). Here \(R_{\text{iter}}[t,s]\) refers to the response function estimated from
the _last iteration step_. After running the dynamics, we compute the new response function \(\chi[t,t^{\prime}]=1/M\sum_{a}\chi^{a}[t,t^{\prime}]\).
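The following minimal Python sketch follows steps 1–3 literally: sampling noise trajectories, integrating Eq. (48), and updating \(C\) and \(\chi\) via Eq. (49). The transfer function \(\phi=\tanh\), the parameter values, and the damping-free update are illustrative choices of ours and do not reproduce the reference implementation [25].

```python
import numpy as np

rng = np.random.default_rng(0)
g, eta, sigma = 0.2, 0.5, 0.1            # illustrative values
dt, T, M, n_iter = 0.1, 10.0, 200, 10
L = int(T / dt)
phi = np.tanh
dphi = lambda u: 1.0 - np.tanh(u) ** 2

C = np.eye(L)                            # initial guess for C[t, t']
R = np.zeros((L, L))                     # initial guess for R[t, t']

for _ in range(n_iter):
    # step 1: noise trajectories from N(0, g^2 C + sigma^2/dt)
    cov = g ** 2 * C + (sigma ** 2 / dt) * np.eye(L)
    gamma = rng.multivariate_normal(np.zeros(L), cov, size=M)      # shape (M, L)

    # step 2: integrate the effective one-neuron dynamics, Eq. (48)
    x = np.zeros((M, L))
    for t in range(L - 1):
        memory = phi(x[:, : t + 1]) @ R[t, : t + 1]
        x[:, t + 1] = (1 - dt) * x[:, t] + gamma[:, t] * dt \
                      + g ** 2 * eta * dt ** 2 * memory

    # step 3: update C and chi; R[t, t'] = phi'(x[t]) chi[t, t'], Eq. (49)
    ph = phi(x)
    C = ph.T @ ph / M
    chi = np.zeros((M, L, L))
    for t in range(L - 1):
        mem = np.einsum('s,msk->mk', R[t, : t + 1],
                        dphi(x[:, : t + 1, None]) * chi[:, : t + 1, :])
        chi[:, t + 1, :] = (1 - dt) * chi[:, t, :] + eta * g ** 2 * dt ** 2 * mem
        chi[:, t + 1, t] += 1.0          # the Kronecker delta term of Eq. (49)
    R = (dphi(x)[:, :, None] * chi).mean(axis=0)
```

In practice one typically damps the update of \(C\) (mixing the new estimate with the old one) to stabilize convergence; this is omitted above for brevity.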
An alternative way to compute the response function is using Novikov's theorem (see Appendix A [26]). We do not apply this formula in the iteration, as it needs many more trajectories for the convergence.
We compare the observables obtained from the direct simulation of the \(N\)-neuron dynamics [Eq. (1)] and the mean-field solution for the one-neuron dynamics [Eq. (32)] to check the effectiveness of the DMFT. Besides the correlation function and response function, we also compare the mean firing rate \(m(t)=\left\langle\phi(t)\right\rangle\), which is also an important quantity of the current system. The curves show an excellent agreement for each observable [Figure (1)], where the curve obtained from the temporal integration is averaged over 100 independent runs. For the response function, the argument is selected to be the time difference, as \(R(t-t^{\prime})\) is our focus in the steady state where the time-translation invariance holds. Note that \(R(0)=0\) is a direct result of the Ito convention. The inset shows the relative differences between the two types of observables, computed by,
\[\frac{\left\|\hat{m}-\mathbf{m}\right\|_{2}}{\left\|m\right\|_{2}},\qquad\frac{ \left\|\hat{C}-C\right\|_{F}}{\left\|C\right\|_{F}},\qquad\frac{\left\|\hat{R }-R\right\|_{F}}{\left\|R\right\|_{F}}, \tag{50}\]
where hatted variables represent the simulation results, while the non-hatted ones are DMFT results, and the subscripts \(2\) and \(F\) denote the \(\ell_{2}\) norm and the Frobenius norm, respectively. The relative differences decrease as \(N\) grows, which meets our expectation that the DMFT equation predicts the typical behavior of the dynamics under the large network limit.
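For reference, the relative differences of Eq. (50) amount to a few lines of numpy; the function and argument names below are our own illustrative choices.

```python
import numpy as np

def relative_differences(m_hat, m, C_hat, C, R_hat, R):
    """Eq. (50): l2 norm for the rate m, Frobenius norm for C and R."""
    dm = np.linalg.norm(m_hat - m) / np.linalg.norm(m)
    dC = np.linalg.norm(C_hat - C, 'fro') / np.linalg.norm(C, 'fro')
    dR = np.linalg.norm(R_hat - R, 'fro') / np.linalg.norm(R, 'fro')
    return dm, dC, dR
```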
Figure 1: Comparison between observables obtained from direct simulation (color curves) and the iterative mean-field solution (dashed curves). The parameter set \(\{g,\eta,\sigma,\phi,\Delta t,t^{\prime}\}\) is \(\{0.2,0.5,0.1,\tanh,0.1(ms),10(ms)\}\). Inset: relative differences vs \(N\).
### Analysis of fixed point solutions
In this section, we derive the fixed point solution of the DMFT equation. Under a special choice of model parameters, we could even obtain the analytic fixed point solution in the mean-field description. We then focus on the noise-free case of \(\sigma=0\) with the dynamics:
\[\dot{x}(t)=-x(t)+\hat{\gamma}(t)+g^{2}\eta\int_{0}^{t}R(t,t^{\prime})\phi(t^{ \prime})\,\mathrm{d}t^{\prime}+j(t), \tag{51}\]
where \(j(t)\) is a perturbation and \(\hat{\gamma}(t)\) is the noise-free effective field with the variance,
\[\langle\hat{\gamma}(t)\hat{\gamma}(t^{\prime})\rangle=g^{2}C(t,t^{\prime}). \tag{52}\]
We assume that the dynamics converges to a fixed point (\(\dot{x}(t)=0\)). In the steady state, we get \(R(t,t^{\prime})=R(t-t^{\prime})\) to simplify the integral term,
\[\int_{0}^{t}R(t,t^{\prime})\phi(t^{\prime})\,\mathrm{d}t^{\prime}=\int_{0}^{t} R(u)\phi(t-u)\mathrm{d}u\stackrel{{ t\to\infty}}{{\longrightarrow}}\int_{0}^{\infty}R(u)\phi( \infty)\mathrm{d}u=R_{\mathrm{int}}\phi^{*}, \tag{53}\]
where \(R_{\mathrm{int}}=\int_{0}^{\infty}R(u)\,\mathrm{d}u\) is the integrated response function and \(*\) indicates the steady state. Then, we could obtain the fixed point relation,
\[x^{*}=\hat{\gamma}^{*}+w\phi^{*}+j, \tag{54}\]
where \(w=g^{2}\eta R_{\mathrm{int}}\), and
\[\langle(\hat{\gamma}^{*})^{2}\rangle=g^{2}C. \tag{55}\]
However, the fixed point relation is not closed, and we must evaluate \(R_{\mathrm{int}}\) from its definition. In essence, the integrated response function \(R_{\mathrm{int}}\) could be computed as \(\langle\frac{\partial\phi^{*}}{\partial j}\rangle\), setting \(j\) to zero afterwards. An equivalent derivation is given in Appendix B. First, one can generate a series of noise samples \(\hat{\gamma}^{*}\) from \(\mathcal{N}(0,g^{2}C^{\mathrm{iter}})\), where the superscript iter denotes the value from the last iteration step. Second, the new observables from these samples are computed as,
\[C=\left\langle(\phi^{*})^{2}\right\rangle_{\hat{\gamma}^{*}},\qquad R_{ \mathrm{int}}=\left\langle\phi^{*}[\hat{\gamma}^{*}+w\phi^{*}][1+wR_{\mathrm{ int}}^{\mathrm{iter}}]\right\rangle_{\hat{\gamma}^{*}}. \tag{56}\]
These equations can be iteratively solved, requiring a high computational complexity due to the noise sampling and fixed point searching for each noise sample.
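A minimal Monte Carlo sketch of this fixed-point iteration is given below. Instead of the update rule for \(R_{\mathrm{int}}\) in Eq. (56), it uses the equivalent definition \(R_{\mathrm{int}}=\langle\partial\phi^{*}/\partial j\rangle\) stated above, evaluated by implicit differentiation of Eq. (54), \(\partial\phi^{*}/\partial j=\phi^{\prime}(x^{*})/(1-w\phi^{\prime}(x^{*}))\); the transfer function, the parameter values and the plain inner fixed-point loop are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
g, eta = 0.2, 0.5                        # illustrative values
phi = np.tanh
dphi = lambda u: 1.0 - np.tanh(u) ** 2
M, n_iter = 100_000, 50

C, R_int = 0.1, 0.0                      # small initial C, as discussed in the text
for _ in range(n_iter):
    w = g ** 2 * eta * R_int
    gam = np.sqrt(g ** 2 * C) * rng.standard_normal(M)   # samples of the static field
    # solve x* = gamma* + w phi(x*) per sample (converges here since |w phi'| < 1)
    x = gam.copy()
    for _ in range(100):
        x = gam + w * phi(x)
    C = np.mean(phi(x) ** 2)                             # first relation of Eq. (56)
    R_int = np.mean(dphi(x) / (1.0 - w * dphi(x)))       # R_int = <d phi*/d j>

print(C, R_int)   # for tanh and small g this flows to the trivial fixed point m = C = 0
```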
To achieve an analytic fixed point solution, we choose the ReLU function \(\phi(x)=x\Theta(x)\), which is commonly used in machine learning and theoretical neuroscience studies. We could thus recast Eq. (54) to the following form,
\[\phi^{*}=\phi(\hat{\gamma}^{*}+w\phi^{*}), \tag{57}\]
where \(j\) has been set to zero. With the help of the ReLU function, we can write \(\phi^{*}\) as a function of \(\hat{\gamma}^{*}\),
\[\phi^{*}=\frac{\hat{\gamma}^{*}\Theta(\hat{\gamma}^{*})}{1-w}\equiv\psi(\hat{ \gamma}^{*}), \tag{58}\]
and the response function becomes,
\[R_{\mathrm{int}}=\left\langle\frac{\delta\phi^{*}}{\delta j}\right\rangle= \left\langle\frac{\delta\phi^{*}}{\delta\hat{\gamma}^{*}}\right\rangle=\left \langle\frac{\Theta(\hat{\gamma}^{*})}{1-w}\right\rangle. \tag{59}\]
Therefore, we can derive the self-consistent equations of \(C\), \(R_{\rm int}\) as well as \(m\),
\[C =\langle(\phi^{*})^{2}\rangle=\int\left(\frac{\hat{\gamma}^{*}\Theta(\hat{\gamma}^{*})}{1-w}\right)^{2}p(\hat{\gamma}^{*})\,\mathrm{d}\hat{\gamma}^{*}=\frac{g^{2}C}{2(1-w)^{2}},\] \[R_{\rm int} =\left\langle\frac{\Theta(\hat{\gamma}^{*})}{1-w}\right\rangle=\int\frac{\Theta(\hat{\gamma}^{*})}{1-w}p(\hat{\gamma}^{*})\,\mathrm{d}\hat{\gamma}^{*}=\frac{1}{2(1-w)}, \tag{60}\] \[m =\langle\phi^{*}\rangle=\int\frac{\hat{\gamma}^{*}\Theta(\hat{\gamma}^{*})}{1-w}p(\hat{\gamma}^{*})\,\mathrm{d}\hat{\gamma}^{*}=\frac{\sqrt{g^{2}C}}{\sqrt{2\pi}\,(1-w)},\]
where the integral range covers the entire real value region, and \(w=g^{2}\eta R_{\rm int}\). In the following, we omit the subscript of \(R_{\rm int}\). These relations will give the analytic fixed point (observables),
\[m=0,\qquad C=0,\qquad R=\frac{1-\sqrt{1-2g^{2}\eta}}{2g^{2}\eta}. \tag{61}\]
Note that we discard the other root for \(R\) because of the divergence in the limit \(\eta\to 0\) (keeping \(g\) finite). Equation (61) implies that \(g^{2}/(2(1-w)^{2})\neq 1\) and \(1-2g^{2}\eta>0\) (see Appendix C).
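The closed form in Eq. (61) is easy to check numerically by iterating the second relation of Eq. (60); the parameter values below are illustrative choices of ours.

```python
import numpy as np

g, eta = 0.2, 0.5
assert 1 - 2 * g ** 2 * eta > 0          # condition stated below Eq. (61)

R = 0.0
for _ in range(100):                     # iterate R = 1 / (2 (1 - g^2 eta R)), Eq. (60)
    R = 1.0 / (2.0 * (1.0 - g ** 2 * eta * R))

R_exact = (1.0 - np.sqrt(1.0 - 2 * g ** 2 * eta)) / (2 * g ** 2 * eta)
print(R, R_exact)                        # the two values agree to machine precision
```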
We compare the fixed point solution obtained directly from Eq. (51) to the analytic fixed point given by Eqs. (60, 56). The fixed point iteration becomes difficult when \(\phi=\tanh\) in the current model. This fixed point is a well-known result of \(m=C=0\), which is hard to achieve numerically, because numerical errors always make it impossible to get a perfect zero value. We must set a very small value for the initialization of \(C\), or we can just set \(C\) to zero and then get \(R_{\rm int}\). In spite of this numerical error, we observe a perfect match (Figure 2) as the iteration step of the DMFT equation [Eq. (51)] increases.
### Fluctuation-dissipation theorem
The fluctuation-dissipation theorem (FDT) relates the linear response function to the correlation function in equilibrium, which establishes a model independent relationship connecting the statistics of spontaneous fluctuation to the response to perturbations [22]. A static counterpart is the
Figure 2: Convergence of observables to the analytic fixed point solutions during iteration of the DMFT equation. The parameter set \(\{g,\eta,\sigma,\Delta t\}\) is \(\{0.2,0.5,0.1,0.1(ms)\}\). (a) Transfer function \(\phi=\tanh\). (b) Transfer function \(\phi={\rm ReLU}\).
linear response theory that relates the fluctuation and the susceptibility. The FDT allows one to predict the mean response to external perturbations without applying any perturbation, by instead analyzing the time-dependent correlations. The FDT holds in particular for a stochastic system subject to conservative forces whose dynamics reaches a steady state [19, 22]. We discuss the relevance of the FDT in the context of random neural networks in this subsection.
As we know, dynamical systems tend to be more difficult to study than equilibrium systems, because we have no prior knowledge of the steady state (if any) in a general context. In the simplest case, we consider a Langevin dynamics,
\[\lambda\dot{x}_{i}(t)=-\frac{\partial\mathcal{H}(\mathbf{x})}{\partial x_{i}(t)}+ \eta_{i}(t), \tag{62}\]
where \(\lambda\) is a friction coefficient, \(\eta_{i}(t)\) is a Gaussian white noise, and \(\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2T\lambda\delta_{ij}\delta(t-t^{\prime})\). The temperature relates the noise strength to the friction coefficient in the steady state. Here, we assume that the dynamics could be interpreted as moving particles in a potential \(\mathcal{H}(\mathbf{x})\) (or gradient dynamics). This Langevin dynamics, in the long time limit, is able to reach the thermal equilibrium that is captured by the Gibbs-Boltzmann distribution,
\[P(\mathbf{x})\sim\exp\left(-\frac{\mathcal{H}(\mathbf{x})}{T}\right), \tag{63}\]
where the Hamiltonian is exactly the potential function that drives the dynamics through Eq. (62). This precise probability measure can be derived from the Fokker-Planck equation by setting the probability current to zero [22]. We have assumed \(k_{B}=1\) without loss of generality.
Next, we consider a linear and fully-symmetric network whose dynamics is governed by
\[\frac{dx_{i}(t)}{dt}=-x_{i}(t)+g\sum_{j=1}^{N}J_{ij}x_{j}(t)+\sigma\xi_{i}(t), \tag{64}\]
where \(J_{ij}=J_{ji}\). By comparing this equation with the Langevin dynamics Eq. (62), we can directly write down the Hamiltonian \(\mathcal{H}(\mathbf{x})=\frac{1}{2}\sum_{i}x_{i}^{2}-\frac{1}{2}g\sum_{i\neq j}J_{ij}x_{i}x_{j}\), and the temperature is determined by \(T=\sigma^{2}/2\). We remark that non-gradient dynamics (e.g., \(\eta\neq 1\)) may have a non-equilibrium steady state for which the FDT breaks down. In this simple gradient dynamics, the equilibrium can be reached and the FDT holds as follows,
\[\chi(t,t^{\prime})=-\frac{1}{T}\partial_{t}\Delta(t,t^{\prime})\Theta(t-t^{ \prime}), \tag{65}\]
where the instantaneous response function \(\chi(t,t^{\prime})=\frac{1}{N}\sum_{i}\frac{\delta\langle x_{i}(t)\rangle_{\xi}}{\delta j_{i}(t^{\prime})}\) and the time-dependent fluctuation \(\Delta(t,t^{\prime})=\frac{1}{N}\sum_{i}\langle x_{i}(t)x_{i}(t^{\prime})\rangle_{\xi}\). These functions also bear the time-translation invariance due to the steady state condition. In addition, we can prove that the FDT is valid in the dynamical system of Eq. (64), and we leave the proof to Appendix D.
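For the linear symmetric network one can also check the Gibbs–Boltzmann form directly: the stationary covariance of Eq. (64) obeys a Lyapunov equation and should equal \(T(\mathbb{1}-gJ)^{-1}\), the covariance predicted by Eq. (63) for the quadratic Hamiltonian \(\mathcal{H}(\mathbf{x})=\frac{1}{2}\mathbf{x}^{T}(\mathbb{1}-gJ)\mathbf{x}\). A small numpy/scipy sketch follows; the sizes and couplings are illustrative, and \(\mathbb{1}-gJ\) is assumed positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
N, g, sigma = 200, 0.2, 1.0
T = sigma ** 2 / 2

A = rng.standard_normal((N, N)) / np.sqrt(2 * N)
J = A + A.T                                  # symmetric couplings, var(J_ij) = 1/N
np.fill_diagonal(J, 0.0)

M = np.eye(N) - g * J                        # Eq. (64) reads dx/dt = -M x + sigma xi
# stationary covariance S from the Lyapunov equation  M S + S M^T = sigma^2 I
S = solve_continuous_lyapunov(M, sigma ** 2 * np.eye(N))
S_gibbs = T * np.linalg.inv(M)               # Gibbs prediction from Eq. (63)
print(np.max(np.abs(S - S_gibbs)))           # agrees up to numerical error
```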
An experimentally measurable quantity is the integrated response function calculated by
\[\chi_{\rm int}(t,t^{\prime})=\int_{t^{\prime}}^{t}\mathrm{d}s\;\chi(s,t^{ \prime}). \tag{66}\]
Then we rescale the integrated response by the equal time correlation function, and get
\[\hat{\chi}_{\rm int}(t,t^{\prime})=\chi_{\rm int}(t,t^{\prime})/\Delta(t^{ \prime},t^{\prime}),\qquad\hat{\Delta}(t,t^{\prime})=\Delta(t,t^{\prime})/ \Delta(t^{\prime},t^{\prime}). \tag{67}\]
Thus, when \(t\geq t^{\prime}\), we have the relation as,
\[\hat{\chi}_{\text{int}}(t,t^{\prime})=\frac{1}{T}\big{(}1-\hat{\Delta}(t,t^{ \prime})\big{)}. \tag{68}\]
Equation (68) establishes an easy way to measure the temperature, determined by the slope of the parametric plot of \(\hat{\chi}_{\text{int}}(t+t_{w},t_{w})\) versus \(\hat{\Delta}(t+t_{w},t_{w})\), where \(t_{w}\) is a waiting time for reaching the steady state. This temperature is called the effective temperature [24], which may be constant or change with the time difference. If the Gibbs-Boltzmann measure exists, the effective temperature coincides with the thermodynamic temperature. However, these two temperatures are not equal in general. Even in aging systems where the decay of the correlation and response functions depends on the waiting time (how long the system is prepared), the effective temperature may be different for different ranges of the correlation function (displaying multiple relaxation time scales as in mean-field glass models [24]).
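Given discretized \(\Delta[t,t^{\prime}]\) and \(\chi[t,t^{\prime}]\) (for instance from the solver sketched in Sec. 4.1), the effective temperature can be extracted as follows; the helper below is our own illustrative choice, assuming a uniform time grid and a waiting time long enough to reach the steady state.

```python
import numpy as np

def effective_temperature(Delta, chi, dt, t_wait):
    """Slope of hat{chi}_int versus 1 - hat{Delta}, Eqs. (66)-(68)."""
    tw = int(t_wait / dt)
    chi_int = np.cumsum(chi[tw:, tw]) * dt            # Eq. (66), integrated response
    hat_chi = chi_int / Delta[tw, tw]                 # Eq. (67)
    hat_Delta = Delta[tw:, tw] / Delta[tw, tw]
    slope = np.polyfit(1.0 - hat_Delta, hat_chi, 1)[0]
    return 1.0 / slope                                # Eq. (68): slope = 1/T_eff
```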
We verify the fluctuation-dissipation theorem in the rate model via solving the DMFT equation and provide insights on the long time behavior. We focus on the influence of the nonlinear function and asymmetric connections on the FDT. The results are summarized in Figure (3). For the case where \(\eta=1\) and \(\phi\) is linear, the FDT must be valid (see Appendix D); the slope obtained by the linear fitting indicates a temperature closer to the ground truth compared to other non-gradient dynamics. The slight deviation is caused by the numerical errors from simulations. Interestingly, the nonlinear function and the other asymmetry correlation levels do not yield a strongly-violated FDT, because a linear relation with a larger effective temperature is still observed. This suggests that the out-of-equilibrium steady state with unknown probability measure may be approximated by an equilibrium FDT with a larger effective temperature, which offers an interesting perspective to bridge the non-gradient dynamics (commonly observed in recurrent neural networks) to an
Figure 3: Effective temperature of the system given different asymmetry correlation levels and nonlinearities. The waiting time \(t^{\prime}\) is fixed at 15s and the color of points becomes lighter as \(t\) increases. The dashed line indicates the FDT for the linear system with symmetric connections \(\eta=1\), whose thermodynamic temperature is \(T=\sigma^{2}/2=0.5\) (indicated by a black circle in the inset). (a) Comparison among different asymmetry correlation levels in the linear system. For \(\eta=[-1,0,1]\), the effective temperatures obtained by a linear fitting are \(T_{\text{eff}}=[0.551,0.538,0.524]\), respectively (inset). (b) Comparison among different nonlinear functions when \(\eta=1\). For \(\phi\) selected to be ReLU, Tanh and linear, the effective temperatures obtained by a linear fitting are \(T_{\text{eff}}=[0.527,0.526,0.524]\), respectively (inset).
effective thermodynamic behavior.
## 5 Conclusion
In this lecture note, we briefly introduce the path integral framework, from which the dynamical mean-field theory of generic stochastic dynamics in high dimensional systems is derived. We also introduce a complementary cavity method to derive exactly the same results. Considering the long time limit of the dynamics, we analyze the fixed point solution of the dynamics by a direct deduction from the DMFT equation [16] and by a static cavity analysis [18]. The FDT is also discussed in the context of generic random neural networks. Based on these theoretical descriptions, it is interesting to explore the fundamental relationship between spontaneous fluctuations and response functions to external perturbations, especially in the gradient dynamics commonly observed in deep learning [3, 27]. The fluctuation dissipation relation is also studied in a recent work for spiking neural networks [28], and the path integral framework can also find applications in revealing inner workings in recurrent network models of memory and decision making [29, 30]. We hope this tutorial will support cutting-edge research on learning in neural networks, inspiring more fundamental physics laws governing high dimensional complex neural dynamics and novel algorithmic designs.
## Acknowledgements
We thank all PMI members for inspiring discussions of the dynamical mean-field theory and path integral formalism.
Funding information: This research was supported by the National Natural Science Foundation of China (Grant No. 12122515), the Guangdong Provincial Key Laboratory of Magneto-electric Physics and Devices (No. 2022B1212010008), and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515040023).
## Appendix A Novikov's theorem
Novikov's theorem provides a useful identity for estimating the dynamical response function. In this section, we give a derivation for both the original rate model [Eq. (1)] and the DMFT equation [Eq. (32)]. We consider a general form of dynamical equations,
\[\dot{x}_{i}(t)=F_{i}(\mathbf{x})+c\Xi_{i}(t)+j_{i}(t),\qquad i=1,2,\ldots,N\,, \tag{69}\]
where \(F_{i}\) is a general functional of \(\mathbf{x}\) and \(c\) is a constant. Note that Novikov's theorem is valid only for Gaussian noise. We thus consider a general Gaussian noise with covariance structure \(\langle\Xi_{i}(t)\Xi_{j}(t^{\prime})\rangle=D_{ij}(t,t^{\prime})\). For the above dynamics, the response function is defined by
\[R_{ij}(t,t^{\prime})=\left.\frac{\delta\langle\phi_{i}(t)\rangle}{\delta j_{j} (t^{\prime})}\right|_{j=0}. \tag{70}\]
The path average \(\langle\mathcal{O}\rangle\) can be defined by \(\int\int\mathcal{D}\mathbf{x}\mathcal{D}\mathbf{\Xi}\,\mathcal{O}\,P(\mathbf{x}|\mathbf{\Xi})P(\mathbf{\Xi})\), where
\[\begin{split} P(\mathbf{x}|\mathbf{\Xi})&=\prod_{i,t}\delta(\dot{x}_{i}(t)-F_{i}(\mathbf{x})-c\Xi_{i}(t)-j_{i}(t)),\\ P(\mathbf{\Xi})&=\frac{1}{Z_{\mathbf{\Xi}}}\exp\left(-\frac{1}{2}\sum_{i,j}\int\int\mathrm{d}t\,\mathrm{d}t^{\prime}\,\Xi_{i}(t)D_{ij}^{-1}(t,t^{\prime})\Xi_{j}(t^{\prime})\right),\end{split} \tag{71}\]
where \(Z_{\mathbf{\Xi}}\) is a normalization constant. With this definition, the response function could therefore be calculated as,
\[\begin{split} R_{ij}(t,t^{\prime})&=\frac{\delta}{\delta j_{j}(t^{\prime})}\int\int\mathcal{D}\mathbf{x}\mathcal{D}\mathbf{\Xi}\,P(\mathbf{\Xi})P(\mathbf{x}|\mathbf{\Xi})\,\phi_{i}(t)\\ &=\int\int\mathcal{D}\mathbf{x}\mathcal{D}\mathbf{\Xi}\,\phi_{i}(t)P(\mathbf{\Xi})\frac{\delta}{c\,\delta\Xi_{j}(t^{\prime})}P(\mathbf{x}|\mathbf{\Xi})\\ &=-\int\int\mathcal{D}\mathbf{x}\mathcal{D}\mathbf{\Xi}\,\phi_{i}(t)P(\mathbf{x}|\mathbf{\Xi})\frac{\delta}{c\,\delta\Xi_{j}(t^{\prime})}P(\mathbf{\Xi})\\ &=\frac{1}{c}\int\int\mathcal{D}\mathbf{x}\mathcal{D}\mathbf{\Xi}\,\phi_{i}(t)P(\mathbf{x}|\mathbf{\Xi})\left(\int\sum_{k}\mathrm{d}s\,D_{jk}^{-1}\left(t^{\prime},s\right)\Xi_{k}(s)\right)P(\mathbf{\Xi})\\ &=\frac{1}{c}\left\langle\phi_{i}(t)\left(\int\sum_{k}\mathrm{d}s\,D_{jk}^{-1}\left(t^{\prime},s\right)\Xi_{k}(s)\right)\right\rangle,\end{split} \tag{72}\]
where we have used the fact that \(P(\mathbf{x}|\mathbf{\Xi})\) depends on \(\Xi_{j}(t^{\prime})\) and \(j_{j}(t^{\prime})\) only through the combination \(c\Xi_{j}(t^{\prime})+j_{j}(t^{\prime})\), and we have applied integration by parts in the third equality. For the original \(N\)-neuron model [Eq.(1)], \(c=\sigma\), \(\Xi_{i}(t)=\xi_{i}(t)\) and \(D_{ij}(t,t^{\prime})=\delta_{ij}\delta(t-t^{\prime})\). We thus have,
\[R_{ij}(t,t^{\prime})=\frac{1}{\sigma}\left\langle\phi_{i}(t)\left(\int\sum_{k} \mathrm{d}s\,\delta_{jk}^{-1}\delta^{-1}(t^{\prime}-s)\xi_{k}(s)\right)\right\rangle =\frac{1}{\sigma}\left\langle\phi_{i}(t)\xi_{j}(t^{\prime})\right\rangle, \tag{73}\]
where we have used the identity \(\int\delta^{-1}(t-s)\delta(s-t^{\prime})\mathrm{d}s=\delta(t-t^{\prime})\). For the DMFT equation [Eq.(32)] of one-neuron system, \(c=1\), \(\Xi(t)=\gamma(t)\) and \(D(t,t^{\prime})=\Gamma(t,t^{\prime})=g^{2}C(t,t^{\prime})+\sigma^{2}\delta(t -t^{\prime})\). Therefore, the response function becomes,
\[R(t,t^{\prime})=\left\langle\phi(t)\int\mathrm{d}s\,\Gamma^{-1}(t^{\prime},s) \gamma(s)\right\rangle. \tag{74}\]
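As a numerical illustration (a sketch under simplifying assumptions, not part of the derivation), the Novikov estimator of Eq. (73) can be checked in the simplest setting of a single linear unit \(\dot{x}=-x+\sigma\xi\), whose exact response is \(R(t,t^{\prime})=\Theta(t-t^{\prime})\mathrm{e}^{-(t-t^{\prime})}\); all numerical parameters below are arbitrary.

```python
# Hypothetical check of the Novikov estimator (Eq. (73)) for a single linear
# unit dx/dt = -x + sigma*xi, whose exact response is
# R(t, t') = Theta(t - t') * exp(-(t - t')).
import numpy as np

rng = np.random.default_rng(0)
dt, T_total, sigma, trials = 0.01, 5.0, 0.5, 10000
steps = int(T_total / dt)
k = steps // 2                                             # read-out time t'

xi = rng.normal(0.0, 1.0 / np.sqrt(dt), (trials, steps))   # discretized white noise
x = np.zeros((trials, steps))
for n in range(steps - 1):                                 # Euler-Maruyama integration
    x[:, n + 1] = x[:, n] + dt * (-x[:, n] + sigma * xi[:, n])

R_novikov = (x * xi[:, [k]]).mean(axis=0) / sigma          # Eq. (73) with phi(x) = x
t = np.arange(steps) * dt
R_exact = np.where(t > t[k], np.exp(-(t - t[k])), 0.0)
print(np.abs(R_novikov[k + 1:] - R_exact[k + 1:]).max())   # small, up to Monte-Carlo noise
```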
## Appendix B Static cavity method for recurrent dynamics
In this section, we introduce the static cavity method [18] to derive the self-consistent equations for the fixed point of dynamics. We consider the rate model with the ReLU transfer function \(\phi(x)=x\Theta(x)\). The procedure starts from the fixed-point condition of the noise-free rate model [Eq. (1)], which is
\[x_{i}=g\sum_{j=1}^{N}J_{ij}v_{j}+j_{i}, \tag{75}\]
where we set \(v_{j}=\phi(x_{j})\) for simplicity. Note that asymmetric connections induce correlations between \(J_{ij}\) and \(v_{j}\), so the central limit theorem cannot be applied directly to the summation. To overcome this obstacle, we remove the contribution of \(J_{ji}\) and treat this deletion as a small perturbation. The sum in Eq. (75) can therefore be separated into two parts as,
\[x_{i}=g\sum_{j=1}^{N}J_{ij}v_{j\to i}+g\sum_{j=1}^{N}J_{ij}\delta v_{j}+j_{i}, \tag{76}\]
where \(v_{j\to i}\) denotes the firing rate of neuron \(j\) in the absence of \(J_{ji}\), and \(\delta v_{j}\) is the perturbation caused by the presence of \(J_{ji}\). According to the central limit theorem, the first term on the right hand side can now be treated as a Gaussian field, which we denote as \(\tilde{\gamma}_{i}\) and compute the variance,
\[\langle(\tilde{\gamma}_{i})^{2}\rangle=g^{2}\tilde{C}, \tag{77}\]
where \(\tilde{\cdot}\) indicates the cavity quantity. Note that, \(\tilde{C}=\langle v_{j\to i}^{2}\rangle\) is the self-consistent cavity variance function. As for the second term, we compute the perturbation by a linear response approximation,
\[\delta v_{j}=\sum_{k=1}^{N}R_{jk}\eta_{k}, \tag{78}\]
where \(R_{jk}=\frac{\delta v_{j}}{\delta j_{k}}\) is the linear response function. The small perturbation \(\eta_{k}\) here is the contribution from neuron \(i\) to neuron \(k\) through \(J_{ki}\), which is exactly \(gJ_{ki}v_{i}\). Therefore, the second term on the r.h.s. of Eq. (76) becomes,
\[\begin{split} g\sum_{j=1}^{N}J_{ij}\delta v_{j}&=g\sum_{j=1}^{N}\sum_{k=1}^{N}J_{ij}R_{jk}\,gJ_{ki}v_{i}\\ &\approx g^{2}\eta Rv_{i},\end{split} \tag{79}\]
where \(R=\frac{1}{N}\sum_{j}R_{jj}\). Note that, the approximation in the last equality is exactly the same as what we did in the dynamical cavity approach [see Eqs.(42,43)].
Finally, we recast the fixed point equation (\(j_{i}=0\)) as
\[x_{i}=\tilde{\gamma}_{i}+wv_{i}, \tag{80}\]
where \(w=g^{2}\eta R\) and \(\langle\tilde{\gamma}_{i}^{2}\rangle=g^{2}\tilde{C}\). The above equation can be transformed to \(v_{i}=\phi(\tilde{\gamma}_{i}+wv_{i})\) and then written as a function of \(\tilde{\gamma}_{i}\) as,
\[v_{i}=\frac{\tilde{\gamma}_{i}\Theta(\tilde{\gamma}_{i})}{1-w}\equiv\psi( \tilde{\gamma}_{i}). \tag{81}\]
We then compute the cavity variance function and the linear response function as follows,
\[\begin{split}\tilde{C}&=\langle v_{i}^{2}\rangle= \int\left(\frac{\tilde{\gamma}_{i}\Theta(\tilde{\gamma}_{i})}{1-w}\right)^{2} p(\tilde{\gamma}_{i})d\tilde{\gamma}_{i}=\frac{g^{2}\tilde{C}}{2(1-w)^{2}},\\ R&=\left(\frac{\delta v_{i}}{\delta\tilde{\gamma}_ {i}}\right)=\int\frac{\Theta(\tilde{\gamma}_{i})}{1-w}p(\tilde{\gamma}_{i})d \tilde{\gamma}_{i}=\frac{1}{2(1-w)}.\end{split} \tag{82}\]
We remark that, in the limit \(N\rightarrow\infty\), the cavity variance \(\tilde{C}\) can be replaced by the full variance function \(C=\langle v_{i}^{2}\rangle\). The above results are exactly the same as those in Eq. (60).
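For illustration, the self-consistent relations of Eq. (82), combined with the relation \(w=g^{2}\eta R\) used in the next appendix, can be solved by a damped fixed-point iteration and compared with the closed form \(w=(1-\sqrt{1-2g^{2}\eta})/2\); the sketch below uses arbitrary example values of \(g\) and \(\eta\).

```python
# Minimal sketch: solve the self-consistent cavity relations of Eq. (82)
# by damped fixed-point iteration, using w = g^2 * eta * R, and compare with
# the closed form w = (1 - sqrt(1 - 2 g^2 eta)) / 2.
import numpy as np

def cavity_fixed_point(g, eta, n_iter=1000, damping=0.5):
    w = 0.0
    for _ in range(n_iter):
        R = 1.0 / (2.0 * (1.0 - w))        # linear response, Eq. (82)
        w_new = g**2 * eta * R             # Onsager-like feedback term
        w = (1 - damping) * w + damping * w_new
    return w, R

g, eta = 0.8, 0.5                          # arbitrary example values
w, R = cavity_fixed_point(g, eta)
print(w, (1.0 - np.sqrt(1.0 - 2.0 * g**2 * eta)) / 2.0)   # the two should agree
```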
## Appendix C Stability analysis for the ReLU transfer function
The stability of the trivial fixed point can be linked to the connectivity spectrum. To be more precise, we take the rate model whose transfer function is \(\phi=\tanh\) as an example. We can first linearize the neural dynamics Eq. (1) (in the absence of white noise) around the fixed point (\(\mathbf{x}^{*}=0\)),
\[\Delta\dot{x}_{i}(t)=\sum_{j}D_{ij}\Delta x_{j}(t)\quad\rightarrow\quad\Delta \mathbf{x}(t)=\exp\left(\mathrm{D}t\right)\Delta\mathbf{x}(0), \tag{83}\]
where,
\[D_{ij}=-\delta_{ij}+gJ_{ij}\phi^{\prime}(x_{j}^{*})=-\delta_{ij}+gJ_{ij} \tag{84}\]
is the local Jacobian matrix at the fixed point. The eigenvalues of this Jacobian matrix determine the stability of the local dynamics. In particular, if the eigenvalue with the largest real part crosses zero along the real axis, the dynamics becomes chaotic in our current model. In general, the instability is a necessary but not sufficient condition for the transition to chaos [6]. Moreover, the spectrum of \(J_{ij}\) follows the well-known elliptic law [31],
\[\rho(\lambda)=\left\{\begin{array}{cc}\frac{1}{\pi(1-\eta^{2})},&\left( \frac{x}{1+\eta}\right)^{2}+\left(\frac{y}{1-\eta}\right)^{2}<1,\\ 0,&\mathrm{otherwise}\,\end{array}\right. \tag{85}\]
where \(\lambda=x+\mathrm{i}y\) is a complex eigenvalue, with \(x\) and \(y\) the coordinates on the real and imaginary axes. The special case of \(\eta=0\) recovers the circular law of random matrix theory. Thus, the largest real part among the eigenvalues of \(D_{ij}\) is \(g(1+\eta)-1\). Consequently, the stability condition is given by \(g(1+\eta)<1\).
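This prediction can be checked numerically by sampling a coupling matrix with the prescribed statistics, \(\langle J_{ij}^{2}\rangle=1/N\) and \(\langle J_{ij}J_{ji}\rangle=\eta/N\), and comparing the largest real part of the Jacobian spectrum with \(g(1+\eta)-1\). The construction below, mixing a symmetric and an antisymmetric Gaussian matrix, is one standard way of generating such correlations and is given only as a sketch.

```python
# Hypothetical check of the elliptic-law prediction: for <J_ij^2> = 1/N and
# <J_ij J_ji> = eta/N, the largest real part of D = -I + g J approaches
# g (1 + eta) - 1 for large N.
import numpy as np

rng = np.random.default_rng(1)
N, g, eta = 1000, 1.2, 0.3                 # arbitrary example values

G1, G2 = rng.normal(0, 1, (2, N, N))
S = (G1 + G1.T) / np.sqrt(2 * N)           # symmetric part, variance 1/N
A = (G2 - G2.T) / np.sqrt(2 * N)           # antisymmetric part, variance 1/N
J = np.sqrt((1 + eta) / 2) * S + np.sqrt((1 - eta) / 2) * A

D = -np.eye(N) + g * J
lam_max = np.max(np.linalg.eigvals(D).real)
print(lam_max, g * (1 + eta) - 1)          # close for large N
```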
However, the Jacobian is ill-defined when \(\phi=\mathrm{ReLU}\), because ReLU is not differentiable at the origin, where the trivial fixed point sits. Instead, we rely on the static cavity method [18]. For the sake of clarity, we follow the same notation introduced in Appendix B. Our starting point is the relation \(v_{j}=\psi(\tilde{\gamma}_{j})\), which states that the firing rate of neuron \(j\) can be seen as a function of its cavity input. Following the cavity idea, the presence of neuron \(i\) contributes a perturbation \(\delta v_{j}\), i.e.,
\[\delta v_{j}=\psi^{\prime}(\tilde{\gamma}_{j})\left[g\sum_{k\neq i}J_{jk} \delta v_{k}+gJ_{ji}v_{i}\right], \tag{86}\]
where a linear expansion at \(\tilde{\gamma}_{j}\) is used, valid when the effect of the cavity operation is small. Note that the term in the bracket denotes the deviation of the cavity input to neuron \(j\) between the cases with and without neuron \(i\). It is reasonable that, if the fixed point is stable, the variance of \(\delta v_{j}\) must be finite and positive [18]. Taking the average over the network statistics, we get \(\langle\delta v_{j}\rangle=0\). To compute the variance, we square both sides of Eq. (86) and take the disorder average, which results in,
\[\langle(\delta v)^{2}\rangle=g^{2}\chi_{\phi}\langle(\delta v)^{2}\rangle+ \frac{g^{2}}{N}\chi_{\phi}v_{i}^{2}, \tag{87}\]
where \(\chi_{\phi}=\left\langle(\psi^{\prime}(\tilde{\gamma}_{j}))^{2}\right\rangle\). This equation leads to,
\[N\langle(\delta v)^{2}\rangle=\frac{g^{2}\chi_{\phi}}{1-g^{2}\chi_{\phi}}v_{i} ^{2}. \tag{88}\]
To ensure that \(\langle(\delta v)^{2}\rangle\) is physical (finite and positive), the condition \(1-g^{2}\chi_{\phi}>0\) must be satisfied. To proceed, we first compute \(\chi_{\phi}\),
\[\chi_{\phi}=\left\langle\left(\psi^{\prime}(\tilde{\gamma}_{j})\right)^{2} \right\rangle=\int\left(\frac{\Theta(\tilde{\gamma}_{i})}{1-w}\right)^{2}p( \tilde{\gamma}_{i})\mathrm{d}\tilde{\gamma}_{i}=\frac{1}{2(1-w)^{2}}, \tag{89}\]
Note that \(w=g^{2}\eta R\) and \(R=\frac{1}{2(1-w)}\), which implies that
\[g^{2}R\eta=\frac{1-\Delta}{2}, \tag{90}\]
where \(\Delta=\sqrt{1-2g^{2}\eta}\). The stability thus requires that \(\frac{g^{2}}{2(1-w)^{2}}<1\), which finally leads to the condition by using Eq. (90),
\[g(1+\eta)<\sqrt{2}. \tag{91}\]
The stability condition further implies that \(1-2g^{2}\eta>1-2g^{2}(\sqrt{2}/g-1)\geq 0\).
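As a sanity check of Eq. (91), one can scan \(g\) numerically and locate the value at which \(g^{2}\chi_{\phi}=g^{2}/[2(1-w)^{2}]\) reaches one, with \(w=(1-\sqrt{1-2g^{2}\eta})/2\); the short sketch below (with arbitrary example values of \(\eta\)) reproduces the edge \(g(1+\eta)=\sqrt{2}\).

```python
# Sketch: locate the instability of the trivial ReLU fixed point by scanning g
# and finding where g^2 * chi_phi = g^2 / (2 (1 - w)^2) crosses one (Eq. (89)),
# then compare with the closed-form boundary g (1 + eta) = sqrt(2) of Eq. (91).
import numpy as np

def edge_of_stability(eta, gs=np.linspace(0.01, 2.0, 20000)):
    w = (1.0 - np.sqrt(np.clip(1.0 - 2.0 * gs**2 * eta, 0.0, None))) / 2.0
    crit = gs**2 / (2.0 * (1.0 - w)**2)          # g^2 * chi_phi, monotonic in g
    return gs[np.searchsorted(crit, 1.0)]        # first g with crit >= 1

for eta in [0.0, 0.25, 0.5, 1.0]:
    print(eta, edge_of_stability(eta), np.sqrt(2.0) / (1.0 + eta))   # should match
```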
## Appendix D Derivation of fluctuation-dissipation theorem in equilibrium
In this section, we give a proof of fluctuation-dissipation theorem for the model [Eq. (1)] with linear transfer function and fully-symmetric connections. We begin the derivation by rewriting the model in the vector form,
\[\dot{\mathbf{x}}(t)=-\mathbf{x}(t)+J\mathbf{x}(t)+\sigma\mathbf{\xi}(t)+\mathbf{j}(t), \tag{92}\]
where \(\mathbf{J}=\mathbf{J}^{T}\) and \(\left\langle\mathbf{\xi}(t)\mathbf{\xi}^{T}(t^{\prime})\right\rangle=\mathbb{1}_{N\times N}\delta(t-t^{\prime})\) by construction. The solution of this linear dynamics (with initial condition \(\mathbf{x}(0)=\mathbf{0}\)) can be given by,
\[\mathbf{x}(t)=\int_{0}^{t}\mathrm{d}t^{\prime}\mathrm{e}^{(-1+J)(t-t^{\prime})} \big{[}\sigma\mathbf{\xi}(t^{\prime})+\mathbf{j}(t^{\prime})\big{]}. \tag{93}\]
From this solution, we can compute the response function [32],
\[\mathbf{\chi}(t,t^{\prime})=\left.\frac{\partial\langle\mathbf{x}(t)\rangle}{\partial j (t^{\prime})}\right|_{j=0}=\Theta(t-t^{\prime})\mathrm{e}^{(-1+J)(t-t^{\prime })}, \tag{94}\]
as well as the correlation function,
\[\begin{split}\mathbf{\Delta}(t,t^{\prime})&=\left\langle \mathbf{x}(t)\mathbf{x}^{\mathrm{T}}(t^{\prime})\right\rangle\\ &=\sigma^{2}\int_{0}^{t}\int_{0}^{t^{\prime}}\mathrm{d}t^{\prime \prime}\mathrm{d}t^{\prime\prime\prime}\mathrm{e}^{(-1+J)(t-t^{\prime\prime})} \left\langle\mathbf{\xi}(t^{\prime\prime})\mathbf{\xi}^{\mathrm{T}}(t^{\prime\prime \prime})\right\rangle\mathrm{e}^{(-1+J^{T})(t^{\prime}-t^{\prime\prime\prime})} \\ &=\sigma^{2}\int_{0}^{\min(t,t^{\prime})}\mathrm{d}t^{\prime \prime}\mathrm{e}^{(-1+J)(t-t^{\prime\prime})}\mathrm{e}^{(-1+J^{T})(t^{\prime }-t^{\prime\prime})}\\ &=\sigma^{2}\int_{0}^{\min(t,t^{\prime})}\mathrm{d}t^{\prime \prime}\mathrm{e}^{(-1+J)(t+t^{\prime}-2t^{\prime\prime})},\end{split} \tag{95}\]
where the bold functions \(\mathbf{\chi}(t,t^{\prime})\) and \(\mathbf{\Delta}(t,t^{\prime})\) refer to \(N\times N\) matrices. Next, we calculate the mean auto-correlation function [32],
\[\Delta(t,t^{\prime}) \equiv\frac{1}{N}\operatorname{Tr}\mathbf{\Delta}(t,t^{\prime}) \tag{96}\] \[=\sigma^{2}\int_{-2}^{2}\frac{\mathrm{d}k}{2\pi}\sqrt{4-k^{2}} \int_{0}^{\min(t,t^{\prime})}\mathrm{d}t^{\prime\prime}\mathrm{e}^{(-1+k)(t+t ^{\prime}-2t^{\prime\prime})}\] \[=\sigma^{2}\int_{0}^{\min(t,t^{\prime})}\mathrm{d}t^{\prime \prime}\frac{I_{1}(2(t+t^{\prime}-2t^{\prime\prime}))}{t+t^{\prime}-2t^{ \prime\prime}}\mathrm{e}^{-(t+t^{\prime}-2t^{\prime\prime})}\] \[=\sigma^{2}\int_{|t-t^{\prime}|}^{t+t^{\prime}}\mathrm{d}w\frac{ I_{1}(2w)}{2w}\mathrm{e}^{-w},\]
where the trace used in the second line can be seen as an integral over the eigenvalues. The Wigner semicircle law is used here for the eigenvalue spectrum of the fully-symmetric matrix \(\mathbf{J}\) [20]. In the second line, a substitution \(k=2\cos\theta\) is used and the modified Bessel function of the first kind is introduced, i.e., \(\frac{I_{1}(\tau)}{\tau}=\frac{1}{\pi}\int_{0}^{\pi}\mathrm{d}\theta(\sin\theta)^{2}\mathrm{e}^{\tau\cos\theta}\). The final step also involves the substitution \(w=t+t^{\prime}-2t^{\prime\prime}\). In the long-time limit, the upper limit of the integral tends to \(\infty\) and the mean auto-correlation function becomes time-translation invariant.
Following the same procedure, we can compute the mean response function,
\[\chi(t,t^{\prime})\equiv\frac{1}{N}\operatorname{Tr}\mathbf{\chi}(t,t^{\prime})= \Theta(t-t^{\prime})\frac{I_{1}\left(2\left(t-t^{\prime}\right)\right)}{t-t^{ \prime}}\mathrm{e}^{-(t-t^{\prime})}. \tag{97}\]
Finally, it is straightforward to verify the following FDT using Eq. (96) and Eq. (97),
\[\chi(t,t^{\prime})=-\frac{1}{T}\partial_{t}\Delta(t,t^{\prime})\Theta(t-t^{ \prime}), \tag{98}\]
where \(T=\sigma^{2}/2\) represents the thermodynamic temperature. Performing the Fourier transform, we can also recast the FDT in the frequency domain \(\Delta(\omega)=\frac{2T}{\omega}\mathrm{Im}\chi(\omega)\). |
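As a final illustration (a minimal sketch, not taken from the derivation above), the relation of Eq. (98) with \(T=\sigma^{2}/2\) can be checked mode by mode: for a stable eigenvalue \(\lambda<1\) of the symmetric coupling matrix, Eq. (95) restricted to that mode gives, in the long-time limit, the stationary correlation \(C_{\lambda}(\tau)=\sigma^{2}\mathrm{e}^{-(1-\lambda)\tau}/[2(1-\lambda)]\) and the response \(R_{\lambda}(\tau)=\mathrm{e}^{-(1-\lambda)\tau}\) for \(\tau>0\).

```python
# Minimal sketch: verify the FDT (Eq. (98)) mode by mode for stable eigenvalues
# lam < 1 of the symmetric coupling matrix.  For each mode the ratio
# -dC/dtau / R equals T = sigma^2 / 2, independently of lam.
import numpy as np

sigma = 1.0
T = sigma**2 / 2.0
taus = np.linspace(0.1, 5.0, 50)

for lam in [-1.5, 0.0, 0.7, 0.95]:            # example stable eigenvalues
    C = sigma**2 * np.exp(-(1 - lam) * taus) / (2 * (1 - lam))
    R = np.exp(-(1 - lam) * taus)
    dC = -(1 - lam) * C                       # dC/dtau for tau > 0
    assert np.allclose(-dC / R, T)            # FDT: R = -(1/T) dC/dtau
print("FDT holds with T =", T)
```

Because the ratio \(-\partial_{\tau}C_{\lambda}/R_{\lambda}\) is independent of \(\lambda\), the same temperature \(T=\sigma^{2}/2\) governs every stable mode.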
2306.16337 | A global unstructured, coupled, high-resolution hindcast of waves and
storm surge | Accurate information on waves and storm surges is essential to understand
coastal hazards that are expected to increase in view of global warming and
rising sea levels. Despite the recent advancement in development and
application of large-scale coastal models, nearshore processes are still not
sufficiently resolved due to coarse resolutions, transferring errors to coastal
risk assessments and other large-scale applications. Here we developed a
50-year hindcast of waves and storm surges on an unstructured mesh of >650,000
nodes with an unprecedented resolution of 2-4 km at the global coast. Our
modelling system is based on the circulation model SCHISM that is fully coupled
with the WWM-V (WindWaveModel) and is forced by surface winds, pressure, and
ice coverage from the ERA5 reanalysis. Results are compared with observations
from satellite altimeters, tidal gauges and buoys, and show good skill for both
Sea Surface Height (SSH) and Significant Wave Height (Hs), and a much-improved
ability to reproduce the nearshore dynamics compared with previous,
lower-resolution studies. Besides SSH, the modelling system also produces a
range of other wave-related fields at each node of the mesh with a time step of
3 hours, including the spectral parameters of the first three largest energy
peaks. This dataset offers the potential for more accurate global-scale
applications on coastal hazard and risk | Lorenzo Mentaschi, Michalis Vousdoukas, Guillermo Garcia-Sanchez, Tomas Fernandez Montblanc, Aron Roland, Evangelos Voukouvalas, Ivan Federico, Ali Abdolali, Yinglong J. Zhang, Luc Feyen | 2023-06-28T16:13:22Z | http://arxiv.org/abs/2306.16337v1 | ## Abstract
Accurate information on waves and storm surges is essential to understand coastal hazards that are expected to increase in view of global warming and rising sea levels. Despite the recent advancement in development and application of large-scale coastal models, nearshore processes are still not sufficiently resolved due to coarse resolutions, transferring errors to coastal risk assessments and other large-scale applications. Here we developed a 50-year hindcast of waves and storm surges on an unstructured mesh of >650,000 nodes with an unprecedented resolution of 2-4 km at the global coast. Our modelling system is based on the circulation model SCHISM that is fully coupled with the WWM-V (WindWaveModel) and is forced by surface winds, pressure, and ice coverage from the ERA5 reanalysis. Results are compared with observations from satellite altimeters, tidal gauges and buoys, and show good skill for both Sea Surface Height (SSH) and Significant Wave Height (H\({}_{s}\)), and a much-improved ability to reproduce the nearshore dynamics compared with previous, lower-resolution studies. Besides SSH, the modelling system also produces a range of other wave-related fields at each node of the mesh with a time step of 3 hours, including the spectral parameters of the first three largest energy peaks. This dataset offers the potential for more accurate global-scale applications on coastal hazard and risk.
## Introduction
Waves and storm surges are complex oceanographic phenomena that can have significant impacts on coastal communities, particularly during extreme weather events such as hurricanes, typhoons, and extratropical cyclones. Global warming is expected to increase extreme sea levels and coastal flooding around the world (McGranahan et al., 2007; Wahl et al., 2017; Vousdoukas et al., 2018; Tebaldi et al., 2021). To better understand trends in past and future coastal hazards and risks there is a growing need for accurate modeling of waves and storm surge at a global scale, incorporating the mutual impact of wind-waves and storm surge models in a coupled framework. In the last decades, advances in numerical modeling, computing power and availability of coastal observations (in-situ and remote sensing) have enabled the development of global models of waves and storm surges that can provide valuable information for coastal hazard assessment and risk management (e.g. Vousdoukas et al., 2020). The skill of existing large scale coastal hindcasts and reanalyses is generally good when compared with satellite observations offshore. Closer to shores, however, complex coastal and bathymetric features and interactions hamper the ability of existing models to fully capture the relevant dynamics, partly due to their low resolution and the lack of coupling between waves and circulation. In addition, the dynamic interactions of coastal circulation, wind-wave, inland hydrology, ice and atmospheric models are not fully captured in standalone or one-way coupled fashion (Moghimi et al., 2020; Abdolali et al., 2021).
In recent years, unstructured-grid models are more and more becoming an alternative to regular grids for large-scale simulations. These models are flexible by employing meshes with varying resolution that can be adjusted to fit any arbitrary geometry, making them ideal for modeling coastal environments with irregular coastlines, complex bathymetry and structures. The history of unstructured wave and storm surge models dates back to the 1990s when researchers began exploring the use of finite element and finite volume numerical methods to
solve differential equations (e.g. Luettich et al., 1992; Lynch et al., 1996), and they soon became a consolidated tool for coastal modelling. Nowadays, among the established unstructured circulation models are ADCIRC (Luettich et al., 1992; Pringle et al., 2021), the Finite-Volume Coastal Ocean Model (FVCOM, Chen et al., 2003), the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM, Y. Zhang & Baptista, 2008; Y. J. Zhang et al., 2016), the System of HydroDynamic Finite Element Modules (SHYFEM, Umgieisser and Zampato, 2001; Micaletto et al., 2022), and the model TELEMAC (Hervouet and Bates, 2000).
Among the earliest spectral wave models employing numerical schemes on unstructured grids were the model TOMAWAC (Benoit et al., 1997) and CREST (e.g. Ardhuin & Herbers, 2005), which solve the wave action balance equation (WAE) based on semi-Lagrangian schemes. Other models were developed subsequently that solve the WAE from an Eulerian viewpoint, in analogy to the most widely applied models based on structured grids (WAM, WW3, SWAN); these have been WWM (Liau, 2001; Hsu et al., 2005; Roland et al. 2006) and MIKE2-SW (Soerensen et al., 2004).
In the following decade the development of the Eulerian approach gained further momentum and most of the available spectral wave models were enhanced with the possibility to run on unstructured grids (e.g. Roland, 2008 (WWM-II), 2012b (WAM); Zijlema et al. 2008 (UnSWAN); Qi et al. 2009 (FVCOM)).
The schemes developed in Roland (2008) included efficient implicit schemes to overcome the strict CFL constraint and resulted in WWM-II, which was fully coupled to SCHISM and parallelized using Domain Decomposition methods, leading to WWM-III (Roland et al., 2012a). The efforts in Sikiric et al. (2018) and further development in this study resulted in the latest version of the wind wave model (WWM-V), which is utilized here. We have validated the implicit schemes in WWM-V and compared them to the latest WW3 version, which showed only marginally worse performance for the global setup. Furthermore, the integration of the triads by applying Patankar rules, together with under-relaxation of the wave breaking and triad source terms to increase the robustness of the model, has been improved in the implicit scheme. Moreover, the memory management in the model was improved in terms of memory alignment, memory usage and cache locality, which improved the performance of the model with respect to WWM-III.
Most of these elements have been implemented in the current "development" version of the WW3 framework. The implementation of implicit solvers provided an opportunity to further develop the wave models to resolve the dominant coastal processes (Abdolali et al., 2020).
For a long time, the use of unstructured meshes was confined to local high-resolution studies that deepened insights into the complexity of nearshore interactions (e.g. Roland et al., 2008; Roland & Ardhuin, 2012; Bertin et al., 2012; Federico et al., 2017; Amores et al., 2020; Park et al., 2022). Large/global scale unstructured models of circulation and storm surges were developed in the late 2010s (e.g. Muis et al., 2016; Vousdoukas et al., 2018; Fernandez-Montblanc et al., 2020; Saillour et al., 2021). Recently, Zhang et al. (2023) developed a global 3D baroclinic model that includes tides using SCHISM. Existing global studies on circulation and storm surges can reach high spatial resolution at large scale. For example, Wang et al. (2022) reach a global resolution of about 1.25 km, while Zhang et al. (2023) have an offshore resolution of 10-15 km and a nearshore resolution of 3 km globally and 1-2 km in North America.
For waves, the adoption of unstructured meshes at large scale was slower, related to the high computational demand of spectral wave models. The first study using unstructured local subgrids in a global wave model was (Rascle and Ardhuin, 2013). More recent applications pioneered the viability of global unstructured wave models (Mentaschi et al., 2020; Brus et al., 2021), though retaining a relatively low resolution at the global coast of ~10 km. This was achieved thanks to the adoption of efficient domain decomposition techniques and
governance of the CFL number (Abdolali et al., 2020), and of subscale modelling techniques that simplify the workload of mesh generation by introducing a parameterization of wave dampening due to unresolved features (Mentaschi et al., 2015, 2018). Also worth mentioning, there are studies on large basins like the Mediterranean Sea, reaching very high spatial resolution in the order of 100 m, carried out with WW3 and WWM (Lira-Loarca et al., 2022; Toomey et al., 2022).
Another important aspect of ocean modelling is the coupling between waves and circulation. Wind waves can alter circulation in the ocean and affect the surface roughness and near-surface turbulence, while currents can influence the propagation and characteristics of waves, via the Doppler effect, whitecapping when propagating against counter currents and by fostering non-linear interactions. Accounting for this coupling is important for accurate modelling, especially in nearshore environments. The coupling of waves and circulation was explored in a number of local studies (e.g. Ferrarin et al., 2008; Roland et al., 2009, 2012; Babanin et al., 2011; Staneva et al., 2016; Clementi et al., 2017; Causio et al., 2021), and only recently global applications were developed (e.g. Law-Chune et al., 2021). Incorporating the variability of the water level (tide + surge) in the wave model, together with inland hydrology, enhances the model accuracy nearshore and provides better estimates of wetting and drying and of compound flooding (Bakhtyar et al., 2020; Moghimi et al., 2020).
Taking advantage of these recent advances, here we developed a global coupled model that simulates waves and storm surges on an unstructured mesh with an unprecedented resolution at the coasts of 2-4 km globally. The coupled model combines the circulation model SCHISM (Zhang et al., 2016) and the spectral wave model WWM (Roland, 2008) and is forced by the latest-generation atmosphere reanalysis ERA5 (Hersbach et al., 2020). With this setup, we carried out a 53-year hindcast of waves and storm surges and present here its validation versus satellite and field observations, and a comparison with results from prior lower-resolution studies.
## Materials and Methods
### Model setup
Our modelling framework consists of the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM, Zhang et al., 2016, 2023) that is 2-way coupled with the 3\({}^{\text{rd}}\)-generation spectral wave model Wind Wave Model (WWM-V, Roland et al., 2012; Dutour Sikiric et al., 2018).
SCHISM is an extension of the original SELFE code (Y. Zhang & Baptista, 2008) that has undergone several enhancements and upgrades, including the inclusion of large-scale eddying and a seamless cross-scale capability from small bodies of water to the ocean. The model solves the Navier-Stokes equation, under hydrostatic and Boussinesq approximation, on unstructured grids, using a Finite-Element semi-implicit Galerkin scheme for horizontal motions, and a Finite-Volume scheme for vertical motions. The model accounts for the combined effects of wind and atmospheric pressure gradients. In this study, SCHISM was configured in its two-dimensional barotropic mode, which increases model efficiency and is a reasonable approximation for global applications with high computational and storage requirements (Fernandez-Montblanc et al., 2019). In this work, SCHISM was used to simulate the generation and propagation of storm surges. The surface stress in SCHISM was calculated using a bulk formula with the drag coefficient estimated as a function of the wave motion. The bed shear stress was computed using the Manning approach, assuming a value of 0.02 for the Manning friction coefficient. A biharmonic viscosity filter with a diffusion number-like dimensionless constant of 0.025 is used to suppress grid-scale noise.
WWM-V is a 3\({}^{\text{rd}}\)-generation state-of-the-art spectral wave model that solves the wave action equation on unstructured grids using the CRD-N-Scheme (e.g. Ricchiuto et al. 2005) for the advection part and discretizing the spectral part using simple 1st order implicit finite difference schemes, rendering the whole approach monotone and convergent. The source terms have been linearized (e.g. Patankar, 1980) and, where linearization was not possible, we have under-relaxed the source terms, as done for the triad wave-wave interaction source term. For this study we employed the fully implicit RD-schemes with no splitting, using Block-Gauss and Jacobi iterative solvers (e.g. Abdolali et al., 2020) on unstructured grids. The spectral domain used in this study consists of 36 frequencies and 24 directions. The frequencies are separated by a constant factor, from 0.04 Hz, corresponding to a period of 25 s, to 1 Hz, corresponding to a period of 1 second. The growth/dissipation of the waves was parameterized following Ardhuin et al. (2010), setting the parameter BETAMAX to the value suggested for ECMWF data, while the non-linear interactions are reproduced by means of the Discrete Interaction Approximation (DIA) (Hasselmann and Hasselmann, 1985). With a resolution of 2 km along coasts, shallow-water interactions can play a significant role. In particular, the bottom friction was parameterized using the JONSWAP approximation (Hasselmann et al., 1973), and depth-limited breaking was represented using the approach by Battjes and Janssen (1978). Furthermore, the Unresolved Obstacles Source Term (UOST, Mentaschi, Perez, et al., 2015; Mentaschi et al., 2018, 2019, 2020) was introduced to parameterize the energy dampening of small unresolved islands. As ice plays an important role in wave dynamics at high latitudes, an approach based on UOST was implemented to dampen the energy as a function of the ice concentration; no treatment of ice was adopted in the circulation model. Finally, the methodology proposed by Tracy et al. (2006) for spectral decomposition was implemented in the model. This approach allows saving the mean parameters for the separate peaks of the wave spectrum, providing summarized information on the spectra at each node.
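For concreteness, the spectral grid described above is a geometric progression of frequencies; the short snippet below (purely illustrative) shows the implied constant spacing factor.

```python
# Illustrative only: the 36-frequency spectral grid with constant ratio spacing
# between 0.04 Hz (25 s) and 1 Hz (1 s), as described in the text.
import numpy as np

n_freq = 36
f = np.geomspace(0.04, 1.0, n_freq)            # constant ratio between neighbours
ratio = (1.0 / 0.04) ** (1.0 / (n_freq - 1))   # ~1.096
print(ratio, f[0], f[-1])
```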
The models are 2-way coupled, in that the hydrodynamic component provides the wave model with current velocities and sea levels, while the sea surface roughness is estimated by WWM-V and passed back to SCHISM.
The hindcast simulation was carried out on an unstructured mesh of 651,652 nodes covering the global ocean with a resolution ranging from about 50 km offshore to about 2-4 km nearshore globally. The mesh was generated using the OceanMesh2D tool (Roberts et al., 2019). The bathymetry was taken from the General Bathymetric Chart of the Oceans (GEBCO) dataset (Weatherall et al., 2015). The model was forced with hourly data of sea level pressure, wind speed and sea ice concentration from the fifth-generation reanalysis ERA5 (Hersbach et al., 2020). Apart from its resolution of 25 km, high for a global reanalysis, we selected ERA5 for its extended time span (from 1950 on), and its accuracy, compared to other products, in reproducing long-term trends (Erikson et al., 2022). The model time step was 400 s for both the circulation component and the wave component, with circulation-wave coupling occurring at each time step. The time span of the simulation is from 1970 to 2022. Model output was saved at each domain node with a time frequency of 3 hours for the variables listed in Table 1.
| Name | Description | Units |
| --- | --- | --- |
| Sea Surface Height | Residual water elevation (tides are not included) | m |
| Wind Speed | Wind speed at 10 m (from ERA5) | m s\({}^{-1}\) |
| Currents Velocity | Horizontal water velocity | m s\({}^{-1}\) |
| Significant Wave Height | Significant Wave Height | m |
| Mean Period \(T_{0:1}\) | Mean period weighted on the spectral component energy | s |
| Zero-Crossing Period \(T_{0:2}\) | Zero-crossing mean period | s |
| Overtopping/runup period \(T_{10}\) | Mean period for overtopping/runup | s |
| Mean Wavelength | Mean Wavelength | m |
| Mean Wave Direction | Mean Wave Direction, oceanographic convention | °N |

Table 1: List of variables saved at each mesh node with a 3-hour time step.
### Model performance evaluation: sea surface height
Modelled Sea Surface Height (SSH) was compared globally versus along-track altimetry data from the Copernicus Marine Service (CMS) (Pujol, 2022), which comprises several altimeter missions (Topex-Poseidon, Jason-1, OSTM/Jason-2, Sentinel-3A, ERS-1, ERS-2, Envisat, Geosat Follow On, Cryosat, SARAL/AltiKa, HY-2A) from 1993 onwards. The dataset includes all the atmospheric forcing contributions to sea level variation, included as a linear component to the overall Sea Level Anomaly (SLA). The model output was space-time interpolated to the corresponding points along the tracks. The paired observation and modelled values were then binned on a validation grid with a spatial resolution of 0.5°×0.5° and the skill was evaluated for each cell.
In addition to the SSH validation at the open ocean, the performance of the nearshore SSH was assessed versus the tidal gauge data provided by the GESLA-3 database (Woodworth et al., 2016; Haigh et al., 2022).
As our barotropic model is unable to reproduce low-frequency variability of sea levels and Sea Level Rise, both simulations and observations of SSH were detrended prior to comparison.
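The pairing-and-binning step described above can be sketched as follows; the snippet assumes the model values have already been interpolated in space and time to the altimeter track points, and all array names are hypothetical (this is not the actual processing code).

```python
# Hypothetical sketch: bin paired (observation, model) SSH values onto a regular
# 0.5 x 0.5 degree validation grid, so that skill indicators can be computed per cell.
import numpy as np

def bin_pairs(lon, lat, obs, mod, res=0.5):
    """Group paired values by grid cell; returns {(ix, iy): (obs_array, mod_array)}."""
    ix = np.floor((np.asarray(lon) + 180.0) / res).astype(int)
    iy = np.floor((np.asarray(lat) + 90.0) / res).astype(int)
    cells = {}
    for i, j, o, m in zip(ix, iy, obs, mod):
        cells.setdefault((i, j), ([], []))
        cells[(i, j)][0].append(o)
        cells[(i, j)][1].append(m)
    return {k: (np.asarray(v[0]), np.asarray(v[1])) for k, v in cells.items()}

# Example: per-cell RMSE of SSH
# cells = bin_pairs(lon, lat, obs, mod)
# rmse = {c: np.sqrt(np.mean((m - o) ** 2)) for c, (o, m) in cells.items()}
```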
The statistical indicators used to quantify the skill both of the offshore and nearshore SSH are:
* Root Mean Squared Error (RMSE): \[RMSE=\sqrt{\frac{\sum_{i=1}^{N}\bigl{(}\eta_{i}^{m}-\eta_{i}^{o}\bigr{)}^{2}} {N}}\enspace,\] (1) where N is the number of observation-simulation pairs, and \(\eta_{i}^{m}\) and \(\eta_{i}^{o}\) are the modelled and observed SSH, respectively.
* Percentage Root Mean Squared Error, RMSE(%), similar to RMSE but normalized by the difference between the 1st and 99th percentile of the observed water levels: \[RMSE(\%)=\frac{\sqrt{\frac{\sum_{i=1}^{N}\bigl{(}\eta_{i}^{m}-\eta_{i}^{o} \bigr{)}^{2}}{N}}}{\text{p}_{99}(\eta^{o})-\text{p}_{1}(\eta^{o})}\enspace,\] (2)
* Pearson correlation:
\[r=\frac{\sum_{i=1}^{N}(\eta_{i}^{m}-\overline{\eta_{m}})(\eta_{i}^{o}-\overline{\eta_{o}})}{N\sigma_{m}\sigma_{o}}\ \, \tag{3}\]
where \(\overline{\eta_{m}}\) and \(\overline{\eta_{o}}\) are the mean modelled and observed SSH, and \(\sigma_{m}\) and \(\sigma_{o}\) are the standard deviations of the 2 signals.
To assess the behavior of the model in normal and extreme water level conditions, the skill indicators were evaluated a) on the whole time series and b) on the data beyond the 95\({}^{\rm th}\) percentile of the model or observation of each cell/tidal gauge.
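The indicators of Eqs. (1)-(3), together with the conditioning on the upper tail, translate directly into a few lines of code; the sketch below assumes detrended, paired arrays of observed and modelled SSH and is given only as an illustration.

```python
# Illustrative implementation of the SSH skill indicators (Eqs. (1)-(3)).
import numpy as np

def ssh_skill(obs, mod):
    rmse = np.sqrt(np.mean((mod - obs) ** 2))                               # Eq. (1)
    rmse_norm = rmse / (np.percentile(obs, 99) - np.percentile(obs, 1))     # Eq. (2); x100 for %
    r = np.corrcoef(mod, obs)[0, 1]                                         # Eq. (3)
    return rmse, rmse_norm, r

def extremes_mask(obs, mod, q=95):
    """Pairs where either the observation or the model exceeds its q-th percentile."""
    return (obs >= np.percentile(obs, q)) | (mod >= np.percentile(mod, q))

# whole series:            ssh_skill(obs, mod)
# beyond 95th percentile:  m = extremes_mask(obs, mod); ssh_skill(obs[m], mod[m])
```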
### Model performance evaluation: waves
The modelled significant wave height H\({}_{\rm s}\) was validated with the along-track altimetry H\({}_{\rm s}\) data provided by CMS, in collaboration with the Climate Change Initiative (CCI) of the European Space Agency (ESA) (Dodet et al., 2020). Similar to SSH, the model data were space-time interpolated to the points along the satellite tracks, and binned on the same validation grid. H\({}_{\rm s}\) was also compared with buoy data from a dataset of wave buoys from different sources, including the National Buoy Data Center (NDBC) and the European Marine Observation and Data Network (EMODnet).
The statistical indicators used to quantify the skill of significant wave height are:
* Normalized Bias (NBI): \[NBI=\frac{\sum_{i=1}^{N}\left({H_{s_{i}}}^{m}-{H_{s_{i}}}^{o}\right)}{\sum_{i=1}^{N}{H_{s_{i}}}^{o}}\ \ \,\] (4) where \({H_{s_{i}}}^{m}\) and \({H_{s_{i}}}^{o}\) are the modelled and observed significant wave height, respectively.
* The Root Mean Squared Error (RMSE) with a definition similar to that of SSH: \[RMSE=\sqrt{\frac{\sum_{i=1}^{N}{\left({H_{s_{i}}}^{m}-{H_{s_{i}}}^{o}\right) }^{2}}{N}}\ \.\] (5)
* Normalized Root Mean Squared Error (NRMSE): \[NRMSE=\sqrt{\frac{\sum_{i=1}^{N}{\left({H_{s_{i}}}^{m}-{H_{s_{i}}}^{o}\right)}^{2}}{\sum_{i=1}^{N}{\left({H_{s_{i}}}^{o}\right)}^{2}}}\ \.\] (6)
Similar to SSH, the behavior of the model was assessed both in normal and extreme conditions of H\({}_{\rm s}\).
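Analogously, the H\({}_{\rm s}\) indicators of Eqs. (4)-(6) can be sketched as below (illustration only, with the same 95\({}^{\rm th}\)-percentile conditioning as used for SSH).

```python
# Illustrative implementation of the H_s skill indicators (Eqs. (4)-(6)).
import numpy as np

def hs_skill(obs, mod):
    nbi = np.sum(mod - obs) / np.sum(obs)                              # Eq. (4)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))                          # Eq. (5)
    nrmse = np.sqrt(np.sum((mod - obs) ** 2) / np.sum(obs ** 2))       # Eq. (6)
    return nbi, rmse, nrmse
```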
## Results
The model shows overall good skill of SSH versus altimeters, with an average RMSE of about 7.9 cm, a RMSE(%) of 17.47%, and a Pearson correlation of 0.55. In large portions of the global ocean RMSE(%) is below 10% and the correlation is > 0.7 (Figure 1). The model performance decreases in areas characterized by low-frequency variability, for example in correspondence of the western-boundary currents. This is clear at the Gulf Stream, the Kuroshio, the Brazil current, at the Mozambique current and the gradient between the latter and the Southern Atlantic and Southern Indian currents, and the Eastern Australia current, especially when the model skill is evaluated in terms of correlation (Figure 1b). Model performance also varies with latitude. At higher latitudes both north- and southward correlation is higher and RMSE(%) lower, whereas closer to the equator the correlation becomes non-significant, despite RMSE(%) values in the order of 20% or lower (Figure 1). For several semi-enclosed basins, such as the Mediterranean Sea, the Red Sea, the Persian Gulf, the East China Sea,
part of the South China Sea, the Gulf of Thailand, the Arafura Sea, the correlation is significantly higher than in open-ocean areas at the same latitude. Higher values beyond the 95th percentile of SSH are to be expected as the normalization term is the same as for the whole signal. The map of RMSE(%) beyond the 95th percentile presents spatial patterns similar to the ones found for the whole signal, but with higher relative uncertainty at the western boundary currents and at some sections of the Southern Circumpolar Current (Figure 2a). The model-altimeter correlation beyond the 95th percentile shows a general improvement over the whole signal, with r > 0.9 in large swaths of the domain, and significant values of the correlation even in proximity of the equator (Figure 2b).
The comparison of simulated SSH with tidal gauge records shows similar model performance compared to altimeters. There is overall good skill, especially at higher latitudes for both mean (Figure 3) and extreme conditions (Figure 4), and also for tidal gauges located in proximity to the coast.
Figure 2: As in Figure 1, but for SSH beyond the 95th percentile of altimeters or model.
Figure 3: Skill for Sea Surface Height (SSH), model vs tidal gauges, a: percentage Root Mean Square Error RMSE(%), b: Pearson Correlation
The comparison of modelled vs altimeter values for H\({}_{\text{s}}\) shows that NRMSE is smaller than 25% over most of the domain (Figure 5a). Contrary to the skill for SSH (Figure 1a), NRMSE increases for higher latitudes north and south and is smallest closer to the equator. Biases are generally positive at low latitudes and negative at high latitudes, especially in the Northern Hemisphere but also in large swaths of the Southern Hemisphere (Figure 5b). The mean normalized bias over the entire domain is -3.73%. When the whole wave climate is considered (both mean and extreme conditions), the bias is generally negative in semi enclosed basins, such as the Mediterranean Sea, the Red Sea, the Persian Gulf, the East China Sea, part of the South China Sea, the inner seas of the Indonesian archipelago. Part of the Antarctic coasts are instead characterized by a positive bias (Figure 5b). The evaluation of the model skill on the extreme tails (H\({}_{\text{s}}\) beyond the 95\({}^{\text{th}}\) percentile of the observations) shows patterns of NRMSE similar to those found for the whole signal, but with significant improvements in some areas, in particular in semi-enclosed basins (Figure 6a). Beyond the 95\({}^{\text{th}}\) percentile the bias of Hs becomes more negative than in mean conditions, -7.85% on average, and the only areas retaining a
Figure 4: As in Figure 3, but for SSH beyond the 95th percentile of gauges or model.
positive bias are the Coral Sea, the North-West of Australia, and parts of the western Indian Ocean. The bias is most negative at high latitudes in the Northern Hemisphere, with values below -15% in the Labrador Sea, in the Hudson Bay, along the coasts of Greenland and in other areas partially covered in ice during winter. In small, semi-enclosed basins such as the Mediterranean Sea, the Red Sea, the Persian Gulf, the inner seas of Indonesia and others, the negative bias improves beyond the 95\({}^{\text{th}}\) percentile of Hs compared with mean conditions (Figure 6b). The comparison of H\({}_{\text{s}}\) with buoys offers a picture similar to altimeters, though with more local variability, as the ability of the model to reproduce observations can depend on the resolution of coastal micro-features and sheltering (Figure 7 and Figure 8). With few exceptions, the model generally shows skill in line with that versus altimeters even for buoys close to the coasts, though with a significantly more pronounced negative bias (-11.8% on the whole signal, and -16.26% beyond the 95\({}^{\text{th}}\) percentile, Figure 7b and Figure 8b).
Figure 6: As in Figure 5, but for H\({}_{i}\) beyond the 95th percentile of altimeters or model.
Figure 7: Skill of Significant Wave Height (H\({}_{s}\)), model vs buoys, a: percentage Normalized Root Mean Squared Error (NRMSE), b: Normalized Bias (NBI).
## Discussion
### Offshore skill
Overall, our results show offshore skills in line with the ones of other contemporary large-scale models, for both storm surges (Muis et al., 2016; Vousdoukas et al., 2017; Fernandez-Montblanc et al., 2020; Tadesse et al., 2020) and waves (e.g. Mentaschi et al., 2020; Morim et al., 2022; Perez et al., 2017; Smith et al., 2021).
The offshore skill of SSH shows a clear dependence on latitude (Figure 1, Figure 2), with values of RMSE(%) and correlation significantly better at mid-high latitudes with respect to tropical and equatorial areas. This is a common feature in storm surge models and is due to a range of factors. The variability of sea levels at mid-high latitudes is dominated by the eddy activity and baroclinic instability in the atmosphere, which result in short-term inverse barometric effect and wind setup that our model can capture. The lower atmospheric eddy activity in tropical/equatorial regions is a reason for the worse skills of SSH in these areas (Hoskins and Valdes, 1990). At low latitudes storm surges are typically caused by tropical cyclones, hurricanes, and typhoons, which are
Figure 8: As in Figure 7, but for H\({}_{s}\) beyond the 95th percentile of buoys or model.
relatively rare events, with a limited contribution to the mean signal. Though ERA5 can reproduce such events to some extent, limitations still exist, partially related to the still relatively low spatial resolution (Hersbach et al., 2020), with results usually characterized by negative biases of wind. Another reason is the complex ocean currents and coastal geography in the equatorial region, which is characterized by weak Coriolis acceleration. The equatorial region is home to the Equatorial Undercurrent (EUC) and the Equatorial Counter Current (ECC), which are difficult to model due to their strong variability and complexity (e.g. Stellema et al., 2022). These currents can also cause upwelling and vertical circulation, which can have an impact on SSH, but the ability of barotropic models to reproduce it is limited. Related to this is the low-frequency variability that characterizes the sea levels in tropical and equatorial areas, due to the Tropical Instability Waves and the seasonal cycle, and the associated thermosteric expansion. Another source of uncertainty is the oscillating intensities of the monsoons and of the Walker cells, and the associated patterns of variability such as the El Nino Southern Oscillation (Nidheesh et al., 2013). Other areas characterized by low-frequency variability are the western boundary currents, where the water levels depend on currents' intensity. The better skill in relatively sheltered tropical/equatorial basins such as the Red Sea, the Persian Gulf, the East China Sea, part of the South China Sea, the Gulf of Thailand, the Arafura Sea, is likely due to the relatively smaller low-frequency variability.
That low-frequency variability is a major contribution to the error is confirmed by a comparison of our model's SSH against the data of Dynamic Atmospheric Correction (DAC) from Pujol (2022), which better identifies the short-term variability than SLA (Figure 9). The comparison, done for the mean signal, shows a significantly improved model skill, with a mean correlation of 0.73 versus 0.55 against SLA (Figure 1b), and better performances everywhere, with the exception of the Red Sea and the coasts of Vietnam.
Figure 9: Comparison of model SSH versus Dynamic Atmospheric Correction (DAC) from (Pujol, 2022). Map of correlation coefficient r.
Wave results show overall good skill, though with a generally negative bias, which is especially pronounced in areas characterized by the seasonal presence of ice, e.g. in the southern portion of the Arctic Ocean, in the Labrador Sea, and in the Southern hemisphere in the Weddell and Ross seas, and in semi-enclosed basins, with consequent higher values of NRMSE (Figure 5). The error in H\({}_{\text{s}}\) in these areas is related to the high uncertainty in the sea ice concentration data provided by ERA5, especially in the Marginal Ice Zones (Renfrew et al., 2021), together with the fact that the presence of ice invalidates the altimetric observations of H\({}_{\text{s}}\) (Dodet et al., 2020). On the other hand, introducing a parameterization of ice-induced wave dampening comes with benefits, as simulations neglecting it result in significant positive biases of H\({}_{\text{s}}\) in the Southern Ocean and in the Northern Pacific Ocean and Atlantic Ocean that are now absent. The negative bias and NRMSE increase beyond the 95\({}^{\text{th}}\) percentile (Figure 6), reflecting a general tendency of our wave model to underestimate the peaks, partly due to an underestimation of extreme winds in ERA5 (e.g. Campos et al., 2022) as well as possible limitations of 3\({}^{\text{rd}}\) generation models in extreme conditions (Cavaleri, 2009). At the same time, in enclosed and semi-enclosed basins (the Mediterranean Sea, the Red Sea, the Persian Gulf, the inner seas of Indonesia and others) the negative bias of H\({}_{\text{s}}\) beyond the 95\({}^{\text{th}}\) percentile improves with respect to the whole signal. This suggests that the higher resolution of ERA5 compared to previous reanalyses and that of our model allow for a better reproduction of mesoscale events relevant for the statistics of the extremes in short fetches (e.g. Mentaschi et al., 2015).
The significant improvement over lower resolution, non-coupled models for both SSH and H\({}_{\rm s}\) relates to the ability to better reproduce the dynamic effect of small coastal features, and to considering the interactions between waves and currents, and the effect of waves on the sea surface roughness. A comparison of the new hindcast versus previous ones (Mentaschi et al., 2017; Vousdoukas et al., 2018, Figure 10) shows how the present model better captures local short-term variability. For the 20% of tidal gauges with the best RMSE for SSH in the present study, the old hindcast nowhere showed better skill (Figure 11a). Furthermore, the RMSE/RMSE(%) of SSH was larger than the 80th percentile of RMSE/RMSE(%) of the present study in >50% of the tidal gauges. If SSH beyond the 95th percentile is considered, this percentage exceeds 60% (Figure 11ab). For H\({}_{\rm s}\), in normal conditions the improvement of skill is in line with that of SSH, while if we consider H\({}_{\rm s}\) beyond the 95th percentile the improvement of skill with respect to the old hindcast is still clear, but less marked, due to the significant negative bias of H\({}_{\rm s}\) (Figure 11cd). In this respect, it is worth mentioning that the setup of the physics in WW3 and D-FLOW in Mentaschi et al. (2017) and Vousdoukas et al. (2018) is similar to that of the present study, and the offshore skill comparable. Therefore, the nearshore improvement of skill is due to the increase in resolution and to the coupling.
Figure 10: Time series of SSH at a tidal gauge (b) and of H\({}_{\rm s}\) at a buoy (c) in the Irish Sea (a). In map (a) the mesh used in the current study (green line) is superimposed on the 0.25° subdomain used for Northern Europe in Mentaschi et al. (2017), and the positions of the buoy and the tidal gauge are indicated. In the time series panels (b and c) the black line represents the observations, the green line the results from the present study, and the red line the results from Mentaschi et al. (2017) for H\({}_{\rm s}\) and from Vousdoukas et al. (2018) for SSH.
### Limitations
Our model represents a clear improvement on previous lower resolution hindcasts. However, it comes with a number of limitations that will be listed in this section. Tidal forcing was not included in the setup, as we aimed to isolate the contribution of the storm surges. While tide elevation can be added a-posteriori, from tidal databases such as FES2014 (Lyard et al., 2021) or TPXO (Dushaw et al., 1997), recent studies pointed to the importance on non-linear interactions between tides and storm surges nearshore (Krien et al., 2017; Fernandez-Montblanc et al., 2019; Arns et al., 2020; Tausia et al., 2023). Such non-linear interactions are not explicitly accounted for in this study.
As a resolution of 2-4 km is not enough to exhaustively resolve coastal processes such as nearshore wave dissipation everywhere, we opted not to introduce the nearshore wave-circulation coupling described by Bennis et al. (2011) and implemented in SCHISM by Martins et al. (2022). Therefore, the water levels at the coast do not include the wave setup. This is a common approach for large-scale models, but it should be mentioned because it is important for certain applications.
The ability of our model to capture tropical cyclones (TCs) is still limited by the offshore grid resolution (~50 km) and by the ability of the forcing fields from ERA5 to reproduce TCs. This is another known limitation of large-scale models (e.g. Bloemendaal et al., 2019) and requires separate special modelling to be addressed.
Figure 11: Skill comparison between the present study and previous, lower-resolution hindcasts (Mentaschi et al., 2017; Vousdoukas et al., 2018). In the panels, the tidal gauges (ab) and the buoys (cd) are grouped by quintile of error in the present study. The dashed line in each panel reproduces the constant percentage of gauges/buoys falling in each quintile. The bars represent the percentage of gauges/buoys of the older hindcasts falling in the same quintiles, in normal conditions (blue bars), and for the part of the signal beyond the 95\({}^{\text{th}}\) percentile. The skill indicators are RMSE and RMSE(%) of SSH (ab), and RMSE and NRMSE of H\({}_{\text{s}}\) (cd).
As mentioned in the methods section, ERA5 forcing was selected for its advantages over other products (high spatial resolution for a global reanalysis, long time extent, ability to detect long-term trends). However, ERA5 comes with significant underestimation of extreme winds (e.g. Campos et al., 2022), which reflects in negative bias especially of extreme H\({}_{\text{s}}\).
Last but not least, this study was conducted using a 2D barotropic model for storm surges, which comes with a limited ability to reproduce 3D ocean circulation and currents. Though an accurate representation of 3D circulation is not an objective of this study, this could have an impact on storm surges, e.g., in relation to phenomena such as the Coriolis setup. Furthermore, this can have consequences on waves, related to the current-induced Doppler effect.
### Final remarks
In this study we developed the first global coupled hindcast of waves and storm surges on unstructured meshes, reaching a nearshore spatial resolution of 2-4 km, unprecedented for waves. Thanks to its high resolution, our hindcast achieves better skill nearshore than previous, lower resolution ones, in particular the ones developed in the past by the authors (Mentaschi et al., 2017; Vousdoukas et al., 2018). This offers several opportunities for advancements in understanding and quantification of coastal hazard and risk at large scale. For example, it offers the potential to better characterize areas of erosion or accretion, and reproduce the occurrence of past flooding events. Its extended time span (50 years, with its extension up to 70 years ongoing) offers opportunities for attribution studies in view of climate change.
The model and hindcast presented advance the state of the art on global scale coastal hydrodynamic modelling, and the limitations discussed represent some key challenges for future further improvement. Some of them can only be addressed in the long term, by running new models on different meshes using improved approaches. Among these, the obvious need for even higher spatial resolution to reproduce the coastal interaction between circulation and waves, the need of even better atmospheric forcing for more accurate modelling TCs and extremes in general, and the use of 3D circulation models to better reproduce ocean circulation.
Other limitations can be partially overcome via Machine-Learning post-processing of our results. Noticeably, the systematic errors of key quantities, such as SSH and H\({}_{\text{s}}\), can be reduced via bias correction versus long-term altimetric observations, similar to what is done with simulated climate variables by general circulation and regional climate models (e.g. Lange, 2019). The non-linear tide-surge interactions can be inferred from model runs that consider both, and our results of SSH can be corrected accordingly (Arns et al., 2020). Separate, high-resolution models can be run for TCs, for a better characterization of extreme conditions at lower latitudes, as done by (Vousdoukas et al., 2018). Wave runup can be parameterized using empirical approaches such as (Stockdon et al., 2006). These and other improvements will be the subject of follow-up research and will contribute to the creation of an even more accurate global database for coastal applications and engineering.
### Acknowledgements
The authors acknowledge Xavier Bertin for his useful suggestions and careful review of the manuscript.
|
2308.06488 | Generating Faithful Text From a Knowledge Graph with Noisy Reference
Text | Knowledge Graph (KG)-to-Text generation aims at generating fluent
natural-language text that accurately represents the information of a given
knowledge graph. While significant progress has been made in this task by
exploiting the power of pre-trained language models (PLMs) with appropriate
graph structure-aware modules, existing models still fall short of generating
faithful text, especially when the ground-truth natural-language text contains
additional information that is not present in the graph. In this paper, we
develop a KG-to-text generation model that can generate faithful
natural-language text from a given graph, in the presence of noisy reference
text. Our framework incorporates two core ideas: Firstly, we utilize
contrastive learning to enhance the model's ability to differentiate between
faithful and hallucinated information in the text, thereby encouraging the
decoder to generate text that aligns with the input graph. Secondly, we empower
the decoder to control the level of hallucination in the generated text by
employing a controllable text generation technique. We evaluate our model's
performance through the standard quantitative metrics as well as a
ChatGPT-based quantitative and qualitative analysis. Our evaluation
demonstrates the superior performance of our model over state-of-the-art
KG-to-text models on faithfulness. | Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, Yuan-Fang Li | 2023-08-12T07:12:45Z | http://arxiv.org/abs/2308.06488v1 | # Generating Faithful Text From a Knowledge Graph
###### Abstract
Knowledge Graph (KG)-to-Text generation aims at generating fluent natural-language text that accurately represents the information of a given knowledge graph. While significant progress has been made in this task by exploiting the power of pre-trained language models (PLMs) with appropriate graph structure-aware modules, existing models still fall short of generating faithful text, especially when the ground-truth natural-language text contains additional information that is not present in the graph. In this paper, we develop a KG-to-text generation model that can generate faithful natural-language text from a given graph, in the presence of noisy reference text. Our framework incorporates two core ideas: Firstly, we utilize contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text, thereby encouraging the decoder to generate text that aligns with the input graph. Secondly, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique. We evaluate our model's performance through the standard quantitative metrics as well as a ChatGPT-based quantitative and qualitative analysis. Our evaluation demonstrates the superior performance of our model over state-of-the-art KG-to-text models on faithfulness.
## 1 Introduction
A knowledge graph (KG) is a structured representation of information as a network of interconnected real-world entities and relationships. The task of KG-to-text generation has been proposed Ribeiro et al. (2020); Koncel-Kedziorski et al. (2019) to make this structured information more accessible to humans, aiming to generate fluent, informative, and faithful natural-language sentences that describe the contents of an input KG. Recently, this task has played a significant role in a variety of applications such as knowledge-grounded dialogue generation Zhou et al. (2018); Zhao et al. (2020), story generation Guan et al. (2019); Ji et al. (2020), event narration Colas et al. (2021), and question-answering Agarwal et al. (2021); Chen et al. (2023); Saxena et al. (2020).
Significant progress has been made in the KG-to-text generation task by utilizing a set of Transformer-based Vaswani et al. (2017) pre-trained language models (PLMs) such as BART Lewis et al. (2019), T5 Raffel et al. (2020) or GPT Radford et al. (2019) with appropriate graph structure-aware modules Ke et al. (2021); Colas et al. (2022); Han and Shareghi (2022). However, ensuring the faithfulness of KG-to-text generation, i.e. reducing hallucinations Ji et al. (2022); Wang et al. (2022); Raunak et al. (2021); Rebuffel et al. (2022), is an under-explored problem, and existing KG-to-text models fall short of generating faithful text when the ground-truth text of the training dataset contains wrong or extra information that is not consistent with the input.
Figure 1 shows an example of a small KG about a house, which contains information on its internal features and neighborhood, and the corresponding ground-truth reference text, from a real-world real-estate KG Das et al. (2021). The ground-truth text, while summarizing the features of the house accurately, also mentions some information that is not available in the input KG (i.e. extrinsic hallucination, highlighted in red).
When a KG-to-text model is trained with such hallucinated reference text, it is likely to produce text that is also hallucinated. This hallucination problem significantly reduces the faithfulness and thus trustworthiness of the generated text. Thus, the ability to reduce hallucination in the presence of noisy reference text is important for the practical application of KG-to-text and other NLG techniques, especially in mission- and safety-critical domains such as medical diagnostics and scientific research.
A number of techniques have been proposed Ji et al. (2022) to control this hallucination problem in abstractive summarization, table-to-text generation, generative question-answering, neural machine translation, and knowledge-grounded dialogue generation Wang et al. (2022); Tang et al. (2022); Rebuffel et al. (2022); Krishna et al. (2021); Zhou et al. (2021); Zhang et al. (2022). However, to the best of our knowledge, controlling hallucination in graph-to-text generation with noisy reference text has not been investigated.
In this paper, we propose a novel framework to address this important and practical problem. Our framework combines contrastive learning technique and controllable text generation. Contrastive learning enables the model to distinguish between faithful and hallucinated text and guides the decoder to generate faithful text instead of hallucinated text. The controllable text generation technique learns the level of hallucination from noisy training text and controls (i.e. minimizes) the level of hallucinated information in the generated text. Our framework can be employed in any KG-to-Text encoder-decoder model to generate faithful natural language text from a given KG, in the presence of noisy reference text.
Our contributions are as follows:
* We propose a framework to deal with the hallucination problem in KG-to-text generation task. Our framework comprises two core ideas: (i) Employing contrastive learning to enable the KG-to-text generation model to better differentiate between faithful and hallucinated information in the reference text and guide the decoder to generate text that is faithful to KG. (ii) Controlling the level of hallucination while generating text from KG using a controllable text generation technique.
* We conduct experiments and evaluate performance using a standard quantitative analysis with automatic metrics. Our comprehensive evaluation on two noisy datasets demonstrates the superior performance of our proposed model over the state-of-art KG-to-text generation models on faithfulness metrics.
* We further propose and perform novel ChatGPT-based quantitative and qualitative evaluations to assess the performance of our model more comprehensively. The evaluation also shows our model's effectiveness in generating faithful text over existing KG-to-text generation models.
## 2 Related Work
### Knowledge Graph-to-Text Generation
KG-to-text generation techniques Koncel-Kedziorski et al. (2019); Guo et al. (2020); Ribeiro et al. (2020); Chen et al. (2020) utilize graph neural networks Velickovic et al. and graph Transformers Vaswani et al. (2017) to effectively encode a graph's structural information. With the rapid advancement of pre-trained language models (PLMs) Lewis et al. (2019); Raffel et al. (2020); Radford et al. (2019), researchers have started adapting and fine-tuning these models to KG-to-text generation tasks and obtained better results compared to previous models Ribeiro et al. (2021); Chen et al. (2020); Kale and Rastogi (2020). Recently, researchers further improved the KG-to-text models' performance by integrating pre-trained language models with appropriate graph-structure-aware modules Ke et al. (2021); Colas et al. (2022) and employing some graph masking pre-training tasks Ke et al. (2021); Han and Shareghi (2022).
However, we have empirically observed that although these state-of-the-art KG-to-text generation models Ke et al. (2021); Colas et al. (2022); Han and Shareghi (2022) introduce graph-aware encoders and/or apply graph masking pre-training strategies to enhance graph-text alignment, these models still struggle with hallucination when trained with noisy ground-truth text.
### Controlling Hallucinations in Text Generation
This hallucination problem is well explored in other natural language generation tasks such as table-to-text generation, summarization, dialogue generation, question-answering, and neural machine translation. Planning Su et al. (2021) or skeleton-based methods Wang et al. (2021), a joint learning strategy Xu et al. (2021), a Bayes training framework Tian et al. (2019), a table-text optimal-transport matching strategy Wang et al. (2020), and the control token approach Filippova (2020) are widely used in controlling hallucinations in table-to-text generation tasks. Most recently, Rebuffel et al. (2022) proposed a multi-branch decoder approach to control hallucination at decoding time in this area.
Prior works have also focused on minimizing hallucinations in summarization, dialogue generation, question-answering and neural machine translation areas. Some of the recent hallucination mitigation techniques are based on control token approach (Filippova, 2020; Rashkin et al., 2021; Wang et al., 2022), contrastive learning approach (Cao and Wang, 2021; Tang et al., 2022), generate then-refine strategy (Dziri et al., 2021), a routing transformer based approach (Krishna et al., 2021) and self-training of neural machine translation based approach (Zhou et al., 2021). To the best of our knowledge, no work has been done in graph-to-text generation tasks with hallucinated ground-truth text.
### Evaluation using ChatGPT
Large language models such as ChatGPT have recently been employed for evaluating the quality and factual consistency of the generated text in NLP tasks with respect to the source input through ranking, rating, and entailment inference (Kocmi and Federmann, 2023; Wang et al., 2023; Luo et al., 2023). Luo et al. (2023) closely investigated ChatGPT's ability under a zero-shot setting with three factual consistency evaluation tasks: binary entailment inference, summary ranking, and consistency rating. Experimental findings show that ChatGPT generally performs better than previous evaluation metrics across the three tasks, demonstrating its significant potential for factual consistency evaluation. However, they also point out some limitations of ChatGPT such as its preference for lexical similarity over semantic entailment, false reasoning, and poor understanding of instructions. Moreover, while these approaches can compute an overall faithfulness score of the output text, they fall short in terms of explaining the score, e.g. by quantifying the amount of hallucination (out of all the output facts, how many are hallucinated?), precision (out of all the output facts, how many are input facts?) and recall (out of all the input facts, how many appear in the output?). In this work, we use ChatGPT to quantify each of these values and obtain a finer-grained explanation of what a faithfulness score entails.
## 3 Proposed Model
### Problem Formulation
Let \(G\) = \((V,E)\) represent a knowledge graph, where \(V=\{e_{1},e_{2},\ldots,e_{|V|}\}\) represents the entity set and \(E=\{r_{ij}\}\subseteq V\times V\) represents the relations connecting the entities, the task of KG-to-text aims to generate a passage of text \(\hat{Y}\) = \((y_{1},y_{2},\ldots,y_{n})\), that faithfully represents the information contained in a graph \(G\). The model is given a training set \(\mathcal{D}=\{(G_{i},Y_{i})\}\), in which the reference text \(Y_{i}\) may contain _hallucinated_ information.
### Our Framework
Standard fine-tuning approaches use a cross-entropy loss to maximize the similarity between the ground-truth text and the output text. Thus, if the ground-truth text contains hallucination, the model trained through fine-tuning also learns to generate hallucinated text. To overcome this hallucination problem, we introduce an effective fine-tuning approach that combines a contrastive loss function and a controllable text generation technique with the cross-entropy loss function. As a result, our method can train a KG-to-text generation model to generate faithful text from a KG.

Figure 1: A sample knowledge graph for the House dataset with its ground-truth text. The red colored text in the ground-truth text represents extrinsic hallucination information.
Figure 2 depicts the overall architecture of our proposed model. The following two subsections illustrate our two proposed techniques in detail.
### Minimizing Hallucinations with Contrastive Learning
Contrastive learning is a popular and effective representation learning method. Originally proposed for computer vision tasks Khosla et al. (2020); Yang et al. (2022), contrastive learning has been successfully applied to learn representations for sentences/documents Gao et al. (2021); Zhang et al. (2021), abstractive summarization Liu and Liu (2021); Cao and Wang (2021); Wan and Bansal (2022) and dialogue generation Tang et al. (2022); Dziri et al. (2022); Geng et al. (2022). Inspired by them, we have utilized this learning framework to reduce hallucinations while generating text from knowledge graphs. It enables the model to differentiate between faithful information and hallucinated information in the text, which then assists the decoder in generating text that should be free of hallucinations.
For an input pair of a graph and an anchor reference text \((G_{i},Y_{i})\) from the training data \(\mathcal{D}\), \(P_{i}\) represents a set of positive samples and \(N_{i}\) represents a set of hallucinated summaries (i.e. negative samples). The contrastive learning objective function is formulated as follows in Equation 1:
\[L_{CL}=-\sum_{(G_{i},Y_{i})\in\mathcal{D}}\sum_{Y_{j}\in P_{i}}\log\frac{\exp( \cos(h_{i},h_{j}))}{\sum\limits_{Y_{k}\in N_{i}}\exp(\cos(h_{i},h_{k}))} \tag{1}\]
Here, \(Y_{j}\) is a positive sample from the set \(P_{i}\), \(Y_{k}\) is a negative sample from the set \(N_{i}\), and \(h_{i}\), \(h_{j}\), \(h_{k}\) are the BART decoder representations of \(Y_{i}\), \(Y_{j}\), and \(Y_{k}\) respectively.
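A minimal PyTorch-style sketch of Equation 1 for a single anchor is given below; here `h_anchor`, `h_pos` and `h_neg` stand for the decoder representations \(h_{i}\), \(h_{j}\) and \(h_{k}\), and how these representations are pooled from the decoder states is left unspecified:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h_anchor, h_pos, h_neg):
    """Contrastive loss of Eq. (1) for one anchor.

    h_anchor: (d,)   anchor representation
    h_pos:    (P, d) positive-sample representations
    h_neg:    (N, d) negative-sample representations
    """
    pos_sim = F.cosine_similarity(h_anchor.unsqueeze(0), h_pos, dim=-1)  # (P,)
    neg_sim = F.cosine_similarity(h_anchor.unsqueeze(0), h_neg, dim=-1)  # (N,)
    # The denominator of Eq. (1) sums over the negative samples only.
    log_denom = torch.logsumexp(neg_sim, dim=0)
    return -(pos_sim - log_denom).sum()
```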
This contrastive objective function encourages the model to learn a preference for positive (faithful) summaries over negative (hallucinated) ones. While the ground-truth text in the training data \(\mathcal{D}\) is noisy, it is reasonable to assume that each reference text is more faithful to the paired graph than a randomly sampled text from \(\mathcal{D}\). Based on this observation, we carefully select the positive and negative samples to ensure the effectiveness of our contrastive learning technique.
**Positive sample construction.** Back-translation Mallinson et al. (2017) is an effective approach for preserving meanings and providing linguistic diversity. Hence, we use NLPAug Ma (2019) to translate each anchor text to German and back to English and take the translated text as a positive sample for the anchor passage.
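Such a back-translation step can be sketched with NLPAug roughly as follows; the specific translation models and argument names are assumptions that may need to be adapted to the installed library version:

```python
import nlpaug.augmenter.word as naw

# English -> German -> English round trip (model names are illustrative).
back_translation = naw.BackTranslationAug(
    from_model_name="facebook/wmt19-en-de",
    to_model_name="facebook/wmt19-de-en",
)

anchor_text = "A renovated three-bedroom home close to schools and transport."
positive_sample = back_translation.augment(anchor_text)
```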
**Negative sample construction.** For the anchor text of a given graph, we treat the text of any other graph in \(\mathcal{D}\) as a potential negative sample. We randomly select four such texts to construct \(N\) for each anchor text. Dataset-specific knowledge can be easily incorporated in this approach to improve the quality of contrastive learning. For the House dataset, we adopt a simple heuristic for constructing the negative sample set (see the sketch below). Here, we give more importance to the six major features of a house graph: (1) house location (2) house address (3) number of bedrooms (4) number of bathrooms (5) number of parking spaces, and (6) house property type. If all of these major features of a house differ from the anchor house, then the house's paired text is selected as a negative sample for the anchor house. We choose these six features as major features because information on these features is available in almost every house (91%) in the training set.
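The House-specific heuristic can be sketched as follows; the feature names, record fields and sampling routine are illustrative rather than the exact implementation:

```python
import random

MAJOR_FEATURES = ["location", "address", "bedrooms",
                  "bathrooms", "parking_spaces", "property_type"]

def is_valid_negative(anchor_graph, candidate_graph):
    # A candidate qualifies only if every major feature differs from the anchor.
    return all(anchor_graph[f] != candidate_graph[f] for f in MAJOR_FEATURES)

def sample_negatives(anchor, corpus, k=4):
    candidates = [s for s in corpus
                  if is_valid_negative(anchor["graph"], s["graph"])]
    return [s["text"] for s in random.sample(candidates, k)]
```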
### Controlling Hallucinations with Control Feature Token
In contrastive learning, we use the ground-truth reference text as a positive sample. As the ground-truth text contains hallucinations, when training with contrastive learning for generating text, the output text still contains some hallucinations. Thus, we employ a controllable text generation approach to further enhance the faithfulness of our model. Specifically, we append controllable features to the input graph in training in order to control the level of hallucination in the generated text.
**Control feature token.** Control feature token is a hallucination measure that quantifies how much the given ground-truth text is faithful to the source graph. We linearize the knowledge graph Chen et al. (2020) into a list of verbalized triples and employ BARTScore Yuan et al. (2021) as the measure of faithfulness between the linearized graph and the corresponding ground-truth text, as it has been shown that it is closely associated with human evaluations of faithfulness Yuan et al. (2021).

Figure 2: The overall framework of our KG-to-text model.
**Controllable generation.** According to the BARTScore of the training samples, we split the samples into three buckets, where each bucket contains a list of training samples at a specific range of BARTScore. This range is chosen in a manner that ensures each bucket contains approximately an equal number of samples. These three buckets are represented using the following hallucination tags, \(Hal_{tag}\)={\(Hal_{low}\), \(Hal_{medium}\) and \(Hal_{high}\)} following existing work Filippova (2020); Zhang et al. (2022). At training time, we append the corresponding hallucination tag to the input sample according to its BARTScore. These three hallucination tags represent the three control feature tokens that act as a special input to control the level of hallucination during text generation.
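The bucketing step can be sketched as below; `bart_score` is a placeholder for scoring a linearized graph against its reference text (e.g. via the public BARTScore implementation), and the tag strings are illustrative:

```python
import numpy as np

# Ordered from most to least hallucinated; a lower BARTScore means the
# reference text is less faithful to the linearized graph.
TAGS = ["<hal_high>", "<hal_medium>", "<hal_low>"]

def assign_control_tokens(samples, bart_score):
    """samples: list of dicts with 'linearized_graph' and 'reference_text'."""
    scores = np.array([bart_score(s["linearized_graph"], s["reference_text"])
                       for s in samples])
    lo, hi = np.quantile(scores, [1 / 3, 2 / 3])  # roughly equal-sized buckets
    for s, score in zip(samples, scores):
        bucket = 0 if score < lo else (1 if score < hi else 2)
        # The hallucination tag is prepended to the model input for training.
        s["model_input"] = TAGS[bucket] + " " + s["linearized_graph"]
    return samples
```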
Figure 3 illustrates the fine-tuning process with the control tokens. Let \(G\) and \(Y\) = \((y_{1},y_{2},\dots,y_{n})\) be the input sample graph and its corresponding reference text, and \(H\) be the hallucination tag (i.e. control feature token) for this input sample. Formally, we define the objective function of our fine-tuning strategy with the control token as follows:
\[L_{CE\_CtrlTok}=-\sum_{i=1}^{n}\log P(y_{i}|y_{<i},G,H) \tag{2}\]
Thus, during training, the model learns the mapping between the graph-text pair (\(G\), \(Y\)) and its corresponding control token \(H\). The model then becomes an expert at evaluating samples according to the control token. At inference time, the control token is set to the desired hallucinated value i.e., low (\(Hal_{low}\)) to generate faithful text from the KG.
**The overall training objective** of our proposed model is the sum of the contrastive loss and the cross-entropy loss with the control token:
\[L=L_{CL}+L_{CE\_CtrlTok} \tag{3}\]
Thus, during training, instead of blindly following the ground-truth text, the model gives more focus on the faithful parts of the text instead of the hallucinated ones. Moreover, the decoder is encouraged to generate text by minimizing hallucinations through controlled measures.
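Putting the two pieces together, one training step can be sketched roughly as follows; the call signatures are schematic (in practice they correspond to the underlying BART/T5 forward pass), and `decoder_representation` is a hypothetical helper that pools the decoder hidden states of a given target text:

```python
def training_step(model, batch, contrastive_loss, decoder_representation):
    """One combined update over a single training sample/batch."""
    # Cross-entropy over the reference text, with the control token already
    # prepended to the linearized graph (Eq. 2).
    loss_ce = model(input_ids=batch["graph_with_tag"], labels=batch["reference"]).loss

    # Decoder representations of the anchor, positive and negative texts (Eq. 1).
    h_anchor = decoder_representation(model, batch["graph_with_tag"], batch["reference"])
    h_pos = decoder_representation(model, batch["graph_with_tag"], batch["positives"])
    h_neg = decoder_representation(model, batch["graph_with_tag"], batch["negatives"])
    loss_cl = contrastive_loss(h_anchor, h_pos, h_neg)

    # Overall objective (Eq. 3).
    return loss_cl + loss_ce
```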
## 4 Experiments
### Dataset
We conduct experiments and evaluation on two KG-to-text generation datasets: the House dataset Das et al. (2021) about real-estate house listing and the GenWiki dataset Jin et al. (2020). In both datasets, the ground-truth text contains a significant amount of hallucinated information, making the task of generating faithful text especially challenging. Thus, these datasets are the most appropriate to evaluate the performance of our proposed model. Table 1 shows the statistics of these two datasets in detail. Note that we use the "fine" version Jin et al. (2020) of GenWiki.
**House.** The dataset is prepared from the large real-estate and POI datasets of Melbourne, Australia Das et al. (2021). It includes \(53,220\) records of house sales transactions from \(2013\) to \(2015\). It consists of three types of points of interest (POIs), namely regions, schools, and train stations, along with their corresponding features. Every sample in the dataset includes a ground-truth advertisement text describing the features of the house. However, the given ground-truth text contains a significant level of hallucinated information.
**GenWiki.** It is a large-scale non-parallel Colas et al. (2021) dataset prepared by matching Wikipedia articles with DBpedia entities Jin et al. (2020).
| Dataset | #Relations | #KG-Text Pairs (Train / Valid / Test) |
| --- | --- | --- |
| House | 68 | 33K / 10K / 10,219 |
| GenWiki\({}_{\text{FINE}}\) | 287 | 750K / 7,152 / 1,000 |

Table 1: Statistics of the datasets, including the total number of relations and the data split.
Figure 3: Controllable Text generation with Control Feature Token
### Baseline Models
We evaluate the performance of our proposed model against graph-to-text generation models that are based on an encoder-decoder architecture. On the House dataset, we choose three state-of-the-art models: the JointGT model (Ke et al., 2021) that jointly learns the graph structure and text; GAP (Colas et al., 2022) that is aware of the graph structure; and GMP (Han and Shareghi, 2022), a self-supervised graph masking pre-training model. On the GenWiki dataset, we compare the results of the following models: the state-of-the-art unsupervised model CycleGT (Guo et al., 2020) for the GenWiki dataset, the JointGT (T5) model (Ke et al., 2021) and GMP (Han and Shareghi, 2022). Note that in addition to the existing state-of-the-art model, GMP, we also include CycleGT as it has the best reported performance on the GenWiki dataset.
### Experimental Settings
We adopt JointGT (Ke et al., 2021) as our base model for fine-tuning. JointGT is initialized with Hugging Face's pre-trained BART-base checkpoint1 for the House dataset. For the GenWiki dataset, the model is initialized with Hugging Face's pre-trained T5-base checkpoint2. We select the pre-trained LM BART-base or T5-base in order to make a fair comparison with the baseline models.
Footnote 1: [https://huggingface.co/facebook/bart-base](https://huggingface.co/facebook/bart-base)
Footnote 2: [https://huggingface.co/t5-base](https://huggingface.co/t5-base)
JointGT is pre-trained with the KGTEXT dataset (Chen et al., 2020). For contrastive learning, we use two positive samples and four negative samples for each training sample. For the House dataset, we fine-tune our model for 5 epochs; for the GenWiki dataset, we fine-tune our model for 4000 steps. The batch size is set to \(32\). The maximum length of linearized input graphs is 600 tokens and the maximum length of text sequences is set to 128 tokens. We adopt Adam (Kingma and Ba, 2015) as the optimizer and set the learning rate to 3e-5. We used one A40 48GB GPU and one A10 24GB GPU for the experiments.
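For reference, the hyper-parameters above can be collected into a single configuration sketch (values taken from this section; everything else about the training loop is omitted):

```python
config = {
    "house":   {"base_model": "facebook/bart-base", "epochs": 5},
    "genwiki": {"base_model": "t5-base", "train_steps": 4000},
    "batch_size": 32,
    "max_graph_length": 600,   # linearized input graph tokens
    "max_text_length": 128,    # output text tokens
    "optimizer": "Adam",
    "learning_rate": 3e-5,
    "contrastive_samples": {"positives": 2, "negatives": 4},
}
```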
### Main Results
We use automatic metrics to measure both fluency and faithfulness of generated text. Following existing KG-to-text work, we employ standard metrics BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004). These metrics are usually used to measure accuracy and fluency of the generated text with respect to the ground-truth text. However, as the ground-truth text contains hallucinations, we cannot verify the faithfulness of the generated text by comparing with these metrics. Thus, we use BARTScore (Yuan et al., 2021) and FactCC (Kryscinski et al., 2020) for comparing the generated text with the linearized input graph for measuring faithfulness.
**House Dataset**

| Model | BLEU \(\uparrow\) | METEOR \(\uparrow\) | ROUGE-L \(\uparrow\) | BARTScore \(\uparrow\) | FactCC \(\uparrow\) |
| --- | --- | --- | --- | --- | --- |
| Ground-truth text (5K samples) | - | - | - | -4.564 | 48.48 |
| JointGT (Ke et al., 2021) | **3.61** | 11.96 | **18.62** | -3.685 | 49.53 |
| GAP (Colas et al., 2022) | 3.47 | **12.05** | 18.16 | -3.666 | 52.71 |
| GMP (Han and Shareghi, 2022) | 3.09 | 10.73 | 16.23 | -3.941 | 48.47 |
| **Our Full Model** | 2.54 | 11.06 | 16.86 | **-3.245** | **63.61** |
| Control token only | 2.88 | 11.2 | 17.35 | -3.567 | 52.97 |
| Contrastive learning only | 2.56 | 11.04 | 16.89 | -3.247 | 63.04 |

**GenWiki Dataset**

| Model | BLEU \(\uparrow\) | METEOR \(\uparrow\) | ROUGE-L \(\uparrow\) | BARTScore \(\uparrow\) | FactCC \(\uparrow\) |
| --- | --- | --- | --- | --- | --- |
| Ground-truth text (5K samples) | - | - | - | -3.464 | 53.80 |
| CycleGT (Guo et al., 2020) | **41.59** | **35.72** | **63.31** | -3.276 | 76.86 |
| JointGT (Ke et al., 2021) | 37.93 | 32.60 | 59.06 | -2.299 | 79.94 |
| GMP (Han and Shareghi, 2022) | 35.43 | 32.68 | 57.63 | **-1.601** | 76.62 |
| **Our Full Model** | 37.48 | 32.70 | 60.40 | -2.182 | **82.85** |
| Control token only | 37.01 | 32.38 | 59.57 | -2.268 | 81.98 |
| Contrastive learning only | 35.19 | 31.33 | 57.89 | -2.309 | 81.48 |

Table 2: Results on the **House** and **GenWiki** datasets. BLEU, METEOR and ROUGE-L are computed against the ground-truth text; BARTScore and FactCC are computed against the linearized graph. We use BART-base for the House dataset and T5-base for the GenWiki dataset. **Bold** fonts denote the best results.
These two metrics have been widely used for measuring faithfulness in other NLP tasks (Tang et al., 2022; Gao and Wan, 2022; Cao and Wang, 2021; van der Poel et al., 2022).
The faithfulness of the reference text of the House dataset and the GenWiki dataset is also reported in Table 2, as measured by BARTScore and FactCC score. As can be seen, the reference text of both datasets contains significant amounts of hallucination (low BARTScore and FactCC scores).
Table 2 presents the results on the House and GenWiki datasets. From the results on the House dataset, we can observe that our full model achieves the best results on the faithfulness measures (i.e. when compared with the linearized graph), outperforming the best baseline models on BARTScore and FactCC by 0.421 and 10.9 absolute points respectively. The performance delta on the GenWiki dataset is smaller, where our model achieves the best performance on FactCC (an improvement of 1.55 points) and the second-best performance on BARTScore. We posit that the larger performance delta on the House dataset is due to it being significantly noisier, as evidenced by its lower BARTScore and FactCC scores.
For BLEU, METEOR and ROUGE-L, the baseline models perform modestly better than our model when comparing with the ground-truth text. This result is expected and reasonable as compared with our model, the other models tend to generate text with higher similarity with the ground-truth text, resulting in higher values as measured by these metrics. At the same time, due to the noisy nature of the reference text, a high similarity also indicates high hallucination, as discussed above.
In Section 4.5 below, we further measure the faithfulness and fluency of generated text with ChatGPT as the oracle, where we demonstrate that our model achieves superior faithfulness while maintaining fluency.
Table 3 shows a sample ground-truth text and the text generated by different models, where correct facts are highlighted in blue and hallucinated text is highlighted in red. More examples can be found in Appendix C.
### ChatGPT-based Evaluation
We propose to utilize ChatGPT to further measure the factual consistency and fluency of the generated text with respect to the input graph. We randomly sample 50 houses from the House test set, and perform evaluation on the text generated by different models.
To measure **fluency**, similar to (Wang et al., 2023), we prompt ChatGPT to score the fluency of the generated text. To measure **factual consistency**, we carefully design prompts to instruct ChatGPT to enumerate facts in the (linearized) graph (_# input facts_), the common facts between the graph and generated text (_# common facts_), and the hallucinated facts in the generated text (_# hallucinated facts_), respectively. By enumerating facts that are correctly generated, missing, or hallucinated, our ChatGPT-based evaluation provides better explainability of models' faithfulness. Details and examples of our prompts and ChatGPT's responses can be found in Appendix A.
In addition to enumerating the facts, ChatGPT-based evaluation provides a way to measure quantitative metrics such as precision, recall, and hallucination rates. We randomly sample \(50\) graph-text pairs from the test House dataset, and measure the precision (P), recall (R) and amount of hallucination (H) in generated text of these samples, which are formulated as follows: \(P=\frac{\#\text{common facts}}{\#\text{output facts}}\), \(R=\frac{\#\text{common facts}}{\#\text{input facts}}\), and \(H=\frac{\#\text{hallucinated facts}}{\#\text{output facts}}\).
The number of output facts (_# output facts_) is computed by summing up the number of hallucinated facts (_# hallucinated facts_) and the number of common facts (_# common facts_).
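Given the fact counts returned by ChatGPT for each sample, the three quantities can be computed directly, for example:

```python
def faithfulness_metrics(n_input_facts, n_common_facts, n_hallucinated_facts):
    """Precision, recall and hallucination rate from the ChatGPT fact counts."""
    n_output_facts = n_common_facts + n_hallucinated_facts
    precision = n_common_facts / n_output_facts if n_output_facts else 0.0
    recall = n_common_facts / n_input_facts if n_input_facts else 0.0
    hallucination = n_hallucinated_facts / n_output_facts if n_output_facts else 0.0
    return precision, recall, hallucination
```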
Figure 4 shows the results of this analysis. It can be seen that our model outperforms all the baseline KG-to-text generation models on precision, recall and faithfulness (i.e. low hallucination) and achieves competitive scores in terms of fluency.
Figure 4: ChatGPT-based evaluation on \(50\) samples from the House test set.

To determine the gap between our model and the most capable language models, we also compare our model with ChatGPT on a set of 1,000 random samples from the House dataset in different settings. A comprehensive analysis of this experiment is presented in Appendix B. As can be expected, ChatGPT achieves significantly better performance in faithfulness in the zero-shot setting. However, when given noisy ground-truth text as few-shot examples, ChatGPT generates hallucinated text similar to the ground-truth text, showing that it is also prone to noise in the reference text. Our model outperforms ChatGPT in this (3-shot) setting in terms of precision and hallucination (i.e., lower hallucination).
### Ablation Studies
To investigate the effect of contrastive learning and control token techniques individually, we experiment on both datasets with two configurations of our full model: one with control token only and the other one with contrastive learning only.
As we see in Table 2, both model components contribute to our model's better faithfulness, with contrastive learning making a larger impact on the House dataset.
## 5 Conclusion
In this paper, we have proposed a novel approach to generate faithful text from a knowledge graph with noisy ground-truth text. To ensure faithful text generation, we have introduced two key ideas: (i) contrastive learning to better differentiate between faithful and hallucinated information, and (ii) a control token to regulate the level of hallucination in the generated text. Experimental results on two noisy KG-to-text datasets demonstrate that a KG-to-text model with our framework outperforms all the baseline models in terms of faithfulness metrics. Moreover, we have proposed a novel ChatGPT-based evaluation technique for an in-depth quantitative and qualitative analysis, which further verifies the superior performance of our model on precision, recall and faithfulness.
**Ground-truth text:** **Exceptional potential with elevated outlook:** Set on the high side in a wonderfully quiet and convenient cul-de-sac location, this clinker-brick home provides an ideal opportunity to renovate and extend, rebuild or redevelop (STCA) and capture the city skyline views. Immediately comfortable, the home currently offers five bedrooms (two with built-in robes) and central bathroom, two living areas and well-appointed kitchen. Other features include heating and cooling, expansive backyard, plus garage and carport via full-length side driveway. Close to Rosehill Road/Hoffmans Road buses, shops and cafes, walking distance to St Bernard’s College and Rosehill Secondary College. Currently tenanted at $1653.00pm. Land size 15.8m x 42.1m, 665sqm (approx). This is the one!

**JointGT model:** **AUCTION THIS SATURDAY AT 11:30AM:** In one of Niddrie’s most sought-after pockets, this solid-brick home presents an exciting opportunity to renovate and extend (STCA) on a deep block of land (approx 690sqm). Current floorplan includes five bedrooms (all with built-in robes) and central bathroom, formal lounge through to the kitchen/meals area, as well as a separate toilet. Other features of the home include gas heating and split-system air conditioning, plus a north-facing backyard complete with double garage (remote-controlled

**GAP model:** **Spacious family home in a sought-after Niddrie location!** Set on a large block of land (approx 697sqm), this solid-brick home presents an exciting opportunity to renovate, extend or redevelop (STCA). Current floorplan includes five bedrooms (two with BIRs) and central bathroom, open-plan living/meals area and updated kitchen with stainless steel appliances. Other features include split-system heating/cooling, polished floorboards, polished hardwood floors and a double garage with internal access. Close to schools, shops and public transport, it’s a must-see

**GMP model:** **Exceptional family living in a prime Niddrie location:** In a quiet cul-de-sac close to schools, shops and transport, this impressive home offers an immediately comfortable family lifestyle with plenty of scope to further improve and extend (STCA). Beyond its attractive facade, the home’s inviting interior comprises five bedrooms (all with built-in robes) and stylish central bathroom, spacious lounge through to the kitchen/meals area and adjoining family room. Other highlights include ducted heating and evaporative cooling, polished hardwood floors, plus a double garage (with

**Our model:** **SOLD by Frank Dowling Real Estate:** Located in a quiet pocket of Niddrie, this immaculately presented family home is sure to impress. Comprising of 5 good sized bedrooms with BIRs, master with WIR and walk in robe, central bathroom, open plan living and dining area, central kitchen with stainless steel appliances, family bathroom and separate toilet. Other features include gas heating, evaporative cooling, split system air conditioner, double garage with internal access and a large rear yard with rear access. Close to schools, shops and transport.

Table 3: An example of ground-truth and generated text on the House dataset. Here red colored text represents hallucinated information and blue colored text represents the faithful information.
**Limitation and Future work** We have applied our proposed framework only to PLM-based KG-to-text encoder-decoder models. In the future, we plan to explore the hallucination problem in AMR (Abstract Meaning Representation) graph datasets, which also preserve a number of meaningful semantic relations and are widely used in NLP.
## Ethical Considerations
Our model utilizes existing pre-trained language model based KG-to-text generation model, thus the ethical concerns associated with these models would also be applicable to our proposed framework.
## Acknowledgments
This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) under agreement number HR0011-22-2-0047. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.
|
2305.16667 | Hopf Monads on Biproducts | A Hopf monad, in the sense of Brugui\`eres, Lack, and Virelizier, is a
special kind of monad that can be defined for any monoidal category. In this
note, we study Hopf monads in the case of a category with finite biproducts,
seen as a symmetric monoidal category. We show that for biproducts, a Hopf
monad is precisely characterized as a monad equipped with an extra natural
transformation satisfying three axioms, which we call a fusion invertor. We
will also consider three special cases: representable Hopf monads, idempotent
Hopf monads, and when the category also has negatives. In these cases, the
fusion invertor will always be of a specific form that can be defined for any
monad. Thus in these cases, checking that a monad is a Hopf monad is reduced to
checking one identity. | Masahito Hasegawa, Jean-Simon Pacaud Lemay | 2023-05-26T06:31:37Z | http://arxiv.org/abs/2305.16667v2 | # Hopf Monads on Biproducts
###### Abstract
A Hopf monad, in the sense of Bruguieres, Lack, and Virelizier, is a special kind of monad that can be defined for any monoidal category. In this note, we study Hopf monads in the case of a category with finite biproducts, seen as a symmetric monoidal category. We show that for biproducts, a Hopf monad is precisely characterized as a monad equipped with an extra natural transformation satisfying three axioms, which we call a fusion invertor. We will also consider three special cases: representable Hopf monads, idempotent Hopf monads, and when the category also has negatives. In these cases, the fusion invertor will always be of a specific form that can be defined for any monad. Thus in these cases, checking that a monad is a Hopf monad is reduced to checking one identity.
**Acknowledgements.** For this research, the first named author was supported by JSPS KAKENHI Grant No. JP18K11165 and JP21K11753. The second named author was financially supported by a JSPS Postdoctoral Fellowship, Award #: P21746.
## 1 Introduction
Hopf monads were originally introduced by Bruguieres and Virelizier in [3] for autonomous categories, and later generalized by Bruguieres, Lack, and Virelizier in [2] to arbitrary monoidal categories. The name comes from the fact that a Hopf monad is a generalization of a Hopf algebra (whose antipode is invertible). Hopf monads have the ability to lift many desirable monoidal related structures. Indeed, for a monoidal closed category, Hopf monads are precisely the kinds of monads that lift the monoidal closed structure to their Eilenberg-Moore category [2, Theorem 3.6], and similarly for autonomous categories [2, Theorem 3.10]. Furthermore, for star autonomous categories or traced monoidal categories, we can precisely characterize which Hopf monads lift star autonomous structure [5] or traced monoidal structure [6]. The purpose of this note is to discuss Hopf monads on a category with finite biproducts, viewed as a symmetric monoidal category.
For an arbitrary category \(\mathbb{X}\), we denote a monad as a triple \((\mathsf{T},\mu,\eta)\) where \(\mathsf{T}:\mathbb{X}\xrightarrow{}\mathbb{X}\) is the endofunctor, \(\mu_{A}:\mathsf{TT}(A)\xrightarrow{}\mathsf{T}(A)\) is the multiplication, and \(\eta_{A}:A\xrightarrow{}\mathsf{T}(A)\) is the unit. For a monoidal category \(\mathbb{X}\), with monoidal product \(\otimes\) and monoidal unit \(I\), a **bimonad**[2, Section 2.4] (also sometimes called an opmonoidal monad [7] or a comonoidal monad [5]) is a monad \((\mathsf{T},\mu,\eta)\) which comes equipped with a natural transformation \(\mathsf{m}_{A,B}:\mathsf{T}(A\otimes B)\xrightarrow{}\mathsf{T}(A)\otimes \mathsf{T}(B)\) and a map \(\mathsf{m}_{I}:\mathsf{T}(I)\xrightarrow{}I\) such that \(\mathsf{T}\) is a comonoidal functor, and \(\mu\) and \(\eta\) are both comonoidal natural transformations. For any bimonad, there are two canonical natural transformations, which can always be defined, called the **fusion operators**[2, Section 2.6]. The **left fusion operator**\(\mathsf{h}^{I}_{A,B}:\mathsf{T}(A\otimes T(B))\xrightarrow{}\mathsf{T}(A) \otimes\mathsf{T}(B)\) and the **right fusion operator**\(\mathsf{h}^{r}_{A,B}:\mathsf{T}(\mathsf{T}(A)\otimes B)\xrightarrow{} \mathsf{T}(A)\otimes\mathsf{T}(B)\) are respectively defined as the following composites:
\[\mathsf{h}^{I}_{A,B}\coloneqq\ \mathsf{T}\left(A\otimes\mathsf{T}(B)\right)\xrightarrow{\mathsf{m}_{A,\mathsf{T}(B)}}\mathsf{T}(A)\otimes\mathsf{T}\mathsf{T}(B)\xrightarrow{1_{\mathsf{T}(A)}\otimes\mu_{B}}\mathsf{T}(A)\otimes\mathsf{T}(B) \tag{1}\]
\[\mathsf{h}^{r}_{A,B}\coloneqq\ \mathsf{T}\left(\mathsf{T}(A)\otimes B\right)\xrightarrow{\mathsf{m}_{\mathsf{T}(A),B}}\mathsf{T}\mathsf{T}(A)\otimes\mathsf{T}(B)\xrightarrow{\mu_{A}\otimes 1_{\mathsf{T}(B)}}\mathsf{T}(A)\otimes\mathsf{T}(B) \tag{2}\]
A **Hopf monad**[2, Section 2.7] is a bimonad whose fusion operators are natural isomorphisms, so we have that:
\[\mathsf{T}(A\otimes\mathsf{T}(B))\cong\mathsf{T}(A)\otimes\mathsf{T}(B)\cong \mathsf{T}(\mathsf{T}(A)\otimes B)\]
For a symmetric monoidal category, a **symmetric bimonad**[7, Section 3] is a bimonad that is also compatible with the natural symmetry isomorphism \(\sigma_{A,B}:A\otimes B\xrightarrow{}B\otimes A\). For a symmetric bimonad, the fusion operators can be defined from one another using the symmetry: \(\mathsf{h}^{r}_{A,B}=\sigma_{\mathsf{T}(B),\mathsf{T}(A)}\circ\mathsf{h}^{I}_{B,A}\circ\mathsf{T}(\sigma_{\mathsf{T}(A),B})\). Therefore for a symmetric bimonad, the invertibility of one fusion operator implies the invertibility of the other [6, Lemma 6.5]. So for a symmetric bimonad, we may speak of _the_ fusion operator \(\mathsf{h}_{A,B}:\mathsf{T}(A\otimes\mathsf{T}(B))\xrightarrow{}\mathsf{T}(A)\otimes\mathsf{T}(B)\), and define a **symmetric Hopf monad**[6, Definition 6.4] to be a symmetric bimonad whose fusion operator \(\mathsf{h}\) is a natural isomorphism. The main example of (symmetric) Hopf monads are those of the form \(\mathsf{T}(-)=H\otimes-\) where \(H\) is a (cocommutative) Hopf monoid with an invertible antipode [2, Example 2.10], and these Hopf monads are called _representable_ Hopf monads [2, Section 5].
Before we discuss Hopf monads for biproducts, let us first discuss what one can say about Hopf monads for products and coproducts. Starting with products, let \(\mathbb{X}\) be a category with finite products, with binary product \(\times\), projections \(\pi_{i}:A_{1}\times\ldots\times A_{n}\xrightarrow{}A_{i}\), pairing operator \(\langle-,\ldots,-\rangle\), and terminal object \(*\). Every monad \((\mathsf{T},\mu,\eta)\) on a category \(\mathbb{X}\) with finite products has a unique bimonad structure with respect to the Cartesian monoidal structure [7, Example 4.9] where \(\mathsf{m}_{A,B}:\mathsf{T}(A\times B)\xrightarrow{}\mathsf{T}(A)\times \mathsf{T}(B)\) is defined as \(\mathsf{m}_{A,B}:=\left\langle\mathsf{T}(\pi_{1}),\mathsf{T}(\pi_{2})\right\rangle\) and \(\mathsf{m}_{*}:\mathsf{T}(*)\xrightarrow{}*\) is defined as the unique map to the terminal object. In fact, this gives a symmetric bimonad structure. So the fusion operator \(\mathsf{h}_{A,B}:\mathsf{T}(A\times T(B))\xrightarrow{}\mathsf{T}(A)\times \mathsf{T}(B)\) is worked out to be:
\[\mathsf{h}_{A,B}=\left\langle\mathsf{T}(\pi_{1}),\mu_{B}\circ\mathsf{T}(\pi_{ 2})\right\rangle\]
or in other words, using the universal property of the product, the fusion operator is the unique map satisfying \(\pi_{1}\circ\mathsf{h}_{A,B}=\mathsf{T}(\pi_{1})\) and \(\pi_{2}\circ\mathsf{h}_{A,B}=\mu_{B}\circ\mathsf{T}(\pi_{2})\).
So for a category with finite products, we can say that a Hopf monad is a monad \((\mathsf{T},\mu,\eta)\) such that the fusion operator \(\mathsf{h}\) as defined above is a natural isomorphism. However, not much can be necessarily said about the form of the inverse of the fusion operator \(\mathsf{h}^{-1}_{A,B}:\mathsf{T}(A)\times\mathsf{T}(B)\xrightarrow{}\mathsf{ T}\left(A\times\mathsf{T}(B)\right)\).
On the other hand, what about the coproduct case? So now let \(\mathbb{X}\) be a category with finite coproducts, with binary coproduct \(\oplus\), injections \(\iota_{j}:A_{j}\xrightarrow{}A_{1}\oplus\ldots\oplus A_{n}\), copairing operator \([-,\ldots,-]\), and initial object \(\mathsf{0}\). Not every monad \((\mathsf{T},\mu,\eta)\) on a category \(\mathbb{X}\) with finite coproducts will have a bimonad structure with respect to the coCartesian monoidal structure. So we must ask that our monad \((\mathsf{T},\mu,\eta)\) be a symmetric bimonad with structure maps \(\mathsf{m}_{A,B}:\mathsf{T}(A\oplus B)\xrightarrow{}\mathsf{T}(A)\oplus\mathsf{T}(B)\) and \(\mathsf{m}_{0}:\mathsf{T}(\mathsf{0})\xrightarrow{}\mathsf{0}\). In this case, not much can be said about the fusion operator \(\mathsf{h}_{A,B}:\mathsf{T}(A\oplus\mathsf{T}(B))\xrightarrow{}\mathsf{T}(A)\oplus\mathsf{T}(B)\). Instead, if we have a symmetric Hopf monad on a category with finite coproducts, we can say something about the inverse of the fusion operator since it is of type \(\mathsf{h}^{-1}_{A,B}:\mathsf{T}(A)\oplus\mathsf{T}(B)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\). Since \(\mathsf{h}^{-1}_{A,B}\circ(\eta_{A}\oplus 1_{\mathsf{T}(B)})=\eta_{A\oplus\mathsf{T}(B)}\)[5, Lemma 4.2], by pre-composing with the injection \(\iota_{2}\) and then using the naturality of \(\eta\), it follows that: \(\mathsf{h}^{-1}_{A,B}\circ\iota_{2}=\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\).
\[\mathsf{h}^{-1}_{A,B}:=[\mathsf{h}^{*}_{A},\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}]\]
for some unique map \(\mathsf{h}^{*}_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\); that is, \(\mathsf{h}^{-1}_{A,B}\) is the unique map satisfying \(\mathsf{h}^{-1}_{A,B}\circ\iota_{1}=\mathsf{h}^{*}_{A}\) and \(\mathsf{h}^{-1}_{A,B}\circ\iota_{2}=\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\).
Therefore, for coproducts, a symmetric Hopf monad can be characterized in terms of the existence of a natural transformation \(\mathsf{h}^{\wedge}_{A,B}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\) such that \(\mathsf{h}_{A,B}\circ\mathsf{h}^{\wedge}_{A,B}=\iota_{1}\) and \([\mathsf{h}^{\wedge}_{A,B},\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}]\circ\mathsf{h}_{A,B}=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}\).
To recap, for products we have a full description of the fusion operator \(\mathsf{h}\) but not the inverse \(\mathsf{h}^{-1}\), while for coproducts we can't say much more on the form of the fusion operator \(\mathsf{h}\) but know that the second argument of the inverse of the fusion operator \(\mathsf{h}^{-1}\) must always be of a specific form. Since a biproduct is both a product and a coproduct, we can combine both observations to obtain a characterization of Hopf monads on a category with finite biproducts. Furthermore, by naturality and using zero maps, it follows that \(\mathsf{h}^{\wedge}_{A,B}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\) is completely determined by the case where \(B\) is the zero object \(\mathsf{0}\). The main objective of this paper is to explain how a Hopf monad on a category with finite biproducts is precisely a monad \((\mathsf{T},\mu,\eta)\) which comes equipped with an extra natural transformation \(\mathsf{h}^{\wedge}_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(\mathsf{0})\right)\) satisfying three extra axioms, which we call a **fusion invertor**. We will also explain how in the cases of a representable Hopf monad, an idempotent Hopf monad, or in a setting with negatives, the fusion invertor is always of a specific form and how in these cases, checking that one has a Hopf monad is reduced to checking one identity.
Lastly, before diving into this story, let us quickly discuss lifting again. A category with finite biproducts seen as a monoidal category is closed if and only if it is trivial, that is, all objects are zero objects. So using Hopf monads to lift (compact) closed or (star) autonomous structure in the biproduct setting is not interesting. On the other hand, it is possible to have a trace operator for biproducts. Per [6, Corollary 7.10], for a traced coCartesian monoidal category, a Hopf monad lifts the trace operator if and only if the monad is an idempotent monad. As such, the same is true in the biproduct setting. At first glance, it may seem this clashes with another result which says that representable Hopf monads always lift trace operators [6, Proposition 7.3]. However, in a traced Cartesian monoidal category, the only Hopf monoid is the terminal object [8, Theorem 3.7], which means the only possible representable Hopf monad is the identity monad, which is trivially idempotent. Of course, for a category with finite biproducts, there can be non-representable idempotent Hopf monads, which in a traced setting would lift the trace operator.
## 2 Hopf Monads for Biproducts
Let us reformulate the story of Hopf monads given in the introduction but this time described fully in terms of biproduct structure and its induced additive structure. For a category \(\mathbb{X}\) with finite biproducts, we denote the biproduct of a family of \(n\) objects as \(A_{1}\oplus\ldots\oplus A_{n}\) with injections \(\iota_{j}:A_{j}\xrightarrow{}A_{1}\oplus\ldots\oplus A_{n}\) and projection \(\pi_{j}:A_{1}\oplus\ldots\oplus A_{n}\xrightarrow{}A_{j}\), and denote the zero object as \(\mathsf{0}\). For the induced commutative monoid enrichment, we denote the addition of parallel maps \(f:A\xrightarrow{}B\) and \(g:A\xrightarrow{}B\) as \(f+g\) and the zero maps as \(0:A\xrightarrow{}B\). Lastly, recall that we denote a monad on a category \(\mathbb{X}\) as a triple \((\mathsf{T},\mu,\eta)\) where \(\mathsf{T}:\mathbb{X}\xrightarrow{}\mathbb{X}\) is the endofunctor, and \(\mu_{A}:\mathsf{TT}(A)\xrightarrow{}\mathsf{T}(A)\) and \(\eta_{A}:A\xrightarrow{}\mathsf{T}(A)\) are the natural transformations.
As explained in the introduction, for a category with finite (bi)products, every monad always has a unique symmetric bimonad structure with respect to the (bi)product. Therefore, in the case of (bi)products we can define fusion operators and Hopf monads simply in terms of a monad. In the case of biproducts, the fusion operator can be defined in terms of a sum.
**Definition 2.1**: _For a monad \((\mathsf{T},\mu,\eta)\) on a category \(\mathbb{X}\) with finite biproducts, the **fusion operator** is the natural transformation \(\mathsf{h}_{A,B}:\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\xrightarrow{} \mathsf{T}(A)\oplus\mathsf{T}(B)\) defined as follows:_
\[\mathsf{h}_{A,B}=\iota_{1}\circ\mathsf{T}(\pi_{1})+\iota_{2}\circ\mu_{B} \circ\mathsf{T}(\pi_{2}) \tag{3}\]
_A **Hopf monad** on a category \(\mathbb{X}\) with finite biproducts is a monad \((\mathsf{T},\mu,\eta)\) on \(\mathbb{X}\) whose fusion operator \(\mathsf{h}\) is a natural isomorphism, so in particular \(\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\cong\mathsf{T}(A)\oplus\mathsf{T} (B)\)._
A list of identities that the fusion operator satisfies can be found in [2, Proposition 2.6], and a list of identities that the inverse of the fusion operator satisfies can be found in [5, Lemma 4.2]. Here are some examples of Hopf monads on biproducts.
**Example 2.2**: Let \(\mathsf{CMON}\) be the category of commutative monoids (written additively) and monoid morphisms between them. \(\mathsf{CMON}\) is a category with finite biproducts where the biproduct is given by the Cartesian product, \(M\oplus N=M\times N\), with the monoid structure given pointwise, and where the zero object is the singleton \(\mathsf{0}=\{0\}\). For any Abelian group \(G\), \(\mathsf{T}(-):=G\oplus-\) is a Hopf monad where the monad structure is given by:
\[\mu_{M}:G\oplus M\oplus M\xrightarrow{}M \eta_{M}:M\xrightarrow{}G\oplus M\]
\[\mu_{M}(g,h,m) =(g+h,m) \eta_{M}(m)=(0,m)\]
and where the fusion operator and its inverse are:
\[\mathsf{h}_{M,N}:G\oplus M\oplus G\oplus N \xrightarrow{}G\oplus M\oplus G\oplus N \mathsf{h}_{M,N}^{-1}:G\oplus M\oplus G\oplus N \xrightarrow{}G\oplus M\oplus G\oplus N\] \[\mathsf{h}_{M,N}(g,m,h,n) =(g,m,g+h,n) \mathsf{h}_{M,N}^{-1}(g,m,h,n) =(g,m,h-g,n)\]
This is an example of a _representable_ Hopf monad, and can be generalized to any category with finite biproducts, which we discuss in Section 4.
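As a quick sanity check of these formulas, one can verify on a concrete choice of groups (here \(G=M=N=\mathbb{Z}\)) that the two maps are mutually inverse; the snippet below is only a numerical illustration of the algebra above:

```python
import random

def fusion(g, m, k, n):       # h_{M,N} on G + M + G + N
    return (g, m, g + k, n)

def fusion_inv(g, m, k, n):   # proposed inverse of h_{M,N}
    return (g, m, k - g, n)

for _ in range(1000):
    x = tuple(random.randint(-50, 50) for _ in range(4))
    assert fusion(*fusion_inv(*x)) == x
    assert fusion_inv(*fusion(*x)) == x
```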
**Example 2.3**: Let \(\mathsf{Ab}\) be the category of Abelian groups and group morphisms between them. \(\mathsf{Ab}\) is a category with finite biproducts, with the same biproduct structure as \(\mathsf{CMON}\). Let \(\mathbb{Z}\) be the ring of integers and let \(\mathbb{Z}_{2}\) be the ring of integers modulo \(2\). Then \(\mathsf{T}(G):=\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G\) (where \(\otimes_{\mathbb{Z}}\) is the tensor product of Abelian groups/\(\mathbb{Z}\)-modules) is a Hopf monad where the monad structure is given by:
\[\mu_{G}:\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\mathbb{Z}_{2}\otimes_ {\mathbb{Z}}G \xrightarrow{}\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G \eta_{G}:G\xrightarrow{}\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G\] \[\mu_{G}\left(x\otimes y\otimes g\right) =xy\otimes g \eta_{G}(g)=1\otimes g\]
and where the fusion operator and its inverse are:
\[\mathsf{h}_{G,H}:\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\left(G\oplus(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}H)\right)\xrightarrow{}(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G)\oplus(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}H)\qquad\mathsf{h}_{G,H}\left(x\otimes(g,y\otimes h)\right)=(x\otimes g,xy\otimes h)\]
\[\mathsf{h}_{G,H}^{-1}:(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G)\oplus(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}H)\xrightarrow{}\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\left(G\oplus(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}H)\right)\qquad\mathsf{h}_{G,H}^{-1}(x\otimes g,y\otimes h)=x\otimes(g,0)+y\otimes(0,y\otimes h)\]
We leave it as an exercise for the reader to check for themselves that \(\mathsf{h}_{G,H}^{-1}\) is indeed well-defined and is the inverse of \(\mathsf{h}_{G,H}\). The two reasons that \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) is a Hopf monad follows from the fact that the biproduct \(\oplus\) distributes over the tensor product \(\otimes_{\mathbb{Z}}\), and that \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\mathbb{Z}_{2}\cong\mathbb{Z}_{2}\) via \(x\otimes y\mapsto xy\) and \(x\mapsto x\otimes x\). This latter fact actually implies that \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) is an _idempotent_ Hopf monad, meaning in particular that \(\mathsf{TT}(G)\cong\mathsf{T}(G)\), which we will discuss more of in Section 5. It is worth noting that \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) is also a (symmetric) Hopf monad with respect to the tensor product \(\otimes_{\mathbb{Z}}\), and thus gives a (non-trivial) example of a monad which is a Hopf monad with respect to two different monoidal structures. However, we stress that while \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) is a representable Hopf monad for the tensor product \(\otimes_{\mathbb{Z}}\), it is not a representable Hopf monad for the biproduct \(\oplus\) (since it is not of the form \(G\oplus-\)). This example generalizes to any monoidal category with finite biproducts such that the monoidal product \(\otimes\) distributes over the biproduct \(\oplus\). Then by taking any _solid monoid_[4, Definition 3.1]\(M\) with respect to the monoidal product \(\otimes\) (i.e. a monoid whose multiplication is an isomorphism, so \(M\otimes M\cong M\)), \(\mathsf{T}(-)=M\otimes-\) is a Hopf monad with respect to the biproduct \(\oplus\). Other examples of solid monoids in \(\mathsf{Ab}\) include \(\mathbb{Z}\), the ring of rationals \(\mathbb{Q}\), and the ring of \(p\)-adic integers for any prime \(p\) (see [4] for a full list of all solid monoids in \(\mathsf{Ab}\)).
**Example 2.4**: Here is a simple example that is neither representable nor idempotent. Consider the category \(\mathsf{CMON}\times\mathsf{CMON}\), which has finite biproduct structure given pointwise. Then for any Abelian group \(G\), \(\mathsf{T}(M,N):=(G\oplus M,\mathsf{0})\) is a Hopf monad where the Hopf monad structure in the first argument is the same as in Example 2.2, while the Hopf monad structure in the second argument is trivially all zero maps.
Let us now discuss what we can say about the form of the inverse of the fusion operator. For a Hopf monad on a category with finite biproducts, by the couniversal property of the coproduct, \(\mathsf{h}_{A,B}^{-1}:\mathsf{T}(A)\oplus\mathsf{T}(B)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\) is completely determined by its precomposition with the injection maps: \(\mathsf{h}_{A,B}^{-1}\circ\iota_{1}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\) and \(\mathsf{h}_{A,B}^{-1}\circ\iota_{2}:\mathsf{T}(B)\xrightarrow{}\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\). As explained in the introduction, for coproducts \(\mathsf{h}_{A,B}^{-1}\circ\iota_{2}\) can always be shown to be \(\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\). Here we will provide an alternative proof using directly the biproduct structure. On the other hand, for coproducts we cannot say much about \(\mathsf{h}_{A,B}^{-1}\circ\iota_{1}\). For biproducts, however, we can show \(\mathsf{h}_{A,B}^{-1}\circ\iota_{1}\) is completely independent of the \(B\) term.
**Lemma 2.5**: _Let \((\mathsf{T},\mu,\eta)\) be a Hopf monad on a category \(\mathbb{X}\) with finite biproducts. Then for every pair of objects \(A,B\in\mathbb{X}\), the following identities hold: \(\mathsf{h}^{-1}_{A,B}\circ\iota_{1}=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{-1}_{A,0}\circ\iota_{1}\) and \(\mathsf{h}^{-1}_{A,B}\circ\iota_{2}=\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\)._
Proof: The first identity follows from the naturality of the inverse of the fusion operator and the injection:
\[\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A,0}^{-1}\circ\iota_{1}=\mathsf{h}_{A,B}^{-1}\circ\left(\mathsf{T}\left(1_{A}\right)\oplus\mathsf{T}(0)\right)\circ\iota_{1}=\mathsf{h}_{A,B}^{-1}\circ\iota_{1}\]
For the second identity, we first compute the following:
\[\mathsf{h}_{A,B}\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}=\left(\iota_{1}\circ\mathsf{T}(\pi_{1})+\iota_{2}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\right)\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\]
\[=\iota_{1}\circ\mathsf{T}(\pi_{1})\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}+\iota_{2}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\]
\[=\iota_{1}\circ\mathsf{T}(0)\circ\eta_{\mathsf{T}(B)}+\iota_{2}\circ\mu_{B}\circ\eta_{\mathsf{T}(B)}\]
\[=\iota_{1}\circ\eta_{A}\circ 0+\iota_{2}\qquad\text{(naturality of $\eta$ and $\mu\circ\eta=1$)}\]
\[=0+\iota_{2}\]
\[=\iota_{2}\]
So \(\mathsf{h}_{A,B}\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}=\iota_{2}\). By post-composing by \(\mathsf{h}_{A,B}^{-1}\), we obtain \(\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}=\mathsf{h}_{A,B}^{-1}\circ \iota_{2}\), as desired. \(\Box\)
By the above lemma, it follows that \(\mathsf{h}_{A,B}^{-1}:\mathsf{T}(A)\oplus\mathsf{T}(B)\xrightarrow{}\mathsf{ T}\left(A\oplus\mathsf{T}(B)\right)\) must be of the form:
\[\mathsf{h}_{A,B}^{-1}=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{h}_{A,0}^{-1}\circ\iota_{1}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ \eta_{\mathsf{T}(B)}\circ\pi_{2}\]
Looking more closely, every part on the right side can be defined for any monad except for \(\mathsf{h}_{A,0}^{-1}\), meaning that the only unfixed part of \(\mathsf{h}_{A,B}^{-1}\) is completely determined by \(\mathsf{h}_{A,0}^{-1}\circ\iota_{1}\). Therefore it follows that a monad is a Hopf monad if and only if there exists a map of type \(\mathsf{h}_{A}^{\circ}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)\) satisfying the necessary identities such that:
\[\mathsf{h}_{A,B}^{-1}:=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{h}_{A}^{\circ}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T }(B)}\circ\pi_{2}\]
is the inverse of the fusion operator \(\mathsf{h}\). We call such a natural transformation \(\mathsf{h}^{\circ}\) the _fusion invertor_, since it is used in the construction of the inverse of the fusion operator, and define it below.
## 3 Fusion Invertor
In this section, we introduce the main novel concept of this note, which is a fusion invertor, and prove our main result that for biproducts, a monad is a Hopf monad if and only if it has a fusion invertor.
**Definition 3.1**: _A **fusion invertor** for a monad \((\mathsf{T},\mu,\eta)\) on a category \(\mathbb{X}\) with finite biproducts is a natural transformation \(\mathsf{h}_{A}^{\circ}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)\) such that:_
* \(\mathsf{T}(\pi_{1})\circ\mathsf{h}_{A}^{\circ}=1_{\mathsf{T}(A)}\)__
* \(\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}_{A}^{\circ}=0\)__
* _For every object_ \(B\)_,_ \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A}^{\circ} \circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ \mu_{B}\circ\mathsf{T}(\pi_{2})=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}\)__
As we will see in the proof of Proposition 3.4 below, the three axioms of a fusion invertor are precisely what is required to construct an inverse for the fusion operator. Indeed, **[FI.1]** and **[FI.2]** are used to show that \(\mathsf{h}\circ\mathsf{h}^{-1}=1\), while **[FI.3]** will be used to prove that \(\mathsf{h}^{-1}\circ\mathsf{h}=1\). A useful identity to have is the special case of **[FI.3]** when \(B=\mathsf{0}\) is the zero object.
**Lemma 3.2**: _For a \((\mathsf{T},\mu,\eta)\) monad with a fusion invertor \(\mathsf{h}_{A}^{\circ}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)\) on a category \(\mathbb{X}\) with finite biproducts, for every object \(A\), the following equality holds:_
* \(\mathsf{h}_{A}^{\circ}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})=1_{\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)}\)__
Proof: This immediately follows from the fact that for the zero object \(\mathsf{0}\), the identity map is equal to the zero map: \(0=1_{\mathsf{0}}\). So **[FI.3.0]** is simply rewriting **[FI.3]** for the case \(B=\mathsf{0}\), using that \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)=1_{\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)}\). \(\Box\)
In particular, **[FI.3.0]** will be very useful in showing that in special cases, the fusion invertor must always be of a specific form. We can also use **[FI.3.0]** to show that a fusion invertor, if one exists, is unique, allowing one to speak of _the_ fusion invertor.
**Lemma 3.3**: _A fusion invertor, if it exists, is unique._
Proof: Suppose that a monad \((\mathsf{T},\mu,\eta)\) on a category \(\mathbb{X}\) with finite biproducts has two fusion invertors \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\) and \(\mathsf{h}^{\star}_{A}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\). Then we compute:
\[\mathsf{h}^{\star}_{A} = \left(\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}( \iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})\right) \circ\mathsf{h}^{\star}_{A}\] \[= \mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})\circ\mathsf{h}^{ \star}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ \mathsf{T}(\pi_{2})\circ\mathsf{h}^{\star}_{A}\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ 0\] ( **[FI.1]** and **[FI.2]** for \(\mathsf{h}^{\star}\)) \[= \mathsf{h}^{\circ}_{A}+0\] \[= \mathsf{h}^{\circ}_{A}\]
Thus \(\mathsf{h}^{\star}_{A}=\mathsf{h}^{\circ}_{A}\). So we conclude that a fusion invertor, if it exists, is unique. \(\Box\)
We now prove the main result of this paper.
**Proposition 3.4**: _A monad on a category with finite biproducts is a Hopf monad if and only if it has a fusion invertor. Explicitly, if \((\mathsf{T},\mu,\eta)\) is a monad on a category \(\mathbb{X}\) with finite biproducts, then:_
1. _If_ \((\mathsf{T},\mu,\eta)\) _is a Hopf monad, then_ \((\mathsf{T},\mu,\eta)\) _has a fusion invertor_ \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\) _defined as the following composite:_ \[\mathsf{h}^{\circ}_{A}:=\mathsf{h}_{A,0}^{-1}\circ\iota_{1}:\mathsf{T}(A)\xrightarrow{\iota_{1}}\mathsf{T}(A)\oplus\mathsf{T}(0)\xrightarrow{\mathsf{h}_{A,0}^{-1}}\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\] (4)
2. _If_ \((\mathsf{T},\mu,\eta)\) _has a fusion invertor_ \(\mathsf{h}^{\circ}\)_, then_ \((\mathsf{T},\mu,\eta)\) _is Hopf monad where_ \(\mathsf{h}^{-1}_{A,B}:\mathsf{T}(A)\oplus\mathsf{T}(B)\to\mathsf{T}\left(A \oplus\mathsf{T}(B)\right)\)_, the inverse of the fusion operator, is defined as follows:_ \[\mathsf{h}^{-1}_{A,B}=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{h}^{\circ}_{A}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T }(B)}\circ\pi_{2}\] (5)
Proof: Suppose that \((\mathsf{T},\mu,\eta)\) is a Hopf monad. By construction \(\mathsf{h}^{\circ}\) is natural, so it remains to show that \(\mathsf{h}^{\circ}\) satisfies the three identities **[FI.1]**, **[FI.2]**, **[FI.3]**. So we compute:
1. \(\mathsf{T}(\pi_{1})\circ\mathsf{h}^{\circ}_{A}=1_{\mathsf{T}(A)}\): \[\begin{aligned}\mathsf{T}(\pi_{1})\circ\mathsf{h}^{\circ}_{A}&=\mathsf{T}(\pi_{1})\circ\mathsf{h}^{-1}_{A,0}\circ\iota_{1}&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\pi_{1}\circ\mathsf{h}_{A,0}\circ\mathsf{h}^{-1}_{A,0}\circ\iota_{1}&&\text{(Def. of }\mathsf{h}\text{)}\\&=\pi_{1}\circ\iota_{1}\\&=1_{\mathsf{T}(A)}\end{aligned}\]
2. \(\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}=0\): \[\begin{aligned}\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}&=\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{-1}_{A,0}\circ\iota_{1}&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\pi_{2}\circ\mathsf{h}_{A,0}\circ\mathsf{h}^{-1}_{A,0}\circ\iota_{1}&&\text{(Def. of }\mathsf{h}\text{)}\\&=\pi_{2}\circ\iota_{1}\\&=0\end{aligned}\]
3. \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{\circ}_{A}\circ \mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B} \circ\mathsf{T}(\pi_{2})=1_{\mathsf{T}(A\oplus\mathsf{T}(B))}\) \[1_{\mathsf{T}(A\oplus\mathsf{T}(B))}= \mathsf{h}^{-1}_{A,B}\circ\mathsf{h}_{A,B}\] \[= \mathsf{h}^{-1}_{A,B}\circ\left(\iota_{1}\circ\mathsf{T}(\pi_{1} )+\iota_{2}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\right)\] (Def. \(\mathsf{h}\)) \[= \mathsf{h}^{-1}_{A,B}\circ\iota_{1}\circ\mathsf{T}(\pi_{1})+ \mathsf{h}^{-1}_{A,B}\circ\iota_{2}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\] \[= \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{-1}_ {A,0}\circ\iota_{1}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\] (Lemma 2.5) \[= \mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2} )\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\] (Def. of \(\mathsf{h}^{\circ}\))
So we conclude that \(\mathsf{h}^{\circ}\) is a fusion invertor.
Conversely, suppose that \((\mathsf{T},\mu,\eta)\) has a fusion invertor \(\mathsf{h}^{\circ}\). We must show that \(\mathsf{h}^{-1}\) is an inverse of \(\mathsf{h}\). Using **[FI.3]**, we first compute that:
\[\mathsf{h}^{-1}_{A,B}\circ\mathsf{h}_{A,B}= \ \ \left(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{h}^{\circ}_{A}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (B)}\circ\pi_{2}\right)\circ\mathsf{h}_{A,B}\] (Def. of \[\mathsf{h}^{-1}\] ) \[= \ \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^ {\circ}_{A}\circ\pi_{1}\circ\mathsf{h}_{A,B}+\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\pi_{2}\circ\mathsf{h}_{A,B}\] \[= \ \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^ {\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T }(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\] (Def. of \[\mathsf{h}\] ) \[= \ 1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}\] ( **[FI.3]** )
So \(\mathsf{h}^{-1}_{A,B}\circ\mathsf{h}_{A,B}=1_{\mathsf{T}\left(A\oplus\mathsf{ T}(B)\right)}\). For the other direction, first recall that in the proof of Lemma 2.5, we computed that:
\[\mathsf{h}_{A,B}\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}=\iota_{2} \tag{6}\]
Using **[FI.1]** and **[FI.2]**, we can also compute that:
\[\mathsf{h}_{A,0}\circ\mathsf{h}^{\circ}_{A}= \ \ \left(\iota_{1}\circ\mathsf{T}(\pi_{1})+\iota_{2}\circ\mu_{0} \circ\mathsf{T}(\pi_{2})\right)\circ\mathsf{h}^{\circ}_{A}\] (Def. of \[\mathsf{h}\] ) \[= \ \iota_{1}\circ\mathsf{T}(\pi_{1})\circ\mathsf{h}^{\circ}_{A}+ \iota_{2}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}\] \[= \ \iota_{1}+\iota_{2}\circ 0\] ( **[FI.1]** and **[FI.2]** ) \[= \ \iota_{1}+0\] \[= \ \iota_{1}\]
So we have that:
\[\mathsf{h}_{A,0}\circ\mathsf{h}^{\circ}_{A}=\iota_{1} \tag{7}\]
Then we compute that:
\[\mathsf{h}_{A,B}\circ\mathsf{h}^{-1}_{A,B}= \ \ \mathsf{h}_{A,B}\circ\left(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0) \right)\circ\mathsf{h}^{\circ}_{A}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta _{\mathsf{T}(B)}\circ\pi_{2}\right)\] (Def. of \[\mathsf{h}^{\circ}\] ) \[= \ \mathsf{h}_{A,B}\circ\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0) \right)\circ\mathsf{h}^{\circ}_{A}\circ\pi_{1}+\mathsf{h}_{A,B}\circ\mathsf{ T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\] \[= \ \ \left(\mathsf{T}\left(1_{A}\right)\oplus\mathsf{T}(0) \right)\circ\mathsf{h}_{A,0}\circ\mathsf{h}^{\circ}_{A}\circ\pi_{1}+\mathsf{ h}_{A,B}\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\] (Nat. of \[\mathsf{h}\] ) \[= \ \left(1_{\mathsf{T}(A)}\oplus\mathsf{T}(0)\right)\circ\iota_{1} \circ\pi_{1}+\iota_{2}\circ\pi_{2}\] ((6) and (7)) \[= \ \iota_{1}\circ\pi_{1}+\iota_{2}\circ\pi_{2}\] (Nat. of \[\iota_{1}\] ) \[= \ 1_{\mathsf{T}(A)\oplus\mathsf{T}(B)}\] ( \[\iota_{1}\circ\pi_{1}+\iota_{2}\circ\pi_{2}=1\] )
So \(\mathsf{h}_{A,B}\circ\mathsf{h}^{-1}_{A,B}=1_{\mathsf{T}(A)\oplus\mathsf{T}(B)}\). So we conclude that the fusion operator is a natural isomorphism, and therefore that \((\mathsf{T},\mu,\eta)\) is a Hopf monad. \(\Box\)
We conclude this section by considering the fusion invertors for the Hopf monad examples in Section 2.
**Example 3.5**: For any Abelian group \(G\) and the induced Hopf monad \(\mathsf{T}(-):=G\oplus-\) on \(\mathsf{CMON}\) (Example 2.2), the fusion invertor \(\mathsf{h}^{\circ}_{M}:G\oplus M\xrightarrow{}G\oplus M\oplus G\oplus \mathsf{0}\) is defined as follows:
\[\mathsf{h}^{\circ}_{M}(g,m)=(g,m,-g,0)\]
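As a quick check of the first two axioms in element notation (identifying \(\mathsf{T}\left(M\oplus\mathsf{T}(0)\right)\) with \(G\oplus M\oplus G\oplus\mathsf{0}\)): \(\mathsf{T}(\pi_{1})(g,m,-g,0)=(g,m)\), which is **[FI.1]**, while
\[\mu_{0}\circ\mathsf{T}(\pi_{2})\left(g,m,-g,0\right)=\mu_{0}(g,-g,0)=(g+(-g),0)=0\]
which is **[FI.2]**; note that it is precisely the additive inverse \(-g\) that makes **[FI.2]** hold.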
In Section 4, we will show that the fusion invertor is always of this form for any representable Hopf monad.
**Example 3.6**: For the Hopf monad \(\mathsf{T}(-):=\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) on \(\mathsf{Ab}\) (Example 2.3), the fusion invertor \(\mathsf{h}^{\circ}_{G}:\mathbb{Z}_{2}\otimes_{\mathbb{Z}}G\xrightarrow{} \mathbb{Z}_{2}\otimes_{\mathbb{Z}}\big{(}G\oplus(\mathbb{Z}_{2}\otimes_{ \mathbb{Z}}\mathsf{0})\big{)}\) is defined as follows:
\[\mathsf{h}^{\circ}_{G}(x\otimes g)=x\otimes(g,0)\]
In fact, the fusion invertor is precisely \(\mathsf{h}^{\circ}_{G}=\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\iota_{1}\). In Section 5, we will show that the fusion invertor is always of this form for any idempotent Hopf monad.
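Concretely, since \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}\mathsf{0}\cong\mathsf{0}\), the first two fusion invertor axioms can be checked directly on generators: \(\mathsf{T}(\pi_{1})\left(x\otimes(g,0)\right)=x\otimes g\), giving **[FI.1]**, while \(\mathsf{T}(\pi_{2})\left(x\otimes(g,0)\right)=x\otimes 0=0\), so that \(\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{G}=0\), giving **[FI.2]**.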
**Example 3.7**: For any Abelian group \(G\) and the Hopf monad \(\mathsf{T}(M,N):=(G\oplus M,\mathsf{0})\) on \(\mathsf{CMON}\times\mathsf{CMON}\), the fusion invertor is given by the pair \(\mathsf{h}^{\circ}_{(M,N)}:=(\mathsf{h}^{\circ}_{M},0)\), where \(\mathsf{h}^{\circ}_{M}:G\oplus M\xrightarrow{}G\oplus M\oplus G\oplus \mathsf{0}\) is the fusion invertor defined in Example 3.5.
Representable Hopf Monads
As discussed in the introduction, the main class of Hopf monads are those induced by Hopf monoids, which are called representable Hopf monads. In an arbitrary symmetric monoidal category, every (cocommutative) bimonoid \(B\) induces a (symmetric) bimonad \(\mathsf{T}(-):=B\otimes-\), while every (cocommutative) Hopf monoid \(H\) induces a (symmetric) Hopf monad \(\mathsf{T}(-):=H\otimes-\), where the inverse of the fusion operator is constructed using the antipode [2, Example 2.10]. In this section, we consider representable Hopf monads in the biproduct case and show that the fusion invertor is always of the same form.
In a category \(\mathbb{X}\) with finite biproducts, recall that every object \(H\) has a canonical and unique bimonoid structure, which is also bicommutative. The multiplication \(H\oplus H\to H\) is the canonical codiagonal of the coproduct, the comultiplication \(H\to H\oplus H\) is the canonical diagonal of the product, while the unit \(\mathtt{0}\to H\) and counit \(H\to\mathtt{0}\) are the zero maps from and to the zero object. As such, we obtain a monad \(\mathsf{T}(-):=H\oplus-\) on \(\mathbb{X}\), where for every object \(A\), \(\mu_{A}:H\oplus H\oplus A\to H\oplus A\) and \(\eta_{A}:A\to H\oplus A\) are respectively defined as follows using the additive structure of the biproduct:
\[\mu_{A}:=\iota_{1}\circ\pi_{1}+\iota_{1}\circ\pi_{2}+\iota_{2}\circ\pi_{3} \eta_{A}:=\iota_{2} \tag{8}\]
Thus the induced fusion operator \(\mathsf{h}_{A,B}:H\oplus A\oplus H\oplus B\xrightarrow{}H\oplus A\oplus H\oplus B\) is given by:
\[\mathsf{h}_{A,B}:=\iota_{1}\circ\pi_{1}+\iota_{2}\circ\pi_{2}+\iota_{3}\circ \pi_{1}+\iota_{3}\circ\pi_{3}+\iota_{4}\circ\pi_{4} \tag{9}\]
In a category \(\mathbb{X}\) with finite biproducts, an object \(H\) is a Hopf monoid if and only if the identity \(1_{H}:H\xrightarrow{}H\) has an additive inverse \(-1_{H}:H\xrightarrow{}H\), that is, \(1_{H}+(-1_{H})=0\). In this case, \(-1_{H}\) is the antipode for the canonical bimonoid structure on \(H\), and using it we may construct the inverse of the fusion operator \(\mathsf{h}_{A,B}^{-1}:H\oplus A\oplus H\oplus B\xrightarrow{}H\oplus A \oplus H\oplus B\) as follows:
\[\mathsf{h}_{A,B}^{-1}:=\iota_{1}\circ\pi_{1}+\iota_{2}\circ\pi_{2}+\iota_{3} \circ(-1_{H})\circ\pi_{1}+\iota_{3}\circ\pi_{3}+\iota_{4}\circ\pi_{4} \tag{10}\]
Using "element notation", one can see that this is indeed a generalization of Example 2.2.
**Lemma 4.1**: _Let \(H\) be an object in a category \(\mathbb{X}\) with finite biproducts such that its identity \(1_{H}:H\xrightarrow{}H\) has an additive inverse \(-1_{H}:H\xrightarrow{}H\). Then for the Hopf monad \(H\oplus-\) on \(\mathbb{X}\), the induced fusion invertor \(\mathsf{h}_{A}^{\circ}:H\oplus A\xrightarrow{}H\oplus A\oplus H\oplus\mathtt{ 0}\) is of the form:_
\[\mathsf{h}_{A}^{\circ}=\iota_{1}\circ\pi_{1}+\iota_{2}\circ\pi_{2}+\iota_{3}\circ(-1_{H})\circ\pi_{1} \tag{11}\]
Proof: By Proposition 3.4, applying (4) to the Hopf monad \(H\oplus-\), we get that the fusion invertor \(\mathsf{h}^{\circ}\) is given by the formula \(\mathsf{h}_{A}^{\circ}:=\mathsf{h}_{A,0}^{-1}\circ(\iota_{1}\circ\pi_{1}+ \iota_{2}\circ\pi_{2})\). Then expanding out the definition of \(\mathsf{h}^{-1}\) and using the biproduct identities that \(\pi_{i}\circ\iota_{j}=0\) if \(i\neq j\) and \(\pi_{i}\circ\iota_{i}=1\), we obtain precisely (11). \(\Box\)
Again, one can use "element notation", to see that this fusion invertor is indeed a generalization of Example 3.5.
## 5 Idempotent Monads
In this section, we consider the case of Hopf monads which are also idempotent monads, since idempotent Hopf monads were of particular interest in [6]. We will show that for an idempotent Hopf monad, the fusion invertor and inverse of the fusion operator are always of a specific form. In fact, we will show that checking that an idempotent monad is a Hopf monad amounts to checking one identity.
Recall that an **idempotent monad**[1, Proposition 4.2.3] on a category \(\mathbb{X}\) is a monad \((T,\mu,\eta)\) on \(\mathbb{X}\) such that the monad multiplication \(\mu_{A}:TT(A)\xrightarrow{}T(A)\) is a natural isomorphism, so \(TT(A)\cong T(A)\) and \(\mu_{A}^{-1}=\eta_{T(A)}=\mathsf{T}(\eta_{A})\). Our first observation is that idempotent monads preserve zero maps and zero objects.
**Lemma 5.1**: _Let \((\mathsf{T},\mu,\eta)\) be an idempotent monad on a category \(\mathbb{X}\) with finite biproducts. Then \(\mathsf{T}\) preserves zero maps, that is, \(\mathsf{T}(0)=0\). Therefore \(\mathsf{T}\) also preserves the zero object._
Proof: Suppose that \((\mathsf{T},\mu,\eta)\) is an idempotent monad. Then for any zero map \(0:A\to B\), we compute:
\[\begin{aligned}\mathsf{T}(0)&=\mathsf{T}(0)\circ\mu_{A}\circ\eta_{\mathsf{T}(A)}&&(\mu\circ\eta=1)\\&=\mathsf{T}\left(0\circ\eta_{A}\right)\circ\mu_{A}\circ\eta_{\mathsf{T}(A)}\\&=\mathsf{T}(0)\circ\mathsf{T}(\eta_{A})\circ\mu_{A}\circ\eta_{\mathsf{T}(A)}\\&=\mathsf{T}(0)\circ\eta_{\mathsf{T}(A)}&&\text{(Idem. monad, so }\mathsf{T}(\eta_{A})\circ\mu_{A}=1\text{)}\\&=\eta_{B}\circ 0&&\text{(Nat. of }\eta\text{)}\\&=0\end{aligned}\]
So \(\mathsf{T}(0)=0\). \(\Box\)
The converse of the above lemma is not necessarily true: there are monads that preserve zero maps but are not idempotent. However, for Hopf monads, preserving zero maps is equivalent to being idempotent. Using this fact, we can show that the fusion invertor \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\) must always be \(\mathsf{T}(\iota_{1}):\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\). In fact, we can also show that if the fusion invertor is of this form, then the Hopf monad must also be an idempotent monad.
**Lemma 5.2**: _Let \((\mathsf{T},\mu,\eta)\) be a Hopf monad on a category \(\mathbb{X}\) with finite biproducts. Then the following are equivalent:_
1. \((\mathsf{T},\mu,\eta)\) _is an idempotent monad;_
2. \(\mathsf{T}\) _preserves zero maps, that is,_ \(\mathsf{T}(0)=0\)_;_
3. _The fusion invertor_ \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\xrightarrow{\mathsf{T}}\left(A\oplus \mathsf{T}(0)\right)\) _is of the form:_ \[\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})\] (12)
Proof: Observe that \((i)\Rightarrow(ii)\) is simply Lemma 5.1. For \((ii)\Rightarrow(iii)\), suppose that \(\mathsf{T}\) preserves zero maps. Using **[FI.3.0]**, we compute that:
\[\begin{aligned}\mathsf{T}(\iota_{1})&=\left(\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})\right)\circ\mathsf{T}(\iota_{1})&&\text{(\textbf{[FI.3.0]})}\\&=\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})\circ\mathsf{T}(\iota_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{1})\\&=\mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(0)\\&=\mathsf{h}^{\circ}_{A}+0&&\text{(By assump. }\mathsf{T}(0)=0\text{)}\\&=\mathsf{h}^{\circ}_{A}\end{aligned}\]
So \(\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})\). For \((iii)\Rightarrow(i)\), suppose that \(\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})\). To prove that \((\mathsf{T},\mu,\eta)\) is an idempotent monad, it suffices to show that \(\eta_{\mathsf{T}(B)}\circ\mu_{B}=1_{\mathsf{TT}(B)}\)[1, Proposition 4.2.3]. First, using **[FI.2]**, we compute the following:
\[\begin{aligned}0&=\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}&&\text{(\textbf{[FI.2]})}\\&=\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{1})&&\text{(By assump. }\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})\text{)}\\&=\mu_{0}\circ\mathsf{T}(0)&&(\pi_{2}\circ\iota_{1}=0)\\&=\mu_{0}\circ\mathsf{T}(\eta_{0}\circ 0)\\&=\mu_{0}\circ\mathsf{T}(\eta_{0})\circ\mathsf{T}(0)\\&=\mathsf{T}(0)&&(\mu\circ\mathsf{T}(\eta)=1)\end{aligned}\]
So \(\mathsf{T}(0)=0\). Then using this and **[FI.3]**, we compute that:
\[\mathsf{T}(\iota_{2}) = \left(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta _{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\right)\circ\mathsf{T}( \iota_{2})\] ( **[FI.3]** ) \[= \left(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta _{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\right)\circ\mathsf{T}( \iota_{2})\] (By assump. \[\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})\] \[= \mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2}) \circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}( \iota_{2})\] (Nat. of \[\iota_{1}\] ) \[= \mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})\circ\mathsf{T}( \iota_{2})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ \mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{2})\] \[= \mathsf{T}(\iota_{1})\circ\mathsf{T}(0)+\mathsf{T}(\iota_{2})\circ \eta_{\mathsf{T}(B)}\circ\mu_{B}\] ( \[\pi_{1}\circ\iota_{2}=0\] ) \[= \mathsf{T}(\iota_{1})\circ 0+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu _{B}\] ( \[\mathsf{T}(0)=0\] ) \[= 0+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\] \[= \mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\]
So we have that \(\mathsf{T}(\iota_{2})=\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\). Post-composing by \(\mathsf{T}(\pi_{2})\), since \(\pi_{2}\circ\iota_{2}=1\), we obtain \(\eta_{\mathsf{T}(B)}\circ\mu_{B}=1_{\mathsf{TT}(B)}\). Therefore we can conclude that \((\mathsf{T},\mu,\eta)\) is an idempotent monad. So we conclude that \((i)\Leftrightarrow(ii)\Leftrightarrow(iii)\). \(\Box\)
It follows that for idempotent Hopf monads, the inverse of the fusion operator is always of the same form as well.
**Corollary 5.3**: _Let \((\mathsf{T},\mu,\eta)\) be an idempotent Hopf monad on a category \(\mathbb{X}\) with finite biproducts. Then the inverse of the fusion operator \(\mathsf{h}_{A,B}^{-1}:\mathsf{T}(A)\oplus\mathsf{T}(B)\to\mathsf{T}\left(A \oplus\mathsf{T}(B)\right)\) is of the form:_
\[\mathsf{h}_{A,B}^{-1}=\mathsf{T}(\iota_{1})\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2} \tag{13}\]
Proof.: By Lemma 5.2, we have that \(\mathsf{T}(0)=0\). This means that for the zero object, we have that \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)=1_{\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)}\). By Lemma 5.2, we also have that \(\mathsf{h}_{A}^{\circ}=\mathsf{T}(\iota_{1})\). So by Proposition 3.4, applying the construction of (5) and rewriting with the previous identities in mind, we obtain precisely (13). \(\Box\)
Of course, for any monad \(\mathsf{T}\) on any category with finite biproducts, the map \(\mathsf{T}(\iota_{1}):\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\) can always be defined. However, while \(\mathsf{T}(\iota_{1})\) always satisfies **[FI.1]** and **[FI.2]**, \(\mathsf{T}(\iota_{1})\) does not in general satisfy **[FI.3]**. So checking that an idempotent monad is a Hopf monad amounts to checking that \(\mathsf{T}(\iota_{1})\) satisfies **[FI.3]**. It turns out that for an idempotent monad, **[FI.3]** can be simplified even further. Therefore, checking that an idempotent monad is a Hopf monad is reduced to checking one identity.
**Proposition 5.4**: _Let \((\mathsf{T},\mu,\eta)\) be an idempotent monad on a category \(\mathbb{X}\) with finite biproducts. Then \((\mathsf{T},\mu,\eta)\) is a Hopf monad if and only if for all pairs of objects \(A,B\in\mathbb{X}\) the following equality holds:_
\[\mathsf{T}\left(1_{A}\oplus 0\right)+\mathsf{T}(0\oplus 1_{\mathsf{T}(B)})=1_{ \mathsf{T}\left(A\oplus\mathsf{T}(B)\right)} \tag{14}\]
Proof.: Suppose that \((\mathsf{T},\mu,\eta)\) is a Hopf monad. By Lemma 5.2, we have that the fusion invertor is of the form \(\mathsf{h}_{A}^{\circ}=\mathsf{T}(\iota_{1})\). Using **[FI.3]**, we compute:
\[\begin{aligned}\mathsf{T}\left(1_{A}\oplus 0\right)+\mathsf{T}(0\oplus 1_{\mathsf{T}(B)})&=\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\mathsf{T}(\pi_{2})&&(\iota_{1}\circ\pi_{1}=1\oplus 0\text{ and }\iota_{2}\circ\pi_{2}=0\oplus 1)\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&\text{(Nat. of }\iota_{1}\text{; idem. monad, so }\eta\circ\mu=1\text{)}\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A}^{\circ}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&(\mathsf{h}_{A}^{\circ}=\mathsf{T}(\iota_{1}))\\&=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}&&\text{(\textbf{[FI.3]})}\end{aligned}\]
So the desired equality (14) holds.
Conversely, suppose that (14) holds. Define the natural transformation \(\mathsf{h}_{A}^{\circ}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(0)\right)\) as \(\mathsf{h}_{A}^{\circ}=\mathsf{T}(\iota_{1})\). We will now show that \(\mathsf{h}_{A}^{\circ}\) satisfies the three identities **[FI.1]**, **[FI.2]**, and **[FI.3]**. So we compute:
* \(\mathsf{T}(\pi_{1})\circ\mathsf{h}_{A}^{\circ}=1_{\mathsf{T}(A)}\) \[\mathsf{T}(\pi_{1})\circ\mathsf{h}_{A}^{\circ}= \mathsf{T}(\pi_{1})\circ\mathsf{T}(\iota_{1})\] (Def. of \(\mathsf{h}^{\circ}\)) \[= 1_{\mathsf{T}(A)}\] ( \[\pi_{1}\circ\iota_{1}=1\] )
* \(\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}_{A}^{\circ}=0\): \[\begin{aligned}\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}_{A}^{\circ}&=\mu_{0}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{1})&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\mu_{0}\circ\mathsf{T}(0)&&(\pi_{2}\circ\iota_{1}=0)\\&=\mu_{0}\circ 0&&\text{(Lem. 5.1)}\\&=0\end{aligned}\]
* \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A}^{\circ}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}\): \[\begin{aligned}&\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A}^{\circ}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}_{A}^{\circ}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\mathsf{T}(\pi_{2})&&\text{(Idem. monad, so }\eta\circ\mu=1\text{)}\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\mathsf{T}(\pi_{2})&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\mathsf{T}(\pi_{2})&&\text{(Nat. of }\iota_{1}\text{)}\\&=\mathsf{T}\left(1_{A}\oplus 0\right)+\mathsf{T}(0\oplus 1_{\mathsf{T}(B)})&&(\iota_{1}\circ\pi_{1}=1\oplus 0\text{ and }\iota_{2}\circ\pi_{2}=0\oplus 1)\\&=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}&&\text{(14)}\end{aligned}\]
Therefore, \(\mathsf{h}^{\circ}\) is a fusion invertor for \((\mathsf{T},\mu,\eta)\). Then by Proposition 3.4, \((\mathsf{T},\mu,\eta)\) is a Hopf monad. \(\Box\)
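For instance, for the idempotent Hopf monad \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) of Example 2.3, equality (14) can be checked directly on generators: for \(x\otimes(a,t)\) with \(a\in A\) and \(t\in\mathbb{Z}_{2}\otimes_{\mathbb{Z}}B\),
\[\mathsf{T}\left(1_{A}\oplus 0\right)\left(x\otimes(a,t)\right)+\mathsf{T}(0\oplus 1_{\mathsf{T}(B)})\left(x\otimes(a,t)\right)=x\otimes(a,0)+x\otimes(0,t)=x\otimes(a,t)\]
by bilinearity of the tensor product, which recovers the fact that \(\mathbb{Z}_{2}\otimes_{\mathbb{Z}}-\) is a Hopf monad.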
## 6 Negatives
In this section, we consider Hopf monads on a category that has finite biproducts and also additive negatives (such a category is sometimes called an additive category). We will show that in a setting with negatives, for any Hopf monad, the fusion invertor and the inverse of the fusion operator are always of a specific form. We will then show that in the presence of negatives, checking that a monad is a Hopf monad is simplified to checking one identity.
In a category \(\mathbb{X}\) with finite biproducts that also has negatives, every homset is an Abelian group, which means that every map \(f:A\to B\) has an additive inverse \(-f:A\to B\), that is, \(f+(-f)=0\). First observe that this implies that every object is a Hopf monoid, which means every object induces a Hopf monad.
**Lemma 6.1**: _In a category \(\mathbb{X}\) with finite biproducts that also has negatives, for every object \(A\), the monad \(\mathsf{T}(-):=A\oplus-\), as defined in Section 4, is a Hopf monad._
Turning our attention from representable Hopf monads to arbitrary Hopf monads, we will now explain how, using negatives, one can show that the fusion invertor is always of a specific form.
**Lemma 6.2**: _Let \((\mathsf{T},\mu,\eta)\) be a Hopf monad on a category \(\mathbb{X}\) with finite biproducts that also has negatives. Then the fusion invertor \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}\left(A\oplus \mathsf{T}(0)\right)\) is of the following form:_
\[\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta _{\mathsf{T}(0)}\circ\mathsf{T}(0) \tag{15}\]
_and the inverse of the fusion operator \(\mathsf{h}^{-1}_{A,B}:\mathsf{T}(A)\oplus\mathsf{T}(B)\xrightarrow{}\mathsf{ T}\left(A\oplus\mathsf{T}(B)\right)\) is of the form:_
\[\mathsf{h}^{-1}_{A,B}=\mathsf{T}(\iota_{1})\circ\pi_{1}-\mathsf{T}(\iota_{2}) \circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\circ\pi_{1}+\mathsf{T}(\iota_{2} )\circ\eta_{\mathsf{T}(B)}\circ\pi_{2} \tag{16}\]
Proof: Using **[FI.3.0]**, we first compute that:
\[\mathsf{T}(\iota_{1}) = \left(\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}( \iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ\mathsf{T}(\pi_{2})\right) \circ\mathsf{T}(\iota_{1})\] \[= \mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})\circ\mathsf{T}( \iota_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mu_{0}\circ \mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{1})\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ\mu_{0}\circ\mathsf{T}(0)\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ\mu_{0}\circ\mathsf{T}(\eta_{0}\circ 0)\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ\mu_{0}\circ\mathsf{T}(\eta_{0})\circ\mathsf{T}(0)\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ\mathsf{T}(0)\] \[= \mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T} (0)}\circ\mathsf{T}(0)\] \[(\mu\circ\mathsf{T}(\eta)=1)\]
So we have that \(\mathsf{h}^{\circ}_{A}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{T}(0)=\mathsf{T}(\iota_{1})\). By subtracting \(\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{T}(0)\) from both sides, we finally obtain that \(\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{T}(0)\) as desired. So by Proposition 3.4, applying the construction of (5), we compute:
\[\mathsf{h}^{-1}_{A,B} = \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{ \circ}_{A}\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi _{2}\] \[= \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\left(\mathsf{ T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0) \right)\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\] \[= \mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{T}( \iota_{1})\circ\pi_{1}-\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ \mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\circ\pi_{1}+ \mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\] \[= \mathsf{T}(\iota_{1})\circ\pi_{1}-\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\mathsf{T}(0)\circ\mathsf{T}(0)\circ\pi_{1}+\mathsf{T}( \iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\] \[= \mathsf{T}(\iota_{1})\circ\pi_{1}-\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\mathsf{T}(0)\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_ {\mathsf{T}(B)}\circ\pi_{2}\] \[= \mathsf{T}(\iota_{1})\circ\pi_{1}-\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\mathsf{T}(0)\circ\pi_{1}+\mathsf{T}(\iota_{2})\circ\eta_{ \mathsf{T}(B)}\circ\pi_{2}\] \[\Box\]
So \(\mathsf{h}^{-1}_{A,B}=\mathsf{T}(\iota_{1})\circ\pi_{1}-\mathsf{T}(\iota_{2}) \circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\circ\pi_{1}+\mathsf{T}(\iota_{2}) \circ\eta_{\mathsf{T}(B)}\circ\pi_{2}\) as desired. \(\Box\)
Observe that for any monad \(\mathsf{T}\) on a category with biproducts that also has negatives, the map \(\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{ T}(0)\) can always be defined. However, while said map will satisfy **[FI.1]** and **[FI.2]**, it may not satisfy **[FI.3]**. So in this setting, checking if a monad is a Hopf monad, one needs only check that said map satisfies **[FI.3]**. It turns out that the equality in question can be simplified even further. Therefore, in the presence of negatives, checking that a monad is a Hopf monad amounts to showing that one equality holds.
**Proposition 6.3**: _Let \((\mathsf{T},\mu,\eta)\) be a monad on a category \(\mathbb{X}\) with finite biproducts that also has negatives. Then \((\mathsf{T},\mu,\eta)\) is a Hopf monad if and only if for all pairs of objects \(A,B\in\mathbb{X}\) the following equality holds:_
\[\mathsf{T}\left(1_{A}\oplus 0\right)+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)} \tag{17}\]
Proof: Suppose that \((\mathsf{T},\mu,\eta)\) is a Hopf monad. Using **[FI.3]**, we compute:
\[\begin{aligned}1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\left(\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{T}(0)\right)\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&\text{(15)}\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})-\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(0)}\circ\mathsf{T}(0)\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\\&=\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&\text{(Nat. of }\iota_{1},\iota_{2},\text{ and }\eta\text{, and }0\circ 0=0\text{)}\\&=\mathsf{T}\left(1_{A}\oplus 0\right)-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&(\iota_{1}\circ\pi_{1}=1_{A}\oplus 0)\end{aligned}\]
So the desired equality (17) holds.
Conversely, suppose that (17) holds. Define the natural transformation \(\mathsf{h}^{\circ}_{A}:\mathsf{T}(A)\to\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)\) as follows:
\[\mathsf{h}^{\circ}_{A}=\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_ {\mathsf{T}(B)}\circ\mathsf{T}(0) \tag{18}\]
We will now show that \(\mathsf{h}^{\circ}_{A}\) satisfies the three identities **[FI.1]**, **[FI.2]**, and **[FI.3]**. So we compute:
1. \(\mathsf{T}(\pi_{1})\circ\mathsf{h}^{\circ}_{A}=1_{\mathsf{T}(A)}\) \[\mathsf{T}(\pi_{1})\circ\mathsf{h}^{\circ}_{A}= \ \mathsf{T}(\pi_{1})\circ\left(\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2}) \circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\right)\] (Def. of \(\mathsf{h}^{\circ}\)) \[= \ \mathsf{T}(\pi_{1})\circ\mathsf{T}(\iota_{1})-\mathsf{T}(\pi_{1}) \circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\] \[= \ 1_{\mathsf{T}(A)}-\mathsf{T}(0)\circ\eta_{\mathsf{T}(B)}\circ \mathsf{T}(0)\qquad\qquad\qquad\qquad(\pi_{1}\circ\iota_{1}=1\text{ and $\pi_{1}\circ\iota_{2}=0$})\] \[= \ 1_{\mathsf{T}(A)}-\eta_{A}\circ 0\circ\mathsf{T}(0)\] (Nat. of \[\eta\] ) \[= \ 1_{\mathsf{T}(A)}-0\] \[= \ 1_{\mathsf{T}(A)}\]
2. \(\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}=0\): \[\begin{aligned}\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{h}^{\circ}_{A}&=\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\left(\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\right)&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{1})-\mu_{B}\circ\mathsf{T}(\pi_{2})\circ\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)&&(\pi_{2}\circ\iota_{1}=0\text{ and }\pi_{2}\circ\iota_{2}=1)\\&=\mu_{B}\circ\mathsf{T}(0)-\mu_{B}\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\\&=\mu_{B}\circ\mathsf{T}\left(\eta_{B}\circ 0\right)-\mathsf{T}(0)&&(\mu\circ\eta=1)\\&=\mathsf{T}(0)-\mathsf{T}(0)&&(\mu\circ\mathsf{T}(\eta)=1)\\&=0\end{aligned}\]
3. \(\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}\): \[\begin{aligned}&\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\mathsf{h}^{\circ}_{A}\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})\\&=\mathsf{T}\left(1_{A}\oplus\mathsf{T}(0)\right)\circ\left(\mathsf{T}(\iota_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)\right)\circ\mathsf{T}(\pi_{1})+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&\text{(Def. of }\mathsf{h}^{\circ}\text{)}\\&=\mathsf{T}(\iota_{1})\circ\mathsf{T}(\pi_{1})-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&\text{(Nat. of }\iota_{1},\iota_{2},\text{ and }\eta\text{, and }0\circ 0=0\text{)}\\&=\mathsf{T}\left(1_{A}\oplus 0\right)-\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mathsf{T}(0)+\mathsf{T}(\iota_{2})\circ\eta_{\mathsf{T}(B)}\circ\mu_{B}\circ\mathsf{T}(\pi_{2})&&(\iota_{1}\circ\pi_{1}=1_{A}\oplus 0)\\&=1_{\mathsf{T}\left(A\oplus\mathsf{T}(B)\right)}&&\text{(17)}\end{aligned}\] Therefore, \(\mathsf{h}^{\circ}\) is a fusion invertor for \((\mathsf{T},\mu,\eta)\), and so by Proposition 3.4, \((\mathsf{T},\mu,\eta)\) is a Hopf monad. \(\Box\)
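For instance, for the representable Hopf monad \(H\oplus-\) of Lemma 6.1, equality (17) can be verified in element notation: on \((h,a,h^{\prime},b)\in H\oplus A\oplus H\oplus B\), the three terms of the left-hand side evaluate to \((h,a,0,0)\), \((0,0,h+h^{\prime},b)\), and \((0,0,h,0)\) respectively, and
\[(h,a,0,0)+(0,0,h+h^{\prime},b)-(0,0,h,0)=(h,a,h^{\prime},b)\]
as required.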
2308.07796 | Research Software Engineering in 2030 | This position paper for an invited talk on the "Future of eScience" discusses
the Research Software Engineering Movement and where it might be in 2030.
Because of the authors' experiences, it is aimed globally but with examples
that focus on the United States and United Kingdom. | Daniel S. Katz, Simon Hettrick | 2023-08-15T14:21:47Z | http://arxiv.org/abs/2308.07796v3 | # Research Software Engineering in 2030
###### Abstract
This position paper for an invited talk on the "Future of eScience" discusses the Research Software Engineering Movement and where it might be in 2030. Because of the authors' experiences, it is aimed globally but with examples that focus on the United States and United Kingdom.
research software, research software engineer, research software engineering
## I Introduction
Research software has existed since at least the early 1950s. Formal software engineering started in the 1960s to understand operational software such as operating systems and NASA flights, and later business software, with research software infrequently the focus of study. Bringing these two concepts together and identifying this as a named discipline--research software engineering (RSEng)--with a complementary set of practitioners--Research Software Engineers (RSEs)--was first formalized by a small group in the UK in 2012, as described by Brett et al. in 2017 [1].
Research software is essential to most research [2] and research software engineering (building and maintaining software in a research context using relevant software engineering knowledge) is performed by many people, including RSEs who focus on this work, as well as others in the research community. The RSE Community has grown substantially since 2012. Today (August 2023), there are eight national and multinational RSE associations with established communities and activities, and at least two more developing, all coordinating via an International Council of RSE Associations [3]. The largest, the UK-based Society of Research Software Engineering, has a community of about 4800 people, including about 700 dues-paying members, while the next largest, the US Research Software Association, has about 1900 members. At least 40 UK universities have formal RSE groups, as do many US universities and national laboratories.
In 2021, Cohen et al. [4] defined research software engineering as being built on four pillars: software development, community, training, and policy. In the rest of this position paper, we use these pillars and hypothesize about how the RSE landscape will have changed in 2030, an intervening period that represents a little over half of the current age of the RSE community.
## II How Research Software Engineering might be in 2030
In this section, we predict some qualitative changes to research software engineering that we might expect by 2030. Where it seems reasonable to us, we also make quantitative predictions that we consider plausible, and perhaps even probable.
**Software development**:
* Research software developers (including but not limited to RSEs) will be generally aware of research software engineering as a concept and its best practices, at about the same level that software developers today are aware of source control packages like git. This will be due to a greater awareness of the role of software in research and changes to community, training, and policy.
* This greater awareness of research software, and the existence and availability of RSEs, will lead to research software being more reliable, maintainable, and better tested. This change to one of the most important underpinning tools of research will make research results more reliable and reproducible.
* More research software will be sustained because funders will better understand that funding the maintenance of research software will improve the return on their initial investment into it. This switch to recognising and supporting successful research software will increase awareness of successful research software and reduce reinvention of existing functionality.
* Research software developers will begin to receive recognition, not just for their contribution to research, but also for their contribution to software within academia. This will allow the design of an incentive system that aligns career progression for research software developers with the practices needed to develop reliable and reproducible research software.
**Community**:
* At least 50 countries globally will have RSE associations.
* While RSEs will generally be members of their own local RSE organization, there will also be communication channels they use to talk to and collaborate with RSEs in other national organizations.
* There will be a global flagship RSE conference that moves to a different country each year and that also supports virtual participation.
* There will be at least one research software engineering journal.
* There will be at least 50 professors whose focus is on research software engineering.
* Most professional scholarly societies will have research software tracks or themes in their conferences, and will offer prizes to research software developers.
* Awareness of the RSE career will be significantly more widespread, which will lead to around a quarter of new recruits being recruited after their undergraduate degree.
**Training**:
* At least one university Masters of Research Software Engineering (MRSE) program will exist, and many universities will offer RSE certificates or minors at the undergraduate level.
* Some secondary students will be aware of research software, and will consider an RSE career when entering university.
* Both Software Carpentry and a merged program beyond it, similar to programs today such as INTERSECT [5] and CodeRefinery [6], will offer frequent synchronous and asynchronous research software engineering training.
* As a part of more standardized RSE career paths, skills required at various levels will be specified and training for RSEs new to these levels will be available both synchronously and asynchronously.
* Training for the initial RSE level will be offered in internship programs.
* Courses in software engineering techniques will be either mandatory or strongly encouraged for all new doctoral students.
**Policy**:
* The value of software, and its vital role in the production of research results, will be accepted across all levels in research-performing organizations and by policymakers across government and funders.
* In the US, 75% of R1 universities will have RSE organizations. In the UK, all research-led universities will host an RSE Group.
* Funding organizations will have specific programs to fund 1) the development of new sustainable research software, 2) the maintenance and sustaining of existing research software, and 3) the transition of software developed in research activities into sustainable research software.
* Most US universities will have RSE career paths that contain multiple levels, allow for both technical and management career progression over 30+ years, and allow RSEs to be PIs.
* RSE career paths will be mostly standardized, and it will be as easy to move from one organization to another at a given RSE level as it is today at a given faculty level.
* Institutions with RSE groups will also have Open Source Program Offices and vice-versa, and they will work together to promote and strengthen research software at their institutions directly and via policy changes.
* Research-performing organizations (e.g., universities, national laboratories) will recognize and reward research software work in all research-performing positions, as will national research assessment activities.
## III The value of predictions
While the predictions we make in this position paper may not come to pass, as we've said, we think they are plausible. In some cases, they simply extend current trends, while in others, they require new collaborative work to accomplish. We urge community members to consider how they might make these predictions more likely to be fulfilled, working within existing RSE communities and with organizations such as the Software Sustainability Institute [7], the US Research Software Sustainability Institute [8], and the Research Software Alliance [9].
## Acknowledgment
The authors thank all members of the international RSE community for their collective work in getting us to where we are now, defining a vision for the future, and moving us all towards it.
|
2307.08220 | FRANC: A Lightweight Framework for High-Quality Code Generation | In recent years, the use of automated source code generation utilizing
transformer-based generative models has expanded, and these models can generate
functional code according to the requirements of the developers. However,
recent research revealed that these automatically generated source codes can
contain vulnerabilities and other quality issues. Despite researchers' and
practitioners' attempts to enhance code generation models, retraining and
fine-tuning large language models is time-consuming and resource-intensive.
Thus, we describe FRANC, a lightweight framework for recommending more secure
and high-quality source code derived from transformer-based code generation
models. FRANC includes a static filter to make the generated code compilable
with heuristics and a quality-aware ranker to sort the code snippets based on a
quality score. Moreover, the framework uses prompt engineering to fix
persistent quality issues. We evaluated the framework with five Python and Java
code generation models and six prompt datasets, including a newly created one
in this work (SOEval). The static filter improves 9% to 46% Java suggestions
and 10% to 43% Python suggestions regarding compilability. The average
improvement over the NDCG@10 score for the ranking system is 0.0763, and the
repairing techniques repair the highest 80% of prompts. FRANC takes, on
average, 1.98 seconds for Java; for Python, it takes 0.08 seconds. | Mohammed Latif Siddiq, Beatrice Casey, Joanna C. S. Santos | 2023-07-17T03:45:00Z | http://arxiv.org/abs/2307.08220v2 | # A Lightweight Framework for High-Quality Code Generation
###### Abstract
In recent years, the use of automated source code generation utilizing transformer-based generative models has expanded, and these models can generate functional code according to the requirements of the developers. However, recent research revealed that these automatically generated source codes can contain vulnerabilities and other quality issues. Despite researchers' and practitioners' attempts to enhance code generation models, retraining and fine-tuning large language models is time-consuming and resource-intensive. Thus, we describe Franc, a lightweight framework for recommending more secure and high-quality source code derived from transformer-based code generation models. Franc includes a static filter to make the generated code compilable with heuristics and a quality-aware ranker to sort the code snippets based on a quality score. Moreover, the framework uses prompt engineering to fix persistent quality issues. We evaluated the framework with five Python and Java code generation models and six prompt datasets, including a newly created one in this work (SOEval). The static filter improves 9% to 46% Java suggestions and 10% to 43% Python suggestions regarding compilability. The average improvement over the NDCG@10 score for the ranking system is 0.0763, and the repairing techniques repair the highest 80% of prompts. Franc takes, on average, 1.98 seconds for Java; for Python, it takes 0.08 seconds.
code generation, code quality, code security, code generation reliability, large language models
## I Introduction
Code generation techniques based on machine learning automatically produce source code from _prompts_[1] provided as input. These prompts provide a high-level specification of the developers' intent. Prompts can be single/multi-line comments, code expressions (_e.g._, a function definition), or a combination of both. With the recent release of GitHub Copilot [2] and ChatGPT [3], ML-based source code generation tools are increasingly being used by developers in order to reduce software development efforts [4].
The automated code generation process is a _sequence-to-sequence_ translation task, and Natural Language Processing (NLP) techniques can help tackle this task. With the rise of transformer architecture [5] and Large Language Models (LLMs), prior works focused on automating source code generation using these ML techniques [6, 7, 8]. Specifically, LLMs are trained and fine-tuned with _large_ amounts of _code_ snippets, including natural text, to understand the context and problem statements and generated code [9].
Although ML-based code generation techniques can produce functionally correct code, they may contain code smells and vulnerabilities [10, 11, 12]. A recent study showed that LLMs are fine-tuned with samples containing harmful coding patterns that leak to the generated code [10]. Another study found that GitHub Copilot can produce vulnerable code [11]. With the increasing use of LLM-based code assistants, these problematic code snippets can get deployed into production, negatively affecting the software system's security and reliability.
To improve the quality of the generated code, we first need a _good dataset_ (_i.e._, free of quality issues), but this is challenging because collecting training data is _time-consuming_[13] and open-source code commonly used for training _contain quality issues_ (_e.g._, bugs, vulnerabilities, and smells [10, 14, 15]). Moreover, fine-tuning an LLM model is a _resource-hungry_ process [16]. Fine-tuning an LLM requires at least one GPU, and pre-training is typically performed on a large cluster of GPUs [16]. Although models can make inferences without GPUs, the throughput may not be optimal. For example, we used a GPU with 24 GB RAM for an inference model with 2.7 billion parameters to generate 128 tokens with up to 2,048 context tokens. To generate more tokens and infer larger models, more GPUs or a more expensive GPU with a larger RAM would be needed.
Although LLM fine-tuning can be done on the cloud (via an API) to avoid acquiring expensive GPUs, it is still costly. For example, OpenAI's fine-tuning API currently costs US$0.03 per 1,000 tokens (8K context GPT-4 model) [17]. Since there can be billions of tokens in a large dataset, fine-tuning this model would cost thousands of dollars. Besides _fine-tuning_ costs, there is a separate cost for _inference_. It costs US$0.12 per 1,000 tokens to use your own fine-tuned model.
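For a rough sense of scale, assuming the quoted rate of US$0.03 per 1,000 tokens, a fine-tuning corpus of one billion tokens would cost on the order of 1,000,000 × US$0.03 ≈ US$30,000, before any inference costs.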
Since LLMs can generate code with quality issues, we need a non-expensive way to provide the best-generated code to the user. In light of this need, this paper describes Franc, a _lightweight, configurable,_ and _model-agnostic_ framework to _filter, rank,_ and _repair_ code automatically generated by LLMs. Franc works by taking as input a developer's prompt and then using (i) static filters to remove non-compilable code (_filtering phase_), (ii) off-the-shelf quality issues detection tools to rank generated code snippets with respect to their measured quality (_ranking phase_), and (iii) prompt engineering to fix quality issues (_repairing phase_). To our knowledge, this is
the first framework for ranking and repairing generated source codes based on their measured quality.
To demonstrate Franc's effectiveness, we conducted an empirical evaluation in which we used Franc to filter, rank, and fix Java and Python code automatically generated by five models (CodeParrot, InCoder, CodeGen, PolyCoder, and ChatGPT). In this experiment, we generated code from **1,081** prompts collected from existing code generation benchmarks [9, 18, 19, 20, 21] and **70** prompts we created from questions posted on StackOverflow.
This paper's contributions are **(1)** a novel framework (Franc) to filter, rank and repair the output of code generation models based on code quality; **(2)** automated filtering capabilities to remove non-compilable and unnecessary portions of the generated code to minimize human inspection; **(3)** a demonstration of how prompt engineering (and different prompt repair structures) can help to repair quality issues automatically; **(4)** an empirical investigation comparing the effectiveness of the framework with existing code generation and infilling models; **(5)** a dataset of **70** prompts. This paper's replication package is available at: [https://doi.org/10.5281/zenodo.8015394](https://doi.org/10.5281/zenodo.8015394).
## II Background
This section explains key concepts used in this work.
### _ML-based Code Generation and Infilling_
_ML-based code generation_ leverages NLP techniques to generate source code from a given context. The context can be a combination of natural language and source code. It can also include file/variable names, other files in the software system, _etc._ The code generation problem was previously tackled as a _sequence-to-sequence (seq2seq) learning_ problem [22]. Prior works used Recurrent Neural Networks (RNN) and neural networks based on Long Short-Term Memory (LSTM) [22, 23] to generate source code. To memorize specific portions of the training input for learning, the LSTM and RNN use a feedback loop during training.
The attention-based transformer architecture revolutionized the field of language learning in 2017 [5]. The transformer is based on an encoder-decoder architecture that leverages the self-attention mechanism to weigh the importance of each input data point [5]. There are several transformer-based deep learning models, like BERT (Bidirectional Encoder Representations from Transformers) [24], T5 (Text-to-Text Transformer) [25] and GPT-3 (Generative Pre-trained Transformer) [26]. These language learning models can be fine-tuned with code-related datasets for source code completion [27, 28, 29], search [30], and summarization [31]. Examples of this type of model include CodeBERT [30], CodeT5 [32], and Codex [9].
Transformer-based code generation techniques use left-to-right (causal) autoregressive language modeling objectives [9, 26] or, as BERT does [24, 30], use masked language modeling objectives. That means that the generative model will take context from the _left_ side of the cursor and will not take any context _after_ the cursor to generate source code. However, developers edit earlier parts of the code, and this edit action may depend on both sides of the cursor. Generating code by taking context from _both_ sides is referred to as _code infilling_. InCoder [33] is an example of a code-infilling model.
### _Code Quality and Code Smells_
_Code quality_ is a broad term used to refer to code snippets that are free of bugs and conformant to its requirements [34]. The quality of a source code is usually expressed in terms of _defect rate_ (_i.e._, measured as the number of defects per unit, _e.g._, lines of code, function points, _etc._) and _reliability_ (measured in terms of failure occurrences, _e.g._, the number of failures during a period, mean time to failure, _etc._). However, the main indicator of high-quality code that complies with specifications depends on the vendors' demands. To ensure code quality, a common _coding standard_ needs to be adopted by every contributor. Different languages adopt specific coding practice protocols. For example, PEP-8 [35] is a well-known guide for coding practice standards for Python. It provides an extensive guide for code layout, whitespace usage, and naming conventions _etc._ For instance, according to the guide, the coding layout should have 4 spaces per indentation level, and spaces are the preferred indentation method.
An indication of poor system design and implementation practices is a _code smell_ (also known as a "bad code smell" or "smell") [36]. These code smells can introduce software maintenance problems. Furthermore, they contravene fundamental software design rules, reducing the product's potential effectiveness. These issues increase the possibility of future errors or failures or can hinder software development [37], affecting the software's reliability. For example, the code in Listing 1 may throw a ValueError if the input at line 3 cannot be parsed to int. Although there is a try-except block to catch the exception (highlighted), the exception is not handled, which is an example of a code smell [38].
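Listing 1 itself is not reproduced here; the following hypothetical snippet illustrates the same kind of smell: the ValueError is caught, but the except block silently swallows it instead of handling or reporting it.

```
def read_age():
    try:
        return int(input("Enter your age: "))  # may raise ValueError on bad input
    except ValueError:
        pass  # smell: the exception is caught but never handled; the caller silently gets None
```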
Insecure coding, flaws in design choices, and coding standard violations all fall under the umbrella term "code smell". _Security code smells_ (or simply "security smells") are a _subset_ of code smells. They are frequently used programming patterns that could result in vulnerabilities [39, 40]. Security smells point to the possibility of a vulnerability, even if they may not constitute vulnerabilities entirely by themselves [41]. The code (shown on the right) in Listing 1 is taken from the CodeQL examples [42]. It uses the unsafe md5 hash function, which is related to _CWE-327: Use of a Broken or Risky Cryptographic Algorithm_[10].
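The CodeQL example is not reproduced verbatim; a minimal sketch of the security smell it illustrates (CWE-327) looks as follows, where the weak md5 digest would typically be flagged by static analyzers such as Bandit.

```
import hashlib

def insecure_password_hash(password: str) -> str:
    # security smell: md5 is a broken hash function and unsuitable for passwords
    return hashlib.md5(password.encode("utf-8")).hexdigest()
```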
In our work, we focus on the _quality of automatically generated code_ with respect to following _code standards_, as well as the absence of _code smells_ and _security smells_.
### _Motivating Example_
A code generation model can generate multiple (_ranked_) suggestions for a given prompt. However, some produced suggestions may contain _quality issues_ (_e.g._, security smells) [10]. For example, consider that we provide the prompt below (highlighted lines) to GitHub Copilot [2]. The generated code (lines 6-9) on the first position of GitHub Copilot's rank is _functionally correct_ but contains a _SQL injection vulnerability_. It uses a formatted string to construct the query (line 7).
_(Code listing omitted: the extracted text is garbled beyond repair. It showed the highlighted prompt lines followed by GitHub Copilot's top-ranked suggestion, whose SQL query is built with a formatted string at line 7.)_
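Since the extracted listing is unreadable, the following hypothetical reconstruction (names and details are ours) shows the kind of suggestion described above: the query string is assembled with an f-string, which makes it injectable, whereas a parameterized query would be safe.

```
def get_top_user(connection, username):
    cursor = connection.cursor()
    # vulnerable: user input is interpolated directly into the SQL statement
    query = f"SELECT * FROM users WHERE username = '{username}'"
    cursor.execute(query)
    return cursor.fetchone()

# A safe alternative passes the value as a parameter instead
# (placeholder syntax depends on the database driver):
#   cursor.execute("SELECT * FROM users WHERE username = %s", (username,))
```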
As another example, a generated suggestion provides an implementation of yaml_load(filename), but also contains an _unnecessary_ and _incomplete_ function (lines 10-19). This incomplete function serializes objects into a YAML file, which does not match the intent in the prompt and has a syntax error (missing closing parenthesis at line 19). Thus, in the next phase, Franc removes this unnecessary function and other types of syntax issues.
_(Code listing omitted: garbled in extraction; it showed the yaml_load example just described, including the unnecessary, incomplete function at lines 10-19.)_
We evaluated Franc on Python and Java code automatically generated by five different LLMs. We focused on Python and Java because these are two of the most popular programming languages (based on a recent survey [46]). Moreover, we aimed to demonstrate how Franc's configurable architecture enables a model-agnostic approach for improving the quality of generated code by choosing two languages and five different code generation models. In this evaluation, we answered the following research questions:
**RQ1**: How well does the static filter correct and remove non-compilable code from the suggestion inventory?
Code generation models can output multiple suggestions for a single prompt, but not all suggestions are compilable [47]. Hence, our framework includes a _lightweight heuristic-based static filter_ that automatically cleans the generated Python and Java code to remove any non-compilable code from the suggestion inventory (Phase 3). In this research question, we measure the effectiveness of this static filtering phase.
**RQ2**: How well does the quality-based filter ranking system work?
In Phase 4, Franc uses a quality-based ranking in order to sort eligible snippets (_i.e._, code snippets that passed the filtering in Phase 3). Thus, we investigate whether Franc's quality-based ranking performs better than the model's original ranking.
**RQ3**: Can an LLM-based code generation model effectively repair code with quality issues?
Franc includes a repairing phase to fix a code snippet with quality issues using prompt engineering. It leverages the existing code generation model to fix the quality problem. Hence, we study the effectiveness of this LLM-based code-repairing approach.
**RQ4**: How much overhead is introduced by the framework?
Franc relies on static analyzers and is built on top of an existing code generation model. Therefore, it creates an extra overhead concerning the time to _filter_ problematic snippets, _rank_ the output from the code generation model, and _repair_ it. This question explores how much overhead the framework introduced due to these additional phases.
In the next sections, we explain the methodology we followed to answer these four questions.
### _Prompts Creation_
To answer our RQs, we retrieved prompts from existing code generation benchmarks [18, 19, 20, 9, 21] that have been used by many prior works [47, 48, 49, 50, 51, 52]. We also _created_ our own set of prompts based on questions from StackOverflow. These prompts instruct the code generation models to generate a method/function's body based on the context in the docstring/JavaDoc and the method/function's signature. We explain below each benchmark dataset used and our own dataset, which we refer to as **SOEval**.
* **AixBench**[18] is a benchmark dataset that contains **175** prompts to evaluate the generation of Java code. The natural language description in the prompts is written in English and Chinese. We extracted **175** Java prompts with descriptions written in English.
* **CoderEval**[19] is a dataset containing 230 prompts each for Java and Python, retrieved from 43 open-source Python projects and 10 open-source Java projects hosted on GitHub, respectively. We retrieved a total of **460** prompts from this dataset.
* **Multilingual HumanEval**[20] is a dataset with prompts for multiple programming languages created from the Python-based original **HumanEval** dataset [9]. We used **161** Java samples from Multilingual HumanEval and **164** from the original Python-based HumanEval dataset. Henceforth, we will refer to both datasets as simply **HumanEval**.
* **SecurityEval** is a benchmark dataset for evaluating Python code generation models from the perspective of security [21]. This dataset contains 130 prompts covering 75 entries from the Common Weakness Enumeration (CWE). However, we modified and removed prompts that explicitly instructed the model to generate vulnerable code. Thus, we have a total of **121** prompts covering 69 CWEs.
* **SOEval** is created by us by mining questions from StackOverflow. Our goal was to create a prompt dataset that reflects the real-life needs of software developers. To build this dataset, we first collected 500 popular and recent questions for each of the Python and Java tags. From these 1,000 questions, we applied a set of inclusion and exclusion criteria. The inclusion criteria were: the question has to **(1)** explicitly ask _"how to do X"_ in Python or Java; **(2)** include code in its body; **(3)** have an accepted answer that includes code. We excluded questions that were **(1)** open-ended and asking for best practices/guidelines for a specific problem in Python/Java; **(2)** related to finding a specific API/module for a given task; **(3)** related to errors due to environment configuration (_e.g._, missing dependency library); **(4)** related to configuring libraries/APIs; **(5)** syntax-specific types of questions. By applying the criteria above to these 1K questions, we obtained **28** and **42** prompts for Java and Python, respectively.
Therefore, we had a total of **1,151** prompts, in which **594** and **557** prompts are for Java and Python, respectively.
### _Code Generation Models_
We used the code generation models listed below (the first four are open-source, whereas the last one is closed-source) to create the _suggestion inventory_. The input to these models is the set of prompts previously collected (SS IV-A).
* **CodeParrot**[53] is a GPT-2 [54] model trained from scratch on Python code. It is fine-tuned on a clean and deduplicated large dataset (\(>\)100GB). It can be used for code generation and other downstream tasks (_e.g._, complexity prediction, code explanation, _etc._). It has two versions: one is _CodeParrot-small_ with 110 million parameters, and the other is the regular one with 1.5 billion parameters.
* **InCoder**[33] is a decoder-only transformer model [5] that can synthesize and edit code via infilling. It has one version with 1.3 billion parameters and another with 6.7 billion. We used the small version in our evaluation (1.3B).
* **CodeGen**[43] is a group of models for synthesizing programs using autoregressive language modeling. It has three types: _multi_, _mono_, and _nl_. The _multi_ type is fine-tuned with multiple programming languages. The _mono_ type is trained only with code written in Python. The _nl_ type is fine-tuned mainly on natural language. Thus, we used the _multi_ and _mono_ model types. We used two versions for these models: one with 350 million parameters and the other with 2 billion.
* **PolyCoder**[49] is a family of three large open-source language models based on the GPT-2 architecture. They have been trained on a corpus of code from 12 different programming languages (_e.g._, C/C++, C#, Go, Java, JavaScript, Python, Ruby, _etc._), using about 25K repositories per language to build the corpus. The dataset was then pre-processed, deduplicated, and filtered, resulting in a training dataset size of 424GB. These models range from 160 million to 2.7 billion parameters. We used all three versions of this model (160M, 400M, and 2.7B).
* **ChatGPT** is a closed-source chatbot developed by OpenAI [3]. It uses the GPT-3.5-turbo model (the latest version released in May 2023) and allows back-and-forth conversation with a user to generate code [47].
#### V-C1 Suggestion Inventory Creation
We instructed each model to generate **ten** suggestions for each prompt. For **CodeParrot**, **InCoder**, **CodeGen**, and **PolyCoder**, we used the transformers library [55]. We instructed the model to generate an additional \(128\) tokens after the prompt, _i.e._, the model took the prompt and context as input, repeatedly generated the next probable token (_e.g._, a keyword) that can come after this prompt, and did not stop until \(128\) tokens were generated. However, for **ChatGPT** (GPT-3.5), we instructed it to generate \(512\) tokens. As it is optimized for conversations, it usually includes a textual explanation and the generated code with a modified version of the prompt. Hence, we extended the token size for this model and used the OpenAI API to generate results. Thus, we have **102,100** code samples in the suggestion inventory, _i.e._, **41,010** Java code snippets and **61,090** Python code snippets.
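As a rough illustration of how such an inventory can be produced with the transformers library, consider the following minimal sketch; the checkpoint name, prompt, and sampling settings are illustrative assumptions rather than the exact configuration used in this study.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = 'def read_config(path):\n    """Load a YAML config file and return a dict."""\n'

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=128,        # 128 tokens generated after the prompt
    num_return_sequences=10,   # ten suggestions per prompt
    pad_token_id=tokenizer.eos_token_id,
)
suggestions = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```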
**Token Limits Rationale**: To decide the token size limits we ran a small experiment using **SecurityEval** dataset. This dataset [21] includes an example of insecure code that can be generated from the given prompt. We tokenized these examples and found that they are 50 tokens in length, on average. Thus, we configured the open source models to generate 128 tokens (\(\approx 2.5\times\) higher than the average number of tokens). For ChatGPT, as we explained before, we increased the limit to **512** to take into account explanations that are provided as part of the output (which consumes tokens).
### _Static Filtering_
In our evaluation, we implemented six heuristics1 to clean the generated code snippets before filtering them from the suggestion inventory based on their syntax check. We adopted the heuristics \(\mathbf{H_{1}}\), \(\mathbf{H_{3}}\), and \(\mathbf{H_{6}}\) from a recent study on unit test generation using LLMs [47]. The first heuristic (\(\mathbf{H_{1}}\)) removes any text _before_ and _after_ backtick-fenced code blocks. The second heuristic (\(\mathbf{H_{2}}\)) adds _part_ of the prompt or the _full_ prompt if it is not found in the generated code (otherwise, the code fails the syntax check because it does not include the function/method signature, import statements, _etc._). Heuristics \(\mathbf{H_{3}}\) and \(\mathbf{H_{4}}\) are Python-specific and remove extra code after the target method. The third heuristic (\(\mathbf{H_{3}}\)) removes any code found _after_ a run of blank lines (a "\n\n\n" pattern) or a "\n</code>" pattern (_i.e._, it removes extra code). The fourth Python-specific heuristic (\(\mathbf{H_{4}}\)) removes any additional code _after_ the target method/function. The fifth heuristic (\(\mathbf{H_{5}}\)) removes any code in an extra Java class (_i.e._, it only keeps the Java class mentioned in the prompt). The sixth heuristic (\(\mathbf{H_{6}}\)) fixes incomplete code by iteratively deleting lines (from bottom to top) and adding 1-2 curly brackets for Java code. Moreover, if the prompt was meant to repair quality problems (SS IV-E), we apply an additional heuristic (\(\mathbf{H_{7}}\)) before applying these six heuristics. This heuristic replaces the old problematic code (_i.e._, containing a code/security smell) with the newly repaired source code.
Footnote 1: Due to space constraints, we superficially describe the heuristics in here, but we detail each of them with examples in our supplementary materials.
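A minimal sketch of the Python side of this phase is shown below; the helper names are ours, and only heuristic H1 and the final syntax check are sketched (the remaining heuristics follow the same pattern).

```
import ast

FENCE = "`" * 3  # triple backtick, spelled out so the listing itself stays valid

def apply_h1(text: str) -> str:
    """H1 (sketch): keep only the content between the first pair of code fences."""
    if text.count(FENCE) < 2:
        return text
    body = text.split(FENCE)[1]
    lines = body.splitlines()
    if lines and lines[0].strip().lower() in {"python", "py", "java"}:
        lines = lines[1:]  # drop a language tag such as "python"
    return "\n".join(lines)

def is_compilable_python(code: str) -> bool:
    """Syntax check used to drop non-compilable Python suggestions."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def static_filter(suggestions):
    cleaned = [apply_h1(s) for s in suggestions]
    return [c for c in cleaned if is_compilable_python(c)]
```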
### _Quality-based Ranking_
Up to this point, the generated suggestions are compilable but may still have quality issues, such as code and security smells [10]. We used Bandit to discover security smells in Python code and SpotBugs to discover code smells in Java code. In our evaluation, we configured Franc to employ the quality score defined in Equation 2. With this quality scheme, suggestions that are free of code/security smells move up in the ranking of the 10 suggestions generated for a prompt.
\[Q(c_{i})=\begin{cases}1,&\text{if $c_{i}$ is compilable and free of smells}\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
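A sketch of how this score can be computed for Python snippets is given below; it assumes Bandit's command-line interface with JSON output, and the function names are ours.

```
import json
import subprocess
import tempfile

def bandit_findings(python_code: str) -> int:
    """Return the number of security smells Bandit reports for a snippet (sketch)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(python_code)
        path = f.name
    result = subprocess.run(
        ["bandit", "-q", "-f", "json", path],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    return len(report.get("results", []))

def quality_score(code: str, compilable: bool) -> int:
    """Binary quality score Q(c_i) from Eq. 2."""
    return 1 if compilable and bandit_findings(code) == 0 else 0
```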
### _Repair by Prompt Engineering_
Recall that if the first suggestion is below a quality threshold (\(Q(c_{1})<\tau\)), then Franc attempts to repair \(c_{1}\) through prompt engineering. In our evaluation, we set the threshold \(\tau\) to one, which means that Franc will repair the first suggestion if it has _at least one smell_ (_i.e._, \(Q(c_{1})<1\)). We configured Franc to use the error metadata provided by Bandit and SpotBugs to repair the generated code for the top-1 suggestion ranked by Franc in the previous phase. Specifically, we used the code smell's description and location to craft a repair prompt using three different structures (**P1**, **P2**, and **P3**) and send it back to the code generation model to generate 10 suggestions again. To illustrate these different prompt repair structures, consider the generated code to be repaired shown in Listing 3 (where the highlighted lines were the original prompt used to generate this code). The prompt structure **P1** adds source code comments _after_ the code to be repaired in the format "Fix: At line <<line #>>, <<error msg>>\n Fixed Code:\n", as shown in Listing 3 (lines 10-11). The error message comes from Bandit (for Python snippets) and SpotBugs (for Java snippets).
_(Listing 3 omitted: the extracted code is garbled. It showed the original prompt lines, the generated code to be repaired, and the repair comment appended at lines 10-11.)_
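A sketch of how the P1 structure can be assembled from the analyzer metadata is shown below; the function and the example error message are illustrative, not the exact strings emitted by Bandit or SpotBugs.

```
def build_p1_repair_prompt(problem_code: str, line_no: int, error_msg: str) -> str:
    """P1 (sketch): append the smell report as comments after the code to be repaired."""
    repair_hint = (
        f"# Fix: At line {line_no}, {error_msg}\n"
        "# Fixed Code:\n"
    )
    return problem_code.rstrip() + "\n" + repair_hint

repair_prompt = build_p1_repair_prompt(
    problem_code="import hashlib\n\ndef h(p):\n    return hashlib.md5(p).hexdigest()\n",
    line_no=4,
    error_msg="use of an insecure hash function (md5)",
)
```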
A suggestion that is not compilable receives a relevance score of **0**, and one that is compilable but contains a quality issue receives a score of **1**. If \(c_{i}\) is compilable and free of quality problems (_i.e._, \(Q(c_{i})=1\)) but does not fully implement the prompt's intent (_i.e._, it is functionally incorrect), the score is **2**. A code snippet \(c_{i}\) that is compilable, does not have a quality problem, and is functionally correct has a relevance score equal to **3**. Given that we generated 10 suggestions per prompt, the value of \(k\) is equal to \(10\). The \(IDCG_{@k}\) in Eq. 3 is the _ideal discounted cumulative gain_, which is a normalization factor used to ensure the output ranges from 0 to 1. It is equal to the highest possible value achieved when all results are correct, _i.e._, \(IDCG_{@10}=\frac{3}{\log_{2}(1+1)}+...+\frac{3}{\log_{2}(10+1)}\approx 13.631\).
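For reference, a small sketch of the \(NDCG_{@10}\) computation with this relevance scale follows (the function names are ours).

```
import math

def dcg_at_k(relevances, k=10):
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k=10, max_rel=3):
    ideal = dcg_at_k([max_rel] * k, k)  # IDCG@10 ~= 13.631 for a maximum relevance of 3
    return dcg_at_k(relevances, k) / ideal

# Example: only the top-1 suggestion is compilable, smell-free, and functionally correct
print(ndcg_at_k([3, 0, 0, 0, 0, 0, 0, 0, 0, 0]))  # ~= 0.22
```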
In total, we have **2,271** prompts with _at least one quality issue_ from different models and dataset combinations that need to be repaired. Since it would be time-consuming to manually analyze 22,710 code snippets (_i.e._, 2,271 prompts \(\times\) 10 suggestions), we instead randomly chose a subset of **326** prompts (95% confidence level) to manually analyze. The number of prompts per model kept the same proportions as the original set of **2,271** prompts.
The relevance scores \(0\) and \(1\) are automatically computed based on the syntax check and quality score computed in Phases 3 and 4, respectively. For the remaining snippets (_i.e._, those that are compilable and free of smells), the relevance score is assigned manually by two researchers. They independently rated each snippet by judging whether the generated suggestion reflects the intent of the prompt. The Cohen's kappa score for the inter-rater agreement is \(0.815\), which indicates a strong agreement between the raters [58]. We resolved the disagreements through discussion. After resolving these disagreements, we computed the \(NDCG_{@10}\) for Franc and compared it with the \(NDCG_{@10}\) for the original rank produced by the underlying model.
Table II shows the average \(NDCG_{@10}\) for each model before using Franc (_i.e._, the original rank) and after using it. Franc increases the \(NDCG_{@10}\) for all models and languages. The highest improvement for Java was from **0.3019** to **0.4143** (CodeGen 2B-multi). For Python, the highest increase was from **0.3630** to **0.4671** (CodeGen 350M-mono). We observe a similar improvement trend for both languages. The average \(NDCG_{@10}\) difference between Franc and the original models' ranking output is **0.096** for Java and **0.07** for Python. A paired t-test comparing the \(NDCG_{@10}\) shows a statistically significant difference for both languages (p<0.0001).
Table II also shows how many prompts we manually analyzed per model and how many prompts in which \(rel_{1}=3\) (_i.e._, \(c_{1}\) is compilable, functionally correct, and smell-free) before and after using Franc. We find that the number of prompts with \(rel_{1}=3\) also increases after using Franc.
**RQ2 Findings:** Franc's quality-based ranking increases the \(NDCG_{@10}\) (statistically significant difference) for both Python and Java. Franc also increases the number of prompts in which the first suggestion is compilable, functionally correct, and free of smells.
### _RQ3: Code Repair Effectiveness_
After ranking the suggestions based on quality (Phase 4), if the re-sorted top-1 suggestion (_i.e._, \(c_{1}\)) still has a quality issue (_i.e._, \(Q(c_{1})=0\)), then Franc repairs the problematic top-1 suggestion through prompt engineering (_i.e._, it creates a repair prompt and sends it back to the model). Recall that we studied three different prompt repair structures (P1-P3). Figure 2 depicts the percentage of prompts with at least one repaired suggestion per model. Prompts with at least one snippet \(c_{i}\) in which \(Q(c_{i})=1\) are referred to as _"good prompts"_.

Fig. 2: Prompt Engineering-based Repair for Java and Python Benchmarks
The prompt structure **P2** does not perform well in repairing Java code: it does not generate any good prompts for the InCoder and PolyCoder models. For Java, **P3** is the best-performing prompt repair structure; it generates good prompts from 2.2% to 56.1% of the time. However, we see a different scenario for Python in Figure 2. For Python, **P2** performs better except for GPT-3.5, though the difference between **P2** and **P1** is small. The prompt structure **P2** can produce good prompts from 35.71% to 79.2% of the time. We also observe that, overall, the models perform better in repairing Python samples than Java samples. One possible explanation for this observation is that these LLMs are heavily trained/fine-tuned with Python samples [9].
**RQ3 Findings:** Franc can effectively repair a top-1 suggestion that does not have the highest quality. Different programming languages may need different prompt engineering structures, but Franc can produce up to 80% of prompts with at least one good suggestion.
### _RQ4: Extra Overhead from the Framework_
Table III summarizes the extra overhead introduced by Franc. To ensure consistency of measurements, we used the same machine while running the experiment (an Apple M1 chip with 8 GB RAM). We can see that running all the phases for Java takes **1.984502** seconds on average (standard deviation of \(\sigma=0.06\)). For Python, it takes **0.084635** seconds on average (\(\sigma=0.003\)). The _ranking phase_ was the most time-consuming phase, as it needed to run external tools to compute the quality score for each code snippet \(c_{i}\). Although Franc adds an overhead to filter, rank, and repair code snippets, it does not require any fine-tuning of the models, which would be time-consuming, costly, and resource-hungry.
**RQ4 Findings:** Although Franc adds an overhead, the extra time needed is less than 2 seconds (on average). The ranking phase is Franc's most time-consuming phase.
### _Infilling vs Synthesis vs Chat-based Generation_
Our work investigates five models with six prompt datasets crafted from different sources (_e.g._, programming problems [9], StackOverflow, _etc._). CodeParrot [53], PolyCoder [49], and CodeGen [43] focus on _program synthesis_, _i.e._, they take a prompt as input to generate code _after_ the prompt. InCoder [33] focuses on _infilling_; it fills in code within a prompt by taking context from _both_ sides. GPT-3.5 [3] focuses on _conversation-style_ code generation. Although Franc is geared towards program synthesis, it still improves the performance of infilling models like InCoder [33]. For example, our results showed that without a static filter, no compilable suggestions were produced by InCoder (SS V-A). Franc was able to clean up InCoder's output using heuristics such that 22% of prompts had at least one compilable snippet. Though the results for this infilling model are not as good as for the other models (_i.e._, a lower \(NDCG_{@10}\) improvement), Franc helped surface higher-quality code in the first position.
GPT-3.5 [3] is optimized for multi-turn conversation with human feedback. It performs better than most of the models at generating suggestions, and Franc significantly improves the ranking of its suggestions. This model also better understands the different repair scenarios, whereas the open-source models' ability to handle a repair scenario depends on the programming language.
### _Code Repairing using LLM_
We used prompt engineering techniques to repair code and security smells using LLMs. We found that for Python, LLMs are better able to solve issues related to XML validation vulnerabilities (_i.e._, CWE-20: _Improper Input Validation_), use of APIs from the subprocess library (_i.e._, CWE-78: _OS Command Injection_), and a Flask application with debug=True (_i.e._, CWE-94: _Code Injection_). Conversely, LLMs were less capable of solving issues related to the _Use of a Broken or Risky Cryptographic Algorithm_ (CWE-327), _Path Traversal_ (CWE-22), and _Incorrect Permission Assignment for Critical Resource_ (CWE-732).
For Java, LLMs can resolve code smells related to the _invocation of toString on an array_, _suspicious reference comparison_, and _return value of method without side effect is ignored_. However, LLMs can hardly repair _infinite loops_, _array indexing out of bounds_, and _useless control flow to next line_.
## VII Threats to Validity
A threat to the _external validity_ of this work is that we only investigated transformer-based [5] LLMs. However, current commercial products are built on these types of models [2, 3]. Another external validity threat is that we used default hyperparameter values with 128 tokens to generate source code for the open-source LLMs and 512 tokens for ChatGPT. Thus, we acknowledge that our results may not generalize to other inference hyperparameters. However, our work established the importance of a framework, Franc, and showed that it is independent of the underlying code generation model. We also used two external tools (Bandit [59] and SpotBugs [60]) in the framework. Practitioners and researchers widely use these tools [10, 61].
A threat to the _internal validity_ of this work is that we manually analyzed ranked code snippets to compute their relevance score (SS V-C), which is prone to biases. However, to ensure that biases are removed, we conducted a peer review of these analyses and reported our Cohen's kappa score [58] (which showed strong agreement). Another _internal validity_ threat is that we manually curated a new prompt dataset called SOEval. However, since StackOverflow is a popular Q&A website used by developers, this dataset can be a proxy for real-world developers' prompts.
## VIII Related Work
Program synthesis refers to automatically generating a program that satisfies the user's intent given as high-level specifications or input-output examples [62]. One of the foundations of program synthesis is deductive synthesis [63, 64], where a task specification is transformed into constraints, and the program is extracted after demonstrating its ability to satisfy the constraints [62]. An example of this approach comes from Yin _et al._[65]. This work mapped text to abstract syntax trees using recurrent networks. These abstract syntax trees were then coded using attention.
Many LLMs have been produced with the goal of generating code, such as CodeBERT [30], Codex [9], and CodeT5 [32]. AlphaCode [66] is another code generation model, which produces code for competitive programming challenges. GitHub Copilot [2], a closed-source tool for code generation, uses an upgraded version of Codex to provide an improved auto-complete mechanism. Additionally, other recent works [6, 7, 8] focused on optimizing the process to create, fine-tune, and infer with LLM-based code generation techniques. However, our work focuses on being a _lightweight_ approach to filter, rank, and repair code snippets in an existing model's output from a _code quality_ perspective.
After GitHub Copilot's commercial release to users [2], prior works [9, 32, 66] expressed their concern about security, privacy, and bias in the generated code. A recent study found that GitHub Copilot can produce unsafe (vulnerable) code [11]. Another study by Siddiq _et al._[10] observed the presence of code smells (including security smells) in code generation training sets and their leakage to the models' output. Another recent work [12] conducted a user study to investigate the security implication of GitHub Copilot as a large language model-based code assistant. Rather than performing an empirical study on the quality or security issues in the code generation paradigm, our work introduces a novel framework to address quality problems by reducing how often vulnerable and low-quality code is exposed to developers from code generation models.
Other works have aimed to improve these code generation models' output by directly changing them. He and Vechev [67] used property-specific continuous vectors to guide program generation toward the given property without modifying the LLM's weights; specifically, they tried to generate secure source code without compromising the program's correctness. However, this prior work is limited to certain CWEs and a code generation model that needs fine-tuning. In contrast, our work studies how to resolve quality issues without re-training or fine-tuning an LLM and is not limited to a certain quality attribute; developers can configure Franc's quality score ( SS III-D) in any way they wish to give more weight to certain quality attributes over the others.
Chen _et al._[68] modified code generation models by introducing natural-language human feedback during training, leveraging the language model's natural ability to incorporate feedback. When the model generates a code snippet, it will use feedback from the user to repair the snippet if it is faulty. Our work differs from these works because, rather than changing the model itself, which would be costly, we use the original model and filter out vulnerable and low-quality code from the possible responses the model could produce.
## IX Conclusion and Future Work
Although automated code generation tools can help developers speed up software development, the generated source code must also be maintainable, of high quality, and free of code smells. As automatically generated code is mixed with human-written code, it is necessary to ensure the quality of the generated code so that it does not introduce reliability issues. Our framework, Franc, helps obtain vulnerability-free and comparatively high-quality source code. Our work introduced a first-of-its-kind lightweight framework for improving the quality of generated code, with practical use of LLM-based code repair to remove quality issues. We demonstrated how our framework performs by using it to improve the quality of the Java/Python code generated by five LLMs. In the future, we will evaluate Franc using other programming languages and learning models.
2304.11221 | Can an AI-tool grade assignments in an introductory physics course? | Problem solving is an integral part of any physics curriculum, and most
physics instructors would likely agree that the associated learner competencies
are best assessed by considering the solution path: not only the final solution
matters, but also how the learner arrived there. Unfortunately, providing
meaningful feedback on written derivations is much more labor and resource
intensive than only grading the outcome: currently, the latter can be done by
computer, while the former involves handwritten solutions that need to be
graded by humans. This exploratory study proposes an AI-assisted workflow for
grading written physics-problem solutions, and it evaluates the viability of
the actual grading step using GPT-4. It is found that the AI-tool is capable of
providing feedback that can be helpful in formative assessment scenarios, but
that for summative scenarios, particularly those that are high-stakes, it
should only be used for an initial round of grading that sorts and flags
solution approaches. | Gerd Kortemeyer | 2023-04-21T19:00:13Z | http://arxiv.org/abs/2304.11221v1 | # Can an AI-tool grade assignments in an introductory physics course?
###### Abstract
Problem solving is an integral part of any physics curriculum, and most physics instructors would likely agree that the associated learner competencies are best assessed by considering the solution path: not only the final solution matters, but also how the learner arrived there. Unfortunately, providing meaningful feedback on written derivations is much more labor and resource intensive than only grading the outcome: currently, the latter can be done by computer, while the former involves handwritten solutions that need to be graded by humans. This exploratory study proposes an AI-assisted workflow for grading written physics-problem solutions, and it evaluates the viability of the actual grading step using GPT-4. It is found that the AI-tool is capable of providing feedback that can be helpful in formative assessment scenarios, but that for summative scenarios, particularly those that are high-stakes, it should only be used for an initial round of grading that sorts and flags solution approaches.
## I Introduction
### Generative Pre-trained Transformer
In fall 2022, Generative Pre-trained Transformer (GPT) [1] rapidly gained the world stage as a publicly available AI-tool with surprising capabilities. ChatGPT is a text-based interface to this underlying Large Language Model (LLM). GPT is essentially a tool that produces plausible fiction using a neural-network-based autocomplete algorithm; in that respect, it is similar to the autocomplete on smartphones that suggests likely next words when the user is composing text messages. However, GPT was trained on a massive text corpus gleaned from public and proprietary sources, and it was extensively fine-tuned by humans. Also, GPT does not work with words, but with finer-grained tokens, which are similar but not identical to syllables in a word.
ChatGPT allows for text-based dialogues: prompts by the user, responses by GPT. These dialogues are remarkably human-like, and the system would likely pass the Turing test in many situations [2]. While in use, apart from continued training by OpenAI, the company behind GPT, the system as a whole does not learn anymore. However, it learns within the confines of a particular dialogue, so it can refer to statements made earlier in the same dialogue. At some point, though, it hits its internal so-called "token limit" -- it can only keep a limited number of tokens in memory, comparable to the working-memory limitations of a human. Dialogues thus cannot become too extensive before either an error message occurs or the system simply appears to forget what was stated earlier in the dialogue (which will turn out to be a limitation for this study).
The massive training effort behind GPT resulted in the emergence of capabilities that are not necessarily expected from a language model. GPT-4 has been found to pass several standardized exams in the upper percentiles [3; 4; 5]. A previous version, GPT-3, could already pass an introductory physics course at a nominal level [6], and there are indications that GPT-4 performs even better on physics concepts [7].
While ChatGPT is limited to text-based input and output, GPT-4 itself is multimodal, so it can accept image input [5]. Among other examples, in advertising videos, the company demos how handwriting and hand-written mathematical formulas are turned into machine-readable documents. Using this feature, however, requires access to its Application Programming Interface (API), which is unfortunately restricted by OpenAI -- as of April 2023, there is a long waiting list, and thus this functionality could not be tested in this study.
### Solving Physics Problems
The idea of grading problem solutions using AI has been around for a while [8], but was oftentimes hampered by the need to train the AI for specific problems; in other words, the AI learned from humans how to grade a specific problem, and in the end it mimics their scoring of that problem. GPT and similar "pre-trained" systems promise a more universal solution, which is able to _ad hoc_ grade problems it has not encountered before. The fulfillment of this promise seems within reach, since these same "pre-trained" systems are also able to solve a wide variety of problems they have not encountered before [6].
Strategically solving physics problems requires logical, conceptual, and mathematical competencies [9; 10], and hardly a topic in Physics Education Research has been investigated more extensively [11]. While the final solution to physics problems, as well as pre-determined, scaffolded steps along the way, can be assessed to varying degrees using computer systems [12; 13; 14; 15; 16; 17; 18; 19], a deeper analysis of the complex problem-solving competencies of learners requires an evaluation of the solution path and derivation [20; 21]. However, grading written solutions currently requires human effort, and the effect of this more
meaningful feedback on final learning outcomes may be curbed by having fewer such opportunities due to limited human-grading resources and the time delay before such feedback is returned [22]. The purpose of this study is to explore if meaningful feedback can be received from written homework and exam problems while in addition having the benefits of more frequent formative assessment with immediate feedback traditionally afforded by solution-focussed online systems.
## II Scenario
The proposed scenario has multiple steps, illustrated in Fig. 1. In one possible setting, learners write problem solutions on paper; while this might seem quaint, most anybody would agree that typing mathematical formulas is cumbersome, and assembling them in formula editors like the ones built into Microsoft Word is time-consuming and tends to be frustrating. The established standard for typing mathematics is LaTeX, and while some experts can "think" in LaTeX, paper and pencil are far more efficient and intuitive (there are even arguments that this materiality is essential [23]). This scenario is particularly well-suited for exams, which can then take place in traditional, supervised, "offline" modes.
1. The first step is simply scanning the assignment papers into a PDF or image format; this could be accomplished with most copy machines using automatic paper feeding of the student worksheets, where the real challenge is likely paper jams from crumpled-up exam sheets.
2. The PDF is then processed by AI-based optical character recognition and translated into a machine-readable format, for example LaTeX. In this study, due to lack of access to the appropriate API, these steps are simulated; the reliability of this step would be the subject of a separate, future study.
3. The LaTeX documents are then graded by an AI-tool, in this study GPT-4. This step might entail several independent rounds of grading, which are subsequently summarized. Variations include flagging solutions where grading results diverge for subsequent human arbitration, or categorizing the solutions into similarity classes as a preliminary step to more rapid human grading.
A possible second scenario for the first step could be using pen-computing, where learners are directly drawing on a screen. Some modern tablets and laptops are mimicking the paper feel with the appropriate friction, and students appear to be comfortable with this technology, seeing how many of them are using this for lecture notes. A third scenario for the first step would be to photograph or scan the solutions with a smartphone and uploading them to an online system.
These latter two scenarios are less fit for exam settings, as being online means having access to all kinds of resources, communication channels, and online tools (including AI-tools!). These additional affordances of being online would either need to be incorporated into the tasks themselves (possibly making them more demanding) or attempted to be blocked by lock-down technologies [24].
The AI-component of a system like this is very unlikely to be installable on-premise, and likely only commercial cloud-based solutions are viable. This will raise privacy and data-security concerns that need to be addressed, particularly when dealing with mandatory assignments [25].
## III Methodology
### Generation of Sample Solutions
As an example for this study, a simple time-dependent RC-circuit problem was chosen. This problem, shown as the prompt in Fig. 2, involves some conceptual, strategic, and mathematical challenges, but is likely, in one form or another, part of most calculus-based introductory physics courses. The initial potential difference across the capacitor is given, even though it is not needed; this superfluous information makes it tempting to immediately calculate the initial current through the capacitor, leading to a less than straightforward solution, since that current later drops out. Particularly when not working symbolically, but immediately "plugging and chugging," this involves unnecessary steps [26].
Since GPT uses a probabilistic algorithm, presenting the same prompt twice will lead to different responses. This property was used to generate a set of 25 unique sample solutions for this study. GPT-4 has better reasoning capabilities than its predecessors, so it more likely produces correct solutions. To also have plausible, but incorrect solutions in the sample, the majority of the solutions were generated by earlier releases.
At the time of this study (mid-April 2023), three different models of GPT were available through ChatGPT: GPT-4 (March 23, 2023 release [27]), Default GPT-3.5, and Legacy GPT-3.5. In particular, based on the prompt in Fig. 2, Solutions 1-5 were generated by GPT-4, Solutions 6-13 by Default GPT-3.5, and Solutions 14-25 by Legacy GPT-3.5; these solutions are listed in Figs. 2-7.
The generated solutions appear representative enough of what students might submit. This spans the gamut from solutions 1 and 12, which are almost perfect, to solutions 9 and 19, which are completely missing the mark, and it includes the expected unnecessary calculations and transfer of numerical values from one formula to the next. As an aside, while generated to simulate human problem solving, this sample set illustrates the progression in reasoning capabilities between Legacy GPT-3.5 and GPT-4, achieved in just a few months. These solutions now provide the base for the actual grading study (last step in Fig. 1).
### Grading of Solutions
GPT-4 solved the problem correctly in all instances, so it seems appropriate to use it for grading. The solutions were scored on a scale of 0 (worst) to 4 (best) on a rubric of correctness of approach, correctness of symbolic derivations, correctness of the numerical result, and straightforwardness. In addition, for each solution, a one-sentence feedback was requested. The rubric scores were combined to a total score with a stronger weight on the final, numerical result. Finally, the system was prompted to generate a correlation table between the solutions based on similarity-of-approach; Fig. 8 shows an example of the associated prompt.
Since GPT is probabilistic, it will not arrive at one deterministic score combination for each solution. Similar to having more than one human grader look at the same solution, each solution was scored several times. Due to the limitations of GPT's token limit, not all 25 solutions could be fed into the system at the same time. They were thus processed in 75 dialogues of randomly compiled batches of 5 solutions, leading to an average of 15 grading rounds for each solution. Unfortunately, more often than not, ChatGPT did not provide the tables in the requested CSV-format, which necessitated some subsequent manual reformatting of the responses in a text editor and Excel. The narrative one-sentence feedback messages from the, on average, 15 "graders" were summarized using GPT-4 into one longer statement reflecting the majority, using the prompt shown in Fig. 9.
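Had API access been available, a single grading round could have been automated along the following lines; this is only a sketch under that assumption (using the chat-completion endpoint of the openai library as available in spring 2023), whereas the actual grading in this study was carried out through the ChatGPT interface.

```
import openai  # openai-python < 1.0, as available in spring 2023

def grade_batch(solutions, rubric_prompt):
    """One grading round: send a batch of five LaTeX solutions together with the rubric prompt."""
    numbered = "\n\n".join(f"Solution {i + 1}:\n{s}" for i, s in enumerate(solutions))
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": rubric_prompt + "\n\n" + numbered}],
    )
    return response["choices"][0]["message"]["content"]  # ideally the requested CSV table
```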
The same solutions were independently graded by the author on the same rubric of correctness of argument, symbolic derivations, and numerical results, as well as straightforwardness. Of course, human grading decisions are also somewhat arbitrary, but this was not further considered in this study. The reader can, however, form his or her own judgement for each solution, which can be found in Figs. 2-7.
Figure 1: Possible workflow for AI-assisted grading of handwritten work.
Figure 2: Prompt and Solution 1 for this study (generated by GPT-4).
**Solution 2:** We can use the RC circuit equations to find the time when the current is half the initial value. The current in the circuit can be described by the equation:
\(I(t)=I_{0}e^{-t/RC}\)
Where \(I(t)\) is the current at time \(t\), \(I_{0}\) is the initial current, and \(RC\) is the time constant of the circuit. The initial current, \(I_{0}\), can be found using Ohm's law:
\(I_{0}=\frac{V}{R}\)
We're looking for the time \(t_{2}\) when the current is half the initial current:
\(I(t_{2})=\frac{1}{2}I_{0}\)
Now we substitute the values for \(R\), \(V\), and \(C\):
\(I_{0}=\frac{5\,\mathrm{V}}{10\,\Omega}=0.5\,\mathrm{A}\)
Plugging in the value of \(I_{0}\) into the equation for \(I(t_{2})\):
\(0.25A=\frac{1}{2}(0.5A)e^{-t_{2}/(10\Omega\cdot 100\times 10^{-3}F)}\)
We want to find \(t_{2}\). First, we can simplify the equation by dividing both sides by \(0.5A\):
\(0.5=e^{-t_{2}/(10\Omega\cdot 100\times 10^{-3}F)}\)
Now, take the natural logarithm of both sides:
\(\ln(0.5)=-t_{2}/(10\Omega\cdot 100\times 10^{-3}F)\)
Then, multiply both sides by the time constant, \(RC\):
\(t_{2}=-\ln(0.5)(10\Omega\cdot 100\times 10^{-3}F)\)
And finally, calculate the value of \(t_{2}\):
\(t_{2}\approx 0.693\cdot(10\Omega\cdot 100\times 10^{-3}F)\approx 0.693\cdot 1s \approx 0.693s\)
So, at \(t_{2}\approx 0.693\) seconds, half the initial current flows through the resistor.
**Solution 3:** To solve this problem, we will first find the equation for the current in the circuit as a function of time, and then we will find the time when the current is half of its initial value.
The current flowing through the resistor at any given time t can be determined using the capacitor discharge equation:
\(I(t)=I_{0}e^{-t/RC}\)
Where \(I(t)\) is the current at time t, \(I_{0}\) is the initial current, R is the resistance, C is the capacitance, and t is the time elapsed.
We need to find \(I_{0}\) first, which is the initial current flowing through the resistor at \(t=0\). We can find it using Ohm's law: \(I_{0}=\frac{V}{R}\)
Given values are \(R=10,\mathrm{Ohms}\) and \(V=5,\mathrm{V}\).
\(I_{0}=\frac{5,\mathrm{V}}{10,\mathrm{Ohms}}=0.5,\mathrm{A}\)
Now we have the equation for the current:
\(I(t)=0.5e^{-t/RC}\)
We want to find the time \(t_{2}\) when the current is half of its initial value:
\(I(t_{2})=\frac{1}{2}I_{0}=0.25,\mathrm{A}\)
So,
\(0.25=0.5e^{-t_{2}/RC}\)
Divide both sides by \(0.5\):
\(0.5=e^{-t_{2}/RC}\)
Now take the natural logarithm of both sides:
\(\ln 0.5=-\frac{t_{2}}{RC}\)
Given capacitance \(C=100,\mathrm{mF}=100\times 10^{-3},\mathrm{F}\), we can now solve for \(t_{2}\):
\(t_{2}=-RC\ln 0.5\)
\(t_{2}=-(10,\mathrm{Ohms})(100\times 10^{-3},\mathrm{F})\ln 0.5\)
\(t_{2}\approx 0.693\times 10^{-3},\mathrm{s}\)
So, at \(t_{2}\approx 0.693,\mathrm{ms}\), half the initial current flows through the resistor.
**Solution 4:** To find the time when half the initial current flows through the resistor, we first need to find the initial current and the equation for the current through the resistor as a function of time. The initial current (\(I_{0}\)) can be found using Ohm's Law:
\(I_{0}=\frac{V}{R}\)
where \(V\) is the initial voltage across the capacitor (5V) and \(R\) is the resistance (10 Ohms).
\(I_{0}=\frac{5V}{10\mathrm{\ Ohms}}=0.5\mathrm{A}\)
The current through the resistor as a function of time can be found using the formula for the current in an RC circuit:
\(I(t)=\frac{V}{R}e^{-\frac{t_{2}}{RC}}\)
where \(I(t)\) is the current at time \(t\), \(V\) is the initial voltage across the capacitor, \(R\) is the resistance, \(C\) is the capacitance, and \(t\) is the time. In our case, \(V=5\mathrm{V}\), \(R=10\mathrm{\ Ohms}\), and \(C=100\mathrm{mF}=0.1\mathrm{F}\).
Now we need to find the time \(t_{2}\) when half the initial current flows through the resistor, i.e., when \(I(t_{2})=0.5I_{0}=0.25\mathrm{A}\). We can set up the equation:
\(0.25\mathrm{A}=\frac{5V}{10\mathrm{\ Ohms}}e^{-\frac{t_{2}}{(10\mathrm{\ Ohms})(0.1\mathrm{F})}}\)
Now we need to solve for \(t_{2}\). First, we can simplify the equation:
\(0.5=e^{-\frac{t_{2}}{1s}}\)
Next, take the natural logarithm of both sides:
\(\ln(0.5)=-\frac{t_{2}}{1s}\)
Finally, solve for \(t_{2}\):
\(t_{2}=-1s\cdot\ln(0.5)\approx 0.6931\mathrm{s}\)
So, at \(t_{2}\approx 0.6931\mathrm{s}\), half the initial current flows through the resistor.
**Solution 5:** To solve this problem, we'll first find the initial current and then determine the time it takes for the current to decrease to half that value.
The initial current (\(I_{0}\)) flowing through the resistor can be found using Ohm's law:
\(I_{0}=\frac{V}{R}\)
With \(V=5V\) and \(R=10\,\mathrm{Ohms}\):
\(I_{0}=\frac{5}{10}=0.5\,\mathrm{A}\)
Now, let's find the time constant (\(\tau\)) of the RC circuit:
\(\tau=RC\)
With \(R=10\,\mathrm{Ohms}\) and \(C=100\,\mathrm{mF}=0.1\,\mathrm{F}\):
\(\tau=10\times 0.1=1\,\mathrm{s}\)
The current in the circuit at any time \(t\) can be expressed as:
\(I(t)=I_{0}e^{-t/\tau}\)
We want to find the time \(t_{2}\) when the current is half the initial current:
\(\frac{1}{2}I_{0}=I_{0}e^{-t_{2}/\tau}\)
Divide both sides by \(I_{0}\):
\(\frac{1}{2}=e^{-t_{2}/\tau}\)
Now, take the natural logarithm of both sides:
\(\ln\left(\frac{1}{2}\right)=-t_{2}/\tau\)
\(t_{2}=-\tau\ln\left(\frac{1}{2}\right)\)
Since \(\tau=1\,\mathrm{s}\):
\(t_{2}=-1\ln\left(\frac{1}{2}\right)\)
\(t_{2}\approx 0.693\,\mathrm{s}\)
So, at \(t_{2}\approx 0.693\,\mathrm{s}\), half the initial current flows through the resistor.
Figure 3: Solutions 2 and 5 to the prompt in Fig. 2.
**Solution 6:** The current in the circuit at time t is given by the equation: \(i(t)=\frac{V}{R}e^{-\frac{t}{RC}}\), where \(RC\) is the time constant of the circuit.
At time \(t_{2}\), half of the initial current flows through the resistor, so:
\(\frac{1}{2}i(0)=\frac{V}{R}e^{-\frac{t_{2}}{RC}}\)
\(\Rightarrow\frac{1}{2}\cdot\frac{V}{R}=\frac{V}{R}e^{-\frac{t_{2}}{RC}}\)
Taking the natural logarithm of both sides, we get:
\(\ln\frac{1}{2}=-\frac{t_{2}}{RC}\)
\(\Rightarrow t_{2}=-RC\ln\frac{1}{2}\)
Substituting the values given in the problem, we get:
\(t_{2}=-10\cdot 0.1\cdot\ln\frac{1}{2}\approx 0.69\) seconds
Therefore, half the initial current flows through the resistor at \(t_{2}\approx 0.69\) seconds.
**Solution 7:** The current through a capacitor is given by \(I(t)=C\frac{dV}{dt}\), and the voltage across a capacitor is given by \(V(t)=V_{0}e^{-\frac{t}{RC}}\) where \(V_{0}\) is the initial voltage across the capacitor, \(R\) is the resistance of the circuit, and \(C\) is the capacitance of the capacitor.
At \(t=0\), the voltage across the capacitor is \(V_{0}=5\) V, so we have:
\(I(0)=C\frac{dV}{dt}\Big|_{t=0}=C\frac{d}{dt}\Big[V_{0}e^{-\frac{t}{RC}}\Big]\Big|_{t=0}=-\frac{V_{0}}{R}=-0.5\) A
where the negative sign indicates that the current is flowing in the opposite direction to the assumed direction.
To find the time \(t_{2}\) at which half the initial current flows through the resistor, we need to solve the equation:
\(I(t_{2})=-\frac{0.5}{2}=-0.25\) A
We can use the equation for the voltage across the capacitor to solve for \(t_{2}\):
\(V(t_{2})=V_{0}e^{-\frac{t_{2}}{RC}}=2.5\) V
Taking the natural logarithm of both sides and rearranging, we get:
\(\ln\left(\frac{V_{0}}{2.5}\right)=\frac{t_{2}}{RC}\)
Substituting in the values of \(V_{0}\), \(C\), and \(R\), we get:
\(\ln\left(\frac{5}{2.5}\right)=\frac{t_{2}}{10\times 10^{-3}}\)
Simplifying, we get:
\(t_{2}=10\times 10^{-3}\ln(2)\approx 6.93\) ms
Therefore, half the initial current flows through the resistor at a time of \(t_{2}\approx 6.93\) ms.
**Solution 8:** The current through the circuit is given by the equation:
\(i(t)=\frac{V}{R}e^{-\frac{t}{RC}}\)
where \(i(t)\) is the current at time \(t\), \(V\) is the initial voltage across the capacitor, \(R\) is the resistance, \(C\) is the capacitance, and \(e\) is the base of the natural logarithm.
To find the time \(t_{2}\) when half the initial current flows through the resistor, we need to solve for \(t\) in the equation:
\(\frac{i(t_{2})}{i(0)}=\frac{1}{2}\)
Substituting in the equation for \(i(t)\) and simplifying, we get:
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
Taking the natural logarithm of both sides, we get:
\(t_{2}=-RC\ln\left(\frac{1}{2}\right)=RC\ln(2)\approx 0.69\) s
Therefore, half the initial current flows through the resistor at approximately \(t_{2}=RC\ln(2)\).
**Solution 9:** The current through a capacitor in a circuit can be calculated as \(i(t)=C\frac{dv(t)}{dt}\), where \(v(t)\) is the voltage across the capacitor at time \(t\).
In this case, the voltage across the capacitor is given as \(V=5V\), and the capacitance is \(C=100mF=0.1F\). Therefore, the initial current flowing through the resistor is:
\(i_{0}=C\frac{dV}{dt}=0\)
This is because at time \(t=0\), the capacitor is fully charged, so there is no current flowing through the circuit.
The current at any later time \(t\) can be calculated using:
\(i(t)=C\frac{dV(t)}{dt}\)
The voltage across the capacitor at any time \(t\) can be found using:
\(V(t)=Ve^{-\frac{t}{RC}}\)
where \(RC\) is the time constant of the circuit, which is equal to the product of resistance \(R\) and capacitance \(C\). In this case, \(RC=1\)s.
Therefore, the current at time \(t\) is:
\(i(t)=C\frac{dV(t)}{dt}=\frac{V}{R}e^{-\frac{t}{RC}}\)
To find the time \(t_{2}\) at which half the initial current flows through the resistor, we can set \(i(t_{2})=\frac{i_{0}}{2}\) and solve for \(t_{2}\):
\(\frac{V}{R}e^{-\frac{t_{2}}{RC}}=\frac{i_{0}}{2}=0\)
\(e^{-\frac{t_{2}}{RC}}=0\)
This equation has no solution, which means that half the initial current never flows through the resistor.
The reason for this is that a capacitor does not allow sudden changes in voltage or current, but instead resists them by storing or releasing charge. When the circuit is first connected, the capacitor is fully charged and there is no current flowing. As the voltage across the capacitor decreases, the current flowing through the resistor increases, but it never reaches half the initial current. The current continues to decrease exponentially over time, approaching zero as the voltage across the capacitor approaches zero.
**Solution 10:** The current flowing through the resistor and capacitor is given by the equation:
\(i(t)=\frac{V}{R}e^{-\frac{t}{RC}}\)
where \(RC\) is the time constant of the circuit, which is equal to \(RC=10^{-1}\) seconds in this case.
The initial current \(i(0)\) can be found by plugging in \(t=0\):
\(i(0)=\frac{V}{R}=0.5\) A
To find the time \(t_{2}\) when half the initial current is flowing through the resistor, we need to solve for \(t\) in the equation:
\(\frac{i(t)}{i(0)}=\frac{1}{2}\)
Substituting in the expression for \(i(t)\) and solving for \(t\), we get:
\(e^{-\frac{t_{2}}{RC}}=\ln\left(\frac{1}{2}\right)\)
Solving for \(t_{2}\), we get:
\(t_{2}=-RC\ln\left(\frac{1}{2}\right)=RC\ln(2)\approx 0.69\) s
Therefore, half the initial current flows through the resistor at
Figure 4: Solutions 6 through 10 to the prompt in Fig. 2.
**Solution 11:** The current through a capacitor is given by:
\(i(t)=C\frac{dv(t)}{dt}\)
where \(i(t)\) is the current at time \(t\) and \(v(t)\) is the voltage across the capacitor at time \(t\).
The voltage across the capacitor decreases exponentially over time according to:
\(v(t)=V_{0}e^{-\frac{t}{RC}}\)
where \(V_{0}\) is the initial voltage across the capacitor, \(R\) is the resistance of the resistor, \(C\) is the capacitance of the capacitor, and \(t\) is the time elapsed since the capacitor was fully charged. The current flowing through the resistor is given by Ohm's law: \(i_{R}(t)=\frac{v(t)}{R}\)
Substituting the equation for \(v(t)\) into this equation yields:
\(i_{R}(t)=\frac{V_{0}}{R}e^{-\frac{t}{RC}}\)
To find the time \(t_{2}\) when half of the initial current flows through the resistor, we need to solve the following equation for \(t\):
\(i_{R}(t_{2})=\frac{1}{2}i_{R}(0)\)
Substituting the expressions for \(i_{R}(t_{2})\) and \(i_{R}(0)\), we get:
\(\frac{V_{0}}{R}e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\frac{V_{0}}{R}\)
Simplifying this equation yields:
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
Taking the natural logarithm of both sides, we get:
\(-\frac{t_{2}}{RC}=\ln\frac{1}{2}\)
Solving for \(t_{2}\), we get:
\(t_{2}=-RC\ln\frac{1}{2}\)
Substituting the values for \(R\) and \(C\), we get:
\(t_{2}=-10\ \Omega\times 100\ \mu\text{F}\times\ln\frac{1}{2}\)
Calculating this expression yields:
\(t_{2}\approx 0.69\) s
Therefore, half of the initial current flows through the resistor at approximately \(t_{2}=0.69\) s.
**Solution 12:** The current flowing through the resistor and capacitor at time \(t\) is given by the equation:
\(i(t)=\frac{V}{R}e^{-\frac{t}{RC}}\)
where \(RC\) is the time constant of the circuit.
At time \(t=0\), the current is:
\(i(0)=\frac{V}{R}=\frac{5}{10}=0.5\) A
To find the time \(t_{2}\) when half the initial current flows through the resistor, we need to solve the equation:
\(i(t_{2})=\frac{1}{2}i(0)\)
Substituting the expression for \(i(t)\) and simplifying, we get:
\(\frac{V}{R}e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\frac{V}{R}\)
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
Taking the natural logarithm of both sides, we get:
\(-\frac{t_{2}}{RC}=\ln\frac{1}{2}=-\ln 2\)
Substituting the values of \(R\) and \(C\), we get:
\(t_{2}=(10\ \Omega)(100\ \mu\text{F})\ln 2=0.693\) s
Therefore, half the initial current flows through the resistor after \(t_{2}=0.693\) s.
Several measures of similarity between solutions were considered as the basis for clustering: the similarity between the solution scores on the rubric for both human and AI grades, as well as the similarity-of-approach provided in the response of the AI-tool. Since, due to the token limit, only five solutions could be graded at a time, the overall \(25\times 25\) matrix was compiled by averaging the \(5\times 5\) matrices. Clustering was accomplished using the gplots-package [28] within R [29].
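The \(25\times 25\) similarity matrix described above is assembled from overlapping \(5\times 5\) blocks and then clustered hierarchically. The study did this with the gplots package in R; the following Python sketch is only an illustrative equivalent, in which randomly generated symmetric blocks stand in for the model-reported similarities and scipy's hierarchical clustering stands in for gplots.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

N = 25  # number of solutions

def assemble_similarity(blocks, n=N):
    """Average 5x5 similarity blocks into an n-by-n matrix.

    `blocks` is an iterable of (indices, 5x5 array) pairs; entries covered by
    several grading cycles are averaged, entries never covered stay zero.
    """
    total, count = np.zeros((n, n)), np.zeros((n, n))
    for idx, block in blocks:
        sub = np.ix_(idx, idx)
        total[sub] += block
        count[sub] += 1
    return np.divide(total, count, out=np.zeros_like(total), where=count > 0)

# Illustration with random symmetric blocks; the study averaged the model-reported similarities.
rng = np.random.default_rng(0)
blocks = []
for start in range(0, N, 5):
    idx = list(range(start, start + 5))
    b = rng.uniform(size=(5, 5))
    blocks.append((idx, (b + b.T) / 2))

S = assemble_similarity(blocks)
D = 1.0 - S                    # turn similarity into dissimilarity
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into three clusters
print(labels)
```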
Figure 5: Solutions 11 through 14 to the prompt in Fig. 2.
**Solution 15:** The voltage across the capacitor \(V_{C}\) and the current through the resistor \(I\) are related by the equation:
\(V_{C}=IR\)
Since the voltage across the capacitor and the capacitance are constant, the current through the resistor can be expressed as:
\(I=\frac{dQ}{dt}=C\frac{dV_{C}}{dt}=\frac{V}{R}=\frac{5V}{10\Omega}=0.5A\)
where \(Q\) is the charge on the capacitor.
The current through the resistor decreases as the voltage across the capacitor decreases, so at time \(t_{2}\), the current through the resistor will be half of its initial value:
\(I(t_{2})=0.5\cdot 0.5A=0.25A\)
The voltage across the capacitor can be expressed as:
\(V_{C}=V_{0}e^{-\frac{t}{R\Omega}}=5V\cdot e^{-\frac{t}{100m\Omega\cdot 10^{-3}F}}\)
Setting \(I(t_{2})=0.25A\) and solving for \(t\) gives:
\(0.25A=0.5A\cdot e^{-\frac{t_{2}}{100m\Omega\cdot 10^{-3}F}}\)
\(e^{-\frac{t_{2}}{100m\Omega\cdot 10^{-3}F}}=\frac{1}{2}\)
\(t_{2}=100m\Omega\cdot 10^{-3}F\cdot\ln\frac{1}{2}=100\cdot 10^{-3}\cdot\ln \frac{1}{2}\approx 0.693\cdot 100\cdot 10^{-3}\approx 69.3 \cdot 10^{-3}s=69.3ms\)
So the time \(t_{2}\) at which half the initial current flows through the resistor is approximately 69.3ms.
**Solution 16:** The voltage across the capacitor and the resistor can be expressed as:
\(V(t)=V_{0}e^{-t/(RC)}\)
where \(V_{0}\) is the initial voltage across the capacitor.
The current flowing through the resistor can be expressed as:
\(I(t)=\frac{dV}{dt}=-\frac{V_{0}}{RC}e^{-t/(RC)}\)
The time \(t_{2}\) at which half the initial current flows through the resistor can be found by setting \(I(t_{2})=\frac{I_{0}}{2}\) and solving for \(t_{2}\):
\(\frac{I_{0}}{RC}=-V_{0}e^{-t_{2}/(RC)}\)
\(\frac{I_{0}}{2V_{0}}=e^{-t_{2}/(RC)}\)
\(\ln\left(\frac{I_{0}}{2V_{0}}\right)=-\frac{t_{2}}{RC}\)
\(t_{2}=-RC\ln\left(\frac{I_{0}}{2V_{0}}\right)\)
Substituting the values \(I_{0}=\frac{V_{0}}{R}=\frac{5}{10}=0.5\) A, \(R=10\) Ohms, and \(C=100\) mF, we get:
\(t_{2}=-10\cdot 100\cdot 10^{-3}\ln\left(\frac{0.5}{5}\right)=-10\cdot 100\cdot 10^{- 3}\ln 0.1\approx 2.302\) s
**Solution 17:** The initial current \(i(0)\) through the resistor at time \(t=0\) can be calculated using Ohm's law:
\(i(0)=\frac{V}{R}=\frac{5V}{10\Omega}=0.5A\)
Let \(t_{2}\) be the time when half the initial current, \(i(0)/2\), is flowing through the resistor.
The current through a resistor in an RC circuit is given by:
\(i(t)=i(0)\cdot e^{-\frac{t}{RC}}\)
Setting \(i(t_{2})=i(0)/2\) and solving for \(t_{2}\):
\(i(t_{2})=i(0)\cdot e^{-\frac{t_{2}}{RC}}\)
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
\(t_{2}=-RC\ln\frac{1}{2}\)
Substituting the values of \(R\), \(C\), and \(i(0)\):
\(t_{2}=-10\Omega\cdot 100\cdot 10^{-3}F\cdot\ln\frac{1}{2}\approx 11.43\)ms
So, half the initial current is flowing through the resistor after approximately 11.43 milliseconds.
**Solution 18:** The initial current \(I(0)\) through the resistor at time \(t=0\) can be calculated using Ohm's Law:
\(I(0)=\frac{V}{R}=\frac{5}{10}=0.5\) A
The current through the resistor at any time \(t\) can be calculated using the exponential decay equation for an RC circuit:
\(I(t)=I(0)\ast e^{-\frac{t}{RC}}\)
Where \(RC\) is the time constant of the circuit, which is equal to \(RC=R+C=10\ast 10^{-5}=10^{-4}\) seconds.
To find the time \(t_{2}\) when half the initial current is flowing through the resistor, we can set \(I(t_{2})=\frac{I(0)}{2}\) and solve for \(t_{2}\):
\(\frac{I(0)}{2}=I(0)\ast e^{-\frac{t_{2}}{RC}}\)
\(\frac{1}{2}=e^{-\frac{t_{2}}{10^{-4}}}\)
\(\ln(\frac{1}{2})=-\frac{t_{2}}{10^{-4}}\)
\(t_{2}=-10^{-4}\ast\ln(\frac{1}{2})\)
Using a calculator, we can find that \(t_{2}\approx 0.693/10^{-4}\approx 693\) ms. So, half the initial current flows through the resistor after approximately 693 milliseconds.
**Solution 19:** The current through the resistor at any time \(t\) can be given by the equation \(I(t)=C\frac{dV}{dt}\). At time \(t=0\), the voltage across the capacitor is 5V and the current is given by \(I(0)=C\frac{dV}{dt}=\frac{5}{10^{-6}}=5\cdot 10^{6}\) Amps.
The half-life of the current can be calculated as the time at which the current through the resistor is half of its initial value, or \(I(t_{2})=\frac{1}{2}I(0)\).
Using Ohm's law \(V=IR\), the voltage across the resistor at time \(t_{2}\) can be calculated as \(V(t_{2})=I(t_{2})R=\frac{1}{2}I(0)R=\frac{1}{2}\cdot 5\cdot 10^{6}\cdot 10=2.5 \cdot 10^{7}\) Volts.
Finally, the time \(t_{2}\) can be calculated using the equation for the current, \(I(t)=C\frac{dV}{dt}\), and the voltage across the capacitor, \(V(t)=V_{0}e^{-\frac{t}{RC}}\), as follows:
\(I(t_{2})=C\frac{dV}{dt}=C\frac{d}{dt}(V_{0}e^{-\frac{t}{RC}})=-CV_{0}\frac{1}{RC}e^{-\frac{t}{RC}}\)
\(\frac{1}{2}I(0)=-CV_{0}\frac{1}{RC}e^{-\frac{t_{2}}{RC}}\)
\(t_{2}=RC\ln\left(\frac{5\cdot 10^{-6}\cdot 5}{2\cdot 5\cdot 10^{6}}\right)=RC\ln \left(\frac{5}{2}\right)=RC\ln 10\)
\(t_{2}=10\cdot\ln 10\approx 22.9\)ms
So, the time at which half the initial current flows through the resistor is approximately 22.9ms.
**Solution 20:** The initial current flow through the resistor can be calculated using Ohm's law: \(I(0)=\frac{V}{R}=\frac{5}{10}=0.5\) A.
The current through the resistor at time \(t\) can be expressed as \(I(t)=I(0)\ast e^{-\frac{t}{RC}}\).
Half of the initial current would flow through the resistor when \(I(t_{2})=\frac{1}{2}\cdot I(0)=0.25\) A.
Therefore, we can find \(t_{2}\) by solving for \(t\) in the equation \(I(t)=0.25\) A:
\(0.25=0.5\cdot e^{-\frac{t_{2}}{RC}}\)
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
\(t_{2}=-RC\cdot\ln\left(\frac{1}{2}\right)\)
\(t_{2}=-(10\text{ Ohms})(100\text{ mF})\cdot\ln\left(\frac{1}{2}\right)\approx 7.96\text{ s}\)
So, after approximately 7.96 seconds, half of the initial current will flow through the resistor.
Figure 6: Solutions 15 through 20 to the prompt in Fig. 2.
**Solution 21:** The initial current through the resistor is given by the equation \(i(t)=C\frac{dv}{dt}\), where \(v(t)\) is the voltage across the capacitor. When the capacitor is fully charged, the voltage across it is \(5\)V. At time \(t=0\), the voltage across the capacitor begins to decrease as charge flows through the resistor, so \(\frac{dv}{dt}<0\).
The time \(t_{2}\) at which half the initial current is flowing through the resistor can be found by solving for \(i(t)=\frac{1}{2}i(0)\) and using the above equation for \(i(t)\).
\(i(0)=C\frac{dv}{dt}=C\frac{V}{R}\), so \(i(t)=C\frac{dv}{dt}=\frac{1}{2}C\frac{V}{R}=\frac{1}{2}i(0)\).
Rearranging and solving for \(t\) gives us \(t_{2}=\frac{R}{2}\frac{C}{i(0)}=\frac{10\text{ Ohms}}{\frac{2}{100\text{ mF}}}=100\) ms.
So after \(100\) milliseconds, half the initial current will be flowing through the resistor.
**Solution 22:** The time constant \(\tau\) of an RC circuit is given by \(\tau=RC\). In this case, \(\tau=RC=(10\text{ Ohms})(100\text{ mF})=1\) s.
The voltage across the capacitor as a function of time can be described by the equation \(V_{C}(t)=V_{0}e^{-t/\tau}\), where \(V_{0}\) is the initial voltage across the capacitor.
The current through the resistor is given by \(I(t)=C\frac{dV_{C}}{dt}\). Substituting the expression for \(V_{C}(t)\), we get \(I(t)=\frac{V_{0}C}{\tau}e^{-t/\tau}\). At time \(t=0\), the current is \(I(0)=\frac{V_{0}C}{\tau}=\frac{(5\text{ V})(100\text{ mF})}{1\text{ s}}=500\) mA.
Half the initial current is \(I(t)=\frac{1}{2}I(0)=250\) mA. Setting this equal to the expression for \(I(t)\) and solving for \(t\), we get:
\(\frac{1}{2}I(0)=\frac{V_{0}C}{\tau}e^{-t/\tau}\implies e^{-t/\tau}=\frac{1} {2}\implies-\frac{t}{\tau}=\ln\frac{1}{2}\implies t_{2}=\tau\ln 2=1\text{ s}\ln 2 \approx 0.693\) s
So, half the initial current flows through the resistor at approximately \(t_{2}=0.693\) s
**Solution 23:** The current \(i\) in an RC circuit at any time \(t\) can be described by the equation \(i(t)=C\frac{dV}{dt}\), where \(V\) is the voltage across the capacitor. The voltage across the capacitor can be described by the equation \(V(t)=V_{0}e^{-\frac{t}{RC}}\), where \(V_{0}\) is the initial voltage.
Substituting the second equation into the first equation, we get:
\(i(t)=C\frac{d}{dt}(V_{0}e^{-\frac{t}{RC}})=-\frac{V_{0}}{RC}e^{-\frac{t}{RC}}\)
We want to find the time \(t_{2}\) such that \(i(t_{2})=\frac{1}{2}i(0)\), where \(i(0)=\frac{V_{0}}{R}\). Substituting the values for \(V_{0}\), \(R\), and \(C\) into this equation and solving for \(t_{2}\) gives us:
\(\frac{1}{2}i(0)=\frac{1}{2}\frac{V_{0}}{R}=-\frac{V_{0}}{2RC}e^{-\frac{t_{2}} {RC}}\)
\(e^{-\frac{t_{2}}{RC}}=-\frac{R}{2C}\frac{-R}{2}i(0)=-\frac{1}{10}\)
\(-\frac{t_{2}}{RC}=\ln\left(-\frac{1}{10}\right)\)
\(t_{2}=RC\ln\left(\frac{1}{10}\right)=10\text{s}\ln\left(\frac{1}{10}\right) \approx 16.4\)s
So the time \(t_{2}\) at which half the initial current flows through the resistor is approximately \(16.4\) seconds.
**Solution 24:** The voltage across the capacitor and the resistor is the same at any given time. We can use the voltage across the capacitor to determine the current flowing through the resistor. The equation for the voltage across a capacitor is given by:
\(V=V_{0}\left(1-e^{-\frac{t}{RC}}\right)\)
where \(V_{0}\) is the initial voltage and \(t\) is the time elapsed.
The current flowing through the resistor is given by:
\(I=\frac{V}{R}=\frac{V_{0}}{R}e^{-\frac{t}{RC}}\)
We want to find the time \(t_{2}\) at which the current is half of the initial current. Thus,
\(\frac{V_{0}}{R}e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\cdot\frac{V_{0}}{R}\)
Solving for \(t_{2}\), we get:
\(t_{2}=RC\ln\left(\frac{1}{e}\right)\approx 6.93RC\)
Plugging in the values for \(R\), \(C\), and \(V_{0}\), we get:
\(t_{2}\approx 6.93\cdot 10\cdot 0.0001=0.00693\) seconds.
**Solution 25:** The voltage across the capacitor as a function of time \(t\) is given by \(V(t)=V_{0}e^{-t/RC}\), where \(V_{0}\) is the initial voltage and \(RC\) is the time constant.
The current flowing through the resistor as a function of time \(t\) is given by \(I(t)=C\frac{dV}{dt}=-\frac{V_{0}}{RC}e^{-t/RC}\).
We want to find the time \(t_{2}\) at which the current is half the initial current, \(I(t_{2})=\frac{1}{2}I_{0}\), where \(I_{0}=C\frac{dV}{dt}\big{|}_{t=0}=\frac{V_{0}}{R}\).
Solving for \(t_{2}\), we get:
\(\frac{1}{2}I_{0}=-\frac{V_{0}C}{RC}e^{-t_{2}/RC}\)
\(\frac{1}{2}\frac{V_{0}}{R}=-\frac{R}{RC}e^{-t_{2}/RC}\)
\(e^{-t_{2}/RC}=-\frac{R}{2C}\)
\(-t_{2}/RC=\ln\left(-\frac{R}{2C}\right)\)
\(t_{2}=RC\ln\left(\frac{2C}{R}\right)\)
Substituting \(R=10\text{ Ohms and }C=100\text{ mF}=0.1\text{ F}\), we get:
\(t_{2}=10\text{ Ohms}\cdot 0.1\text{ F}\ln\left(\frac{0.2\text{ F}}{10\text{ Ohms}}\right)\approx 13.3\text{ ms}\)
So it takes approximately \(13.3\text{ ms}\) for half the initial current to flow through the resistor.
Figure 7: Solutions 21 through 25 to the prompt in Fig. 2.
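For orientation when reading the gradings that follow, the physically correct answer to the prompted problem can be checked in a few lines; the sketch below is added only for reference and is not part of any graded solution or of the original study.

```python
import math

# Component values from the problem statement
R = 10.0        # resistance in ohms
C = 100e-3      # capacitance in farads (100 mF)
V0 = 5.0        # initial capacitor voltage in volts

tau = R * C                 # time constant of the RC circuit: 1 s
I0 = V0 / R                 # initial current through the resistor: 0.5 A
t2 = tau * math.log(2)      # time at which the current has dropped to I0 / 2

print(f"tau = {tau:.3f} s, I0 = {I0:.3f} A, t2 = {t2:.3f} s")
# prints: tau = 1.000 s, I0 = 0.500 A, t2 = 0.693 s
```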
**Prompt:** Consider the following numbered solutions to this problem:
"At $t=0$, a resistor (resistance $R8=10 Ohms) is connected across a fully charged capacitor (voltage $V8=5V, capacitance $C$=100mF). At what time $t_2$ does half the initial current flow through the resistor?"
Grade them from 0 (worst) to 4 (best) in terms of each correctness of argumentation, symbolic correctness, numerical correctness, and straightforwardness. Provide these grades in a comma-separated table with an additional column that gives a one-sentence feedback. Also, provide a comma-separated matrix of similarity between the solutions based on approaches, with the solution number as header row.
================= Solution 1:
We have a resistor (R = 10 Ohms) connected across a fully charged capacitor (V = 5V, C = 100mF).
...
Therefore, the time $t_2$ when half the initial current flows through the resistor is approximately $0.693\,\text{s}$.
...
================= Solution 22:
The time constant $\tau$ of an RC circuit is given by $\tau = RC$. In this case, $\tau = RC = (10\text{ Ohms})(100\text{ mF}) = 1$ s.
...
Figure 8: Prompt for AI-grading of the solutions; Solutions 1 through 25 are inserted in abbreviated form.
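The prompt in Fig. 8 asks for a comma-separated grade table and a similarity matrix. The sketch below shows one way such a prompt could be assembled for a batch of five solutions and how a comma-separated reply could be parsed; the helper names and the batching format are assumptions, and the actual call to the language model is deliberately omitted.

```python
import csv
import io

RUBRIC = ("correctness of argumentation, symbolic correctness, "
          "numerical correctness, and straightforwardness")

def build_grading_prompt(problem, solutions, first_number=1):
    """Assemble a grading prompt for one batch of solutions (hypothetical format)."""
    header = (
        "Consider the following numbered solutions to this problem:\n"
        f'"{problem}"\n\n'
        f"Grade them from 0 (worst) to 4 (best) in terms of {RUBRIC}. "
        "Provide these grades in a comma-separated table with an additional column "
        "that gives a one-sentence feedback. Also, provide a comma-separated matrix "
        "of similarity between the solutions based on approaches."
    )
    body = "\n\n".join(
        f"================= Solution {first_number + i}:\n{sol}"
        for i, sol in enumerate(solutions)
    )
    return header + "\n\n" + body

def parse_grade_table(reply_csv):
    """Parse the comma-separated grade table assumed to be returned by the model."""
    return list(csv.DictReader(io.StringIO(reply_csv)))

# The call to the language model is omitted; a dummy reply illustrates the parsing step.
dummy_reply = ("Solution,Argumentation,Symbolic,Numerical,Straightforwardness,Feedback\n"
               "1,4,4,4,4,Correct and straightforward.")
print(parse_grade_table(dummy_reply))
```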
\begin{table}
\begin{tabular}{c c c c c c} Sol. & Arg. & Symb. & Num. & Str.forw. & Feedback \\ \hline
1 & \(4.0\pm 0.0\) & \(4.0\pm 0.0\) & \(3.9\pm 0.3\) & \(4.0\pm 0.0\) & The majority of graders agree that Solution 1 is correct, straightforward, and well-structured. The solution uses the correct formula for the current in an RC circuit, derives the expression for the time \(t_{2}\), and solves for the required time when half the initial current flows through the resistor. One grader notes that the final answer should be in milliseconds, not seconds. Overall, the solution is clear, concise, and logically explained. \\
2 & \(3.6\pm 0.9\) & \(3.7\pm 0.6\) & \(2.8\pm 1.6\) & \(3.8\pm 0.4\) \\
3 & \(3.7\pm 0.5\) & \(3.7\pm 0.5\) & \(2.2\pm 1.6\) & \(3.7\pm 0.5\) \\
4 & \(3.7\pm 0.5\) & \(3.8\pm 0.4\) & \(3.3\pm 1.2\) & \(3.8\pm 0.4\) \\
5 & \(3.9\pm 0.4\) & \(3.9\pm 0.4\) & \(3.9\pm 0.4\) & \(3.9\pm 0.4\) \\
6 & \(3.8\pm 0.4\) & \(3.8\pm 0.4\) & \(3.3\pm 1.1\) & \(3.8\pm 0.4\) \\
7 & \(3.1\pm 0.9\) & \(3.6\pm 0.6\) & \(3.1\pm 1.1\) & \(3.1\pm 0.9\) \\
8 & \(3.8\pm 0.4\) & \(3.8\pm 0.4\) & \(3.4\pm 1.1\) & \(3.8\pm 0.4\) \\
9 & \(0.2\pm 0.4\) & \(1.0\pm 0.9\) & \(0.0\pm 0.0\) & \(0.8\pm 0.7\) \\
10 & \(2.8\pm 1.1\) & \(3.1\pm 0.8\) & \(1.3\pm 1.3\) & \(2.9\pm 0.9\) \\ \end{tabular}
\end{table}
Table 1: AI-grading of Solutions 1–10.
\begin{table}
\begin{tabular}{c c c c c c} \hline Sol. & Arg. & Symb. & Num. & Str.forw. & Feedback \\ \hline
11 & \(3.5\pm 0.8\) & \(3.5\pm 0.7\) & \(2.8\pm 1.4\) & \(3.5\pm 0.8\) & The majority of the graders agree that Solution 11 has a correct approach by using the exponential decay equation for the RC circuit and deriving the equation for i(t). However, there seems to be a consensus that there are errors in the calculations, leading to an incorrect numerical result for \(t_{2}\). The argumentation, symbolic correctness, and straightforwardness could be improved. Despite these errors, some graders still consider the solution to be well-structured and clear. Solution 12 is a correct, well-structured, and straightforward approach to the problem. The majority of graders praised the clear argumentation, appropriate use of symbols and equations, and accurate numerical results. The solution effectively uses the RC circuit formula and provides a concise explanation. The majority of the graders agree that Solution 13 is correct, clear, and straightforward. The solution uses the appropriate RC circuit equations, symbols, and approach to find the time when half the initial current flows through the resistor. While there is mention of a sign error and incorrect final result by a couple of graders, the overall consensus supports the solution’s correctness and organization. The majority of graders agree that Solution 14 has a correct approach and argumentation, with symbolic correctness and straightforwardness. However, there are inconsistencies in the numerical results, likely due to errors in calculations or sign errors. The majority of graders agree that the solution to problem 15 is mostly accurate in terms of approach and equations used, but there are some minor mistakes in argumentation, symbolic correctness, and numerical calculations. The solution is considered to be somewhat straightforward and well-explained, but with a few errors in derivation and numerical results. The majority of the graders agree that Solution 16 has a correct approach and demonstrates accurate symbolic representation, argumentation, and use of formulas. However, a calculation mistake in determining the initial current and subsequent errors in numerical calculations lead to an incorrect numerical answer for \(t_{2}\). Despite these numerical issues, the solution is well-organized and clear. Solution 17 demonstrates the correct approach, argumentation, and symbolic representation, but makes an error in the numerical calculation, specifically a sign error in the time constant, which leads to an incorrect final result. Despite this, the solution is considered straightforward and directly addresses the problem. Solution 18 has the correct approach and argumentation, but the majority of graders point out an incorrect time constant calculation, which leads to an incorrect final numerical result. Despite this, the solution is well-structured and uses proper symbols and numerical values. The majority of the graders agree that Solution 19 contains multiple errors, including incorrect initial current and voltage calculations, as well as an incorrect approach for finding the time \(t_{2}\) when half of the initial current flows through the resistor. These errors lead to an incorrect final result. The solution also has issues with argumentation, symbolic representation, and straightforwardness. Solution 20 generally employs the correct approach and uses the exponential decay equation for solving the problem, but there are errors in the numerical calculations, specifically in the time constant and initial current. 
These inaccuracies lead to an incorrect final result. While some graders appreciate the clear and straightforward steps, the majority point out the numerical errors affecting the outcome.
\end{tabular}
\end{table}
Table 2: AI-grading of Solutions 11–20.
Often the narrative feedback includes remarks about numerical inaccuracies, even if the result is correct. However, particularly for the solution attempts that were completely incorrect, the feedback can be useful, for example, "Solution 9 demonstrates an incorrect understanding of the initial current, mistakenly assuming it to be zero."
### Agreement with Manual Grading
Fig. 10 shows the correlation between the rubric scores resulting from manual and AI-grading. The scores are clearly positively correlated, with a particularly high \(R^{2}\) for the correctness of the argument and the numerical answer. This finding is surprising, since the correctness of the argument is more subjective than, for example, the correctness of the symbolic operations, and since the scoring of the correctness of the numerical answers is the one with the highest standard deviation between grading cycles. Symbolic correctness has the lowest \(R^{2}\) and is generally rated much higher by the AI than by the author.
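The \(R^{2}\) values in Fig. 10 come from comparing the two sets of rubric scores; the following is a minimal sketch of such a comparison, using made-up score vectors rather than the study's data.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line fitted to (x, y)."""
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - np.mean(y)) ** 2)

# Made-up rubric scores (0-4 scale) for illustration only.
manual = np.array([4, 4, 3, 2, 0, 1, 4, 3, 2, 4], dtype=float)
ai     = np.array([4, 4, 3, 3, 1, 1, 4, 4, 2, 4], dtype=float)
print(f"R^2 = {r_squared(manual, ai):.3f}")
```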
Fig. 11 shows the total scores on the problem, where the rubric items approach, symbolic correctness, and straightforwardness are each weighted 20%, and the correct final numerical answer 40%. In addition, the correspondingly combined standard deviations for the AI-gradings are given by error bars, and the data points are labeled by solution number.
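A minimal sketch of this weighting, applied to the AI grades reported for Solution 2 in Table 1; the quadrature combination of the per-item standard deviations is an assumption about how the error bars were formed, and mapping the "approach" weight onto the "Arg." column is likewise assumed.

```python
import numpy as np

# Assumed column order (Arg., Symb., Num., Str.forw.), with "approach" taken to be "Arg.".
# Weights: approach, symbolic correctness, straightforwardness at 20% each; numerical answer at 40%.
WEIGHTS = np.array([0.2, 0.2, 0.4, 0.2])

def total_score(item_means, item_stds):
    """Weighted total on the 0-4 scale and an (assumed) quadrature-combined standard deviation."""
    mean = float(WEIGHTS @ item_means)
    std = float(np.sqrt(np.sum((WEIGHTS * item_stds) ** 2)))
    return mean, std

# AI grades reported for Solution 2: 3.6+-0.9, 3.7+-0.6, 2.8+-1.6, 3.8+-0.4
mean, std = total_score(np.array([3.6, 3.7, 2.8, 3.8]), np.array([0.9, 0.6, 1.6, 0.4]))
print(f"total = {mean:.2f} +/- {std:.2f}")
```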
Figure 10: Correlation between manual and AI-grading rubric scores.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Sol. & Arg. & Symb. & Num. & Str.forw. & Feedback \\ \hline
21 & \(1.6\pm 1.1\) & \(1.9\pm 1.0\) & \(0.7\pm 0.8\) & \(1.8\pm 1.1\) & The majority of the graders agree that Solution 21 has an incorrect approach, incorrect or incomplete argumentation, and incorrect numerical calculations. The relationship between i(t) and dv/dt is also incorrect, leading to a wrong calculation for \(t_{2}\). Some graders acknowledge clear explanation, symbolism, or straightforwardness in parts of the solution, but overall, the solution is deemed incorrect and weak.
The majority of graders agree that Solution 22 is correct and straightforward in its approach to finding the time when half the initial current flows through the resistor. The solution uses the time constant and the voltage across the capacitor as a function of time. However, a few graders noted some minor issues with the numerical calculation and representation of the time constant. Overall, the solution is well-explained and clear, with accurate derivation and symbolic correctness.
The majority of the graders find Solution 23 to have a correct approach and clear step-by-step argumentation. However, there are concerns about the incorrect final numerical result and potential errors in the calculations involving the exponential term and natural logarithm. Some graders also mention an incorrect use of the time constant, incorrect equation, and wrong numerical value.
The majority of graders agree that Solution 24 uses the correct approach, equations, and symbolic representation to find the time at which half the initial current flows through the resistor. However, there are numerical errors in the calculation, particularly regarding the time constant and the voltage across the capacitor. Additionally, some graders noted unnecessary complexity in the solution. Overall, the solution is partially correct but suffers from incorrect numerical values and minor inconsistencies.
The majority of the graders agree that Solution 25 has an incorrect numerical result for \(t_{2}\) due to inconsistencies, errors, and incorrect values in the derivation and equation. The approach is mostly correct and some steps are symbolically accurate, but there are sign errors and the method is not straightforward.
\end{tabular}
\end{table}
Table 3: AI-grading of Solutions 21–25.
The scores for the best solutions agree fairly well with the linear interpolation (\(R^{2}\approx 0.83\)), but fluctuations are higher in partial-credit situations. On average, GPT assigns total scores that are almost 0.9 points higher than those assigned by the author.
### Clustering
Figure 12 shows dendrograms and heat maps of the similarities between the solutions based on manual grading, AI-grading, and AI-approach similarity, respectively. Based on these, three clusters emerge for the manual and the AI-grading. For the AI-similarity-of-approach measure (determined by GPT in response to the prompt in Fig. 8), either two clusters could be identified, or the dendrogram could be cut at a deeper level (indicated by dashed lines in Fig. 12), resulting in four clusters. However, the fourth cluster only contains Solution 23, so a decision to not treat it separately could be justified.
These clusters are shown in Table 4; they roughly align with the total scores in Fig. 11, with the low, medium, and highly scored solutions forming the clusters.
All similarity measures identify the almost entirely wrong solutions 9, 19, and 21 as members of the "low" cluster. The manual grading has Solution 23 in the middle cluster, as the only mistake is \(\frac{CV_{0}}{-RC}=\frac{V_{0}}{RC}\), that is, an error in the calculation rather than a fundamental error; this solution was singled out in the clustering according to similarity-of-approach. Solution 25 not once but twice includes the same kind of error in calculating fractions.
Solutions 11 and 18 made it into the highest cluster for manual grading, but are not found in the corresponding cluster for AI-grading. Both solutions arrive at the correct result (even though Solution 18 is somewhat nonchalant with the units in the second-to-last expression).
Overall, clustering the solutions based on the similarity of the scores within the grading rubric provided similar information for manual and AI-grading, while clustering based on the similarity-of-approach is less informative.
Figure 11: Correlation between manual and AI-grading total scores. For each data point, the solution number and combined AI-grading standard deviations are indicated.
Figure 12: Dendrograms and heat maps of similarities between solutions based on manual grading (top panel), AI-grading (middle panel), and AI-approach similarity (bottom panel). The cluster cuts are indicated by purple lines; the dashed line indicates possible cuts.
The automatically formed clusters based on AI could be used for sorting solutions in preparation for human grading.
## V Discussion
### Potential for Assisting in Grading
GPT shows great promise as a tool for grading written solutions to physics problems, as evidenced by the results in Fig. 11. The system performs best at the extreme ends of the grading spectrum: clearly correct and clearly incorrect solutions are generally reliably recognized, but overall, the system currently falls short of the stringency required for standalone, unsupervised exam grading. Also, the system generally assigns more points than the author would.
While there are proposals to mitigate these shortcomings by providing "high transparency" assessments with multiple attempts and regrade requests [30], students will likely only invoke these for false negatives, i.e., scores that are lower than what they expected. This could lead to grade inflation. Instead of waiting for student complaints, instructional staff will need to verify the scores before the exam is returned.
As is typical for the tool, feedback sounds plausible but might be misleading. In grading problems, GPT-4 still exhibits some of the same weaknesses with symbolic and numerical calculations that its predecessor GPT-3 did in solving problems [6]. OpenAI promises the optional incorporation of Wolfram Alpha [31], which may solve this issue, but just like for the API, there is also a waiting list for the required plugin.
### Potential for Assisting in Learning
If not used for grading exams, especially in connection with smartphone-based scanning, GPT or similar tools could be used for assisting students in learning. This is a far more classic domain for AI in physics education [32; 33].
In that context, it is important whether the system can pinpoint particular errors. At the moment, though, this is still somewhat doubtful. For example, when prompted to find the mistake in Solution 14, the system responds:
The mistake in the solution is in the definition of the time constant. The correct definition is \(RC=R\cdot C\). The rest of the solution is correct. We need to correct the time constant and then find the correct value of \(t_{2}\).
The time constant is defined as \(RC=R\cdot C\). In this case, \(R=10\,\mathrm{Ohms}\) and \(C=100\,\mathrm{mF}=100\cdot 10^{-3}\,\mathrm{F}\), so the time constant is:
\(RC=R\cdot C=10\,\mathrm{Ohms}\cdot 100\cdot 10^{-3}\,\mathrm{F}=1\,\mathrm{s}\)
Now we can substitute the corrected time constant into the equation for \(t_{2}\):
\(t_{2}=-RC\ln\frac{1}{2}=-1\,\mathrm{s}\cdot\ln\frac{1}{2}\approx 0.693\,\mathrm{s}\)
So the correct time \(t_{2}\) at which half the initial current flows through the resistor is approximately \(0.693\) seconds.
The error is not really in the "definition" of the time constant, but simply in the calculation of the powers of ten.
When prompted to pinpoint the error in Solution 23, the response is:
The mistake in the solution is in the derivation of the equation \(i(t_{2})=\frac{1}{2}i(0)\). The correct equation should be:
\(i(t_{2})=\frac{1}{2}i(0)=\frac{1}{2}\frac{V_{0}}{R}\)
However, the solution incorrectly equates this expression to the equation for \(i(t)\):
\(-\frac{V_{0}}{2RC}e^{-\frac{t_{2}}{RC}}\)
The correct equation for \(i(t)\) should be:
\(i(t)=\frac{V_{0}}{R}e^{-\frac{t}{RC}}\)
Now, we can set up the correct equation for \(i(t_{2})\):
\(\frac{1}{2}i(0)=\frac{1}{2}\frac{V_{0}}{R}=\frac{V_{0}}{R}e^{-\frac{t_{2}}{RC}}\)
Now, we solve for \(t_{2}\):
\(e^{-\frac{t_{2}}{RC}}=\frac{1}{2}\)
\(-\frac{t_{2}}{RC}=\ln\left(\frac{1}{2}\right)\)
\(t_{2}=RC\ln 2=10\mathrm{s}\ln 2\approx 6.93\mathrm{s}\)
So the time \(t_{2}\) at which half the initial current flows through the resistor is approximately \(6.93\) seconds.
The system correctly found that the \(C\) was not cancelled, but then actually proceeded to wrongly calculate the answer. The feedback given by the system can be helpful, but needs to be evaluated critically by the learner, which in itself is an increasingly valuable competency, as AI will almost invariably become ubiquitous in everyday life.
\begin{table}
\begin{tabular}{l r r r} & Low & Medium & High \\ \hline Manual & 9 19 21 & 7 10 14 15 16 17 20 23 24 25 & 1 2 3 4 5 6 8 11 12 13 18 22 \\ AI-grading & 9 19 21 23 25 & 7 10 11 14 15 16 17 18 20 24 & 1 2 3 4 5 6 8 12 13 22 \\ AI-similarity & 9 19 21 (23) & 10 14 15 18 25 & 1 2 3 4 5 6 7 8 11 12 13 16 17 20 22 24 \\ \end{tabular}
\end{table}
Table 4: Clustering based on the dendrograms and heat maps in Fig. 12.
In any case, the system should not attempt to give away the solution; unfortunately, using the prompt "Act as a socratic teacher and assist a learner with this solution attempt" made the reply more personable and supportive ("You're on the right track, but there is a small mistake..."), but did not stop the system from providing a solution, either.
## VI Limitations
This study is limited to one particular tool, GPT-4, at one particular point in time, mid-April 2023. Due to the rapid development of AI-tools, it can thus provide nothing more than a snapshot of what is minimally possible. The initial step, the conversion of handwritten solutions involving mathematical expressions to a machine-readable format, could not be investigated here due to lack of access to the required Application Programming Interface.
The author had to make choices when providing prompts to GPT. While some experimentation went into formulating the prompts, due to the probabilistic nature of the responses, some decisions were made based on anecdotal evidence, and better prompts framing the solutions could well have resulted in more reliable results.
GPT is only one of the rapidly evolving tools becoming broadly available. There are competing solutions by Google [34], NVIDIA [35], and Microsoft [36], which may perform better or worse, but could not be evaluated here.
## VII Outlook
The next step to this exploratory study would be enabled by access to the API (either of GPT or of other tools on the market) and consist of fully implementing the workflow in Fig. 1; performance of the optical character recognition is also crucial to the reliability of future systems.
A future study should involve authentic student work instead of GPT grading itself, for example from exams in large-enrollment physics courses, and compare the results from AI-grading to those from a traditional grader pool. Particularly in high-stake exams, grading usually involves more than one round, so inter-rater reliability could also be established for the human graders. As a welcome side-effect, earlier steps in Fig. 1 could provide better workflow management even for human graders, as less paper would be shuffled around and grades could more easily be adjusted.
## VIII Conclusion
In this exploratory study, GPT has shown considerable potential for grading freeform student work in physics. While AI-assigned grades have a strong correlation to manually assigned grades, they are currently not reliable enough for summative assessments, such as high-stake exams. The system, however, is reliable enough to assist human graders by pre-sorting or clustering solutions and by providing preliminary scores. GPT still remains hampered by its limited capabilities and inconsistencies carrying out symbolic and numerical calculations, so currently several independent grading rounds are needed. The narrative feedback provided by the system seems plausible, but currently still frequently falls short of being reliable. The system can be helpful in formative assessment, but also in that scenario, learners need to critically evaluate its responses.
###### Acknowledgements.
The author would like to thank Christine Kortemeyer for helpful feedback.
|
2306.08109 | Provable Accelerated Convergence of Nesterov's Momentum for Deep ReLU
Neural Networks | Current state-of-the-art analyses on the convergence of gradient descent for
training neural networks focus on characterizing properties of the loss
landscape, such as the Polyak-Lojaciewicz (PL) condition and the restricted
strong convexity. While gradient descent converges linearly under such
conditions, it remains an open question whether Nesterov's momentum enjoys
accelerated convergence under similar settings and assumptions. In this work,
we consider a new class of objective functions, where only a subset of the
parameters satisfies strong convexity, and show Nesterov's momentum achieves
acceleration in theory for this objective class. We provide two realizations of
the problem class, one of which is deep ReLU networks, which --to the best of
our knowledge--constitutes this work the first that proves accelerated
convergence rate for non-trivial neural network architectures. | Fangshuo Liao, Anastasios Kyrillidis | 2023-06-13T19:55:46Z | http://arxiv.org/abs/2306.08109v2 | Accelerated Convergence of Nesterov's Momentum for Deep Neural Networks under Partial Strong Convexity
###### Abstract
Current state-of-the-art analyses on the convergence of gradient descent for training neural networks focus on characterizing properties of the loss landscape, such as the Polyak-Lojasiewicz (PL) condition and the restricted strong convexity. While gradient descent converges linearly under such conditions, it remains an open question whether Nesterov's momentum enjoys accelerated convergence under similar settings and assumptions. In this work, we consider a new class of objective functions, where only a subset of the parameters satisfies strong convexity, and show that Nesterov's momentum achieves acceleration in theory for this objective class. We provide two realizations of the problem class, one of which is deep ReLU networks, which, to the best of our knowledge, makes this work the first that proves an accelerated convergence rate for non-trivial neural network architectures.
## 1 Introduction
**Theory of gradient descent in neural networks.** Training neural networks with gradient-based methods has shown surprising empirical success (Lecun et al., 1998; LeCun et al., 2015; Zhang et al., 2017; Goodfellow et al., 2016); yet, it is not well understood why such a simple algorithm works (Zhang et al., 2018; Li et al., 2020). Overall, it has been a mystery why such routines, designed originally for convex problems, can consistently find a good minimum for complicated non-convex objectives, such as that of neural network training (Yun et al., 2019; Auer et al., 1995; Safran and Shamir, 2018).
Focusing on the case of neural networks, a major advance in understanding is the analysis through the so-called Neural Tangent Kernel (NTK) (Jacot et al., 2020). The use of NTK shows that, when the width of the neural network approaches infinity, the training process can be treated as a kernel machine. Inspired by the NTK analysis, a large body of work has focused on showing the convergence of gradient descent for various neural network architectures under finite over-parameterization requirements (Du et al., 2019, 2019; Allen-Zhu et al., 2019; Zou and Gu, 2019; Zhang et al., 2019; Awasthi et al., 2021; Ling et al., 2023; Allen-Zhu et al., 2019; Song and Yang, 2020; Su and Yang, 2019). In the meantime, the analysis was also extended to other training algorithms and settings that differ from
gradient descent, such as stochastic gradient descent (Oymak and Soltanolkotabi, 2019; Ji and Telgarsky, 2020; Xu and Zhu, 2021; Zou et al., 2018), drop-out (Liao and Kyrillidis, 2022; Mianjy and Arora, 2020), federated training (Huang et al., 2021), and adversarial training (Li et al., 2022).
A recent line of work follows a different route and characterizes the loss landscape of neural networks using the local Polyak-Lojasiewicz (PL) condition (Song et al., 2021; Liu et al., 2020; Nguyen, 2021; Ling et al., 2023). Building upon the well-established theory of how gradient descent converges under the PL condition (Karimi et al., 2020), this line of work decouples the neural network structure from the dynamics of the loss function along the optimization trajectory. This way, these works were able to give a more fine-grained analysis of the relationship between regularity conditions, such as the PL condition, and the neural network structure. Such analysis not only resulted in further relaxed over-parameterization requirements (Song et al., 2021; Nguyen, 2021; Liu et al., 2020), but was also shown to be easily extended to deep architectures (Ling et al., 2023), suggesting that it is more suitable in practice.
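For reference, the PL condition invoked throughout is commonly stated as follows (this standard formulation is added here for convenience and is not quoted from the cited works): a differentiable function \(f\) with infimum \(f^{\star}\) satisfies the PL condition with parameter \(\mu>0\) on a set \(S\) if
\[\frac{1}{2}\left\|\nabla f(\mathbf{w})\right\|_{2}^{2}\geq\mu\left(f(\mathbf{w})-f^{\star}\right)\quad\text{for all }\mathbf{w}\in S,\]
in which case gradient descent with step size \(1/L\) on an \(L\)-smooth objective contracts \(f(\mathbf{w})-f^{\star}\) by a factor of \(1-\mu/L\) per iteration.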
**Theory of accelerated gradient descent in neural networks.** In contrast to the fast-growing research devoted to vanilla gradient descent, there is limited work on the behavior of momentum methods in training neural networks. In particular, the acceleration of both the Heavy Ball method and Nesterov's momentum is shown only for shallow ReLU neural networks (Wang et al., 2021; Liu et al., 2022; Zhang et al., 2022) and deep linear neural networks (Wang et al., 2021; Liu et al., 2022; Wang et al., 2022). For these architectures, (Wang et al., 2022) shows that training reduces to a case very similar to optimizing a quadratic objective. Thus, it is not clear whether these analyses can be extended to neural networks with more layers, or more complicated structure.
In addition, showing an accelerated convergence rate under only the PL condition has been a long-standing difficulty. For the Heavy Ball method, Danilova et al. (Danilova et al., 2018) established a linear convergence rate under the PL condition, but no acceleration is shown without assuming strong convexity. Wang et al. (Wang et al., 2022) proved an accelerated convergence rate; yet, the authors assume the \(\lambda^{\star}\)-average out condition, which cannot always be easily justified for neural networks. To the best of our knowledge, any proof of convergence for Nesterov's momentum under the PL condition in non-convex settings is currently missing. In the continuous limit, acceleration is proved in a limited scenario (Apidopoulos et al., 2022), and the analysis does not easily extend to the discrete case (Wang et al., 2022; Shi et al., 2022). Finally, Yue et al. (Yue et al., 2022) shows that gradient descent already achieves an optimal convergence rate for functions satisfying smoothness and the PL condition. This suggests that, to prove the acceleration of momentum methods on a wider class of neural networks, we need to leverage properties beyond the PL condition.
Based on prior work (Liu et al., 2020) that shows over-parameterized systems are essentially non-convex in any neighborhood of the global minimum, we aim at developing a relaxation to the (strong) convexity that is suitable for such systems and enables the momentum methods to achieve acceleration. In particular, we consider the minimization of a new class of objectives:
\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f( \mathbf{x},\mathbf{u}), \tag{1}\]
where \(f\) satisfies the strong convexity with respect to \(\mathbf{x}\), among other assumptions (more in text below). Intuitively, our construction assumes that the parameter space can be partitioned into two sets, and only one of the two sets enjoys rich properties, such as strong convexity. In this paper, we focus on Nesterov's momentum. Our contribution can be summarized as:
* Focusing on the general problem class in (1), we prove that Nesterov's momentum enjoys an accelerated linear convergence where the convergence rate is \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\), compared to \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) in the case of gradient descent. Here, \(\kappa\) is a variant of the condition number of \(f\). Our result holds even when \(f\) is non-convex and non-smooth.
* We provide two realizations of our problem class. In Section 5.1, we first consider fitting an additive model under the MSE loss. We prove the acceleration of Nesterov's momentum as long as the non-convex component of the additive model is small enough to control the upper bounds of Lipschitz-type of assumptions.
* Next, we turn to the training of a deep ReLU neural network in Section 5.2. We show that when the width of the neural network trained with \(n\) samples is \(\Omega\left(n^{4}\right)\), under proper initialization, Nesterov's momentum converges to zero training loss with rate \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\). To the best of our knowledge, this is the first result that establishes accelerated convergence of deep ReLU networks.
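To make the rate comparison in the first contribution concrete, the following is a minimal numerical sketch (not taken from the paper) that contrasts gradient descent with Nesterov's momentum on an ill-conditioned strongly convex quadratic. The classical strongly convex setting is used here only to illustrate the \(1-\Theta(1/\kappa)\) versus \(1-\Theta(1/\sqrt{\kappa})\) gap that the paper targets; it does not reproduce the partial strong convexity structure of problem (1).

```python
import numpy as np

# Ill-conditioned strongly convex quadratic f(w) = 0.5 * w^T diag(d) w,
# used only to illustrate the rate gap between gradient descent and Nesterov's momentum.
d = np.logspace(0, 3, 50)                 # eigenvalues between 1 and 1000
L, mu = d.max(), d.min()
kappa = L / mu

f = lambda w: 0.5 * np.sum(d * w ** 2)
grad = lambda w: d * w

w_gd = np.ones_like(d)                    # gradient descent iterate
x_prev = np.ones_like(d)                  # Nesterov iterate x_{k-1}
y = x_prev.copy()                         # extrapolated point y_k
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)   # constant momentum for strongly convex f

for _ in range(300):
    w_gd = w_gd - grad(w_gd) / L          # plain gradient descent, step size 1/L
    x = y - grad(y) / L                   # gradient step from the extrapolated point
    y = x + beta * (x - x_prev)           # momentum extrapolation
    x_prev = x

print(f"kappa = {kappa:.0f}:  f(GD) = {f(w_gd):.2e},  f(Nesterov) = {f(x_prev):.2e}")
```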
## 2 Related Works
**Convergence in neural network training.** Recent studies on the convergence guarantees of gradient-based methods in neural network training mainly focus on two techniques: the Neural Tangent Kernel (Jacot et al., 2020) and the mean-field analysis (Mei et al., 2019). The NTK-based analysis builds upon the idea that, when the width approaches infinity, training neural networks behaves like training a kernel machine. Various techniques have been developed to control the error that occurs when the width becomes finite. In particular, (Du et al., 2019) tracks the change of activation patterns in ReLU-based neural networks, and often requires a large over-parameterization. Later works improve the over-parameterization requirement by leveraging matrix concentration inequalities (Song and Yang, 2020) for a more fine-grained analysis on the change of Jacobians (Oymak and Soltanolkotabi, 2019), and building their analysis upon the separability assumption of the data in the reproducing Hilbert space of the neural network (Ji and Telgarsky, 2020).
A noticeable line of work focuses on establishing that the PL condition is satisfied by neural networks, where the coefficient of the PL condition is based on the eigenvalue of the NTK matrix. (Nguyen, 2021) shows the PL condition is satisfied by deep ReLU neural networks, by considering the dominance of the gradient with respect to the weight in the last layer. (Liu et al., 2020) proves the PL condition by upper bounding the Hessian for deep neural networks with smooth activation functions. (Song et al., 2021) further reduces the over-parameterization, while maintaining the PL condition, via the expansion of the activation function with the Hermite polynomials. Lastly, (Banerjee et al., 2023) establishes the restricted strong convexity of neural networks within a sequence of ball-shaped regions, centered around the weights per iteration; yet, the coefficient of the strong convexity is not explicitly characterized in theory.
**Convergence of Nesterov's Momentum.** The original proof of Nesterov's momentum (Nesterov, 2018) builds upon the idea of estimating sequences for both convex smooth objectives and strongly convex smooth objectives. Later work in (Bansal and Gupta, 2019) provides an alternate proof within the same setting by constructing a Lyapunov function. In the non-convex setting, a large body of work focuses on variants of Nesterov's momentum that lead to guaranteed convergence, by employing techniques such as negative curvature exploitation (Carmon et al., 2017), cubic regularization (Carmon and Duchi, 2020), and restarting schemes (Li and Lin, 2022). For neural networks, (Liu et al., 2022, 2022) are the only works that study the convergence of Nesterov's momentum. However, with the over-parameterization requirement considered, the objective is similar to a quadratic function.
Deviating from Nesterov's momentum, (Wang et al., 2021) studies the convergence of the Heavy-ball method under similar over-parameterization requirements. A recent work (Wu et al., 2023) proves the convergence of the Heavy-ball method under the mean-field limit; such a limit is not the focus of our study in this paper. Lastly, (Jelassi and Li, 2022) shows that momentum-based methods improve the generalization ability of neural networks. However, there is no explicit convergence guarantee for the training loss.
## 3 Preliminary
**Notations.** We use bold lower-case letters (e.g. \(\mathbf{a}\)) to denote vectors, bold upper-case letters (e.g. \(\mathbf{A}\)) to denote matrices and standard lower-case letters (e.g. \(a\)) to denote scalars. For a vector \(\mathbf{a}\), we use \(a_{i}\) to denote its \(i\)-th entry. For a matrix \(\mathbf{A}\), we use \(a_{ij}\) to denote its \((i,j)\)-th entry. \(\left\|\mathbf{a}\right\|_{2}\) denotes the \(\ell_{2}\)-norm of vector \(\mathbf{a}\), and \(\left\|\mathbf{A}\right\|_{F}\) denotes the Frobenius norm of \(\mathbf{A}\). For two vectors \(\mathbf{a}_{1},\mathbf{a}_{2}\), we use \((\mathbf{a}_{1},\mathbf{a}_{2})\) to denote the concatenation of \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). For a matrix \(\mathbf{A}\) with columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\), we use \(\mathtt{V}\left(\mathbf{A}\right)=(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) to denote the vectorized form of \(\mathbf{A}\).
### Problem Setup and Assumptions
Optimization literature often focuses on the constraint-free minimization of a function \(\hat{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}\). In this paper, we reformulate this problem using the following definition.
**Definition 1**.: _(Partitioned Equivalence) A function \(f:\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}\) is called a partitioned equivalence of \(\hat{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), if \(i)\)\(d_{1}+d_{2}=d\), and \(ii)\) there exists a permutation function \(\pi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) over the parameters of \(\hat{f}\), such that \(\hat{f}(\mathbf{w})=f(\mathbf{x},\mathbf{u})\) if and only if \(\pi(\mathbf{w})=(\mathbf{x},\mathbf{u})\). We say that \((\mathbf{x},\mathbf{u})\) is a partition of \(\mathbf{w}\)._
Despite the difference in the representation of their parameters, \(\hat{f}\) and \(f\) share the same properties, and any algorithm for \(\hat{f}\) would produce the same result for \(f\). Therefore, we turn our focus from the minimization problem of \(\hat{f}\) to the minimization problem in Equation (1). We further assume that \(f\) is a composition of a loss function \(g:\mathbb{R}^{\hat{d}}\rightarrow\mathbb{R}\) and a possibly non-smooth and non-convex model function \(h:\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}^{\hat{d}}\), for some dimension \(\hat{d}\in\mathbb{Z}_{+}\); i.e., \(f(\mathbf{x},\mathbf{u})=g(h(\mathbf{x},\mathbf{u}))\). We adopt the following notation for the gradients of \(f\):
\[\nabla_{1}f(\mathbf{x},\mathbf{u})=\frac{\partial f(\mathbf{x},\mathbf{u})}{ \partial\mathbf{x}};\quad\nabla_{2}f(\mathbf{x},\mathbf{u})=\frac{\partial f (\mathbf{x},\mathbf{u})}{\partial\mathbf{u}};\;\nabla f(\mathbf{x},\mathbf{u })=\left(\nabla_{1}f(\mathbf{x},\mathbf{u}),\nabla_{2}f(\mathbf{x},\mathbf{u })\right). \tag{2}\]
We consider Nesterov's momentum with constant step size \(\eta\) and momentum parameter \(\beta\):
\[(\mathbf{x}_{k+1},\mathbf{u}_{k+1}) =(\mathbf{y}_{k},\mathbf{v}_{k})-\eta\nabla f(\mathbf{y}_{k},\mathbf{v}_{k}) \tag{3}\] \[(\mathbf{y}_{k+1},\mathbf{v}_{k+1}) =(\mathbf{x}_{k+1},\mathbf{u}_{k+1})+\beta\left((\mathbf{x}_{k+1},\mathbf{u}_{k+1})-(\mathbf{x}_{k},\mathbf{u}_{k})\right)\]
with \(\mathbf{y}_{0}=\mathbf{x}_{0}\) and \(\mathbf{v}_{0}=\mathbf{u}_{0}\). Let \(\mathcal{B}^{(1)}_{R_{\mathbf{x}}}\subseteq\mathbb{R}^{d_{1}}\) and \(\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\subseteq\mathbb{R}^{d_{2}}\) denote the balls centered at \(\mathbf{x}_{0}\) and \(\mathbf{u}_{0}\), respectively:
\[\mathcal{B}^{(1)}_{R_{\mathbf{x}}}=\{\mathbf{x}\in\mathbb{R}^{d_{1}}:\left\| \mathbf{x}-\mathbf{x}_{0}\right\|_{2}\leq R_{\mathbf{x}}\};\quad\mathcal{B}^{ (2)}_{R_{\mathbf{u}}}=\{\mathbf{u}\in\mathbb{R}^{d_{2}}:\left\|\mathbf{u}- \mathbf{u}_{0}\right\|_{2}\leq R_{\mathbf{u}}\}\]
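For concreteness, the recursion in (3) can be written as the minimal sketch below. The gradient oracle, step size, and momentum parameter are placeholders supplied by the user, and the update acts on the concatenated vector \((\mathbf{x},\mathbf{u})\) without needing to know the partition (see Remark 1 below).

```python
import numpy as np

def nesterov_momentum(grad_f, w0, eta, beta, num_iters):
    """Minimal sketch of the recursion in Eq. (3).

    grad_f takes the concatenated parameter vector w = (x, u) and returns
    the full gradient; the update never needs to know the partition.
    """
    w = np.asarray(w0, dtype=float).copy()   # current iterate (x_k, u_k)
    y = w.copy()                             # look-ahead point (y_k, v_k)
    for _ in range(num_iters):
        w_next = y - eta * grad_f(y)         # gradient step at the look-ahead point
        y = w_next + beta * (w_next - w)     # momentum extrapolation
        w = w_next
    return w
```

Setting \(\beta=0\) recovers the plain gradient descent used as a baseline in the warmup section below.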
With the definitions of \(\mathcal{B}^{(1)}_{R_{\mathbf{x}}}\) and \(\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\) in place, we make the following assumptions:
**Assumption 1**.: \(f\) _is \(\mu\)-strongly convex with \(\mu>0\) with respect to the first part of its parameters:_
\[f(\mathbf{y},\mathbf{u})\geq f(\mathbf{x},\mathbf{u})+\left\langle\nabla_{1}f( \mathbf{x},\mathbf{u}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{\mu}{2}\left\| \mathbf{y}-\mathbf{x}\right\|_{2}^{2},\quad\forall\mathbf{x},\mathbf{y}\in \mathbb{R}^{d_{1}};\;\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}.\]
**Assumption 2**.: \(f\) _is \(L_{1}\)-smooth with respect to the first part of its parameters:_
\[f(\mathbf{y},\mathbf{u})\leq f(\mathbf{x},\mathbf{u})+\left\langle\nabla_{1}f( \mathbf{x},\mathbf{u}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{L_{1}}{2} \left\|\mathbf{y}-\mathbf{x}\right\|_{2}^{2},\quad\forall\mathbf{x},\mathbf{y }\in\mathbb{R}^{d_{1}};\;\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}.\]
Based on Assumption 1 and 2, we define the condition number of \(f\).
**Definition 2**.: _(Condition Number) The condition number \(\kappa\) of \(f\) is given by \(\kappa=\nicefrac{{L_{1}}}{{\mu}}\)._
**Assumption 3**.: \(g\) _satisfies \(\min_{\mathbf{s}\in\mathbb{R}^{\hat{d}}}g(\mathbf{s})=\min_{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f(\mathbf{x},\mathbf{u})\), and is \(L_{2}\)-smooth:_
\[g(\mathbf{s}_{2})\leq g(\mathbf{s}_{1})+\langle\nabla g(\mathbf{s}_{1}),\mathbf{s}_{2}-\mathbf{s}_{1}\rangle+\frac{L_{2}}{2}\left\|\mathbf{s}_{2}-\mathbf{s}_{1}\right\|_{2}^{2},\quad\forall\mathbf{s}_{1},\mathbf{s}_{2}\in\mathbb{R}^{\hat{d}}.\]
Assumptions 1 and 2 are relaxed versions of the strong convexity and smoothness assumptions, respectively, made in classical optimization literature: instead of assuming that the objective is smooth and strongly convex over the whole set of parameters, we only assume these properties to hold with respect to a subset of the parameters, while the rest of the parameters lie in a ball near initialization. Assumption 3 is standard in the prior literature (Liu et al., 2020; Song et al., 2021) on the convergence of neural network training, and holds for a large number of loss functions such as the MSE loss and the logistic loss.
**Assumption 4**.: \(h\) _satisfies \(G_{1}\)-Lipschitzness with respect to the second part of its parameters:_
\[\left\|h(\mathbf{x},\mathbf{u})-h(\mathbf{x},\mathbf{v})\right\|_{2}\leq G_{ 1}\left\|\mathbf{u}-\mathbf{v}\right\|_{2},\quad\forall\mathbf{x}\in\mathcal{ B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u},\mathbf{v}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]
**Assumption 5**.: _The gradient of \(f\) with respect to the first part of its parameter, \(\nabla_{1}f(\mathbf{x},\mathbf{u})\), satisfies \(G_{2}\)-Lipschitzness with respect to the second part of its parameters:_
\[\left\|\nabla_{1}f(\mathbf{x},\mathbf{u})-\nabla_{1}f(\mathbf{x},\mathbf{v}) \right\|_{2}\leq G_{2}\left\|\mathbf{u}-\mathbf{v}\right\|_{2},\quad\forall \mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u},\mathbf{v}\in \mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]
**Assumption 6**.: _All the minimum values of \(f\), when restricted to the optimization over \(\mathbf{x}\), equal the global minimum value:_
\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f(\mathbf{x},\mathbf{u})=f^{\star}:=\min _{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f(\mathbf{x},\mathbf{u});\quad\forall\mathbf{u}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]
Notice that we do not assume \(f\) to be either convex or smooth with respect to \(\mathbf{u}\). As such, there is no reason to believe that the updates in (3) on \(\mathbf{u}\) will make positive progress towards finding the global minimum. We treat the change in the second part of the parameters as errors induced by the updates. Assumptions 4 and 5 are made such that the effect on the change of the model output \(h(\mathbf{x},\mathbf{u})\) and the gradient with respect to \(\mathbf{x}\) caused by the change of \(\mathbf{u}\) can be controlled. Moreover, without Assumption 6, it is possible that the change of \(\mathbf{u}\) will cause the optimization trajectory to get stuck at some point of \(\mathbf{u}\), such that the global minimum value cannot be achieved even when \(\mathbf{x}\) is fully optimized. As a sanity check, we can show that Assumptions 1-6 are satisfied by a smooth and strongly convex function:
**Theorem 1**.: _Let \(\tilde{f}\) be \(\tilde{\mu}\)-strongly convex and \(\tilde{L}\)-smooth. Then \(\tilde{f}\) satisfies Assumptions 1-6 with:_
\[R_{\mathbf{x}}=R_{\mathbf{u}}=\infty;\;\mu=\tilde{\mu};\;L_{1}=L_{2}=\tilde{L} ;\;G_{1}=G_{2}=0.\]
Theorem 1 shows that the combination of Assumptions 1-6 is no stronger than the assumption that the objective is smooth and strongly convex. Therefore, the minimization of the class of functions satisfying Assumptions 1-6 does not lead to a better lower complexity bound than that of the class of smooth and strongly convex functions. That is, the best convergence rate we can achieve is \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\).
**Remark 1**.: _Notice that Nesterov's momentum in (3) is agnostic to the partition of the parameters, i.e., it executes exactly the same operator on \(\mathbf{x}\) and \(\mathbf{u}\). Therefore, while our result requires the existence of a parameter partition \((\mathbf{x},\mathbf{u})\) that satisfies Assumptions 1-6, it does not need to explicitly identify such a partition. Moreover, in Sections 5.1 and 5.2, we will show that Assumptions 1-6 can be satisfied by two concrete models, where training of deep ReLU networks is one of them._
## 4 Accelerated Convergence under Partial Strong Convexity
### Warmup: Convergence of Gradient Descent
We first focus on the convergence of gradient descent. To start, we notice that Assumptions 1 and 6 imply the PL condition:
**Lemma 1**.: _Suppose that Assumptions 1 and 6 hold. Then, for all \(\mathbf{x}\in\mathbb{R}^{d_{1}}\) and \(\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), we have:_
\[\left\|\nabla f(\mathbf{x},\mathbf{u})\right\|_{2}^{2}\geq\left\|\nabla_{1}f( \mathbf{x},\mathbf{u})\right\|_{2}^{2}\geq 2\mu\left(f(\mathbf{x},\mathbf{u})-f^{ \star}\right).\]
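As a quick numerical illustration of the inequality in Lemma 1, consider the special case with no \(\mathbf{u}\)-block: a least-squares objective \(f(\mathbf{x})=\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\) with invertible \(\mathbf{A}\) is \(\sigma_{\min}(\mathbf{A})^{2}\)-strongly convex and attains \(f^{\star}=0\), so the bound can be checked directly. The matrix and test points below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20
A = rng.standard_normal((m, m)) / np.sqrt(m) + 2.0 * np.eye(m)  # very likely invertible
b = rng.standard_normal(m)
mu = np.linalg.svd(A, compute_uv=False).min() ** 2   # strong-convexity constant
f_star = 0.0                                         # attained at x = A^{-1} b (A assumed invertible)

for _ in range(5):
    x = rng.standard_normal(m)
    r = A @ x - b
    f_val = 0.5 * r @ r
    grad = A.T @ r
    # PL inequality of Lemma 1: ||grad||^2 >= 2 * mu * (f - f*)
    assert grad @ grad >= 2.0 * mu * (f_val - f_star) - 1e-10
```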
Recall that, due to the minimal assumptions made on how \(f(\mathbf{x},\mathbf{u})\) depends on \(\mathbf{u}\), we treat the change of \(\mathbf{u}\) during the iterates as an error. Thus, we need the following lemma, which bounds how much \(f\) is affected by a change of \(\mathbf{u}\).
**Lemma 2**.: _Suppose that Assumptions 3, 4 hold. For any \(\hat{\mathcal{Q}}>0\) and \(\mathbf{x}\in\mathcal{B}^{(1)}_{R_{\mathbf{x}}},\mathbf{u},\mathbf{v}\in \mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), we have:_
\[f(\mathbf{x},\mathbf{u})-f(\mathbf{x},\mathbf{v})\leq\hat{\mathcal{Q}}^{-1}L_ {2}\left(f(\mathbf{x},\mathbf{v})-f^{\star}\right)+\frac{G_{1}^{2}}{2}\left(L _{2}+\hat{\mathcal{Q}}\right)\left\|\mathbf{u}-\mathbf{v}\right\|_{2}^{2}.\]
With the help of Lemmas 1 and 2, we can show the linear convergence of gradient descent:
**Theorem 2**.: _Suppose that Assumptions 1-4 and 6 hold with \(G_{1}^{4}\leq\frac{\mu^{2}}{8L_{2}^{2}}\) and_
\[R_{\mathbf{x}}\geq 16\eta\kappa\sqrt{L_{1}}\left(f(\mathbf{x}_{0},\mathbf{u}_{ 0})-f^{\star}\right)^{\frac{1}{2}};\ R_{\mathbf{u}}\geq 16\eta\kappa G_{1} \sqrt{L_{2}}\left(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}\right)^{\frac{1} {2}}.\]
_Then, there exists a constant \(c>0\) such that gradient descent on \(f\), given by:_
\[(\mathbf{x}_{k+1},\mathbf{u}_{k+1})=(\mathbf{x}_{k},\mathbf{u}_{k})-\eta \nabla f(\mathbf{x}_{k},\mathbf{u}_{k}),\]
_with \(\eta=\nicefrac{{c}}{{L_{1}}}\), converges linearly according to:_
\[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq\left(1-\frac{c}{4\kappa} \right)^{k}\left(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}\right).\]
That is, Theorem 2 shows that gradient descent applied to \(f\) converges linearly with a rate of \(1-\Theta(\nicefrac{{1}}{{\kappa}})\) in our setting. The proofs of Lemmas 1 and 2, and of Theorem 2, are deferred to Appendix B.
### Technical Difficulties
We now study the convergence of Nesterov's momentum in Equation (3) under only Assumptions 1-6. Our analysis faces two major difficulties.
**Difficulty 1**.: _Most previous analyses of Nesterov's momentum use the global minimum as a reference point to construct the Lyapunov function; see Bansal and Gupta (2019), d'Aspremont et al. (2021). In the original proof of Nesterov, the construction of estimating sequences also assumes the uniqueness of the global minimum (Nesterov, 2018). However, our objective function is non-convex and allows the existence of multiple global minima, which prevents us from directly applying the Lyapunov function or estimating-sequence arguments as in previous works._
While the non-convexity of \(f\) introduces the possibility of multiple global minima, Assumption 1 implies that, with a fixed \(\mathbf{u}\), there exists a unique \(\mathbf{x}^{\star}(\mathbf{u})\) that minimizes \(f(\mathbf{x},\mathbf{u})\). Moreover, Assumption 6 implies that, for all \(\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), the local minimum \((\mathbf{x}^{\star}(\mathbf{u}),\mathbf{u})\) is also a global minimum. Thus, we can resolve the difficulty of accounting for multiple global minima by tracking the change of \(\mathbf{x}^{\star}(\mathbf{u})\) with the change of \(\mathbf{u}\). The following lemma gives a characterization of this property.
**Lemma 3**.: _Let \(\mathbf{x}^{\star}(\mathbf{u})=\arg\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f( \mathbf{x},\mathbf{u})\). Suppose Assumptions 1 and 5 hold. Then, we have:_
\[\|\mathbf{x}^{\star}(\mathbf{u}_{1})-\mathbf{x}^{\star}(\mathbf{u}_{2})\|_{2} \leq\frac{G_{2}}{\mu}\left\|\mathbf{u}_{1}-\mathbf{u}_{2}\right\|_{2},\quad \forall\mathbf{u}_{1},\mathbf{u}_{2}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]
Lemma 3 indicates that, if we view \(\mathbf{x}^{\star}(\mathbf{u})\) as a function of \(\mathbf{u}\), then this function is \(\frac{G_{2}}{\mu}\)-Lipschitz. For a fixed \(\mathbf{u}\), given the nice properties on \(\mathbf{x}\), the iterates of Nesterov's momentum will guide \(\mathbf{x}\) to the minimum, based on the current \(\mathbf{u}\). Lemma 3 guarantees that the progress toward the minimum induced by \(\mathbf{u}_{1}\) does not deviate much from the progress toward the minimum induced by \(\mathbf{u}_{2}\).
**Difficulty 2**.: _Lemmas 2 and 3 show the importance of bounding the change of \(\mathbf{u}\) during iterates, namely \(\left\|\mathbf{u}_{k+1}-\mathbf{u}_{k}\right\|_{2}\). In previous works that focus on the convergence of neural network training, it is shown that this change of \(\mathbf{u}\) is closely related to the norm of the gradient. Under the assumption of smoothness, the norm of the gradient is bounded by a factor times the optimality gap at the current point, which is shown to enjoy a linear convergence by induction Du et al. (2019b); Nguyen (2021). However, in the case of Nesterov's momentum, we cannot directly utilize this relationship, since the gradient is evaluated at the intermediate step \((\mathbf{y}_{k},\mathbf{v}_{k})\), and, while we know that the optimality gap at \((\mathbf{x}_{k},\mathbf{u}_{k})\) converges linearly, we have very little knowledge about the optimality gap at \((\mathbf{y}_{k},\mathbf{v}_{k})\)._
To tackle this difficulty, our analysis starts with a careful bound on \(\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{2}^{2}\) by utilizing the convexity with respect to \(\mathbf{x}\) to characterize the inner product \(\langle\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k}),\mathbf{x}_{k}-\mathbf{x}_ {k-1}\rangle\). After that, we derive a bound of \(\|\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) using a combination of \(\left\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\right\|_{2}^{2}\) and \(\left\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\right\|_{2}^{2}\). Lastly, we relate \(\|\nabla_{2}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) to \(\|\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) using the following gradient dominance property.
**Lemma 4**.: _Suppose that Assumptions 1, 3, 4, and 6 hold. Then, we have:_
\[\|\nabla_{2}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\leq\frac{G_{1}^{2}L_{2}}{\mu} \left\|\nabla_{1}f(\mathbf{x},\mathbf{u})\right\|_{2}^{2},\quad\forall \mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u}\in\mathcal{B}_{R _{\mathbf{u}}}^{(2)}.\]
Lemma 4 establishes the upper bound of \(\|\nabla_{2}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\) using \(\|\nabla_{1}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\). Direct application of this result contributes to the bound on \(\left\|\mathbf{u}_{k+1}-\mathbf{u}_{k}\right\|_{2}\). Intuitively, this lemma implies that the effect of gradient update on \(\mathbf{u}\) is less significant than the effect of gradient update on \(\mathbf{x}\).
### Acceleration of Nesterov's Momentum
After resolving the two technical difficulties stated above, we are ready to derive our main result. We first state the main theorem and then provide a sketch of its proof. The detailed proof is deferred to Appendix D.
**Theorem 3**.: _Let Assumptions (1)-(6) hold. Consider Nesterov's momentum given by (3) with initialization \(\{\mathbf{x}_{0},\mathbf{u}_{0}\}=\{\mathbf{y}_{0},\mathbf{v}_{0}\}\). There exists absolute constants \(c,C_{1},C_{2}>0\), such that, if \(\mu,L_{1},L_{2},G_{1},G_{2}\) and \(R_{\mathbf{x}},R_{\mathbf{u}}\) satisfy:_
\[G_{1}^{4} \leq\frac{C_{1}\mu^{2}}{L_{2}(L_{2}+1)^{2}}\left(\frac{1-\beta}{ 1+\beta}\right)^{3};\quad G_{1}^{2}G_{2}^{2}\leq\frac{C_{2}\mu^{3}}{L_{2}(L_{ 2}+1)\sqrt{\kappa}}\left(\frac{1-\beta}{1+\beta}\right)^{2}; \tag{4}\] \[R_{\mathbf{x}} \geq\frac{36}{c}\sqrt{\kappa}\left(\frac{\eta(L_{2}+1)}{1-\beta} \right)^{\frac{1}{2}}(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}};\] \[R_{\mathbf{u}} \geq\frac{36}{c}\sqrt{\kappa}\left(\frac{\eta G_{1}^{2}L_{2}(L_{2 }+1)(1+\beta)^{3}}{\mu\beta(1-\beta)^{3}}\right)^{\frac{1}{2}}(f(\mathbf{x}_ {0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}},\]
_and, if we choose \(\eta=\nicefrac{{c}}{{L_{1}}}\), \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), then \(\mathbf{x}_{k},\mathbf{y}_{k}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)}\) and \(\mathbf{u}_{k},\mathbf{v}_{k}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}\) for all \(k\in\mathbb{N}\), and Nesterov's recursion converges according to:_
\[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq 2\left(1-\frac{c}{4\sqrt{ \kappa}}\right)^{k}(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}). \tag{5}\]
Theorem 3 shows that, under Assumptions 1-6 with a sufficiently small \(G_{1}\) and \(G_{2}\) as in (4), Nesterov's iteration enjoys an accelerated convergence, as in (5). Moreover, the iterates of Nesterov's momentum \(\{(\mathbf{x}_{k},\mathbf{y}_{k})\}_{k=1}^{\infty}\) and \(\{(\mathbf{u}_{k},\mathbf{v}_{k})\}_{k=1}^{\infty}\) stay in a ball around initialization with radius in (4).
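In practice, the step size, momentum parameter, and resulting contraction factor prescribed by Theorem 3 can be computed as in the minimal sketch below; the absolute constant \(c\) is not pinned down by the theorem, so it is treated here as a user-chosen input rather than the exact value used in our proofs.

```python
def theorem3_hyperparameters(mu, L1, c=1.0):
    """Step size, momentum and per-step contraction factor from Theorem 3.
    c is the unspecified absolute constant, treated as an input here."""
    kappa = L1 / mu                                          # Definition 2
    eta = c / L1                                             # eta = c / L_1
    beta = (4 * kappa ** 0.5 - c ** 0.5) / (4 * kappa ** 0.5 + 7 * c ** 0.5)
    rate = 1 - c / (4 * kappa ** 0.5)                        # contraction factor in Eq. (5)
    return eta, beta, rate
```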
To better interpret our result, we first focus on (4). By our choice of \(\beta\), we have that \(1-\beta=\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\) and \(1+\beta=\Theta\left(1\right)\). Therefore, the requirement of \(G_{1},G_{2}\) in (4) can be simplified to \(G_{1}^{4}\leq O\left(\nicefrac{{\mu^{7/2}}}{{L_{1}^{3/2}L_{2}^{3}}}\right)\) and \(G_{1}^{2}G_{2}^{2}\leq O\left(\nicefrac{{\mu^{9/2}}}{{L_{1}^{3/2}L_{2}^{3}}}\right)\). This simplified condition implies that we need a smaller \(G_{1}\) and \(G_{2}\), if \(\mu\) is small and if \(L_{1}\) and \(L_{2}\) are large. For the requirement on \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) in (4), we can simplify with \(\eta=O\left(\nicefrac{{1}}{{L_{1}}}\right)\) and \(\beta=\Theta\left(1\right)\). In this way, \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) reduce to \(\Omega\left(\nicefrac{{L_{1}^{3/4}L_{2}^{3/2}}}{\mu^{3/4}}\right)\cdot(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}}\) and \(\Omega\left(G_{1}\nicefrac{{L_{1}^{3/4}L_{2}}}{\mu^{7/4}}\right)\cdot(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}}\), respectively. Both quantities grow with a larger \(L_{1}\) and \(L_{2}\) and a smaller \(\mu\). Noticeably, \(R_{\mathbf{u}}\) also scales with \(G_{1}\). Focusing on the convergence property in (5), we can conclude that Nesterov's momentum achieves an accelerated convergence rate of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\) compared with the \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) rate in Theorem 2. We sketch the proof of Theorem 3 below.
Sketch of Proof.: Setting \(\mathbf{y}_{-1}=\mathbf{y}_{0}\) and \(\mathbf{v}_{-1}=\mathbf{v}_{0}\), our proof utilizes the following Lyapunov function:
\[\phi_{k}=f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}+\mathcal{Q}_{1}\left\| \mathbf{z}_{k}-\mathbf{x}_{k-1}^{\star}\right\|_{2}^{2}+\frac{\eta}{8}\left\| \nabla_{1}f(\mathbf{y}_{k-1},\mathbf{v}_{k-1})\right\|_{2}^{2};\quad\mathbf{x }_{k}^{\star}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f( \mathbf{x},\mathbf{v}_{k}),\]
with properly chosen \(\mathcal{Q}_{1}\) and \(\mathbf{z}_{k}\). The proof recursively establishes the following three properties:
\[\left(1-\frac{c}{2\sqrt{\kappa}}\right)^{-1}\phi_{k+1}-\phi_{k}\leq\frac{c}{4 \sqrt{\kappa}}\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0}; \tag{6}\]
\[\left\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\right\|_{2}^{2}\leq\mathcal{Q}_{2}\left( 1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0};\quad\left\|\mathbf{u}_{k}- \mathbf{u}_{k-1}\right\|_{2}^{2}\leq G_{1}^{2}\mathcal{Q}_{3}\left(1-\frac{c}{4 \sqrt{\kappa}}\right)^{k}\phi_{0}. \tag{7}\]
By our choice of \(\phi_{k}\), Equation (6) implies that:
\[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq\phi_{k}\leq\left(1-\frac{c}{4 \sqrt{\kappa}}\right)^{k}\phi_{0}, \tag{8}\]
which further implies the convergence in Equation (5). The following lemma shows that, if Equation (8) holds for all \(k\leq\hat{k}\), we can guarantee Equation (7) with \(k=\hat{k}+1\).
**Lemma 5**.: _Let the Assumptions of Theorem 3 hold. If Equation (8) holds for all \(k\leq\hat{k}\), then Equation (7) holds for \(k=\hat{k}+1\) with \(\mathcal{Q}_{2}=\frac{6\eta L_{2}(L_{2}+1)}{1-\beta}\) and \(\mathcal{Q}_{3}=\frac{6\eta L_{2}(L_{2}+1)(1+\beta)^{3}}{\mu\beta(1-\beta)^{3}}\)._
With Lemma 5, we can first guarantee that up to iteration \(\hat{k}+1\), \(\mathbf{x}_{k}\) and \(\mathbf{u}_{k}\) stay in \(\mathcal{B}_{R_{\mathbf{x}}}^{(1)}\) and \(\mathcal{B}_{R_{\mathbf{u}}}^{(2)}\). Thus, we can utilize Assumptions 1-6 in showing Equation (6) for iteration \(\hat{k}+1\). Moreover, the bound on \(\left\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\right\|_{2}\) connects the following lemma to Equation (6).
**Lemma 6**.: _Let the Assumptions of Theorem 3 hold. Then, we have:_
\[\left(1-\frac{c}{2\sqrt{\kappa}}\right)^{-1}\phi_{k+1}-\phi_{k}\leq c\beta^{2} \sqrt{\kappa}\left(G_{1}^{2}L_{2}+\frac{8\mathcal{Q}_{1}G_{1}^{2}G_{2}^{2}}{\mu^{2 }}\right)\left\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\right\|_{2}^{2}\]
Lemmas 5, 6 together with a small enough \(G_{1}\) and \(G_{2}\) imply Equation (6).
## 5 Examples
We consider two examples that satisfy Assumptions 1-6. By enforcing the requirement in Theorem 3, we show that, although being nonconvex and possibly non-smooth, the two models enjoy accelerated convergence when trained with Nesterov's momentum.
### Toy example: Additive model
Given matrices \(\mathbf{A}_{1}\in\mathbb{R}^{m\times m},\mathbf{A}_{2}\in\mathbb{R}^{m\times d}\) and a non-linear function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\), we consider \(h(\mathbf{x},\mathbf{u})=\mathbf{A}_{1}\mathbf{x}+\sigma(\mathbf{A}_{2} \mathbf{u})\) as the summation of a linear model and a non-linear model. If we train \(h(\mathbf{x},\mathbf{u})\) over the loss \(g(\mathbf{s})=\frac{1}{2}\left\|\mathbf{s}-\mathbf{b}\right\|_{2}^{2}\) for some label \(\mathbf{b}\in\mathbb{R}^{m}\), the objective can be written as
\[f(\mathbf{x},\mathbf{u})=g(\mathbf{A}_{1}\mathbf{x}+\sigma\left(\mathbf{A}_{2 }\mathbf{u}\right))=\frac{1}{2}\left\|\mathbf{A}_{1}\mathbf{x}+\sigma\left( \mathbf{A}_{2}\mathbf{u}\right)-\mathbf{b}\right\|_{2}^{2}. \tag{9}\]
Due to the non-linearity of \(\sigma\), \(f(\mathbf{x},\mathbf{u})\) is in general non-convex. If we further choose \(\sigma\) to be some non-smooth function such as ReLU, i.e., \(\sigma(\mathbf{x})_{i}=\max\{0,x_{i}\}\), the objective can also be non-smooth. Yet, assuming that \(\sigma\) is Lipschitz, we can show that \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6.
**Lemma 7**.: _Suppose that \(\sigma\) is \(B\)-Lipschitz and \(\sigma_{\min}(\mathbf{A}_{1})>0\). Then \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6 with_
\[\begin{split} R_{\mathbf{x}}=& R_{\mathbf{u}}= \infty;\;\mu=\sigma_{\min}\left(\mathbf{A}_{1}\right)^{2};\;L_{1}=\sigma_{ \max}\left(\mathbf{A}_{1}\right)^{2};\;L_{2}=1\\ G_{1}=& B\sigma_{\max}\left(\mathbf{A}_{2}\right); \;G_{2}=B\sigma_{\max}\left(\mathbf{A}_{1}\right)\sigma_{\max}\left(\mathbf{A }_{2}\right).\end{split} \tag{10}\]
Notice that while \(\mu\) and \(L_{1}\) depend entirely on the property of \(\mathbf{A}_{1}\), both \(G_{1}\) and \(G_{2}\) can be made smaller by choosing \(\mathbf{A}_{2}\) with a small enough \(\sigma_{\max}(\mathbf{A}_{2})\). Intuitively, this means that \(G_{1}\) and \(G_{2}\) can be controlled, as long as the component that introduces the non-convexity and non-smoothness \(\sigma\left(\mathbf{A}_{2}\mathbf{u}\right)\) is small enough. Therefore, we can apply Theorem 3 to the minimization of Equation (9).
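For concreteness, the constants in Lemma 7 can be computed directly from the singular values of \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\); the following is a minimal NumPy sketch (with \(B=1\), the Lipschitz constant of ReLU, as the default).

```python
import numpy as np

def lemma7_constants(A1, A2, B=1.0):
    """Constants of Lemma 7 for f(x, u) = 0.5 * ||A1 @ x + sigma(A2 @ u) - b||^2,
    with sigma assumed B-Lipschitz (B = 1 for ReLU)."""
    s1 = np.linalg.svd(A1, compute_uv=False)
    s2 = np.linalg.svd(A2, compute_uv=False)
    mu, L1 = s1.min() ** 2, s1.max() ** 2
    G1 = B * s2.max()
    G2 = B * s1.max() * s2.max()
    return {"mu": mu, "L1": L1, "L2": 1.0, "G1": G1, "G2": G2, "kappa": L1 / mu}
```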
**Theorem 4**.: _Consider the problem in Equation (9), where \(\sigma\) is a \(B\)-Lipschitz function, and suppose that Nesterov's momentum is used to find the minimizer of \(f(\mathbf{x},\mathbf{u})\). Let \(\kappa=\nicefrac{{\sigma_{\max}\left(\mathbf{A}_{1}\right)^{2}}}{{\sigma_{\min}\left(\mathbf{A}_{1}\right)^{2}}}\), and suppose:_
\[\sigma_{\min}\left(\mathbf{A}_{1}\right)\geq\tilde{C}\sigma_{\max}\left( \mathbf{A}_{2}\right)B\kappa^{0.75}, \tag{11}\]
_for some large enough constant \(\tilde{C}>0\). Then there exists a constant \(c>0\) such that, if we choose \(\eta=\nicefrac{{c}}{{\sigma_{\max}\left(\mathbf{A}_{1}\right)^{2}}}\) and \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), Nesterov's momentum in Equation (3) converges according to:_
\[f(\mathbf{x}_{k},\mathbf{u}_{k})\leq 2\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k }f(\mathbf{x}_{0},\mathbf{u}_{0}).\]
Notice that the requirement in Equation (11) favors a larger \(\sigma_{\min}(\mathbf{A}_{1})\) and smaller \(\sigma_{\max}(\mathbf{A}_{2}),B\) and \(\kappa\). Using this example, we empirically verify the theoretical result of Theorem 3, as shown in Figure 1. The three rows correspond to the cases of \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\in\{0.01,0.3,5\}\), respectively. The lines in the plots denote the average loss/distance among 10 trials, while the shaded region denotes the standard deviation. Observing the plots in the left column, we can see that in all three cases, Nesterov's momentum achieves a faster convergence compared with gradient descent, while the case with the largest \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) introduces the largest variance between results of the ten trials. Recall that \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) in our example controls the magnitude of \(G_{1}\) and \(G_{2}\). Thus, this phenomenon shows that when \(G_{1}\) and \(G_{2}\) become larger, the theoretical guarantee in Theorem 3 begins to break down and the
result depends more on the initialization. The figures in the right column plot the evolution of \(\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\|_{2}^{2}\) and \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\). All three figures show that the two quantities decrease linearly, which supports the linear decay shown in Lemma 5. Moreover, as \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) increases, the relative magnitude of \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\) to \(\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\|_{2}^{2}\) also increases (the line for "\(\|\Delta\mathbf{u}_{k}\|_{2}\)" gets closer to that for "\(\|\Delta\mathbf{x}_{k}\|_{2}\)"). This corresponds to the fact that \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\) scales with \(G_{1}\), as shown in Lemma 5.
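A self-contained sketch of the comparison behind Figure 1 is given below; the problem dimensions, the scale of \(\mathbf{A}_{2}\), and the choice \(c=1\) are arbitrary illustrative values rather than the exact settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 50
A1 = rng.standard_normal((m, m)) / np.sqrt(m) + 2.0 * np.eye(m)  # well-conditioned linear part
A2 = 0.01 * rng.standard_normal((m, d))                          # small sigma_max(A2)
b = rng.standard_normal(m)
relu = lambda z: np.maximum(z, 0.0)

def loss(x, u):
    return 0.5 * np.sum((A1 @ x + relu(A2 @ u) - b) ** 2)

def grads(x, u):
    r = A1 @ x + relu(A2 @ u) - b
    return A1.T @ r, A2.T @ ((A2 @ u > 0) * r)   # (sub)gradients w.r.t. x and u

s1 = np.linalg.svd(A1, compute_uv=False)
mu, L1 = s1.min() ** 2, s1.max() ** 2
kappa = L1 / mu
eta = 1.0 / L1                                                   # c = 1
beta = (4 * np.sqrt(kappa) - 1.0) / (4 * np.sqrt(kappa) + 7.0)   # c = 1

x0, u0 = np.zeros(m), rng.standard_normal(d)

# Gradient descent baseline
x, u, gd_loss = x0.copy(), u0.copy(), []
for _ in range(300):
    gx, gu = grads(x, u)
    x, u = x - eta * gx, u - eta * gu
    gd_loss.append(loss(x, u))

# Nesterov's momentum, Eq. (3)
x, u, y, v, nag_loss = x0.copy(), u0.copy(), x0.copy(), u0.copy(), []
for _ in range(300):
    gy, gv = grads(y, v)
    x_new, u_new = y - eta * gy, v - eta * gv
    y, v = x_new + beta * (x_new - x), u_new + beta * (u_new - u)
    x, u = x_new, u_new
    nag_loss.append(loss(x, u))
```

Plotting `gd_loss` and `nag_loss` on a logarithmic scale should show the qualitatively faster decay of Nesterov's momentum seen in the left column of Figure 1.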
### Deep ReLU Neural Networks
Consider the \(\Lambda\)-layer ReLU neural network with layer widths \(\{d_{\ell}\}_{\ell=0}^{\Lambda}\). Denoting the number of training samples with \(n\), we consider the input and label of the training data given by \(\mathbf{X}\in\mathbb{R}^{n\times d_{0}}\) and \(\mathbf{Y}\in\mathbb{R}^{n\times d_{\Lambda}}\). Let the weight matrix in the \(\ell\)-th layer be \(\mathbf{W}_{\ell}\). We use \(\boldsymbol{\theta}=\{\mathbf{W}_{\ell}\}_{\ell=1}^{\Lambda}\) to denote the collection of all weights and \(\sigma(\mathbf{A})_{ij}=\max\{0,a_{ij}\}\) to denote the ReLU function. Then, the output of each layer is given by:
\[\mathbf{F}_{\ell}\left(\boldsymbol{\theta}\right)=\begin{cases}\mathbf{X},& \text{if }\ell=0;\\ \sigma\left(\mathbf{F}_{\ell-1}\left(\boldsymbol{\theta}\right)\mathbf{W}_{ \ell}\right),&\text{if }\ell\in[\Lambda-1];\\ \mathbf{F}_{\Lambda-1}\left(\boldsymbol{\theta}\right)\mathbf{W}_{\Lambda},& \text{if }\ell=\Lambda.\end{cases} \tag{12}\]
We consider the training of \(\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)\) over the MSE loss, as in \(\mathcal{L}(\boldsymbol{\theta})=\frac{1}{2}\left\|\mathbf{F}_{\Lambda} \left(\boldsymbol{\theta}\right)-\boldsymbol{Y}\right\|_{F}^{2}\). We can interpret the scenario using our partition model in (1). Let \(g(\mathbf{s})=\frac{1}{2}\left\|\mathbf{s}-\mathtt{V}\left(\mathbf{Y}\right) \right\|_{2}^{2}\). If we partition the parameter \(\boldsymbol{\theta}\) into \(\mathbf{x}=\mathtt{V}\left(\mathbf{W}_{\Lambda}\right)\) and \(\mathbf{u}=\left(\mathtt{V}\left(\mathbf{W}_{1}\right),\ldots,\mathtt{V}\left( \mathbf{W}_{\Lambda-1}\right)\right)\), we can write:
\[h(\mathbf{x},\mathbf{u})=\mathtt{V}\left(\mathbf{F}_{\Lambda}\left(\boldsymbol {\theta}\right)\right);\quad f(\mathbf{x},\mathbf{u})=\frac{1}{2}\left\| \mathtt{V}\left(\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)\right)- \mathtt{V}\left(\mathbf{Y}\right)\right\|_{2}^{2}=\frac{1}{2}\left\|\mathbf{F} _{\Lambda}\left(\boldsymbol{\theta}\right)-\mathbf{Y}\right\|_{F}^{2}. \tag{13}\]
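For clarity, a minimal NumPy sketch of the forward pass in (12) and the partition in (13) is shown below; the flattening order of `ravel` stands in for the column-wise vectorization \(\mathtt{V}(\cdot)\), and the exact ordering is immaterial for the argument.

```python
import numpy as np

def forward(X, weights):
    """Forward pass of Eq. (12): ReLU on layers 1..Lambda-1, linear output layer."""
    F = X
    for W in weights[:-1]:
        F = np.maximum(F @ W, 0.0)
    return F @ weights[-1]

def partition(weights):
    """Partition of Eq. (13): x = vectorized last-layer weights,
    u = vectorized weights of all earlier layers."""
    x = weights[-1].ravel()
    u = np.concatenate([W.ravel() for W in weights[:-1]])
    return x, u

def f_objective(X, Y, weights):
    """MSE objective in Eq. (13); the matrix norm defaults to Frobenius."""
    return 0.5 * np.linalg.norm(forward(X, weights) - Y) ** 2
```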
For some given \(\mathbf{x}\) and \(\mathbf{u}\), we let \(\mathbf{W}_{\Lambda}(\mathbf{x})\) be the matrix such that \(\mathbf{x}=\mathtt{V}\left(\mathbf{W}_{\Lambda}(\mathbf{x})\right)\); similarly, we let \(\mathbf{W}_{\ell}(\mathbf{u})\) with \(\ell\in[\Lambda-1]\) be the matrices such that \(\mathbf{u}=\left(\mathtt{V}\left(\mathbf{W}_{1}(\mathbf{u})\right),\ldots,\mathtt{V}\left(\mathbf{W}_{\Lambda-1}(\mathbf{u})\right)\right)\). Denote \(\lambda_{\Lambda}=\sup_{\mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)}}\sigma_{\max}\left(\mathbf{W}_{\Lambda}(\mathbf{x})\right)\) and \(\lambda_{\ell}=\sup_{\mathbf{u}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}}\sigma_{\max}\left(\mathbf{W}_{\ell}(\mathbf{u})\right)\) for \(\ell\in[\Lambda-1]\). Moreover, denote \(\lambda_{i\to j}=\prod_{\ell=i}^{j}\lambda_{\ell}\). Then we can show that \(f(\mathbf{x},\mathbf{u})\) defined in (13) satisfies Assumptions 1-6.
**Lemma 8**.: _Let \(\boldsymbol{\theta}(0)\) be the initialization of the ReLU network in Equation (12) and \(\alpha_{0}=\sigma_{\min}\left(\mathbf{F}_{\Lambda-1}\left(\boldsymbol{\theta}(0) \right)\right)\). Assume that each \(\mathbf{W}_{\ell}\) is initialized such that \(\|\mathbf{W}_{\ell}(0)\|_{2}\leq\frac{\lambda_{\ell}}{2}\) and \(\alpha_{0}>0\). Then, \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6 with:_
\[R_{\mathbf{x}}\leq\tfrac{\lambda_{\Lambda}}{2};\;R_{\mathbf{u}}\leq\tfrac{1}{2} \min_{\ell\in[\Lambda-1]}\lambda_{\ell}\cdot\min\left\{1,\tfrac{\alpha_{0}}{2 \sqrt{\Lambda}\|\mathbf{X}\|_{F}\lambda_{1\to\Lambda-1}}\right\};\;\mu=\tfrac{ \alpha_{0}^{2}}{2};\]
\[L_{1}=\left\|\mathbf{X}\right\|_{F}^{2}\lambda_{1\to\Lambda-1}^{2};\;L_{2}=1;\;G_ {1}= \tfrac{\sqrt{\Lambda}\|\mathbf{X}\|_{F}\lambda_{1\to\Lambda-1}}{\min_{ \ell\in[\Lambda-1]}\lambda_{\ell}};\;G_{2}=\left(2\left\|\mathbf{X}\right\|_{F} \lambda_{1\to\Lambda}+\left\|\mathbf{Y}\right\|_{F}\right)G_{1}\]
As shown in Lemma 3.3 of Nguyen (2021), with sufficient over-parameterization, we can guarantee that \(\alpha_{0}>0\). In order to show the accelerated convergence of Nesterov's momentum when training the network in Equation (12), we need to \(i)\) guarantee that the conditions on \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) in (4) are compatible with the upper bounds in Lemma 8, and \(ii)\) ensure that the quantities \(\mu,L_{1},L_{2},G_{1}\) and \(G_{2}\) defined in Lemma 8 satisfy the requirements in (4). Enforcing the two conditions with sufficient over-parameterization gives us the following theorem.
**Theorem 5**.: _Consider training the ReLU neural network in Equation (12) using the MSE loss, or equivalently, minimizing \(f(\mathbf{x},\mathbf{u})\) defined in Equation (13) with Nesterov's momentum:_
\[\mathbf{W}_{\ell}(k+1) =\hat{\mathbf{W}}_{\ell}(k)-\eta\nabla_{\mathbf{W}_{\ell}}\mathcal{ L}(\hat{\boldsymbol{\theta}}(k))\] \[\hat{\mathbf{W}}_{\ell}(k+1) =\mathbf{W}_{\ell}(k+1)+\beta(\mathbf{W}_{\ell}(k+1)-\mathbf{W}_{ \ell}(k))\]
_with \(\hat{\mathbf{\theta}}(k+1)=\{\hat{\mathbf{W}}_{\ell}(k+1)\}_{\ell=1}^{\Lambda}\), \(\eta=\nicefrac{{c}}{{L_{1}}}\) and \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), where \(\kappa=\nicefrac{{2L_{1}}}{{\alpha_{0}^{2}}}\) and \(\alpha_{0},L_{1}\) are as defined in Lemma 8. If the width of the neural network satisfies:_
\[d_{\ell}=\Theta\left(m\right)\quad\forall\ell\in[\Lambda-2];\quad d_{\Lambda-1}=\Theta\left(n^{4}m^{2}\right), \tag{14}\]
_for some \(m\geq\max\{d_{0},d_{\Lambda}\}\), and we initialize the weights according to:_
\[\left[\mathbf{W}_{\ell}(0)\right]_{ij}\sim\mathcal{N}\left(0,d_{\ell-1}^{-1}\right),\quad\forall\ell\in[\Lambda-1];\quad\left[\mathbf{W}_{\Lambda}(0)\right]_{ij}\sim\mathcal{N}\left(0,d_{\Lambda-1}^{-\frac{3}{2}}\right).\]
_Then, with high probability over the initialization, there exists an absolute constant \(c>0\) such that:_
\[\mathcal{L}\left(\mathbf{\theta}(k)\right)\leq 2\left(1-\tfrac{c}{4\sqrt{\kappa}} \right)^{k}\mathcal{L}(\mathbf{\theta}(0)). \tag{15}\]
As in prior work (Nguyen, 2021), we treat the depth of the neural network \(\Lambda\) as a constant when computing the over-parameterization requirement. Next, we compare our result with Theorem 2.2 and Corollary 3.2 of Nguyen (2021). To start, in order to deal with the additional complexity of Nesterov's momentum as introduced in Section 4.2, our over-parameterization of \(d_{\Lambda-1}=\Theta\left(n^{4}m^{2}\right)\) is slightly larger than the over-parameterization of \(d_{\Lambda-1}=\Theta\left(n^{3}m^{3}\right)\) in Corollary 3.2 of Nguyen (2021). Moreover, in Theorem 2.2 of Nguyen (2021), since their choice of \(\eta\) is also \(O\left(\nicefrac{{1}}{{L_{1}}}\right)\), the convergence rate they achieve for gradient descent is \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\). Compared with this rate, Theorem 5 achieves a faster convergence of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\). This shows that Nesterov's momentum enjoys acceleration when training deep ReLU neural networks.
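A sketch of the initialization prescribed in Theorem 5 is given below; widths are listed as \([d_{0},\ldots,d_{\Lambda}]\), hidden-layer entries have variance \(d_{\ell-1}^{-1}\), and the output layer has variance \(d_{\Lambda-1}^{-3/2}\).

```python
import numpy as np

def initialize_weights(widths, seed=0):
    """widths = [d_0, d_1, ..., d_Lambda]; returns the list {W_l}_{l=1}^{Lambda}."""
    rng = np.random.default_rng(seed)
    L = len(widths) - 1
    weights = []
    for l in range(1, L + 1):
        if l < L:
            std = widths[l - 1] ** -0.5      # variance d_{l-1}^{-1}
        else:
            std = widths[L - 1] ** -0.75     # variance d_{Lambda-1}^{-3/2}
        weights.append(std * rng.standard_normal((widths[l - 1], widths[l])))
    return weights
```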
## 6 Conclusion and Broader Impact
We consider the minimization of a new class of objective functions, namely the partition model, where the set of parameters can be partitioned into two groups, and the function is smooth and strongly convex with respect to only one group. This class of objectives is more general than the class of smooth and strongly convex functions. We prove the convergence of both gradient descent and Nesterov's momentum on this class of objectives under certain assumptions and show that Nesterov's momentum achieves an accelerated convergence rate of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\) compared with the \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) convergence rate of gradient descent. Moreover, we considered the training of the additive model and deep ReLU networks as two concrete examples of the partition model and showed the acceleration of Nesterov's momentum when training these two models.
Our work has two limitations. First, since we do not make convexity or smoothness assumptions for the subset \(\mathbf{u}\) of parameters, we have to make a strong assumption on the optimality condition, as in Assumption 6. It is possible that this assumption can be relaxed if one assumes more structure on \(\mathbf{u}\). Second, in the examples we provided, we only considered simple parameter partition schemes, including partition by submodel as in Section 5.1 and partition by layer as in Section 5.2. Considering the complexity of modern neural networks, it remains a valid open question whether more complicated partition schemes, such as selecting a subset of the weights in each layer, will still satisfy the assumptions in our framework. Future work can focus on resolving these two limitations, as well as extending the analysis to different neural network architectures.

Figure 1: Experiment of learning the additive model with gradient descent and Nesterov's momentum.
This paper focuses mainly on the theoretical understanding of optimization and machine learning methods. Due to the nature of our work, we do not anticipate negative social impacts. Further impact on society depends on the application of machine learning models.
|
2305.11108 | MiraBest: A Dataset of Morphologically Classified Radio Galaxies for
Machine Learning | The volume of data from current and future observatories has motivated the
increased development and application of automated machine learning
methodologies for astronomy. However, less attention has been given to the
production of standardised datasets for assessing the performance of different
machine learning algorithms within astronomy and astrophysics. Here we describe
in detail the MiraBest dataset, a publicly available batched dataset of 1256
radio-loud AGN from NVSS and FIRST, filtered to $0.03 < z < 0.1$, manually
labelled by Miraghaei and Best (2017) according to the Fanaroff-Riley
morphological classification, created for machine learning applications and
compatible for use with standard deep learning libraries. We outline the
principles underlying the construction of the dataset, the sample selection and
pre-processing methodology, dataset structure and composition, as well as a
comparison of MiraBest to other datasets used in the literature. Existing
applications that utilise the MiraBest dataset are reviewed, and an extended
dataset of 2100 sources is created by cross-matching MiraBest with other
catalogues of radio-loud AGN that have been used more widely in the literature
for machine learning applications. | Fiona A. M. Porter, Anna M. M. Scaife | 2023-05-18T16:52:43Z | http://arxiv.org/abs/2305.11108v1 | # MiraBest: A Dataset of Morphologically Classified Radio Galaxies for Machine Learning
###### Abstract
The volume of data from current and future observatories has motivated the increased development and application of automated machine learning methodologies for astronomy. However, less attention has been given to the production of standardised datasets for assessing the performance of different machine learning algorithms within astronomy and astrophysics. Here we describe in detail the MiraBest dataset, a publicly available batched dataset of 1256 radio-loud AGN from NVSS and FIRST, filtered to \(0.03<z<0.1\), manually labelled by Miraghaei and Best (2017) according to the Fanaroff-Riley morphological classification, created for machine learning applications and compatible for use with standard deep learning libraries. We outline the principles underlying the construction of the dataset, the sample selection and pre-processing methodology, dataset structure and composition, as well as a comparison of MiraBest to other datasets used in the literature. Existing applications that utilise the MiraBest dataset are reviewed, and an extended dataset of 2100 sources is created by cross-matching MiraBest with other catalogues of radio-loud AGN that have been used more widely in the literature for machine learning applications.
keywords: astronomical data bases - methods: data analysis - radio continuum: galaxies
## 1 Introduction
In radio astronomy, morphological classification using convolutional neural networks (CNNs) and deep learning is becoming increasingly common for object classification, in particular with respect to the classification of radio galaxies (see e.g. Aniyan and Thorat, 2017; Alger et al., 2018; Wu et al., 2018; Lukic et al., 2018; Lukic et al., 2019; Tang et al., 2019; Wang et al., 2021; Ntwaetsile and Geach, 2021; Bowles et al., 2021; Sadeghi et al., 2021; Scaife and Porter, 2021; Becker et al., 2021; Mohan et al., 2022; Slijepcevic et al., 2022, etc.). Many of these works have focused on the morphological classification of radio galaxies following the Fanaroff-Riley classification scheme (FR; Fanaroff and Riley, 1974), used to group radio-loud active galactic nuclei (AGN) by examining the locations of their regions of greatest luminosity relative to overall source extent. The initial scheme posited that there were two major populations of such sources - those which were core-brightened, with their peak luminosity concentrated at a radius of less than half of the overall angular size of the source from its centre (FR Type I), and those which were edge-brightened, with their peak luminosity concentrated at a radius of more than half the angular size of the source (FR Type II), and that there was a division in luminosity between the two populations at approximately \(10^{25}\) Watts Hz\({}^{-1}\) sr\({}^{-1}\), with edge-brightened sources having a higher intrinsic luminosity than core-brightened sources. As described, this taxonomy requires that an AGN is associated with well-resolved extended emission external to the AGN core in order to be classified as either FRI or FRII.
While the Fanaroff-Riley scheme was initially viewed as having a very straightforward luminosity boundary between morphological classes (Fanaroff and Riley, 1974), further study has shown that this is not the case; see (e.g.) Hardcastle and Croston (2020) for a review. In recent studies, sources have been detected which have raised questions about the use of this boundary; for example, Mingo et al. (2019) found that around 20% of FRII galaxies in their sample had radio luminosity below the traditional cutoff, some by as much as two orders of magnitude, meaning that there is some range of luminosities in which both classes can be found. As well as this, increasing survey sensitivity has allowed for the discovery of several classes with features that do not match the classic morphologies, including hybrid (e.g. Gopal-Krishna and Wiita, 2000; Kapińska et al., 2017) and restarting sources (e.g. Lara et al., 1999; Mahatma et al., 2019). Clearly, the dichotomy between Fanaroff-Riley classes is not only more complex than originally believed, but sufficiently complex that it is still not fully understood. While it is generally accepted that the same underlying mechanism likely powers all FR galaxies, with the different morphologies arising as a result of jet interactions with surrounding environments of different densities (Gopal-Krishna and Wiita, 2000; Kaiser and Best, 2007; Tchekhovskoy and Bromberg, 2016; Mingo et al., 2019; Hardcastle and Croston, 2020), the precise requirements of host galaxy characteristics, jet power, the properties of the inter-cluster medium and disruptions in that medium are still in question (Mingo et al., 2019; Hardcastle and Croston, 2020). To gain a stronger understanding of radio galaxies, it is necessary that we obtain more examples of FR sources.
The new generation of radio surveys with telescopes such as LOFAR (Shimwell et al., 2017; Shimwell et al., 2019), MeerKAT (Jonas
& Team, 2016; Jarvis et al., 2016) and ASKAP (Johnston et al., 2008; McConnell et al., 2020) are already providing additional data with which to expand our catalogues of radio galaxies, as will the advent of surveys with the Square Kilometre Array (SKA) telescope (Braun et al., 2015; Grainge et al., 2017). While the unprecedented depth of field and resolution will permit the detection of sources that previously could not be observed, the volume of data produced by the SKA and its precursors will be such that robust machine learning classification will be absolutely vital to allow sources of interest to be identified and classified.
Currently, the archival data sets available for training radio galaxy classifiers are of comparable size to many of those used in computer vision (e.g. CIFAR; Krizhevsky, 2009), with around \(10^{5}\) samples available. However, a fundamental difference is the degree of domain knowledge needed for creating _labelled_ data sets, which has a much higher cost for radio astronomy data. As a result of this, labels are sparse in radio galaxy data sets, with labels only included for a small fraction of data. Indeed, the majority of existing catalogues (and hence labelled datasets) of FR galaxies contain only a small number of sources - typically several hundred at most (Gendre & Wall, 2008; Capetti et al., 2017, 2017, 2018; Koziel-Wierzbowska et al., 2020), with Mingo et al. (2019) as a recent exception. Consequently, when constructing a machine learning dataset, the use of the largest practical number of images of each class is preferable, ensuring that the full variability of features within each class is captured by the dataset; without this, there is a risk that any models using that dataset will have a limited ability to generalise to images which show features that are not adequately represented within it.
In this work we describe the MiraBest batched dataset1, a publicly available FR-labelled machine learning data set of radio galaxies. With 1256 samples, MiraBest is currently the largest publicly available machine learning dataset for radio galaxy classification. The structure of this paper is as follows: in Section 2 we outline the general principles underlying the construction of astronomical machine learning datasets; in Section 3 we describe the sample selection and dataset structure for MiraBest; in Section 4 we describe the pre-processing applied to MiraBest data samples, and in Section 5 we provide an analysis of the overall dataset composition; in Section 7 we compare the MiraBest dataset to other radio galaxy machine learning datasets in the literature, and in Section 6 we describe additional supplementary datasets that are provided with the core MiraBest dataset; in Section 10 we outline existing applications of the MiraBest dataset in the literature and in Section 11 we draw our conclusions.
Footnote 1: The MiraBest dataset can be downloaded from: [https://doi.org/10.5281/zenodo.4288837](https://doi.org/10.5281/zenodo.4288837).
## 2 Constructing datasets of astronomical sources
While a large number of astronomical catalogues are accessible for general use, not all are suitable to be used to create machine learning datasets. Astronomical catalogues are constructed to serve a variety of specific purposes; depending on the field that seeks to collate them, which properties are considered useful and which are irrelevant can vary significantly, and a scientifically useful catalogue may lack the features needed to produce a useful machine learning dataset. In the case of a dataset intended for source classification with supervised learning, the following properties should be considered when attempting to build a dataset from an existing catalogue.
### Number of sources
An obvious area in which machine learning datasets and astronomical catalogues may differ is size. A catalogue of a few dozen rare astronomical objects may be enough to glean a number of astrophysical properties and constraints, and allow astronomers to identify methods by which they might be able to find more of these rare objects, see e.g. Lyne et al. (2017), Hartley et al. (2017), Titus et al. (2020), Pleunis et al. (2021), Rezaei et al. (2022); indeed, the small number of sources available might allow for each to be studied in more depth, see e.g. Young et al. (2013), Fermi LAT Collaboration et al. (2015), CHIME/FRB Collaboration (2020). A machine learning dataset of a dozen images, however, is of very dubious use; individual classes within a dataset being this small has been shown to result in poor classification ability (Cho et al., 2015), and while there is no definitive answer for the minimum quantity of data required for a machine learning model, a suggested rule of thumb is that the number of samples in a dataset should be at least a factor of fifty larger than the number of classes (Alwosheel et al., 2018). Machine learning datasets hence have a much higher minimum population requirement to produce "good" science - a dataset of several hundred images is considered very small, and while data augmentation can artificially increase the size of a small dataset, making it more likely to be useful, a larger quantity of unique data is significantly more likely to produce accurate and generalisable results than a smaller augmented dataset (Brigato & Iocchi, 2021).
### Availability of labelled data
A vital component of a machine learning dataset intended for supervised learning is that all sources included within it must be labelled accurately, and a lack of appropriately labelled data is accordingly a significant issue (Raghu & Schmidt, 2020). Manual classification to create an appropriately-sized dataset is time-consuming and requires sufficient knowledge of the classes involved to ensure they are labelled correctly; depending on the type of source involved, even multiple human classifiers with expert knowledge might not agree on what the correct class is for a particular object (Nair & Abraham, 2010; Walmsley et al., 2022), especially if the source population is small enough that there are relatively few examples of any given class available for comparison. For this reason, a number of astronomical machine learning datasets draw from previously-created source catalogues rather than their creators seeking out and labelling entirely new sources (e.g. Aniyan & Thorat, 2017; Tang, 2019), including MiraBest.
In addition to this, machine learning datasets require some wariness towards "label noise" - that is, the uncertainty introduced into a machine learning model as a result of some of the images in a dataset being incorrectly labelled (see e.g. Frenay & Verleysen, 2014). On the whole, this is not typically a problem in other astronomical catalogues, as a small number of atypical sources are not usually expected to have a significant impact upon results derived from a large population; however, it can negatively impact the training of a machine learning model, as the inclusion of mislabelled images can result in the model being effectively "taught wrong" with regard to which features are associated with which class (Northcutt et al., 2021). Miraghaei & Best (2017) has the unusual distinction of not only providing class labels but information on whether the authors were confident in their classification, effectively providing a measure of uncertainty on the labels themselves and allowing for the potential to study how models respond to sources for which astronomers are unsure of the appropriate morphological class.
### Inclusion of "messy" data
In general, when collating a dataset for astronomical study, there is an expectation that the data contained within will be "good data" - that is, as free as possible from artefacts, with any effects from background sources removed, etc. - and that the properties observed are exclusively from the intended source. In machine learning, though, there is merit in "bad" or "messy" data being included: it reflects the astrophysical reality of the environments of many sources (Norris et al., 2021). While artefacts might render an observation not terribly useful for determining precise properties of a source, or background sources might make it difficult to determine which emission can be attributed to which object, it is plainly apparent that these will occur in survey data nonetheless; it is hence better, when using machine learning, to have developed a model that can identify the features necessary to classify a source despite artefacts, or which has learned that background emission may be present without affecting the class of the primary source, than to use one which has never encountered these circumstances and cannot adapt to them without retraining. Focusing on "good" data not only reduces the possible size of a machine learning dataset: it limits its ability to recognise anything but other "good" data, which is a particular issue when "messy" data can be more often representative of unusual features that would warrant additional attention from astronomers (Norris et al., 2021; Gupta et al., 2022).
This also applies to the inclusion of atypical sources for a given class, such as rare subclasses; while these may be considered to add a form of "noise" to a dataset when their properties noticeably differ from the "standard" population, exclusively using sources with "standard" morphology in machine learning datasets again may limit the ability of the resulting models to recognise unusual sources within survey data. While there may not always be a sufficient number of examples of a given rare subclass to allow a model to specifically recognise that subclass's properties, there is some merit in including such sources in a dataset in a manner that lets them easily be screened out or merged into a larger class; it is far easier to remove unwanted images from a machine learning dataset than it is to add new images that perfectly match the original's data processing methods, and these sources may be used to examine model responses to unusual objects which are nonetheless part of the same overall source population.
## 3 Selection Method and Dataset Structure
The sources classified by Miraghaei & Best (2017) were identified from a catalogue of radio-loud AGN (Best & Heckman, 2012), which had cross-matched galaxies from the seventh data release of the SDSS (Abazajian et al., 2009) with radio components from the NVSS (Condon et al., 1998) and FIRST (Becker et al., 1995) surveys, both conducted at 1.4 GHz with the Very Large Array (VLA). This catalogue was filtered to a lower redshift limit of \(z>0.03\), as the large angular size of nearer galaxies was deemed likely to result in catalogued SDSS parameters containing errors, and an upper limit of \(z<0.1\) to ensure that spectroscopic classification would be available. A flux density cut of 40 mJy was applied to ensure a signal-to-noise ratio large enough to detect diffuse radio emission.
To obtain a Fanaroff-Riley class, images of each source from either NVSS or FIRST were inspected visually and labelled using the original definition of the two classes (Fanaroff & Riley, 1974): whether the distance from the AGN's centre to the regions of brightest emission on either side was less (FRI) or more (FRII) than half of the angular extent of the radio emission.
Due to the limitations of FIRST and NVSS's angular resolution and ability to detect faint emission, some objects could not be labelled with confidence; these sources were flagged as having "uncertain" labels. If a galaxy was noted to show a non-standard FR morphology, this was likewise flagged. In addition to the standard FRI and FRII sources, a small number of sources were classified as hybrids - showing morphology characteristic of FRIs on one side and of FRIIs on the other - or "unclassifiable", showing no obvious FR morphology.
Each source within Miraghaei & Best (2017)'s catalogue was characterised by a three-digit class label. These denote, in order, overall Fanaroff-Riley class, degree of confidence in classification, and morphological subclass. Not all possible combinations of digits are present within the catalogue; for example, double-double morphology (Schoenmakers et al., 2000) is characteristic of FRII sources, so no FRI sources exist with this label. As hybrid sources have no real "standard" or "non-standard" morphologies, all hybrid sources are deemed to be morphologically "standard". The meanings of each digit's possible values are listed in Table 1.
While the catalogue lists 1329 sources, not all were used within the MiraBest dataset. A total of 73 sources were excluded for the following reasons:
1. 40 sources were labelled as "unclassifiable" within Miraghaei & Best (2017); while it is not stated why they were deemed unclassifiable, they are assumed to show no recognisable FR morphology and hence provide no information for the classification of FR galaxies. While they do represent a population of non-FR sources that might be encountered while surveying, this population is too small to be useful within the dataset as an inbuilt "non-FR" class; as such, they were removed.
2. 28 sources were found to have an angular extent greater than 270 arcseconds. As images within the dataset were processed to have dimensions of 150 x 150 pixels, corresponding to 270 arcseconds in the FIRST survey, any sources larger than this were removed to prevent partial sources from being present in the dataset.
3. 4 sources were located partially or fully outwith the area of sky covered by the FIRST survey. As significant portions of each source lacked image data, they were removed to prevent the use of partial sources.
4. 1 source, with image label 103 (confidently-labelled FRI with diffuse morphology), formed a single-source subclass. As a minimum of two images per subclass were required for inclusion in the dataset - one for the training set and one for the test set - this image was removed.
With these sources removed, the remaining 1256 sources were used to create the MiraBest dataset.
\begin{table}
\begin{tabular}{l l l} \hline \hline
Digit 1 & Digit 2 & Digit 3 \\ \hline
1 - FRI & 0 - Confident & 0 - Standard \\
2 - FRII & 1 - Uncertain & 1 - Double-double \\
3 - Hybrid & & 2 - Wide-angle Tail \\
4 - Unclassifiable & & 3 - Diffuse \\
 & & 4 - Head-tail \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The three-digit classification scheme used by Miraghaei & Best (2017). Double-double sources are exclusively FRII; wide-angle tail, diffuse and head-tail sources are exclusively FRI.
## 4 Processing Methods
For the purposes of allowing direct comparison to and possible combination with an already extant dataset of Fanaroff-Riley galaxies - FR-DEEP, presented by Tang et al. (2019) - the data processing used was matched closely to this dataset's methods. Further information on this dataset and its processing techniques may be found in Section 7.3.
Data from the FIRST survey were obtained as .fits files using the SkyView Virtual Observatory (McGlynn et al., 1998) for a \(300\times 300\) pixel area of sky centred upon each set of coordinates provided by Miraghaei & Best (2017), equivalent to an angular extent of 540 arcseconds. A standardised naming system for each source was implemented at this stage, with multiple properties stored within the name string using the following format: "[3-digit class label]_[source right ascension]_[source declination]_[redshift]_[angular size]", with right ascension and declination given in decimal degrees and angular size in arcseconds. These properties were stored within the file names to ensure that source coordinates and labels could be rapidly and easily matched to their respective images.
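To make the convention concrete, the following is a minimal sketch of how such a name string could be assembled; the function name and example values are illustrative only and are not taken from the MiraBest processing code.

```python
# Illustrative sketch of the MiraBest file-naming convention described above.
# The function name and the example values are hypothetical.
def make_source_name(class_label: str, ra_deg: float, dec_deg: float,
                     redshift: float, angular_size_arcsec: float) -> str:
    """Build a name string: [class]_[RA]_[Dec]_[redshift]_[angular size]."""
    return f"{class_label}_{ra_deg}_{dec_deg}_{redshift}_{angular_size_arcsec}"

# Example: a confident, standard-morphology FRI (label 100)
print(make_source_name("100", 187.7059, 12.3911, 0.054, 92.0))
# -> "100_187.7059_12.3911_0.054_92.0"
```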
### Noise reduction
Image data from FIRST (Becker et al., 1995) as provided by SkyView (McGlynn et al., 1998) at this stage contained sufficient radio noise that the central FR source was not always readily apparent from visual inspection. To reduce this radio noise, sigma-clipping was performed using the astropy package's sigma_clipped_stats function (Astropy Collaboration et al., 2022). Any pixel that was found to have a radio flux density less than \(3r\) from the image's mean was set to zero; this process was repeated a maximum of five times, stopping early if no pixels below the clipping threshold were found. Performing sigma-clipping at this stage - with an image size greater than that which would be used for the final dataset - allowed for the most complete possible removal of noise without loss of source information, as providing a larger quantity of noisy pixels allowed for better characterisation of the radio background. At this stage, images were considered to have been "cleaned", and were cropped to \(150\times 150\) pixels centred upon the radio galaxy.
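The clipping step can be sketched as follows; this is a simplified reimplementation of the procedure described above rather than the exact code used to build MiraBest, and it assumes the image is held as a 2D numpy array of flux densities.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

def clip_background(image: np.ndarray, sigma: float = 3.0, max_iters: int = 5) -> np.ndarray:
    """Zero out pixels consistent with background noise.

    Pixels less than `sigma` standard deviations above the sigma-clipped image
    mean are set to zero; the process repeats up to `max_iters` times, stopping
    early once no further pixels fall below the threshold.
    """
    cleaned = image.copy()
    for _ in range(max_iters):
        mean, _, std = sigma_clipped_stats(cleaned, sigma=sigma)
        below = (cleaned != 0) & (cleaned < mean + sigma * std)
        if not below.any():
            break  # nothing left below the clipping threshold
        cleaned[below] = 0.0
    return cleaned
```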
### Removal of extraneous data and background sources
The maximum angular size of sources used in this dataset was 270 arcseconds, corresponding to 150 pixels in FIRST. However, as the images created were square, not circular, "unwanted" data was present at the corners of each image that was not expected to have any relevance to the FR source. To ensure that these regions would not provide any possibly confounding data, a circular mask was applied to each image. Initially, it was considered whether it might be useful to customise the size of this mask to each image, masking out any data beyond the angular size of the source stated in Miraghaei & Best (2017), to completely remove any possible bright background sources. However, this option was rejected for the following reasons:
1. Using masks set to the angular sizes provided by Miraghaei & Best (2017) frequently resulted in pixels that clearly contained source data being masked out. This discrepancy between the stated and observed extents of the sources might be a result of the authors using NVSS data to determine angular size, as NVSS's 15 arcsecond per pixel resolution could readily result in single-pixel underestimations of size that correspond to multiple pixels at FIRST's 1.8 arcsecond per pixel resolution. Correctly setting the mask limits would hence require visual inspection of every image to be used in MiraBest to determine the appropriate size for its mask, a task which would be time-consuming to complete because the asymmetric structure and diffuse emission of many sources were found to require testing multiple sets of limits per image to ensure that no emission was mistakenly masked out. Given that relatively few images clearly benefited from this more curated approach to masking, it was deemed to be an inefficient use of time to do so.
2. While bright background sources might somewhat impact the performance of machine learning classifiers on this dataset, the presence of such sources is astrophysically normal; it is not only possible but inevitable that some images produced by future radio telescopes will contain background sources within the field of view of a target source, and it is unrealistic to expect that they will all be cleaned from these images before classification is attempted. For this reason, radio background sources within the central 270 arcseconds of an image represent behaviour that will likely be seen in new astrophysical images, and their inclusion was deemed to be unlikely to harm the usefulness of this dataset as a whole.
Because of this, rather than a variable mask, a fixed circular mask of diameter 150 pixels was applied to every image to remove data that was unambiguously not associated with the FR source.
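A fixed mask of this kind can be sketched as below; this is a simplified illustration assuming a square numpy image array, not the original processing code.

```python
import numpy as np

def apply_circular_mask(image: np.ndarray, diameter: int = 150) -> np.ndarray:
    """Zero every pixel outside a centred circle of the given diameter,
    removing the image corners that cannot contain the 270 arcsec source."""
    ny, nx = image.shape
    yy, xx = np.ogrid[:ny, :nx]
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    outside = (yy - cy) ** 2 + (xx - cx) ** 2 > (diameter / 2.0) ** 2
    masked = image.copy()
    masked[outside] = 0.0
    return masked
```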
### Normalisation
The images included in MiraBest at this stage naturally showed variation in source flux density; FR galaxies are not standard candles, so even sources at the same redshift can show markedly different maximum flux densities, and the population used in MiraBest covers redshift range \(0.03<z<0.1\), resulting in more distant galaxies tending to be fainter. While, as with the presence of background sources, variation in source flux density is inevitable in future surveys, a potential issue caused by inherent properties of FR galaxies was noted if the images were not normalised.
FRII galaxies are known to be on average intrinsically brighter than FRIs (Mingo et al., 2019). A machine learning model trained on images which preserve the innate flux densities might result in the model, rather than identifying morphology, basing much of its classification on maximum flux density alone. Such a model could be expected to be easily confounded when applied to unseen images from surveys with greater sensitivity than FIRST; it might be expected to simply label all images below a certain flux density as being FRIs, rendering it useless for surveys besides FIRST.
To prevent this from occurring, all images were normalised. This was done by identifying the minimum and maximum flux density values in each image, and scaling each pixel's value as follows:
\[\text{Normalised pixel value}=255\times\frac{\text{Pixel value}-\text{Minimum flux}}{\text{Maximum flux}-\text{Minimum flux}}\]
The factor of 255 is used to facilitate image conversion into PNG files; greyscale PNGs have pixel values ranging from 0 (darkest) to 255 (brightest), so multiplying by this factor ensures maximum dynamic range by setting the minimum pixel value to 0 and the maximum to 255. At this stage, all image processing is complete; all images are converted to PNG format and are ready to be collected into a dataset.
PNG files are here favoured over retaining the original FITS format to make this dataset accessible to non-astronomers who wish to use astronomical data for machine learning. Although PNGs have a more limited dynamic range than FITS files, the FR classification scheme is primarily concerned with the location of the brightest regions of a source, making the potential loss of very faint emission from these sources unlikely to affect the ability to classify images in this case. As the source data was originally monochrome 1.4 GHz flux, conversion into single-channel greyscale PNG is able to represent relative flux within an image, and has the additional benefit of reducing typical file size by two orders of magnitude, making the dataset more practical to download and store if desired.
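The normalisation and PNG conversion steps can be sketched together as follows; this is an illustrative reimplementation using numpy and Pillow, not the original MiraBest code, and it ignores edge cases such as an entirely blank image.

```python
import numpy as np
from PIL import Image

def to_greyscale_png(image: np.ndarray, path: str) -> None:
    """Rescale an image to the 0-255 range defined above and save it as an
    8-bit single-channel (greyscale) PNG."""
    lo, hi = image.min(), image.max()
    scaled = 255.0 * (image - lo) / (hi - lo)
    Image.fromarray(scaled.astype(np.uint8), mode="L").save(path)

# Example usage (array and file name are illustrative):
# to_greyscale_png(cleaned_image, "100_187.7059_12.3911_0.054_92.0.png")
```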
### Constructing the batched dataset
MiraBest's images are 150\(\times\)150 pixels, making them relatively large when compared with commonly used benchmarking datasets such as MNIST (28\(\times\)28 pixels; LeCun & Cortes 2010) or CIFAR10 (32\(\times\)32 pixels; Krizhevsky, Nair & Hinton), although we note that ImageNet (Russakovsky et al., 2014) contains a variety of image sizes, including those that exceed MiraBest's. However, even though CIFAR10 uses three channels to provide RGB information rather than MiraBest's single greyscale channel, a MiraBest image still contains around an order of magnitude more data than a CIFAR10 image. For this reason, it was designed for use as a _batched dataset_ (Masters & Luschi, 2018); loading each batch into memory sequentially is less computationally demanding than loading the entire dataset at once, making MiraBest practical for use with devices with relatively small memory, such as personal laptops. It was decided to separate the dataset into eight batches of 157 images: seven were to be labelled as training set batches, with the last reserved to be a test set batch.
The classes within MiraBest are not balanced; that is, there is not an equal number of sources in every class (see Section 5 for a full discussion of the class breakdown). While in a balanced dataset it is possible to randomly separate all images into different batches and expect a reasonably equal quantity of each class to be present in each batch, this is not the case for a dataset that, like MiraBest, has a number of subclasses that represent a very small proportion of the whole. Randomly selecting images in this case might result in some batches completely lacking particular classes, which is a particularly pressing concern for a dataset's test batch; if the test set contains no examples of a particular class, there is no way to evaluate a machine learning model's performance on that class during training, and the model risks significant overfitting as a result.
To prevent this from occurring, a fixed batch structure was used to ensure that a roughly equal quantity of sources of each class were present in every batch. The source quantities present in Miraghaei & Best (2017) were such that it was impossible to ensure exactly identical composition between all batches, but the following method was used to ensure that sources were distributed as evenly as possible (a simplified code sketch is given after the list):
1. The total number of images in each class was divided by eight and rounded down to determine the base number of images per class to include in each batch.
2. Images were shuffled, and the base number of images of each class were assigned to each batch. If fewer than eight images of a class were available, one was randomly chosen to be reserved for the test batch to ensure that there were no instances of a class missing a test set image.
3. The number of images in each batch and quantity of remaining unassigned images in each class were determined.
4. Beginning with the test batch, each batch was iteratively filled with unassigned images. Each iteration, the class with the largest number of remaining images was determined, and an image of that class was assigned to the batch until it reached a total of 157 images; if all classes had an equal number of unassigned images, a class was selected at random. This method ensured that the most populous classes were preferentially added to the test batch, with the rarer subclasses more likely to be present in one of the training batches.
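The following is a simplified sketch of this assignment procedure; the function name, data structures and random seed handling are illustrative and do not reproduce the original batching code exactly.

```python
import random

def assign_batches(images_by_class: dict, n_batches: int = 8, batch_size: int = 157):
    """Distribute image identifiers into batches as described above.

    `images_by_class` maps a class label to a list of image identifiers;
    batch 0 plays the role of the test batch.
    """
    batches = [[] for _ in range(n_batches)]
    remaining = {}
    for label, images in images_by_class.items():
        imgs = list(images)
        random.shuffle(imgs)
        base = len(imgs) // n_batches          # step 1: base number per batch
        if base == 0:
            batches[0].append(imgs.pop())      # step 2: reserve one for the test batch
        else:
            for batch in batches:
                for _ in range(base):
                    batch.append(imgs.pop())
        remaining[label] = imgs                # step 3: track unassigned images
    for batch in batches:                      # step 4: fill batches, test batch first
        while len(batch) < batch_size:
            label = max(remaining, key=lambda k: len(remaining[k]))
            if not remaining[label]:
                break
            batch.append(remaining[label].pop())
    return batches
```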
Once the composition of the batches was determined, the file names of the images used in each batch were saved. This was done to allow for direct comparison between any datasets created using the same catalogue but using data from a different survey; ensuring the composition is entirely consistent prevents any potential differences in behaviour caused by having different proportions of sources in the different batches.
With batch structuring complete, the image data and labels for the sources within each batch were collected to form the final dataset.
## 5 Dataset Composition and Analysis
The overall composition of the MiraBest dataset is detailed in Table 2. The three-digit class labels used by Miraghaei & Best (2017) were reduced to single-digit class labels within the dataset to match conventions with other machine learning datasets. As these single-digit labels convey less information about an image's class at a glance, the three-digit method will be preferentially used going forward to ensure clarity. In addition to FR class, each source's right ascension and declination can be retrieved from the image's filename, making them fully traceable. While this is not typical for machine learning datasets in general, it is a useful measure for an astronomical dataset as this allows for easy cross-matching of sources between different catalogues.
### Dataset analysis
With 1256 sources, MiraBest currently represents the largest publicly available machine learning dataset of Fanaroff-Riley galaxies. It is also the only known dataset that provides examples of clearly labelled non-standard morphology FRs, and hence contains not only the largest quantity but also the greatest morphological variety of FR galaxies. While at present some morphologies are represented by only a few samples, it can be expected that their numbers will significantly increase with wide-field, sensitive radio surveys, and their presence within the overall population of FR galaxies should not be neglected.
When considering broad FR morphology, there is a mild class imbalance between FRI and FRII sources, with forty more FRII sources than FRIs. However, this is considered unlikely to result in any noticeable effects upon performance of machine learning models; while significant class imbalances can lead to a model learning it can obtain a high overall accuracy by labelling all or most images as the majority class (Johnson & Khoshgoftaar, 2019), the ratio of binary FRI/FRII sources here is approximately 48/52, which is not a large enough discrepancy to expect this behaviour.
When considering individual morphological subclasses, a much more significant imbalance can be observed. The most populous subclass, confidently-labelled wide-angle tail FRIs (class 102), is less than 15% the size of its standard-morphology counterpart (class 100), and less than one in one hundred FRIIs included shows double-double morphology (class 201). As a result, while their inclusion benefits MiraBest by showing examples of less common morphologies that are nonetheless part of the Fanaroff-Riley classification system as a whole, this dataset is not well-suited to be used to train machine learning models intended to specifically identify unusual morphologies; there are simply not enough examples of each
class for a model to be able to learn to classify them without severe overfitting, even with data augmentation. For this reason, MiraBest will generally better serve the needs of the astronomical community if the morphological subclasses are grouped with the overall population of their FR class; see Section 5.2 for discussion of the merits of this approach and the derivative datasets that have been created for this purpose.
The hybrid FR galaxies included in MiraBest likewise represent a very small portion of the dataset. Again, such a drastic class imbalance renders these images unlikely to be useful if attempting to develop a machine learning model that can differentiate FRI, FRII and hybrid FR sources, even with data augmentation; instead, they may best be used in identifying ways in which hybrid sources confound binary FRI/FRII classifiers (Mohan et al., 2022). Even so, the quantity of images available may not allow for a large enough sample to be statistically significant, and for this reason a separate, larger dataset of hybrid sources was created, incorporating the sources in MiraBest; see Section 6.1 for information about this dataset and the catalogues of hybrid FR sources it draws from.
Following the decision not to remove background sources within a 135" radius of the central FR sources, there are several images in which the FR source is comparatively faint, with a bright background source having a noticeably brighter radio flux density; a selection of these images are shown in Figure 1. As discussed in Section 4.3, while this does introduce some noise into the dataset by including data that is unrelated to FR galaxies, background sources like these are expected to be present in images from other surveys, and any machine learning model that is to be used on data from a new survey will need to be able to identify FR galaxies whether or not a background source is present. Examples of bright background sources being included in MiraBest, then, might serve to make more robust classifiers by including these small pieces of irrelevant data.
MiraBest uses any source from Miraghaei & Best (2017) with an angular size less than 270"; as a result, some sources included in the dataset have very small angular sizes, and five are visually very close to point-like. Of these five, four are FRI sources, and three of them are uncertainly labelled. One point-like source of each class is shown in Figure 2. FRIs are expected to be much more likely than FRIIs to present this morphology, as their core-brightened emission can result in a source with small angular size compared to the resolution of the telescope having only its central region detected, with diffuse jets being undetected. It is not apparent that the sigma-clipping process has removed noticeable amounts of source emission from these images, so it is assumed that these images are accurate representations of the radio emission of these sources. While, to humans, morphological information might be difficult to glean from these images, it remains possible that a machine learning model might be able to identify some properties of these sources that allow it to classify them accurately, and perhaps be capable of distinguishing them from other point-like radio sources; for this reason, and because they represent a very small proportion of the dataset overall, they were retained.
While MiraBest is considerably smaller than many machine learning datasets, this is unavoidable considering the comparative rarity of FR galaxies. As additional sources are identified that appear within the extent of the FIRST survey, it will become possible to extend it further to allow for a greater representation of the entire FR population; meanwhile, however, it remains the most comprehensive known machine learning dataset of FR galaxies. For direct comparison between MiraBest and other datasets of FR galaxies, see Section 7.
### Derivative datasets
As can be seen in Table 2, the unusual morphological subclasses represent a small proportion of the entire dataset; "standard" FRI and FRII sources represent approximately 92% of the images, with most other morphologies comprising too small a proportion for their behaviour to be considered representative of their subclass as a whole. For this reason, additional class wrappers were created to allow the classes to be reduced down to simply FRI and FRII populations. The resulting simplified datasets are listed in Table 3.
Reducing the labelling system to these simplified variants allows the study of FR galaxies to be reduced to a binary classification problem, removing the possibility that an image could be considered to be "misclassified" if a machine learning model predicts the correct FR class but a different human confidence label or morphological subclass than is given by the dataset; as humans, we would recognise the former as being irrelevant to FR class and the latter to be "broadly correct", but this distinction would not be made by a machine learning model by default.
The possibility of some misclassifications being "less wrong" than others in this way leads to confusion in the overall ability of a model to classify FR sources accurately, and with such small populations of sources with unusual morphology being available, it is often more useful to simply group all sources of a particular FR class together to better be able to analyse the population as a whole. While the internal labels are simplified, it is of note that neither the morphological information nor the confidence of any image are lost - these are independently accessible via the image filename, and thus it remains possible to identify any subset of interest within the dataset as desired.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
Class & No. of images & Confidence & Morphology & No. of images & Class label \\ \hline
FRI & 591 & Confident & Standard & 339 & 0 \\
 & & & Wide-angle tail & 49 & 1 \\
 & & & Head-tail & 9 & 2 \\
 & & Uncertain & Standard & 191 & 3 \\
 & & & Wide-angle tail & 3 & 4 \\ \hline
FRII & 631 & Confident & Standard & 432 & 5 \\
 & & & Double-double & 4 & 6 \\
 & & Uncertain & Standard & 195 & 7 \\ \hline
Hybrid & 34 & Confident & Standard & 19 & 8 \\
 & & Uncertain & Standard & 15 & 9 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The population of all FR classes within the MiraBest dataset, including the internal class labels used.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline
Name & No. of images & FR classes & Confidence & No. of images & Class label & Best reported accuracy & Ref. \\ \hline
MBFRFull & 1222 & FRI & Any & 591 & 0 & 86.9 \(\pm\) 0.5\% & [1] \\
 & & FRII & Any & 631 & 1 & & \\ \hline
MBFRConfident & 833 & FRI & Confident & 397 & 0 & 96.54 \(\pm\) 1.29\% & [2] \\
 & & FRII & Confident & 436 & 1 & & \\ \hline
MBFRUncertain & 389 & FRI & Uncertain & 194 & 0 & N/A & \\
 & & FRII & Uncertain & 195 & 1 & & \\ \hline
\end{tabular}
\end{table}
Table 3: The three derivative datasets created from MiraBest, with the simplified internal class labelling system and the highest classification accuracy reported in the literature for each, where [1] Slijepčević et al. (2022); [2] Scaife & Porter (2021).
Figure 1: Examples of MiraBest images where the brightest source visible is not the central FR source. (a) Confidently-labelled FRI (100); (b) Uncertainly-labelled FRI (110); (c) Confidently-labelled FRII (200); (d) Uncertainly-labelled FRII (210). Although faint, the FR sources remain visible despite the presence of a background source affecting the normalisation.
## 6 Supplementary Hybrid Dataset
Hybrid radio galaxies are not truly out-of-distribution sources, given that the same underlying mechanism is believed to create both hybrids and binary FR sources, but they nonetheless represent a source of confusion for machine learning models trained on binary sources; they can stably show properties of both classes simultaneously, resulting in models finding them equally probable of belonging to both binary classes. Hybrid galaxies also present an interesting population to study; their discovery helped to support the theory that FR morphology is at least partly environmentally driven (Gopal-Krishna and Wiita, 2000), but the mechanism behind their formation is still not fully understood and it has been questioned whether they truly represent a separate FR class (see e.g. Stroe et al., 2022). In order to identify these sources for study, it is important to be able to separate them from other radio galaxies and ensure they are not accidentally misidentified as binary FR sources.
### Constructing the hybrid dataset
The number of hybrid FR sources contained in the catalogue used to create MiraBest is not particularly large; there are only 34 sources with hybrid labels, 19 being confidently labelled and 15 uncertainly labelled, while the binary FR classes have many hundreds of examples. To provide a more reliable analysis of the greater population of hybrid FRs, additional hybrid sources were sought out. As this variety of FR morphology is seen far less frequently than binary FRs, it was not expected that it would be possible to identify an equally large population, but even a slight increase in the number of hybrid sources would allow for a more statistically thorough exploration of their properties.
Two additional catalogues of hybrid sources were identified; Kumari and Pal (2021) searched systematically within FIRST data to locate 45 confirmed hybrids and 5 candidate hybrids, while Kapińska et al. (2017) located 25 candidate hybrids within FIRST via the Radio Galaxy Zoo citizen science project (Banfield et al., 2015). These were assessed for suitability for inclusion in a hybrid dataset matching the image processing methods of MiraBest.
Sources in these two catalogues were labelled as either hybrid or hybrid candidate rather than MiraBest's confident and uncertain classification labels; to allow for a unified labelling system, the former class was taken to be equivalent to the confidently classified hybrids in MiraBest and the latter equivalent to uncertainly classified hybrids. The list of coordinates was compared to those of the hybrids in MiraBest to check for duplicate sources, resulting in two sources from Kumari and Pal (2021) being removed for being already contained in MiraBest. One of these was labelled as a confident hybrid by Miraghaei and Best, while Kumari and Pal listed it as a hybrid candidate, highlighting that even when classified by humans, a degree of uncertainty remains in classification confidence; the second was agreed by both to be a certain hybrid.
Two sources from Kapińska et al. (2017) were found to have angular sizes exceeding the 270" size limit imposed upon MiraBest, and were hence removed from consideration; upon processing, a third was identified visually to have some of its emission cropped by the image processing, and it was likewise removed. Of the remaining images, seven were noted to have a bright source present that did not appear to be part of the hybrid emission. While it would be possible to remove any background sources that are clearly separated from the hybrid galaxy, the presence of background sources is not unexpected within radio data, and thus allowing these images to remain with background sources intact provides a more realistic representation of the environments around hybrid galaxies.
With this, a dataset of 104 hybrid sources was created, with a total of 63 confident and 41 uncertainly labelled hybrids; 34 sources from the original set included in MiraBest, 48 from Kumari and Pal, and 22 from Kapińska et al. This represents a threefold increase upon the number originally provided by MiraBest, and is believed to be the largest single dataset of hybrid FR sources available at present.
## 7 Comparison to Existing FR Datasets
As discussed previously, the population of labelled Fanaroff-Riley galaxies is relatively small, and consequently relatively few datasets have been constructed focusing upon these sources. Most derive their sources from a combination of several FR catalogues, notably FRICAT (Capetti et al., 2017a) and FRIICAT (Capetti et al., 2017b) (both catalogues of FR galaxies found within the FIRST survey), and CoNFIG (the Combined NVSS-FIRST Galaxies sample; Gendre & Wall, 2008). Some background regarding the composition of these catalogues will be discussed before examining the datasets using these sources.
Figure 2: Three examples of FR galaxies that are observed to have point-like morphology. (a) Confidently-labelled FRI (100); (b) Uncertainly-labelled FRI (110); (c) Confidently-labelled FRII (200). There is no clear sign of emission having been lost due to sigma-clipping, so it is assumed that these sources simply have a small enough angular size that obvious FR features are not apparent.
### FRICAT and FRIICAT
Both FRICAT and FRIICAT were developed with the intention to provide large and comprehensive catalogues of Fanaroff-Riley galaxies that would allow for better understanding of their overall properties, with the particular goals of allowing their luminosity functions, environment, and evolution to be studied (Capetti et al. 2017a,b). At the time of its creation, FRICAT was the single largest catalogue of FRI galaxies in existence, containing more than double the sources of previous catalogues.
Both catalogues draw from a parent sample of radio sources visible in FIRST from Best & Heckman (2012) that provide information regarding whether their emission is associated with an active galactic nucleus. They limit their sample for consideration to those with redshifts \(z<0.15\) - greater than Miraghaei & Best (2017)'s limit of \(z<0.1\) - and angular size greater than 11.4 arcseconds, to ensure all sources are resolved in FIRST. Galaxies were visually classified, and labelled with an FR class only if at least two of three human classifiers agreed on a classification.
FRICAT is comprised of a total of 219 galaxies with FRI morphology; no specific morphological labels were provided for the base catalogue, although the authors note that their inclusion criteria both permitted the presence of narrow-angle tail sources and rejected wide-angle tail sources.
FRIICAT consists of 122 galaxies with FRII morphology; again, no specific morphological labels were provided in the catalogue, but the authors state that the majority of the galaxies used were "double" sources, i.e. ones where there is no detectable core emission, with all radio flux being in the lobes.
The criterion of requiring the agreement of multiple human classifiers for inclusion in these catalogues was commented by the authors to significantly limit the number of suitable sources within the population they examined, stating that "more than half of the 714 radio galaxies extended more than 30 kpc cannot be allocated to any FR class". As a result, sources from these catalogues are taken to be equivalent to MiraBest's "confidently labelled" images, as FRICAT and FRIICAT's construction does not allow for morphological ambiguity.
A direct comparison between Miraghaei & Best (2017) and these two catalogues is also made in Capetti et al. (2017b), as all three catalogues draw from the same parent sample of sources in Best & Heckman (2012). It is noted that, despite this, only around 25% of the sources in FRICAT and FRIICAT are included in Miraghaei & Best (2017)'s catalogue; this is ascribed in part to different criteria for inclusion for properties such as flux and angular size, and in part because Miraghaei & Best (2017) requires that its sources be labelled as having multiple radio components, thus rejecting many FRIs without clearly separated radio emission that appear in FRICAT. On the whole, however, the authors find that, despite different selection methods, the properties derived from each set of FR galaxies are not dissimilar.
### The Combined NVSS-FIRST Galaxies sample (CoNFIG)
CoNFIG's construction of a catalogue of Fanaroff-Riley galaxies was similarly motivated to FRICAT and FRIICAT: to provide as large as possible a dataset of labelled FR galaxies to allow study of their properties for a variety of purposes, but notably to question whether the Fanaroff-Riley dichotomy was the best method to classify these types of galaxies, given the existence of hybrid sources, which show characteristics of both (Gendre & Wall 2008).
Sources in CoNFIG, as the catalogue's name suggests, were identified amongst the population of sources visible in both NVSS and FIRST with relatively high NVSS flux density. The catalogue is comprised of four sets of combined observations (CoNFIG-1 through CoNFIG-4), and the flux density requirements differ between the different observation sets (\(S_{\rm NVSS}>1.3\), 0.8, 0.2, and 0.05 Jy respectively).
In all cases, sources were morphologically classified either by identifying an established label from the 3CRR catalogue (Laing et al. 1983), or by visual inspection of NVSS and FIRST contours. Possible labels within this dataset are FRI, FRII, "compact" (size < 3"), and "unclassifiable" (extended, but with morphology that cannot be assigned an FR class). It is of note that any observed hybrid sources were "classified according to the characteristics of the most prominent jet", and thus CoNFIG contains an indeterminate quantity of hybrid sources.
The total population of CoNFIG is 71 FRIs, 466 FRIIs, 285 compact objects, and 37 unclassifiable sources. This population shows a clear imbalance in favour of FRIIs, which is not unexpected; given that FRIIs have intrinsically higher luminosities than FRIs, an over-representation of FRIIs at high redshift may be anticipated, particularly for the CoNFIG populations with a larger minimum flux density requirement. Consequently, it is likely that a bias towards FRIIs could have been introduced. Because of this bias, CoNFIG alone is not especially well-suited for use as a FRI/FRII dataset without appreciable data augmentation being used upon its FRI sources.
### The FR-DEEP batched dataset
The FR-DEEP batched dataset (Tang et al. 2019) is a catalogue of Fanaroff-Riley galaxies of classes FRI and FRII, drawing its sources from a combination of FRICAT and CoNFIG, and using image data from both FIRST and NVSS.
#### 7.3.1 Source selection
As these two catalogues differed in their methods of classification, the CoNFIG sources were subject to additional inspection to ensure their suitability for FR-DEEP.
A spectroscopic redshift was required for inclusion in this dataset; these were not available for all CoNFIG sources, resulting in an initial reduction to 638 sources. Compact and uncertain sources were removed, as the lack of definite FR class rendered them useless for a binary FR dataset. The remaining FRI and FRII sources are labelled as either "confirmed" or "possible" in their classification within CoNFIG, via additional visual inspection using either the VLBA calibrator list (Beasley et al. 2002; Fomalont et al. 2003; Petrov et al. 2005, 2006; Kovalev et al. 2007) or the Pearson-Readhead survey (Pearson & Readhead 1988); only "confirmed" sources were included in this dataset. Although the authors do not explicitly state as much, it is likely that, by rejecting sources with only "possible" FR classification, many if not all of the hybrid sources within CoNFIG will have been removed from FR-DEEP, reducing the risk of "confusing" morphologies being present. With this complete, a set of 50 FRIs and 390 FRIIs was deemed suitable for use.
The reduced CoNFIG sample was then combined with the 219 FRICAT FRI sources. As three of the FRIs were found in both catalogues, this resulted in a final population of 266 FRIs and 390 FRIIs. As this represents a split of approximately 40% FRI, 60% FRII, it is a noticeably imbalanced dataset still; Tang et al. (2019) acknowledge this, and suggest that FR-DEEP should be augmented so to result in a more even balance between the classes.
#### 7.3.2 Construction
The data processing methods used in FR-DEEP, as briefly mentioned in Section 4, were used as a model for MiraBest. As such, the steps performed to prepare images for inclusion in FR-DEEP are broadly identical, and Section 4 should be referred to for full detail on these; we will here identify principally where the methods used for FR-DEEP differ from MiraBest.
FR-DEEP was designed for transfer learning, to evaluate the performance of a classifier trained upon images from one survey when used to classify images from a different survey. Consequently, two variants were created using the same list of sources: FRDEEPF, which uses data from FIRST, and FRDEEPN, which uses data from NVSS.
Image data for both sets of sources were initially downloaded from SkyView Virtual Observatory as 300\(\times\)300 pixel images centred upon the FR source; this corresponds to an angular size of 540\({}^{\prime\prime}\) for FIRST and 4500\({}^{\prime\prime}\) (75\({}^{\prime}\)) for NVSS. As both NVSS and FIRST images had radio background noise present, images underwent a sigma-clipping process identical to that used on MiraBest to minimise this noise, and were scaled using a likewise identical method.
At this stage, the dataset was augmented via image rotation to create a far larger quantity of images with a more even split between classes. FRI images were rotated in 1\({}^{\circ}\) increments between 1\({}^{\circ}\) and 73\({}^{\circ}\), while FRII images were rotated between 1\({}^{\circ}\) and 50\({}^{\circ}\), resulting in approximately 20,000 images of each class. As there is no preferred orientation for FR galaxies in space, this method of augmentation is an effective way to artificially increase the number of images that can be used in a dataset; however, its being performed during dataset construction means that any later attempts to augment the dataset via rotation have a risk of effectively cancelling this rotation out, resulting in multiple identical images being present in the doubly-augmented dataset. If any further data augmentation were deemed desirable, rotation should be avoided in favour of methods such as flipping, cropping, and scaling the sources.
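As an illustration, rotation augmentation of this kind can be sketched with scipy; the 1-degree increments and angle ranges follow the description above, but this is not the FR-DEEP code itself.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_augment(image: np.ndarray, max_angle_deg: int) -> list:
    """Return rotated copies of an image in 1-degree increments
    (1-73 degrees for FRI images, 1-50 degrees for FRII images)."""
    return [rotate(image, angle, reshape=False, order=1)
            for angle in range(1, max_angle_deg + 1)]

# fri_augmented = rotation_augment(fri_image, 73)
# frii_augmented = rotation_augment(frii_image, 50)
```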
Once the dataset had been augmented, all of the images were clipped down to a size of 150 \(\times\) 150 pixels, again centred on the FR source. Unlike MiraBest, no circular mask was used to remove data at a radius of > 75 pixels from the image centre; this potentially allows for a slightly greater proportion of bright background sources to have been included, but as discussed in Section 4.2 the inclusion of such sources is not generally expected to have deleterious effects upon a dataset. The images were then converted into PNGs, and the finalised dataset was separated into a training set of 27,447 images and a test set of 11,690 images, equivalent to an approximately 70%/30% test-train split. As FR-DEEP was designed for binary classification, it expresses its labels as vectors: [1,0] for FRIs, and [0,1] for FRIIs. This is functionally equivalent to MBFRFull's system of labelling FRIs as class 0 and FRIIs as class 1.
#### 7.3.3 Comparison to MiraBest
In its unaugmented form, FR-DEEP consists of a total of 656 images, and hence contains around half as many sources as MiraBest, and has a noticeably more imbalanced FRI/FRII split of 40%/60% vs. MiraBest's 48%/52%. As mentioned in Section 7.3.1, augmentation was viewed by Tang et al. (2019) as a necessary step to ensure both a better balance between classes and to create a dataset of size comparable to other contemporary machine learning datasets. As many commonly used models are not equivariant, they will identify each rotated image as a completely unique source, and thus augmenting the dataset via rotation is not expected to introduce overfitting. However, it is nonetheless apparent that more information about the variety of morphologies in FR galaxies may be obtained from having a greater number of unique sources than by providing multiple rotated images of the same source; consequently, MiraBest may be considered to provide a broader representation of the FR population as a whole.
FR-DEEP does not account for morphological variation in its sources beyond the binary FR classes; it is unclear from FRICAT, FRICAT and CoNFIG's source information if any sources with nonstandard morphology were included, or if all sources represent "classic" morphology. If the former is the case, this does not necessarily make FR-DEEP less useful a dataset, as the inclusion of these sources would be expected to result in models flexible enough to be able to identify nonstandard morphology as belonging to the appropriate FR class, but it would reduce the ability to identify any characteristics that the model might associate with such subclasses without manually inspecting and relabelling all images with an appropriate subclass label. If instead the latter is the case, and all images are of "standard" morphology, then no relabelling is necessary; however, given that all images would then be both confidently-labelled and standard-morphology, it might suffer from the effects of being overly curated - namely, resulting in models that are overconfident in their predictions on sources with morphology they have not been exposed to during training. MiraBest's explicit inclusion of uncertainly-labelled sources as well as morphological subclasses allows for direct comparison of model performance on different combinations of subclass and human labelling confidence, permitting a more in-depth study of which types of sources a classifier tends to struggle with and whether this corresponds to similar difficulty in classification by humans.
MiraBest was designed to match the construction methods of FR-DEEP closely enough that the two catalogues could potentially be merged if a yet larger dataset were desirable; besides the addition of a centred mask, MiraBest and FRDEEPF images are identically processed. Doing so, however, would require that the combined catalogue were searched for duplicate images before it was used with any machine learning model. As around 25% of sources in Miraghaei & Best (2017) were noted to also be present in FRICAT and FRIICAT (see Section 7.1), it is to be expected that multiple sources found in MiraBest may also be found in FR-DEEP. While a small quantity of duplicate images would be unlikely to significantly impact any model that trained upon this combined dataset, if as many as 25% of FR-DEEP's images have duplicates in MiraBest then their inclusion is expected to have a negative impact upon the training process, either by providing extra sources that add no new information if both are present in the training set, or by allowing for overfitting if identical images are in the training and test sets. Removing any duplicate sources, then, is considered of significant importance if combining these two datasets. On the whole, if a larger number of FR sources were deemed necessary for machine learning, one of the simplest methods would be to merge MiraBest and FR-DEEP with appropriate filtering to identify possible image duplicates, with the caveat that some effort should be made to investigate if any sources show obviously nonstandard morphology and relabel them accordingly if so.
### Aniyan & Thorat's catalogue
Aniyan & Thorat (2017)'s work presents a dataset of Fanaroff-Riley galaxies of classes FRI and FRII as well as radio galaxies with bent-tail morphologies, using sources from CoNFIG, FRICAT and Proctor (2011)'s catalogue of bent radio galaxies. All image data were obtained from FIRST. The authors did not offer a name for this dataset, but for brevity it will be referred to hereon as AT17.
#### 7.4.1 Source selection
AT17's methodology of initially selecting sources from CoNFIG was similar to Tang et al. (2019), in that they chose to use only images which were labelled by CoNFIG as "confident" FRI or FRII sources, rejecting all compact objects, unclassifiable sources, and sources with less certain morphological class which were labelled with a "possible" FR class, resulting in a sample of 50 FRIs and 390 FRIIs. Likewise, to reduce the imbalance between classes, FRICAT's 219 FRI sources were added. While MiraBest treats bent-tail morphologies as simply being nonstandard morphological variants of FRI galaxies, Aniyan & Thorat (2017) chose to treat them as a third class of radio galaxy; as the other FR galaxies are presumed to show "standard" morphology, they could be considered distinct enough to form their own population. Proctor (2011)'s catalogue provides a number of varieties of bent-tail galaxies, but only those labelled as being both confidently classified and either wide-angle-tail or narrow-tail morphology were included in AT17, for a total of 299 bent-tail sources.
From this initial sample, sources were then inspected, and rejected for inclusion if they showed "strong artefacts" (which are not elaborated on, but are assumed to be inherent in FIRST's data), if multiple sources were visible, or if their angular size was too great for the intended image size. Once this was done, any images found to be present in both the FR sample and the bent-tail sample were removed to prevent confusion from identical sources having multiple labels, and FRICAT sources were visually inspected to remove any additional bent-tail sources. With this done, a final population of 178 FRIs, 284 FRIIs and 254 bent-tail galaxies was obtained.
#### 7.4.2 Construction
The preprocessing methods used by Aniyan & Thorat (2017) were used as a basis for those of FR-DEEP; consequently, image processing follows a very similar method as has been previously discussed. Images of all sources were obtained from FIRST data at an initial size of \(300\times 300\) pixels, and all pixels below \(3\sigma\) of the image mean were clipped. Sigma-clipping to the levels of \(2\sigma\) and \(5\sigma\) were also trialled, but it was found that this resulted in poor accuracy in any classifiers used; while the authors do not state as much, it is assumed that sigma-clipping to \(2\sigma\) is likely to leave some radio background present, leaving machine learning models attempting to identify features in noise, while sigma-clipping to \(5\sigma\) could be expected to remove portions of diffuse emission which are significant features in both FRI and bent-tail sources, destroying useful source data. No normalisation was performed; if not intending to convert an image to PNG or another similar graphics format, normalising to the colour values of said format is not necessary. However, as discussed previously, the inherent flux density properties of FR galaxies are such that foregoing normalisation might lead to a machine learning model largely ignoring morphological features to classify based on maximum flux density; this possibility is not discussed by the authors, so this effect may not be clearly apparent in their work.
In the sample used to create this dataset, FRIs are noticeably underrepresented compared to the other two classes; as with FR-DEEP, the dataset was augmented via rotation, with each image rotated multiple times to produce an approximately even balance of classes, but these augmentations were only to be applied to the training set, with the validation set left unaltered. Consequently, the train-validation split was applied at this stage; 53 FRIs, 57 FRIIs and 77 bent-tail sources (corresponding to around 30% of the total dataset population) were selected for the validation set and excluded from the augmentation process.
Unlike FR-DEEP, the increments at which each training set image was rotated were varied based on class rather than setting differing maximum angles of rotation; FRIs were rotated at increments of 1\({}^{\circ}\), FRIIs at 2\({}^{\circ}\), and bent-tail sources at 3\({}^{\circ}\). The authors also state that versions of the images that had been both flipped and rotated were produced, but do not specify whether these were used in the dataset. Once the training set had been supplemented, all images were again cropped to a final size of \(150\times 150\) pixels.
At this stage, the training set was separated into two portions - a training set and a test set - at proportions of 80%/20%. By using this test set during training, the validation set separated earlier could serve as examples of truly unseen data, acting as a second check against overfitting if necessary. Following this, the final dataset population was approximately 94000 sources in the training set (\(\sim\)36000 FRI, \(\sim\)33000 FRII, \(\sim\)25000 bent-tail), approximately 23000 sources in the test set (\(\sim\)9000 FRI, \(\sim\)7900 FRII, \(\sim\)6400 bent-tail), and 187 sources in the validation set (53 FRIs, 57 FRIIs, 77 bent-tail).
#### 7.4.3 Comparison to MiraBest
In its unaugmented form, AT17 contains a smaller quantity of labelled FRI and FRII sources than both FR-DEEP and MiraBest; its total of 462 FR sources makes it approximately one third the size of MiraBest, and it has a similar class balance to FR-DEEP at 39% FRI/61% FRII. While on the whole its image selection and processing methods are similar to the other two datasets (besides its lack of normalisation), one aspect is significantly different: the treatment of bent-tail sources as a third class.
At present, the exact nature of bent-tail sources is still not completely certain. Some view them as FRI galaxies with unusual morphologies; Miraghaei & Best (2017) are among this group, including wide-angle-tail sources in their catalogue with a FRI label, as do Terni de Gregory et al. (2017). Others view them as potentially being a completely separate population, given that some of their properties often seem more alike FRIIs than FRIs; while constructing WATCAT (a wide-angle-tail galaxy catalogue built to complement FRICAT and FRIICAT), Missaglia et al. (2019) found that at 1.4 GHz wide-angle-tail sources have flux density more comparable to FRIIs, above the classic flux cut-off, but when their radio power is plotted against optical magnitude they instead fall within the region populated by FRIs. Aniyan & Thorat (2017) find that their three-class classifier is much more likely to mislabel bent-tail sources as FRIs than as FRIIs, although as discussed previously this may have been affected by the lack of image normalisation.
While there does not yet appear to be a full consensus as to where bent-tail galaxies fit to the FR dichotomy, it is generally agreed that if not an entirely separate class, bent-tail sources resemble FRIs more closely than either FRII or hybrid sources. Consequently, if we are to treat bent-tail sources as a subset of FRI galaxies (as done within MiraBest), AT17 becomes imbalanced in the opposite direction: over 60% of the unaugmented images are of FRIs, with 60% of that group being bent-tail sources, which does not reflect the true prevalence of this morphology. Because of this imbalance and focus upon only standard FRI, standard FRI, and bent-tail morphologies, AT17 may be considered to be a less diverse catalogue in terms of overall FR morphology than MiraBest, in addition to containing a noticeably smaller population of sources.
Despite sharing broadly the same image processing methods, MiraBest and AT17 are not immediately compatible to be merged into a single dataset because of AT17's lack of scaling and the previously discussed mismatch in their class structure. The former is more readily overcome, as the application of the same scaling method used in both MiraBest and FR-DEEP is still possible to perform on the processed AT17 images, while the latter requires consideration as to the most recent consensus on the properties of bent-tail galaxies. Additionally, as with FR-DEEP, the presence of duplicate sources is a concern; as previously discussed, Miraghaei & Best (2017)'s catalogue shares a noticeable population of sources with FRICAT and FRIICAT, and while it is unclear whether any of the bent-tail sources of Proctor (2011) are likewise present, both catalogues drawing from FIRST survey data means it is entirely possible. Combining these catalogues, then, is not trivial, and this difficulty makes AT17 a less appealing immediate candidate than FR-DEEP for expanding upon the overall source count; however, if a larger population of bent-tail sources is desired for study, AT17 is the most straightforward catalogue to adapt for use alongside MiraBest.
## 8 The MiraBest Dataset Python Class
The structure of the MiraBest dataset mimics that of the widely used MNIST (LeCun & Cortes, 2010) and (e.g.) CIFAR (Krizhevsky, Nair & Hinton, 2014) datasets that are used with popular deep learning software packages such as Pytorch (Paszke et al., 2019) and Keras (Chollet et al., 2015).
A Python class is provided with the dataset itself in order to facilitate its use with these packages, with a structure inherited from the Pytorch data.Dataset class. When using this class there is no need to download the MiraBest dataset independently, as the data loader will pull a remote copy automatically if no local instance is found. As for the Pytorch MNIST and CIFAR data loaders, a checksum is used in order to avoid corrupted versions of the dataset being created.
The metadata for each sample includes both a "label" and a "fine label", where the label indicates the binary FRI/FRII classification, see Table 3, and the fine label indicates the morphological sub-classification, see Table 2. Child classes for the sub-samples MiraBest Confident, MiraBest Uncertain and MiraBest Hybrid, see Table 3, are also included in the Dataset class.
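A minimal usage sketch is given below; it assumes the class is importable as `MiraBest` and follows the torchvision MNIST/CIFAR constructor convention (root, train, transform, download), so the module path and exact argument names should be checked against the released class.

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from mirabest import MiraBest  # module path is an assumption; adjust to where the provided class lives

transform = transforms.ToTensor()

# download=True pulls a remote copy if no local instance is found
train_data = MiraBest(root="./mirabest", train=True, download=True, transform=transform)
test_data = MiraBest(root="./mirabest", train=False, download=True, transform=transform)

train_loader = DataLoader(train_data, batch_size=32, shuffle=True)
for images, labels in train_loader:
    pass  # feed each batch of (image, label) pairs to a model here
```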
### Dataset normalisation
For deep learning applications it is standard practice to normalise individual data samples from the data set by shifting and scaling as a function of the mean and variance calculated from the full training set (LeCun et al., 2012). The normalisation parameters for the MiraBest training dataset and its derivative training datasets are listed in Table 4.
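In a PyTorch workflow this typically takes the form of a `Normalize` transform; the mean and standard deviation below are placeholders and should be replaced with the values listed in Table 4 for the training set in use.

```python
from torchvision import transforms

# Placeholder statistics: substitute the per-dataset mean and standard
# deviation given in Table 4 for the training set being used.
MEAN, STD = (0.008,), (0.05,)

transform = transforms.Compose([
    transforms.ToTensor(),                     # single greyscale channel scaled to [0, 1]
    transforms.Normalize(mean=MEAN, std=STD),  # shift and scale by the training-set statistics
])
```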
## 9 CRUMB: Collected Radiogalaxies using Mirabest
All of the FR galaxy datasets discussed in this work use image data from the VLA FIRST survey, and as such it is possible to combine them into a single dataset without encountering problems as a result of differing survey properties, such as (e.g.) angular resolution. Doing this would allow for a larger number of sources to be used for training and, if the parent dataset of each source were to be labelled, allow for direct comparison of performance on each of these datasets. However, this cannot be done by simply merging the datasets together; both FR-DEEP and AT17 draw from the same catalogues for their sources, meaning that a simple merge would not only result in duplications of sources but potentially those duplications having multiple different labels, which would effectively result in label noise. This motivated the creation of the Collected Radiogalaxies Using MiraBest (CRUMB) dataset, which is a cross-matched combination of MiraBest, FR-DEEP, AT17 and MiraBest Hybrid. This dataset retains not only a record of which parent datasets each source can be found in but their label in each of these datasets, and hence offers the ability to select labels from the user's catalogue of choice.
### Constructing CRUMB
To construct CRUMB, the full lists of the sources used in FR-DEEP (Tang, 2019) and AT17 (Aniyan & Thorat, 2017) were cross-matched with the sources in MiraBest and MiraBest Hybrid to identify duplicates. Since the location of the source centre was not expected to be exactly consistent between different catalogues, duplicates were found by searching for sources which had coordinates within 270 arcseconds of one another, i.e. within the same image using MiraBest's image size, and checking whether the coordinates aligned with the same source using visual inspection. Using this method, a total of 2100 unique sources were found to exist when combining these four datasets, with 541 being present in more than one dataset.
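A minimal sketch of such a coordinate-based duplicate search is given below, using astropy for the angular matching; the catalogue column conventions (RA/Dec in degrees) are assumptions, and, as described above, candidate matches would still be confirmed by visual inspection:

```python
# Sketch of a 270-arcsecond cross-match between two source catalogues (RA/Dec in degrees).
import astropy.units as u
from astropy.coordinates import SkyCoord, search_around_sky

def candidate_duplicates(ra1, dec1, ra2, dec2, radius_arcsec=270.0):
    """Return (index_in_cat1, index_in_cat2, separation_arcsec) for all close pairs."""
    c1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    c2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    idx1, idx2, sep2d, _ = search_around_sky(c1, c2, radius_arcsec * u.arcsec)
    return list(zip(idx1, idx2, sep2d.arcsec))
```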
For this combined sourcelist, the class labels of sources which appear in more than one dataset were examined to check for disagreements in FR class between different datasets. The vast majority of the duplicate sources (518 of 541) were found to have been classified with the same overall FR class in all datasets, with 470 also agreeing on morphological subclass. A small number of sources (15) showed clearly contradictory labels, with sources labelled as FRI in one dataset being labelled as FRII in another. This demonstrates the label noise issue that can arise if merging machine learning catalogues without checking for duplicate sources, which if not addressed would result in models learning multiple labels for the same source.
To allow this ambiguity in class to be retained, CRUMB uses a labelling system which provides both a "basic" and a "complete" label. The "complete" label is represented by a vector with four entries, each of which gives the source's class label in one of the four parent datasets, as shown in Table 5. If a source is not present in a given dataset, it is denoted with "-1" in the relevant entry. This allows multiple class labels to be registered; for example, a vector of [0, -1, 2, -1] would correspond to a source which is labelled as a confident standard-morphology FRI in MiraBest, a bent source in AT17, and is not present in FR-DEEP or MiraBest Hybrid.
Additionally, each source is labelled with one of three "basic" labels: FRI (0), FRII (1) and hybrid (3). These labels are assigned by the majority label across all the datasets a source appears in; in the case of two contradictory labels, we favour the label provided by MiraBest. Using this method, all sources labelled by AT17 as "bent" are folded into the FRI class, and a total of 1009 FRIs, 998 FRIIs and 97 hybrid sources are included in CRUMB.

Table 4: Mean and standard deviation of the MiraBest training dataset and its derivatives.

| Dataset | \(\mu\) | \(\sigma\) |
| --- | --- | --- |
| MiraBest (full) | 0.0031 | 0.0352 |
| MiraBest Confident | 0.0031 | 0.0350 |
| MiraBest Uncertain | 0.0031 | 0.0356 |
| MiraBest Hybrid | 0.0036 | 0.0375 |
| CRUMB | 0.0029 | 0.0341 |
Images in the CRUMB dataset were processed in the same manner as for MiraBest. Because of the ambiguity in "true" label and lack of information on redshift and angular size for many sources, image file names are formatted as "[source right ascension]_[source declination]" for consistency across this combined dataset. Both the file name and complete label may be retrieved for any source using the built-in "filename" and "complete_labels" methods.
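The listing below sketches how the "complete" label vector described above could be collapsed into a "basic" FRI/FRII/hybrid label following the majority rule with the MiraBest tie-break; the ordering of parent datasets in the vector and the per-dataset mappings from native labels to an FR class are placeholders (the actual mappings are those of Table 5), so this is illustrative rather than the CRUMB implementation itself:

```python
# Illustrative sketch only: collapsing a CRUMB "complete" label vector into a "basic" label.
# Assumed vector order: [MiraBest, FR-DEEP, AT17, MiraBest Hybrid]; the native-label ->
# FR-class mappings below are placeholders standing in for Table 5.
from collections import Counter

FRI, FRII, HYBRID = 0, 1, 3  # basic labels as used in the text

TO_BASIC = [
    {0: FRI, 1: FRI, 2: FRII},   # MiraBest (placeholder mapping)
    {0: FRI, 1: FRII},           # FR-DEEP (placeholder mapping)
    {0: FRI, 1: FRII, 2: FRI},   # AT17, with "bent" (2) folded into FRI (placeholder)
    {0: HYBRID},                 # MiraBest Hybrid (placeholder mapping)
]

def basic_label(complete):
    """complete: length-4 vector, with -1 marking 'not present in this parent dataset'."""
    votes = [TO_BASIC[k][v] for k, v in enumerate(complete) if v != -1]
    counts = Counter(votes).most_common()
    tied = len(counts) > 1 and counts[0][1] == counts[1][1]
    if tied and complete[0] != -1:
        return TO_BASIC[0][complete[0]]  # contradictory labels: favour MiraBest
    return counts[0][0]

print(basic_label([0, -1, 2, -1]))  # MiraBest confident FRI + AT17 bent source -> FRI (0)
```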
## 10 Use of Mirabest in the Literature
The first use of MiraBest was made by Bowles et al. (2021) who demonstrated that an attention-gated CNN model recovered a 92% accuracy on the MiraBest Confident test set (84% accuracy on the full MiraBest test set including uncertain samples), exceeding the 88% classification accuracy attained using the FRDEEP-F dataset by Tang et al. (2019). Scaife & Porter (2021) used the MiraBest dataset to demonstrate that classification performance is modestly improved by enforcing both cyclic and dihedral equivariance in the convolution kernels of a CNN for FR classification and that E(2)-equivariant models were able to reduce variations in model confidence as a function of galaxy orientation. Slijepcevic et al. (2022) explored the effect of dataset shift in semi-supervised learning (SSL) by combining labelled data from MiraBest with a larger unlabelled data pool from the Radio Galaxy Zoo catalogue (Wong et al. in prep), demonstrating that when different underlying catalogues drawn from the same radio survey are used to provide the labelled and unlabelled data sets required for SSL, a significant drop in classification performance is observed. In Mohan et al. (2022) the uncertainty associated with classification of individual data samples within the MiraBest dataset was explored using Bayesian deep learning, confirming that the machine learning model was less confident about the samples qualified as Uncertain by the MiraBest labelling scheme than those labelled as Confident, and that this was amplified for samples labelled as Hybrid.
## 11 Conclusions
Machine learning datasets of astronomical data often have different requirements than astronomical datasets used for other purposes. For image classification, principal amongst these requirements is having reliably-labelled data that either exists in large enough quantities to not necessitate augmentation, or exists in smaller quantities that may be augmented in such a way that the dataset size can be artificially increased without resulting in overfitting. These requirements often result in astronomical ML datasets being created for the specific research needs of a small number of individuals and not being made readily available for broader use, as it is assumed that others wishing to construct a similar dataset will likewise independently seek out suitable sources which meet their needs.
Fanaroff-Riley galaxy classification has previously been performed using machine learning, but the majority of existing catalogues of FR galaxies have not been used to produce publicly accessible image datasets. Datasets which have been made accessible, such as FR-DEEP, were found to be limited to only binary FR sources and to contain fewer images - \(\mathcal{O}(10^{2})\) - than the largest current catalogues of FR galaxies, which consist of \(\mathcal{O}(10^{3})\) examples. Because of this, we felt the need to produce a new publicly accessible machine learning dataset for Fanaroff-Riley galaxies, and created MiraBest for this purpose.
At time of writing, MiraBest is believed to be the largest publicly available image dataset labelled according to the FR classification and most diverse in terms of inclusion of rarer morphologies. Additionally, the option of including the more morphologically ambiguous data represented by the "uncertainty-labelled" images means that MiraBest may be considered a less curated dataset than many other image classification datasets, which largely present only clear and unambiguous images of the target classes. Because of this, MiraBest is suitable for examining the ability of classifiers to identify unusual and ambiguous sources, and whether the inclusion of these sources in a model's training data helps or hinders performance both on the whole and in ability to recognise these unusual sources in particular.
## Acknowledgements
FP gratefully acknowledges support from STFC and IBM through the iCASE studentship ST/P006795/1. AMS gratefully acknowledges support from an Alan Turing Institute AI Fellowship EP/V030302/1. The authors would also like to thank Hongming Tang and Kshitij Thorat for their assistance in providing the source lists used in their datasets.
## Data Availability
MiraBest has been made accessible for public download via the Zenodo website ([https://doi.org/10.5281/zenodo.4288837](https://doi.org/10.5281/zenodo.4288837); DOI: 10.5281/zenodo.4288837), allowing it to be used for any research applications using FR galaxies. Information about its construction has also been provided, permitting its integration with other datasets if desired and making it possible to supplement it with additional FR galaxies processed in an identical format if other large catalogues of these galaxies should become available in the future. CRUMB has likewise been made accessible for public download via Zenodo ([https://doi.org/10.5281/zenodo.7948346](https://doi.org/10.5281/zenodo.7948346), DOI: 10.5281/zenodo.7948346).
|
2302.13960 | Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition | We develop novel methodology for active feature acquisition (AFA), the study
of how to sequentially acquire a dynamic (on a per instance basis) subset of
features that minimizes acquisition costs whilst still yielding accurate
predictions. The AFA framework can be useful in a myriad of domains, including
health care applications where the cost of acquiring additional features for a
patient (in terms of time, money, risk, etc.) can be weighed against the
expected improvement to diagnostic performance. Previous approaches for AFA
have employed either: deep learning RL techniques, which have difficulty
training policies in the AFA MDP due to sparse rewards and a complicated action
space; deep learning surrogate generative models, which require modeling
complicated multidimensional conditional distributions; or greedy policies,
which fail to account for how joint feature acquisitions can be informative
together for better predictions. In this work we show that we can bypass many
of these challenges with a novel, nonparametric oracle based approach, which we
coin the acquisition conditioned oracle (ACO). Extensive experiments show the
superiority of the ACO to state-of-the-art AFA methods when acquiring features
for both predictions and general decision-making. | Michael Valancius, Max Lennon, Junier Oliva | 2023-02-27T17:02:11Z | http://arxiv.org/abs/2302.13960v1 | # Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition
###### Abstract
We develop novel methodology for active feature acquisition (AFA), the study of how to sequentially acquire a dynamic (on a per instance basis) subset of features that minimizes acquisition costs whilst still yielding accurate predictions. The AFA framework can be useful in a myriad of domains, including health care applications where the cost of acquiring additional features for a patient (in terms of time, money, risk, etc.) can be weighed against the expected improvement to diagnostic performance. Previous approaches for AFA have employed either: deep learning RL techniques, which have difficulty training policies in the AFA MDP due to sparse rewards and a complicated action space; deep learning surrogate generative models, which require modeling complicated multidimensional conditional distributions; or greedy policies, which fail to account for how joint feature acquisitions can be informative together for better predictions. In this work we show that we can bypass many of these challenges with a novel, nonparametric oracle based approach, which we coin the acquisition conditioned oracle (ACO). Extensive experiments show the superiority of the ACO to state-of-the-art AFA methods when acquiring features for both predictions and general decision-making.
## 1 Introduction
An overwhelming bulk of efforts in machine learning are devoted to making predictions when given a fully observed feature vector, \(x\in\mathbb{R}^{d}\). Although this paradigm has produced impressive models, it largely ignores that the collection of features comes at a cost, since it serves up all the possible features up front (requiring a potentially expensive complete collection of features for all instances). In contrast, real-world scenarios frequently involve making judgements with some features or information missing. In a conceptual leap beyond conventional imputation and partially observed methodology, here we note that many situations allow for the dynamic collection of information on an instance at inference time. That is, often one may query for additional information (features) that are currently missing in an instance to make a better prediction. This is especially so in an automated future, where autonomous agents can routinely interact with an environment to obtain more information as they make decisions. In this work we develop novel, more performant methodology for _active feature acquisition_ (AFA), the study of how to sequentially acquire a dynamic (on a per instance basis) subset of features that minimizes acquisition costs whilst still yielding accurate predictions.
Consider the following illustrative applications where AFA is useful. First, we consider an automated survey system to perform psychological assessments. A traditional ML approach would require collecting responses on an exhaustive list of survey questions, and only after all responses (features) are collected would one make a prediction of the correct assessment. However, administering a long survey is slow, may lead to user fatigue (Early et al., 2016), and may even decrease the accuracy of responses (Early et al., 2016). Instead, an AFA approach would _sequentially_ decide what next question (if any) to prompt the user with to help it make its assessment, ultimately leading to a prediction with a succinct, personalized subset of features per instance. Note that in contrast to traditional feature selection, which would always select the same subset of questions, an AFA approach will determine a custom subset of questions (of potentially different cardinalities) to ask on a per case basis. This is because, in AFA, the next feature (answer to a question) to acquire can depend on the values of previous acquisitions. For example, if an instance answers "no" to "Are you sleeping well?" the next acquisition may be for "Did you go outside today?"; had they answered "yes" instead, the next acquisition could be for "Are you eating well?". Similar applications are possible for educational assessments and automated trouble-shooting systems. In cyberphysical systems, AFA approaches can be used to determine what sensors are necessary (and when they are necessary), potentially reducing latency and extending battery life.
In this work, we introduce the Acquisition Conditioned Oracle (ACO), a nonparametric approach to perform active feature acquisition. Through a careful design, ACO presents a novel yet simple approach that has many advantages over predecessors. Namely, ACO is a: non-greedy, deployable oracle that can be used to train a parametric policy, all without the challenging task of reinforcement learning (Minsky, 1961; Sutton, 1988; Shim et al., 2018) or generative surrogate model training (Gu et al., 2016; Ma et al., 2018; Li and Oliva, 2021), whilst achieving better empirical results. In addition to active feature acquisitions for making predictions, we generalize the ACO with novel techniques for active feature acquisition when making _decisions_ to maximize expected outcomes.
Our contributions are as follows: 1) we develop the ACO, a novel, non-greedy oracle that is based on conditional dependencies in feature and labels; 2) we develop a data-driven approximation to the ACO based on simple nonparametric techniques; 3) we develop a simple approach to train a parametric policy for AFA using the ACO; 4) we extend the ACO to decision-making tasks where the policy attempts to maximize expected outcomes based on actively acquired features; 5) we show the empirical effectiveness of our approach compared to more complicated state-of-the-art (SOTA) deep learning based AFA models.
## 2 Background
We now expound on the formal definition of the AFA problem. Throughout, we consider underlying instances \(x\in\mathbb{R}^{d}\) and corresponding labels \(y\). We denote the \(i^{\rm th}\) feature as \(x_{i}\in\mathbb{R}\), and a subset of feature values \(o\subseteq\{1,\ldots,d\}\) as \(x_{o}\in\mathbb{R}^{|o|}\). At a high level, the AFA problem is concerned with a sequential, dynamic acquisition of salient features, such that one can make a successful prediction based only on the acquired feature values. As several previous works have noted (Zubek and Dietterich, 2002; Ruckstiess et al., 2011; Shim et al., 2018), AFA can be neatly encapsulated as the following MDP: states \(s=(x_{o},o)\) are comprised of the currently observed features \(o\), and the respective values \(x_{o}\); actions \(a\in(\{1,\ldots,d\}\setminus o)\cup\{\phi\}\) indicate whether to acquire a new feature value, \(a\in\{1,\ldots,d\}\setminus o\), or to terminate with a prediction, \(a=\phi\); when making a prediction (\(a=\phi\)) rewards are based on a supervised loss \(-\ell(\hat{y}(x_{o},o),y)\) of a prediction based on observed features \(\hat{y}(x_{o},o)\) and the ground truth label \(y\), otherwise the reward is a negative cost of acquiring another feature onto \(o\), \(-c(a,o)\) (commonly at a constant cost \(c(a,o)=\alpha\)); lastly, for non-terminal actions, the state transitions \((x_{o},o)\rightarrow(x_{o\cup\{a\}},o\cup\{a\})\). Note that predicting \(y\) is done via an estimator \(\hat{y}(x_{o},o)\) (abbreviated as \(\hat{y}(x_{o})\)) that is able to take in various (arbitrary) subsets of features to infer the label. In practice, this estimator is often trained ahead of time (and possibly fine-tuned) using a neural network with a masking scheme (Li and Oliva, 2021) or set-embedding of observed features (Ma et al., 2018).
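To make the structure of this MDP concrete, the sketch below mirrors the description above (constant per-feature acquisition cost, and a terminal "predict" action rewarded by the negative supervised loss); the partially-observed estimator `y_hat` and the loss function are assumed to be given, and the interface is illustrative rather than taken from any of the cited implementations:

```python
# Minimal sketch of the AFA MDP described above. States are (acquired values, index set);
# actions either acquire a feature index or terminate with a prediction. `y_hat` and
# `loss_fn` are assumed given; the constant cost alpha plays the role of c(a, o).
PREDICT = "phi"  # terminal "make a prediction" action

class AFAEnv:
    def __init__(self, x, y, y_hat, loss_fn, alpha=0.05):
        self.x, self.y = x, y
        self.y_hat, self.loss_fn, self.alpha = y_hat, loss_fn, alpha
        self.o = set()  # indices of features acquired so far

    def state(self):
        return {i: self.x[i] for i in self.o}, frozenset(self.o)

    def step(self, action):
        if action == PREDICT:
            x_o, o = self.state()
            reward = -self.loss_fn(self.y_hat(x_o, o), self.y)  # terminal reward
            return self.state(), reward, True
        self.o.add(action)                        # acquire feature `action`
        return self.state(), -self.alpha, False   # pay the (constant) acquisition cost
```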
Although an agent policy \(\pi(x_{o},o)\) for the aforementioned MDP may be trained directly using reinforcement learning techniques (Ruckstiess et al., 2011; Shim et al., 2018), the large action space and sparse reward structure make this a difficult RL task. To assuage these issues, recent work (Li and Oliva, 2021) has employed a learned surrogate generative model and model-based RL techniques to improve the learned agent policies. Other approaches have bypassed training a general policy and instead utilize greedy policies to select features. For instance, Ma et al. (2018); Gong et al. (2019) use a generative approach to estimate the expected utility (e.g., mutual information) of any one acquisition, employing a greedy strategy to maximize utility; similarly Zannone et al. (2019) also consider information gain for AFA with limited dataset sizes. Most related to our work, He et al. (2012, 2016) utilize a greedy "oracle" and imitation learning techniques to teach a supervised policy to select features. In § 3 we develop an alternative, non-greedy oracle, which may also be used to supervise a parametric policy.
### Related Works
Before developing our methodology, we briefly compare and contrast AFA to other related directions in ML.
**Feature Selection.** Feature selection (Chandrashekar and Sahin, 2014; Miao and Niu, 2016; Li et al., 2017; Cai et al., 2018) ascertains a _constant_ subset of features that are predictive. Feature selection methods choose a _fixed_ subset of features \(s\subseteq\{1,\ldots,d\}\), and always predict \(y\) using this same subset of feature values, \(x_{s}\). In contrast, AFA considers a _dynamic_ subset of features that is sequentially chosen and personalized on an instance-by-instance basis (Fig. 1) to increase useful information. Feature selection can be seen as a special case of AFA (perhaps a stubborn one) that selects the same next features to observe regardless of the previous feature values it has encountered. (It is worth noting that AFA may be applied after an initial feature selection preprocessing step to reduce the search space.) In general, a dynamic strategy will be helpful whenever covariates are indicative of other features that are dependent on the output, which is often the case in real-world examples.1

Figure 1: Acquisition process (top) and the prediction probabilities (bottom) using the ACO agent on 2 MNIST digits (§ 4.2). Features are acquired sequentially and vary per instance.
Footnote 1: A simple case that illustrates the dominance of an active acquisition strategy is as follows. Suppose that there are \(d-1\) independent features \(x_{i}\) for \(i\in\{1,\ldots,d-1\}\), and one “guiding” feature, \(x_{d}\), which decides what independent feature determines the output \(y\). E.g., if \(x_{d}\) is in some range then \(y\) is a function of \(x_{1}\), if \(x_{d}\) is in another range then \(y\) is a function of \(x_{2}\), etc. If all independent features are used with equal probability, then a typical feature selection algorithm will have to select all \(d\) to obtain a high accuracy (since it must use a fixed subset). In contrast, by dynamically selecting features, AFA can observe only two features: \(x_{d}\), and the corresponding independent feature.
**Active Learning.** Active learning (e.g., Fu et al. (2013), Konyushkova et al. (2017), Yoo and Kweon (2019), MacKay (1992), Houlsby et al. (2011)) is a related task in ML to gather more information when a learner can query an oracle for the _true label_, \(y\), of a _complete_ feature vector \(x\in\mathbb{R}^{d}\) to _build a better estimator_ \(\hat{f}\). Active learning considers queries to an _expert_ for the correct _output_ label to a _complete_ set of features in order to construct a training instance to _build a better classifier_. Instead, AFA considers queries to _the environment_ for the _feature value_ corresponding to an unobserved feature dimension, \(i\), in order to provide a _better prediction on the current instance_. Thus, while the active learning paradigm queries an _expert_ during _training_ to build a classifier with complete features, the AFA paradigm queries the _environment_ at _evaluation_ to obtain unobserved features of a current instance to help its current assessment.
## 3 Methods
We develop our methodology by first designing an oracle that operates over the same information as the AFA policy (an important restriction that has not been previously respected); after, we develop a data-driven approximation to the oracle, and show how to learn a parametric student policy to emulate it. Lastly, we extend our approach to also consider the active acquisition of features when making decisions that maximize expected general outcomes.
### Cheating Oracle
We begin by considering previous retrospective oracle approaches (He et al., 2012, 2016; Madasu et al., 2022), which greedily determine what next feature would ideally be acquired for a particular instance \(x\), with corresponding label \(y\), given an observation set \(o\) of already acquired features. That is, such a retrospective approach will look for the next feature, \(i\), that, when added to the already acquired features, \(o\), results in the best improvement of a loss \(\ell\) (e.g., MSE or cross-entropy) using an estimator \(\hat{y}\):
\[i=\operatorname*{arg\,min}_{j\in\{1,\ldots,d\}\setminus o}\ell\left(\hat{y}(x _{o\cup\{j\}}),y\right). \tag{1}\]
Although this approach has been previously described as an "oracle" (e.g., the "forward-selection oracle," He et al. (2012)), such a designation is somewhat misleading, as this search operates over information that is not available to the agent. Note that to select the next action, eq. (1) will try the classifier (or regressor) \(\hat{y}\) on the instance \(x\) with the instance's values \(x_{o\cup\{j\}},\;\forall j\in\{1,\ldots,d\}\setminus o\) (for yet to be acquired features), and compare the predictions to the instance's true label \(y\); this requires complete knowledge about \(x\) (as well as its true label \(y\)). Thus, in actuality, the resulting action from eq. (1) depends on inputs \(x\) (the entire feature vector), \(y\) (the respective true label), and \(o\) (the already acquired feature indices): \(\pi_{\mathrm{cheat}}(x,y,o)=\operatorname*{arg\,min}_{j\in\{1,\ldots,d\} \setminus o}\ell\left(\hat{y}(x_{o\cup\{j\}}),y\right)\). As an oracle is meant to provide an optimum action for the same state space, we see that eq. (1) "cheats," since it operates over more information than the AFA agent has available, \(\pi(x_{o},o)\).
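For concreteness, a sketch of the greedy retrospective search of eq. (1) is given below; the point to notice is that it requires the full feature vector `x` and the true label `y`, which the deployed AFA agent never has (the function signature here is illustrative):

```python
# Sketch of the greedy "cheating" oracle of eq. (1): it evaluates the classifier on
# values x[j] that have not actually been acquired, and compares against the true y.
def greedy_cheating_oracle(x, y, o, y_hat, loss_fn, d):
    """Return the single next feature index whose addition to `o` minimizes the loss."""
    candidates = [j for j in range(d) if j not in o]

    def loss_if_added(j):
        o_new = set(o) | {j}
        x_new = {i: x[i] for i in o_new}  # uses the unacquired value x[j] -- "cheating"
        return loss_fn(y_hat(x_new, o_new), y)

    return min(candidates, key=loss_if_added)
```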
Although the cheating oracle offers an intuitive appeal and variants of it have been used as a reference teacher policy in AFA (He et al., 2012, 2016; Madasu et al., 2022), here we develop an alternative oracle approach that:
1. **Is Non-greedy**: Previous retrospective approaches have looked for one single acquisition to make at a time. However, once the agent decides to make a prediction, it uses its acquired features _jointly_ to estimate the output. Greedily acquiring the next best feature ignores jointly informative groups of multiple features that may be acquired. Thus, a better strategy (one that we will employ) considers acquisitions _collectively_ for the final prediction.
2. **Respects Sequential Acquisition**: The cheating oracle operates over more information than is accessible to an AFA agent, since it utilizes not-yet-acquired features (and the true label) while evaluating feature subsets and implicitly uses this information to select the next feature. In contrast, the AFA agent \(\pi(x_{o},o)\) only has access to the already acquired feature values \(x_{o}\) when selecting the next action. That is, the agent cannot make a decision based on information it has not acquired. In order to more directly provide guidance on the correct acquisition action to take when provided with _partially observed_ information on an instance, we shall define a non-cheating oracle that operates only over acquired feature values, which should ultimately provide more accurate targets to a learned student policy.
3. **Is Deployable**: In principle, an oracle should be deployable in one's environment, as it should be able to yield the correct action given _the same information_ (the same states) as the agent. As a consequence of utilizing information that is not available during a roll-out, the cheating oracle is not deployable at inference time, making it difficult to judge its performance. Instead, our approach shall yield a deployable oracle, making it possible to judge the performance of the oracle at test time and disentangling any degradation in performance due to imitation-learning.
### Acquisition Conditioned Oracle
Here we now define our non-cheating, deployable oracle, which we coin the _acquisition conditioned oracle_ (ACO). To begin, we define the ACO in terms of a full distributional knowledge over features and labels \(p(x,y)\) (including all marginal and conditional distributions), and later describe approximations based on training data. First, we generalize the greedy search in eq.1 to a non-greedy approach using a weighted (by \(\alpha>0\)) cost \(c\) for subsets:
\[u(x,y,o)=\operatorname*{arg\,min}_{v\subseteq\{1,\ldots,d\}\setminus o}\ell\left( \hat{y}(x_{o\cup v}),y\right)+\alpha c(o\cup v). \tag{2}\]
The (still "cheating") eq.2 now considers additional subsets of features \(u\subseteq\{1,\ldots,d\}\setminus o\) atop of \(o\) to best reduce the loss along with the cost of the additional features \(c(o\cup u)\). The search-space is now much larger than the greedy search eq.1; below we describe effective strategies to navigate the large search. In addition, eq.2 still operates with unacquired features of \(x\) in \(\{1,\ldots,d\}\setminus o\), and thus is still not an apples-to-apples (in the state-space) oracle to the AFA policy. We propose an intuitive, straightforward, and ultimately effective, _novel_ approach to avoid utilizing unacquired features: use the conditional distribution (conditioned on acquired features) over labels and unacquired features,
\[u(x_{o},o)=\operatorname*{arg\,min}_{v\subseteq\{1,\ldots,d\}\setminus o}\mathbb{E}_{\mathbf{y},\mathbf{x}_{v}|x_{o}}[\ell(\hat{y}(x_{o},\mathbf{x}_{v}),\mathbf{y})]+\alpha c(o\cup v). \tag{3}\]
At a high-level, this approach imagines _likely scenarios_ of the unacquired feature values and labels (based on the acquired features and their values), and determines additional subsets of features to acquire based on these imagined values. Whilst a natural choice (in retrospect) for how to determine useful acquisitions without "cheating," we emphasize the novelty of this oracle. Whenever the ACO minimization eq.3 returns \(u(x_{o},o)=\varnothing\), it is clear that there are no further acquisitions that are worth the cost. Below we discuss alternatives for when \(u(x_{o},o)\neq\varnothing\), which indicates that (based on the acquired information) we would like to acquire all the features in \(u(x_{o},o)\) jointly to make a prediction. As the typical AFA MDP acquires one feature at a time, a simple approach is to uniformly select one acquisition from the subset \(u(x_{o},o)\). That is, we may define the ACO's policy (Alg. 1) as (with notational abuse for clarity):
\[\pi_{\mathrm{ACO}}(x_{o},o)=\mathbb{I}\{u(x_{o},o)=\varnothing\}\,\hat{y}(x_{o})+\mathbb{I}\{u(x_{o},o)\neq\varnothing\}\,\mathrm{Unif}[u(x_{o},o)]. \tag{4}\]
As the loss is in expectation over the unobserved label and features eq.4, the ACO avoids "cheating" and plugging in unacquired values not available during inference as eq.1.
```
Input: Initial observed features o (possibly o = ∅), instance values x_o,
       joint distribution p(x, y), estimator ŷ
Initialize predict := false
while |o| < d and not predict do
    u(x_o, o) := argmin_{v ⊆ {1,...,d}\o} E_{y, x_v | x_o}[ ℓ(ŷ(x_o, x_v), y) ] + α c(o ∪ v)
    if u(x_o, o) = ∅ then
        predict := true
    else
        j ~ Unif[u(x_o, o)],  o := o ∪ {j}
    end if
end while
Return prediction ŷ(x_o)
```
**Algorithm 1** Acquisition Conditioned Oracle
### Approximate ACO
There are two main limitations that make the ACO minimization eq.3 infeasible in practice: 1) the ground truth data distribution is unknown; and 2) the search space over subsets \(v\subseteq\{1,\ldots,d\}\setminus o\) is large for most dimensionalities. While both problems have many potential solutions, we show (see empirical results, § 4) that simple, straightforward strategies are performant. Throughout, we assume that we have a training dataset \(\mathcal{D}=\{(x^{(i)},y^{(i)})\}_{i=1}^{n}\) of \(n\) input/output tuples as is common in prior AFA approaches.
First, we note that in practice the data distribution, \(p(x,y)\), is unknown and therefore cannot be used in the ACO minimization eq.3 and resulting policy eq.4. Furthermore, since the data distribution must be conditioned as \(p(y,x_{v}|x_{o})\), for \(v\subseteq\{1,\ldots,d\}\setminus o\) (e.g., for the expectation in eq.3), we must be able to condition on _arbitrary_ subsets of features \(o\), which is typically infeasible for many performant generative models (Kingma et al., 2016; Dinh et al., 2017; Oliva et al., 2018). Advances in _arbitrary conditional_ models (Ivanov et al., 2018; Belghazi et al., 2019; Li et al., 2020; Molina et al., 2019; Strauss and Oliva, 2021), which are able to approximate conditional distributions \(p(x_{u}\mid x_{o})\) for arbitrary subsets \(u,o\), and recent work that leverages these _arbitrary conditional_ model to provide a surrogate model-based RL approach for AFA (Li and Oliva, 2021), suggest that one can learn distribution approximators to draw realistic samples over \(\mathbf{y},\mathbf{x}_{v}|x_{o}\), for \(v\subseteq\{1,\ldots,d\}\setminus o\). However, this approach requires learning an expensive generative model, which can be challenging due to computation, hyperparameter-optimization, and sample-complexity. We propose to bypass a complicated generative approach and instead sample labels and unacquired features through neighbors, i.e.
\[\mathbb{E}_{\mathbf{y},\mathbf{x}_{v}|x_{o}}[\ell\left(\hat{y}(x_{o},\mathbf{x}_{v}),\mathbf{y}\right)]\approx\frac{1}{k}\sum_{i\in N_{k}(x_{o})}\ell\left(\hat{y}(x_{o},x_{v}^{(i)}),y^{(i)}\right), \tag{5}\]
where \(N_{k}(x_{o})\) is the set of \(k\) nearest neighbor indices in \(\mathcal{D}=\{(x^{(i)},y^{(i)})\}_{i=1}^{n}\), to \(x_{o}\) (i.e., comparing instances only using features \(o\subseteq\{1,\ldots,d\}\) to values \(x_{o}\) according to a distance function \(d(x_{o},x_{o}^{(j)})\mapsto\mathbb{R}_{+}\)). In practice, as we typically only consider acquiring small subsets \(o\), this approach
provides an effective way to generate useful samples2. A special case arises when \(o=\varnothing\) (when starting the acquisition with no features selected), where all dataset instances are valid neighbors in \(N_{k}(\varnothing)\). In this case, the first feature to acquire can be drawn from some fixed distribution \(j\sim\pi_{0}\) based on \(\operatorname*{arg\,min}_{v\subseteq\{1,\ldots,d\}}\frac{1}{n}\sum_{i=1}^{n}\ell\left(\hat{y}(x_{v}^{(i)}),y^{(i)}\right)+\alpha c(v)\), which needs to be computed only once; alternatively, the first feature to acquire atop of an empty set can be set as \(j\coloneqq j_{0}\), where \(j_{0}\) is a hyperparameter that can be validated.
Footnote 2: Note that special care must be taken to avoid returning an instance as its own neighbor when \(x_{o}\) is from the training data.
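As a minimal sketch of this data-driven approximation, the routines below combine the neighbour-averaged loss of eq. (5) with the randomly subsampled subset search used in Alg. 2 (and described in the next paragraph); array layouts, the distance choice, and the hyperparameter defaults are assumptions:

```python
# Sketch of the approximate ACO search step: eq. (5)'s neighbour-averaged loss plus a
# randomly subsampled search over candidate subsets (cf. Alg. 2). The estimator `y_hat`,
# loss `loss_fn`, training arrays X (n x d) and Y, and the cost function/weight are assumed.
import numpy as np

def knn_indices(X, x, o, k=5):
    """Indices of the k training rows closest to x, compared only on acquired features o."""
    o = sorted(o)
    dists = np.linalg.norm(X[:, o] - x[o], axis=1)
    return np.argsort(dists)[:k]

def neighbour_loss(v, x, o, nbrs, X, Y, y_hat, loss_fn):
    """Eq. (5): average loss when unacquired values in v are 'imagined' from neighbours."""
    obs = sorted(set(o) | set(v))
    total = 0.0
    for i in nbrs:
        x_imagined = x.copy()
        x_imagined[list(v)] = X[i, list(v)]       # borrow candidate values from neighbour i
        total += loss_fn(y_hat(x_imagined[obs], obs), Y[i])
    return total / len(nbrs)

def aco_subset(x, o, X, Y, y_hat, loss_fn, cost, alpha, d, n_subsets=2000, seed=0):
    """Approximately minimise eq. (3) over a random pool of candidate subsets."""
    rng = np.random.default_rng(seed)
    nbrs = knn_indices(X, x, o)
    remaining = np.array([j for j in range(d) if j not in o])
    candidates = [frozenset()] + [
        frozenset(rng.choice(remaining, size=rng.integers(1, len(remaining) + 1), replace=False))
        for _ in range(n_subsets)
    ]
    def score(v):
        return neighbour_loss(v, x, o, nbrs, X, Y, y_hat, loss_fn) + alpha * cost(set(o) | v)
    return min(candidates, key=score)
```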
The proposed ACO policy considers all possible additional subsets of features to append to a current set of features \(o\) eq.3. This results in a number of possible subsets that grows exponentially in the dimensionality, \(d\). When the number of features, \(d\), is modestly sized, it is feasible to search across each subset, however this quickly becomes infeasible for larger dimensionalities. We note that minimizing over \(v\subseteq\{1,\ldots,d\}\setminus o\) can be posed as a discrete optimization problem (as it is equivalent to searching over binary membership indicator vectors); hence, one may deploy a myriad of existing discrete optimization approaches [13] including relaxations [13], and genetic algorithms [15]. Instead, we propose an efficient, simple approach which randomly subsamples potential subsets to search over, \(\mathcal{O}\subseteq\{v|v\subseteq\{1,\ldots,d\}\setminus o\}\). Although this is no longer guaranteed to return the optimal subset, we found that in practice a cardinality of \(|\mathcal{O}|\) in the few thousands typically performed well (see SS 4), indicating that slightly suboptimal subsets were still effective.
Lastly, a subtlety arises in developing an AFA policy from the ACO minimization (3) since AFA policies typically acquire one feature at a time. The ACO minimization returns an additional subset of multiple features, \(u(x_{o},o)\), that are likely to be _jointly_ informative at a relatively low cost given the present information \(x_{o}\). As the oracle suggests that ending up acquiring the features \(u(x_{o},o)\) is beneficial, a sensible heuristic is to choose to acquire any one of the features in \(u(x_{o},o)\) uniformly at random (4). Note that after choosing to acquire a feature \(j\in u(x_{o},o)\), and observing the value \(x_{j}\), the known information changes and the agent is able to update the remaining features that it wishes to acquire (based on a new search with the additional information in \(x_{j}\)). Here we propose another heuristic of "breaking the tie" in features in \(u(x_{o},o)\) according to the single feature in \(u(x_{o},o)\) that most minimizes the expected loss (Alg. 2).
```
Input: Initial observed features o (e.g., o = ∅), instance values x_o,
       data D = {(x^(i), y^(i))}_{i=1..n}, estimator ŷ
Initialize predict := false
while |o| < d and not predict do
    Draw O ⊆ {v | v ⊆ {1,...,d}\o} uniformly
    u(x_o, o) := argmin_{v ∈ O} (1/k) Σ_{i ∈ N_k(x_o)} ℓ(ŷ(x_o, x_v^(i)), y^(i)) + α c(o ∪ v)
    if u(x_o, o) = ∅ then
        predict := true
    else
        o := o ∪ { argmin_{w ∈ u(x_o, o)} Σ_{i ∈ N_k(x_o)} ℓ(ŷ(x_o, x_w^(i)), y^(i)) }
    end if
end while
Return prediction ŷ(x_o)
```
**Algorithm 2** Approximate Acquisition Conditioned Oracle Policy, with Tie-break Selection
### Behavioral Cloning
The approximate ACO policy, \(\hat{\pi}_{\mathrm{ACO}}(x_{o},o)\), defined in Alg. 2 is a valid nonparametric policy in that, for a new instance drawn at test time, it is deployable and can actively acquire features and make a prediction without ever using unacquired features or the instance's label. One may, however, want a parametric policy \(\pi_{\theta}(x_{o},o)\) that is able to succinctly decide what actions to take without having to store and search through the training data (e.g. where \(\pi_{\theta}\) is a parameterized function stemming from neural network weights \(\theta\)). Fortunately, mimicking a teacher policy is the focus of the well studied problem of imitation learning [16], where we are able to leverage algorithms such as DAgger [11] to supervise and train a parametric policy \(\pi_{\theta}(x_{o},o)\) based on the approximate ACO policy \(\hat{\pi}_{\mathrm{ACO}}(x_{o},o)\). In practice, we observed that a simple behavioral cloning approach [10] that directly trains \(\pi_{\theta}(x_{o},o)\) based on roll-outs of \(\hat{\pi}_{\mathrm{ACO}}(x_{o},o)\) was an effective way of supervising the parametric policy. The surprising effectiveness of behavioral cloning is perhaps attributable to the "non-cheating" nature of the ACO, which uses only the same available information as \(\pi_{\theta}(x_{o},o)\) (and can thereby be easily emulated) (see § 4).
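A minimal sketch of this behavioral cloning step is shown below; the state featurisation (fixed-value-imputed features concatenated with an observation mask) and the gradient-boosted-tree student follow the experimental setup described later, but the interface and helper names are illustrative:

```python
# Sketch of behavioural cloning from AACO roll-outs: collect (state, teacher action) pairs
# and fit a multi-class classifier over actions (feature index to acquire, or "predict").
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def featurise(x, o, d, fill=0.0):
    """Impute unobserved entries with a fixed value and append a binary observation mask."""
    mask = np.zeros(d)
    vals = np.full(d, fill)
    for i in o:
        mask[i], vals[i] = 1.0, x[i]
    return np.concatenate([vals, mask])

def clone_policy(rollouts, d):
    """rollouts: list of (x, o, action) triples gathered by rolling out the AACO teacher;
    `action` is a feature index in {0,...,d-1} or d to denote the terminal predict action."""
    S = np.stack([featurise(x, o, d) for x, o, _ in rollouts])
    A = np.array([a for *_, a in rollouts])
    clf = GradientBoostingClassifier().fit(S, A)
    return lambda x, o: int(clf.predict(featurise(x, o, d)[None, :])[0])
```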
### AFA for Decision Making
We now consider an important extension to settings where one wishes to actively acquire features to determine a _decision_ that will maximize an _outcome_, rather than for determining a _prediction_ to match a _label_, as before. For example, this is relevant in a medical context where clinicians need to decide between treatment strategies for a patient. A fundamental challenge in constructing algorithms for decision tasks is that the outcome under only one decision is ever observable for a given instance. Therefore, decision making requires contrasting counterfactuals that are unobserved in the data. The construction of policies to make optimal decisions from data with _fully observed_ contexts has been extensively studied in statistics, computer science, and operations research under the titles dynamic treatment regimes
(DTRs) or individualized treatment rules (ITRs) Murphy (2003); Athey and Wager (2021); Kosorok and Laber (2019).
Below, we develop analogies between components in a causal inference setup and ACO for prediction that enable AFA for general decision making using our framework. Let \(Y(a)\) denote the potential outcome (Rubin, 2005) under decision (intervention) \(a\in\mathcal{A}\), where we assume larger values of \(Y(a)\) are better without loss of generality. It can be shown that, under certain conditions (see Appendix), one can train a partially observed decision-making policy \(\hat{\pi}_{\mathcal{A}}(x_{o})\) which maps observed feature values \(x_{o}\) to interventions to maximize \(\mathbb{E}[Y(\hat{\pi}_{\mathcal{A}}(x_{o}))]\). Note that this decision-making policy \(\hat{\pi}_{\mathcal{A}}(x_{o})\) is analogous to \(\hat{y}(x_{o})\), the classifier given partially observed inputs, since \(\hat{\pi}_{\mathcal{A}}(x_{o})\) similarly maps partially observed inputs to outputs. Moreover, one can also train an estimator, \(\hat{Q}(x,a)\), of \(\mathbb{E}[Y(a)\mid x]\), the expected outcome for an instance \(x\) with action \(a\). \(\hat{Q}(x,a)\) is akin to a loss function \(\ell\) as it judges the effectiveness of an output \(a\) for an instance. Leveraging \(\hat{Q}(x,a)\), and \(\hat{\pi}_{\mathcal{A}}(x_{o})\), it is now possible to utilize an approximate ACO to construct a decision-making acquisition policy \(\pi_{\text{acq}}(x_{o})\), which determines what (if any) new features are worth acquiring in order to make a better decision (one that shall yield a higher expected outcome). Analogously to eq. (3), we may minimize:
\[\underset{v\subseteq\{1,\ldots,d\}\setminus\sigma}{\arg\min}\frac{1}{k}\sum \limits_{i\in N_{k}(x_{o})}-\hat{Q}\left(x^{(i)},\hat{\pi}_{\mathcal{A}}( \left.x^{(i)}_{o\cup v}\right)\right)+\alpha c(o\cup v), \tag{6}\]
which uses \(k\) neighbors \(N_{k}(x_{o})\) in the training set to determine if acquiring new features \(v\) will lead to better decisions (made by \(\hat{\pi}_{\mathcal{A}}\) and assessed using \(\hat{Q}\)). As before, the ACO's policy \(\pi_{\text{acq}}(x_{o})\) stemming from eq. (6) will determine what new features to sequentially acquire until reaching a subset \(o^{\prime}\) for which no new acquisitions are worth the cost, at which time we choose an intervention according to \(\hat{\pi}_{\mathcal{A}}(x_{o^{\prime}})\). To the best of our knowledge, this represents the first oracle based approach for AFA for general decision-making.
## 4 Experiments
Next, we perform extensive experimentation to assess the ACO's performance relative to SOTA AFA methods. Throughout, we compare to: JAFA (Shim et al., 2018), which jointly trains a deep learning RL agent (using Q-Learning) and a classifier for AFA; GSMRL (Li and Oliva, 2021), which learns a generative surrogate arbitrary conditioning model to derive auxiliary information and rewards that are used to train a deep learning agent; and other methods (described below). Please refer to Shim et al. (2018), Li and Oliva (2021) for baseline hyperparameter details. We use an equal cost for each feature and report multiple results with different \(\alpha\) in eq. (3) (as distinct ticks, e.g. Fig. 3) to trade off between task performance and acquisition cost. Code will be made open-source upon publication (see Supp. Mat.).
We found that gradient boosted trees tended to perform well and were therefore used for the ACO policy (AACO, Alg. 2, w/ cross-entropy \(\ell\)) predictor \(\hat{y}\). The training procedure for \(\hat{y}\) varied upon the dimensionality of the features. Small to moderate dimension settings allowed for the separate training of a \(\hat{y}\) for each possible subset of features, leading to a dictionary of predictors. In higher dimensions, we utilized a masking strategy where the feature vector, whose unobserved entries were imputed with a fixed value, was concatenated with a binary mask that indicated the indices of observed entries (Li et al., 2020). During training, these binary masks were drawn at random to simulate making predictions with missing features. While this predictor allows for predictions with arbitrary subsets of features and could be used to make the final predictions, we found better performance by using the aforementioned dictionary-of-predictors approach on the subsets of features available at prediction, which was often significantly smaller than \(2^{d}\) or \(n\) due to the redundant subsets of features acquired. The AACO policy approximates the distribution of \(p(y,x_{u}|x_{o})\) by using the set of \(k\) nearest neighbors. In our experiments, we use \(k=5\), with the exception of the Cube dataset, which exhibited better performance with a larger (\(k=20\)) number of neighbors. Further details are provided in Appendix.
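The masking strategy described above might be sketched as follows (array layout, fill value, and the number of random mask draws are assumptions); the resulting classifier accepts any observation pattern at prediction time:

```python
# Sketch of training y_hat with random observation masks: unobserved entries are imputed
# with a fixed value and the feature vector is concatenated with its binary mask.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def random_mask_dataset(X, Y, n_draws=5, fill=0.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, labels = [], []
    for _ in range(n_draws):
        mask = rng.integers(0, 2, size=X.shape).astype(float)  # random observed/unobserved pattern
        rows.append(np.concatenate([np.where(mask == 1, X, fill), mask], axis=1))
        labels.append(Y)
    return np.vstack(rows), np.concatenate(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 20))              # tiny synthetic stand-in data
    Y_train = (X_train[:, 0] > 0).astype(int)
    X_masked, Y_rep = random_mask_dataset(X_train, Y_train)
    y_hat = GradientBoostingClassifier().fit(X_masked, Y_rep)
```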
### Synthetic Cube Dataset
We begin with a study of the CUBE-\(\sigma\) dataset (as described by Shim et al. (2018)), a synthetic classification dataset designed for feature acquisition tasks. We consider a \(d=20\)-dimensional version with 8 classes. The feature values are a mixture of values drawn from one of 3 distributions: uniform \([0,1]\), \(\mathcal{N}(0,\sigma^{2})\), and \(\mathcal{N}(1,\sigma^{2})\); in our case, we chose \(\sigma=0.3\) in order to match existing comparative baselines. Each data point consists of 17 uniform and 3 normally distributed features; the class label of the point determines the position and mean of the 3 normal features. For a data point of class \(k\), features \(k\) through \(k+2\) will be normally distributed, with means as shown in Fig. 3; remaining features are uniform.
We compare to previous approaches performing AFA on the CUBE-\(0.3\) dataset (as reported by Shim et al. (2018)), which include: JAFA+\(*\) variations (Shim et al., 2018); and RAM (Mnih et al., 2014). The main Q-learning approach described by Shim et al. (2018) trains all of its constituent networks in a joint fashion; the authors also consider ablations in which the jointly trained classifier is replaced with a pre-trained classifier and in which a naive feature encoding is used (keeping the structure of the classifier and value network consistent) instead of a learned set encoder. We also compare to GSMRL, based on their official repository implementation on Cube (Li and Oliva, 2021). As can be seen from Fig. 3, despite not using deep RL methodology, AACO outperforms other SOTA approaches and nears the performance of an AACO policy with the ground truth classifier (AACO-GT), which is available in this synthetic dataset.
### Real World Datasets
Next we perform experiments on real-world datasets stemming from the UCI ML repository and MNIST. In these experiments we additionally compare to: EDDI [11], a greedy policy that estimates the information gain for each candidate feature using a VAE-based model with a set embedding and selects one feature with the highest expected utility at each acquisition step; and GSM+Greedy [12], which similarly acquires features greedily using a surrogate arbitrary conditioning model that estimates the utility of potential feature acquisitions. Results of JAFA, GSMRL, EDDI, GSM+Greedy are reported from Li and Oliva (2021). Due to the higher dimensionality of MNIST, most baselines are unable to scale, and hence we consider a downsampled \(16\times 16\) version where \(d=256\). (See below for an ablation on the full-dimensional MNIST.)
Here, unlike with the synthetic Cube dataset considered above, labels need not be predictable with only a few observed features. Surprisingly, it can be observed that one can make effective predictions using only a few features. Moreover, we see that our ACO approaches are often outperforming the state-of-the-art methods. This is especially impressive considering that most of the baselines utilize complicated deep-learning approaches, whereas AACO utilizes simple nonparametric techniques. Furthermore, note that the performance of the parametric policy AACO+BC is competitive with that of the oracle AACO, despite being supervised using a relatively simple behavioral cloning approach.
### Ablations
Next we perform some ablation experiments to study the effect of the oracle construction and higher dimensionalities.
**Oracle Comparisons.** As previously discussed, the cheating oracle, both in its greedy and non-greedy versions, operates on a different state-space than an AFA policy, since it observes information (\(y\) and \(x_{u}\)) that is not accessible to an AFA agent. Thus, in order to utilize these cheating oracles, one must use them to supervise a parametric policy. Here we compare supervising the same (gradient boosted tree-based) parametric policy using behavioral cloning based on the greedy-cheating (1), which is analogous to He et al. (2012), nongreedy-cheating (2), and AACO (Alg. 2) oracles. As can be seen, behavior cloning consistently does a better job when mimicking the AACO policy. This is likely due not only to the non-greedy nature of ACO, but also to the fact that this teacher policy utilizes the same information as the student and can thereby be more easily emulated.
**Higher Dimensionalities.** As noted by Li and Oliva (2021), most existing AFA baselines fail to scale to higher dimensional settings, such as the full \(28\times 28\) MNIST dataset. With \(784\)-d MNIST, JAFA struggles to learn a policy that selects a small number of features (and instead learns to select all or no features) [12]. Furthermore, greedy methods like EDDI and GSM+Greedy are unable to scale their searches. Indeed, even GSMRL, which is able to learn a reasonable policy in the \(784\)-d MNIST, actually sees a degradation of
Figure 4: Oracle Ablations.
Figure 5: \(784\)-d MNIST.
Figure 3: Left: Distribution of features in CUBE-\(\sigma\). Right: Accuracy for methods. Multiple ticks per same method represent different average number of acquired features vs. accuracy for policies stemming from different cost scales \(\alpha\).
Figure 2: Test accuracy on Real-world datasets.
performance when compared to the policy in \(256\)-d MNIST (Fig. 2). In contrast, we find that the AACO policy on the \(784\)-d MNIST achieves higher accuracy with significantly fewer features than GSMRL (Figure 5). This finding, as well as the relative success on the \(256\)-d MNIST, provides evidence that the AACO policy might better navigate high dimensional feature spaces than its competitors.
### Decision Making
We investigate the ability of the AACO's policy \(\pi_{\mathrm{acq}}(x_{o})\) to learn what features are relevant for making decisions that maximize expected outcomes according to a decision policy \(\hat{\pi}_{\mathcal{A}}(x_{o})\). We first evaluate its performance under a synthetic environment, where the ground-truth optimal decisions are known (i.e. \(Y(a)\) is known for every \(a\in\mathcal{A}\)). We generate four features \(x_{0},..,x_{3}\), a treatment \(a\in\{0,1\}\), and an outcome \(y\) that depends on \(x_{0},...,x_{3}\) and \(a\). Full details of the environment are provided in the Appendix. We use gradient boosted trees to fit \(\hat{Q}(x,a)\) and separate decision policies \(\hat{\pi}_{\mathcal{A}}(x_{o})\) for every \(o\). As depicted in Figure 6, we find that increasing acquisition costs prompts the AACO policy to choose relevant features for decision making. By construction, an average of 2.25 features are sufficient for optimal decision making. At similar levels of AACO acquired features, the partially-observed decision-making policy (\(\hat{\pi}_{\mathcal{A}}(x_{o})\)) attains near-perfect decision making with a value function (\(\mathbb{E}[Y(\hat{\pi}_{\mathcal{A}}(x_{o}))]\)) approaching the value function of the full-context optimal decision policy (green line). As AFA in this setting has been previously understudied, here we compare to a feature selection approach that uses a _constant_ subset of features (depicted in orange). This highlights AACO's ability to dynamically acquire relevant features on a per-instance basis, which leads to better performance.
Next, we demonstrate the practical utility of our method in the contextual bandit setting, which encompasses many decision making problems such as internet advertising [1], content recommendation [1], and medical treatment strategies [17]. When contexts are associated with a _cost_ (possibly in the form of time, computation, or money), a fruitful strategy is to acquire features based on their relevance for decision making. We illustrate a novel application of this end-goal using the Open Bandit Dataset [14]. Here, we propose actively acquiring features detailing a user to recommend products (clothing items) in an attempt to maximize the expected click-rate by the user. The data set consists of over 1 million instances with 65 features. For simplicity, the action space is restricted to the two most frequent clothing item recommendations (see Appendix).
Here the ground-truth optimal decisions are unknown. Therefore, we measure the effectiveness of our methods relative to a learned recommendation policy based on full contexts. As a baseline, we compare our strategy to a natural cost-aware alternative, feature selection, in which a fixed subset of features are used by the decision-policy for all instances. The fixed subset of features is chosen to maximize the policy's estimated value (\(n^{-1}\sum_{i=1}^{n}\hat{Q}(x^{(i)},\pi_{\mathcal{A}}(x_{o}^{(i)}))\)). In Figure 7, we see that the partially-observed decision policy under AACO makes the same decisions as the full-context policy over 90% of the time with only 20 features, has an estimated value close to the full-context policy (green line), and generally outperforms the best feature selection alternative. This highlights the ability of the policy to make satisfactory decisions without observing full contexts and to tailor the acquired contexts to each instance.
## 5 Discussion
In conclusion, this work establishes several notable findings of interest. First, we show that, using a relatively simple approach, it is possible to perform a non-greedy retrospective search for active feature acquisition. Second, by utilizing the conditional data distribution, we show that it is possible to define an oracle that utilizes only the information that is available to the target AFA policy. Although previous work has considered AFA "oracles," to the best of our knowledge the ACO is the first AFA oracle that utilizes only information available in the MDP state, making it the first oracle that is deployable at inference time. Third, we show that simple nonparametric techniques can be effectively used in place of the true data conditionals to deploy the ACO in practice. Fourth, we expound on how to utilize the ACO as a teacher to train a parametric policy. Fifth, we generalize our ACO active feature acquisition approach to yield a policy that acquires useful features in order to make a useful decision (as opposed to prediction) based on a few observed features. Lastly, we conduct extensive empirical experiments showing the superiority of our approach to the state-of-the-art methodology for AFA. It is particularly notable that, despite utilizing simple, nonparametric techniques (which do not need deep learning based models) the ACO is able to outperform deep learning based approaches, even in higher dimensions.
Figure 6: Decision making on synthetic data. Left: % of time policy made the ground-truth optimal decision. Right: value function of the policy.
Figure 7: Left: % agreement (in decision-making) with the full feature policy. Right: estimated value function. |
2305.12401 | WOT-Class: Weakly Supervised Open-world Text Classification | State-of-the-art weakly supervised text classification methods, while
significantly reduced the required human supervision, still requires the
supervision to cover all the classes of interest. This is never easy to meet in
practice when human explore new, large corpora without complete pictures. In
this paper, we work on a novel yet important problem of weakly supervised
open-world text classification, where supervision is only needed for a few
examples from a few known classes and the machine should handle both known and
unknown classes in test time. General open-world classification has been
studied mostly using image classification; however, existing methods typically
assume the availability of sufficient known-class supervision and strong
unknown-class prior knowledge (e.g., the number and/or data distribution). We
propose a novel framework WOT-Class that lifts those strong assumptions.
Specifically, it follows an iterative process of (a) clustering text to new
classes, (b) mining and ranking indicative words for each class, and (c)
merging redundant classes by using the overlapped indicative words as a bridge.
Extensive experiments on 7 popular text classification datasets demonstrate
that WOT-Class outperforms strong baselines consistently with a large margin,
attaining 23.33% greater average absolute macro-F1 over existing approaches
across all datasets. Such competent accuracy illuminates the practical
potential of further reducing human effort for text classification. | Tianle Wang, Zihan Wang, Weitang Liu, Jingbo Shang | 2023-05-21T08:51:24Z | http://arxiv.org/abs/2305.12401v2 | # WOT-Class: Weakly Supervised Open-world Text Classification
###### Abstract.
State-of-the-art weakly supervised text classification methods, while significantly reducing the required human supervision, still require the supervision to cover all the classes of interest. This is never easy to meet in practice when humans explore new, large corpora without a complete picture. In this paper, we work on a novel yet important problem of weakly supervised open-world text classification, where supervision is only needed for _a few examples from a few known classes_ and the machine should handle both _known_ and _unknown classes_ at test time. General open-world classification has been studied mostly using image classification; however, existing methods typically assume the availability of sufficient known-class supervision and strong unknown-class prior knowledge (e.g., the number and/or data distribution). We propose a novel framework WOT-Class that lifts those strong assumptions. Specifically, it follows an iterative process of (a) clustering text to new classes, (b) mining and ranking indicative words for each class, and (c) merging redundant classes by using the overlapped indicative words as a bridge. Extensive experiments on 7 popular text classification datasets demonstrate that WOT-Class consistently outperforms strong baselines by a large margin, attaining 23.33% greater average absolute macro-F1 over existing approaches across all datasets. Such competent accuracy illuminates the practical potential of further reducing human effort for text classification.
text classification, weak supervision, open-world learning

Footnote †: WOT-Class means _What class? Let's discover_
set of class-words will help re-cluster documents, and we repeat this process till the number of classes no longer decreases.
We conduct our experiments by fixing the most infrequent half of classes as unseen, which emphasizes the imbalanced and emerging nature of real-world scenarios. Our extensive experiments on 7 popular text classification datasets show the strong performance of WOT-Class. By leveraging merely a few known-class examples and the names of known classes, WOT-Class gains a 23.33% greater average absolute macro-F\({}_{1}\) over the current best method across all datasets. Even when the compared methods are given our prediction of the classes as an extra input, WOT-Class still achieves 21.53% higher average absolute macro-F\({}_{1}\). While precisely discovering unseen classes identical to the ground truth remains challenging, our method provides predictions closest to the actual classes more stably than existing approaches. Moreover, since WOT-Class provides class-words for each discovered unknown class, only a reasonable amount of human effort is required to map the discovered classes to something similar to the ground truth. Finally, WOT-Class is less sensitive to class imbalance, making it more suitable for real-world scenarios.
Our contributions are as follows.
* We introduce the novel yet important problem of weakly supervised open-world text classification.
* We propose a novel, practical framework WOT-Class that jointly performs document clustering and class-word ranking that can discover and merge unknown classes.
* Extensive experiments demonstrate that WOT-Class outperforms the previous methods in various manners. Competent accuracy of WOT-Class also illuminates the practical potential of further reducing human effort for text classification.
## 2. Preliminaries
In this section, we formally define the problem of weakly supervised open-world text classification. We then briefly review CGExpan and X-Class, two building blocks used in our method.
**Problem Formulation.** In an open-world setting, there exists a not-fully-known set of classes \(\mathcal{C}\), which follow the same hyper-concept, and a set of documents \(\mathcal{D}\), each uniquely assigned to a class. A weakly supervised open-world model can observe partial information of \(\mathcal{C}\). In this work, we assume that this partial information is given as a labeled few-shot dataset \(\mathcal{D}_{s}=\{x_{i},y_{i}\}_{i=1}^{n},y_{i}\in\mathcal{C}_{s}\), where \(\mathcal{C}_{s}\subset\mathcal{C}\) is the known subset of classes and \(n\) is rather small (e.g., a ten-shot dataset would mean \(n=10*|\mathcal{C}_{s}|\)). The goal of the model is to classify the remainder of the dataset, \(\mathcal{D}_{u}=\mathcal{D}\backslash\mathcal{D}_{s}\), where the labels in \(\mathcal{C}_{u}=\mathcal{C}\backslash\mathcal{C}_{s}\) are completely unknown to the model. We emphasize that, different from extremely weakly supervised or zero-shot prompting based text classifiers, the names of the unknown classes are also not given to the model.
**CGExpan.** Entity set expansion aims to expand a set of seed keywords (e.g., _United States, China_) to new keywords (e.g., _Japan_) following the same hyper-concept (i.e., _Country_). This is exactly the technique we need to discover potential class-words that are highly suggestive of the hidden class names. In our method, we employ CGExpan (Zhou et al., 2017), one of the current state-of-the-art methods for set expansion. CGExpan selects automatically generated hyper-concept words by probing a pre-trained language model (e.g., BERT), and further ranks all candidate words guided by the selected hyper-concepts. However, a common problem of such set expansion methods is that they typically produce duplicated and semantically-shifted entities even at the top of the ranked list. In our work, we utilize CGExpan to find words semantically related to the user-given class names as _candidates_ for the class-words. Our method resolves this imperfect-candidate problem by ranking the candidates with a scoring metric learned from few-shot supervision.
**X-Class.** X-Class is an extremely weakly supervised text classification method that works with only the names of the classes (Zhou et al., 2017). It proposes a framework that learns representations of classes and text, and utilizes clustering methods, such as a Gaussian Mixture Model (GMM), to assign text to classes. While X-Class showed promising performance in close-world classification settings with minimal supervision, it does not work in open-world settings. Our method reduces open-world text classification to close-world classification by iteratively refining class-words; we therefore employ X-Class as a strong-performing and efficient close-world text classifier.
**Static Representation.** For each unique word \(w\) in the input corpus, we obtain its _static representation_\(\mathbf{s}_{w}\) by averaging BERT's contextualized representations of all its appearances. That is,
Figure 2. An overview of WOT-Class framework. Given the corpus and a part of labels, we first estimate document representations and construct the initial clusters. And then, we perform an iterative cluster refinement to remove redundant clusters. At the end of each iteration, we will update the document representations and recluster them.
\[\mathbf{s}_{w}=\frac{\sum_{w^{\prime}=w}\mathbf{t}_{w^{\prime}}}{\sum_{w^{\prime}=w}1},\]
where \(w^{\prime}\) are occurrences of the word in the corpus and \(\mathbf{t}_{w^{\prime}}\) is its contextualized word representation. A static representation is useful in determining the similarity and relatedness of two words.
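For concreteness, the following is a minimal sketch of how such static representations could be computed with a Hugging Face BERT model; the function name and the treatment of word-piece tokens as words are our own simplifications, not part of the released implementation.

```python
# Sketch: average the contextualized embeddings of all occurrences of each token.
# Assumes `corpus` is a list of raw documents; word-piece tokens are treated as
# words here for simplicity (an assumption of this sketch).
from collections import defaultdict
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def static_representations(corpus):
    sums, counts = defaultdict(float), defaultdict(int)
    for doc in corpus:
        enc = tokenizer(doc, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]        # (seq_len, 768)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        for tok, vec in zip(tokens, hidden):
            if tok in ("[CLS]", "[SEP]", "[PAD]"):
                continue
            sums[tok] = sums[tok] + vec                       # running sum of t_{w'}
            counts[tok] += 1
    return {w: sums[w] / counts[w] for w in sums}             # s_w = mean over occurrences
```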
## 3. Our WOT-Class Method
In this section, we introduce our WOT-Class framework. To be able to accommodate unknown classes, a common approach (Dong et al., 2017) is to first overestimate the number of classes and then reduce them after observing the data. We follow this approach and integrate it with class-words, with the goal of reducing the problem into an existing weakly supervised text classification (WS-TC) problem, where there are solutions to classify text in a close-world setting when class-words are given for each class and all classes are known. In simple words, WOT-Class first proposes a set of high-potential words from which class-words can be found, and a list of class-words for an over-estimated number of classes. At each iteration, we start with clusters of documents obtained by WS-TC methods on the proposed class-words, and rerank the high-potential words to find the new list of class-words for each class. During this step, classes with similar class-words are removed, and new clusters of documents can then be obtained by WS-TC methods. The iteration stops when no class removal happens. We summarize our iterative refinement framework in Algorithm 1, and Figure 3 shows an overview of the sub-components of this method.
### Initial Overestimation and Clustering
In WOT-Class, we take the approach of making an initial overestimation of the classes, and then refining the class-words and clusters of text. We fix a number \(K\) (\(K=100\) in all experiments) and ask CGExpan (refer to Section 2) to suggest \(K-|\mathcal{C}_{s}|\) words similar to the given \(|\mathcal{C}_{s}|\) class names. We consider these as a rough approximation of the classes in the corpus, and employ X-Class (refer to Section 2) to construct the initial clusters of text, which may contain many duplicate clusters and clusters of different granularity. From here on, our iterative method consists of two processes. In the first process, we obtain a list of class-words for each cluster and remove duplicate clusters; in the second process, we simply apply X-Class to refine the clusters based on the new class-words. We mainly elaborate on the first process.
### Cluster \(\rightarrow\) Class-words
In the first process, we start with clusters of text, the initially suggested words by CGExpan, the class-words in the last iteration (for the first iteration, the class-words are the CGExpan proposed words) and the few-shot supervision, and aim to reduce the number of clusters and assign class-words to each cluster.
**Proposing Potential Class-words.** Class-words are words that are related to and highly indicative of the class. Words from CGExpan qualify for relatedness, but we also wish to find words that are rather exclusive to one cluster (or a few, because of the noise in the clusters). The indicativeness of a word can be expressed by how statistically prominent it is in its cluster of text compared to other clusters. Such statistical measures are well-researched in information retrieval, a representative example being the tf-idf score. We apply a more recent measure that has been used in text classification (Kang et al., 2018) to find statistically representative words within cluster \(i\):
\[score_{i}(w)=\frac{s_{i}(w)}{size_{i}}\cdot\tanh\left(\frac{t_{i}(w)}{size_{ i}}\right)\cdot\log\left(\frac{size_{all}}{s_{all}(w)}\right),\]
where \(t_{i}(w)\) is the number of occurrences of the word \(w\) in documents belonging to cluster \(i\), \(s_{i}(w)\) indicates how many documents in cluster \(i\) contain the word \(w\) while \(size_{i}\) indicates how many documents are in cluster \(i\). In the measurement, the first term tells how indicative a word is to a cluster, the second term measures how frequent this word is, and the third is a normalization based on the inverse document frequency.
We find the top such statistically representative words for each cluster and merge these statistically representative words with words from CGExpan as the set of potential class-words.
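A minimal sketch of this scoring is given below, assuming each cluster is provided as a bag of tokenized documents; the function and variable names are illustrative rather than taken from the released code.

```python
import math
from collections import Counter

def representative_words(clusters, top_k=50):
    """clusters: dict mapping a cluster id to a list of tokenized documents."""
    size_all = sum(len(docs) for docs in clusters.values())
    doc_freq_all = Counter()                                   # s_all(w)
    for docs in clusters.values():
        for doc in docs:
            doc_freq_all.update(set(doc))
    top = {}
    for i, docs in clusters.items():
        size_i = len(docs)
        term_freq, doc_freq = Counter(), Counter()             # t_i(w), s_i(w)
        for doc in docs:
            term_freq.update(doc)
            doc_freq.update(set(doc))
        score = {w: (doc_freq[w] / size_i)
                    * math.tanh(term_freq[w] / size_i)
                    * math.log(size_all / doc_freq_all[w])
                 for w in doc_freq}
        top[i] = sorted(score, key=score.get, reverse=True)[:top_k]
    return top
```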
**Class-word Ranking.** We utilized CGExpan and statistical representativeness to approximately retrieve a list of potential class-words, and now we precisely rank them by learning a metric that defines the similarity between a cluster and a word.
Specifically, for a potential class-word and a cluster, we construct features as the mean and variance of the Euclidean distance and of the cosine similarity between the static representation of the word and those of (a large list of \(W=50\)) statistically representative words of the cluster. Since we have a few labeled examples, they serve as virtual clusters: the features of their class names with respect to these virtual clusters are treated as positive signals of closeness. Negative signals are obtained by a heuristic that finds the most dissimilar word among the class-words of the last iteration. With these two signals, we train a Multilayer Perceptron (MLP) logistic regression on the features to predict the signal. We assign a score \(p(w,i)\) to each cluster \(i\) and each of the high-potential words \(w\).
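The feature construction can be summarized by the short sketch below, which assumes the static representations from Section 2 are available as tensors; the helper name and the exact feature ordering are our own choices.

```python
import torch
import torch.nn.functional as F

def word_cluster_features(word_vec, rep_vecs):
    """word_vec: (d,) static representation of a candidate class-word.
    rep_vecs: (W, d) static representations of the cluster's W statistically
    representative words. Returns a 4-dim feature vector for the MLP."""
    dist = torch.cdist(word_vec.unsqueeze(0), rep_vecs).squeeze(0)       # Euclidean distances
    cos = F.cosine_similarity(word_vec.unsqueeze(0), rep_vecs, dim=-1)   # cosine similarities
    return torch.stack([dist.mean(), dist.var(), cos.mean(), cos.var()])
```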
We also propose a post-processing step to remove generic words from the ranking. We follow previous work (Kang et al., 2018) and design a penalty coefficient \(\mu(w,i)\) for each candidate class name \(w\) in cluster
\(i\) based on inter-class statistics:
\[\mu(w,i)=\log\left(\frac{\mathrm{M}\{rank_{j}(w)\mid 1\leq j\leq C\}}{1+rank_{i}(w)} \right),\]
where \(rank_{i}(w)\) is the absolute rank of \(w\) in cluster \(i\) based on the MLP's prediction, \(\mathrm{M}\{S\}\) is the median value of the set \(S\), and \(C\) is the number of clusters in the current iteration.
The main idea of this formula is to obtain a coefficient that penalizes generic words (e.g., _life_, which might rank high in most clusters) so that they are not selected as class-words. The numerator of the fraction shows how the word behaves across all clusters, while the denominator shows how it behaves in a specific cluster. The median rank of a generic word will be very close to its cluster-specific rank. Note that we allow one word to be the class name of several clusters because of the initial overestimation, but if a word ranks high in more than half of the clusters, it is considered a generic word that must be filtered. Such penalization and normalization are similar to the idf term in information retrieval; therefore, we follow that design and choose to divide the two values and take the logarithm. Similar to the idf, this penalty coefficient lowers the chance of selecting a generic word but will not harm proper class-words.
The final indicativeness ranking is based on the product of two scores:
\[I(w,i)=p(w,i)\times\mu(w,i).\]
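Put together, the penalty and the final score can be computed as in the sketch below; ranks are assumed to start at 1 (1 = highest) and the data structures are illustrative, not taken from the released implementation.

```python
import math
import numpy as np

def indicativeness(p, rank, cluster_ids):
    """p[(w, i)]: MLP score of word w for cluster i.
    rank[(w, i)]: rank of w inside cluster i (1 = highest), derived from p."""
    words = {w for (w, _) in rank}
    I = {}
    for w in words:
        ranks_w = [rank[(w, i)] for i in cluster_ids]
        median_rank = float(np.median(ranks_w))                  # M{rank_j(w)}
        for i in cluster_ids:
            mu = math.log(median_rank / (1 + rank[(w, i)]))      # penalty mu(w, i)
            I[(w, i)] = p[(w, i)] * mu                           # I(w, i) = p(w, i) * mu(w, i)
    return I
```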
**Removal of Clusters.** We finally discuss how we remove clusters based on the class-words found. In simple terms, we remove clusters whose class-word lists have non-empty intersections. When a cluster is removed, all its documents are discarded until they are reassigned to new clusters in the second text-clustering process.
Precisely, we pick the \(T\) highest-ranked class-words of a cluster to compare, where \(T\) is the number of iterations of the removal process so far, a simple way to inject the prior that cluster quality improves with each iteration. We noticed that certain clusters might not contain enough good class-words, so we introduce a cutoff threshold \(\beta\) such that we do not pick words whose ratio of indicativeness score to the highest indicativeness score in the cluster is low. Then, when two lists of class-words overlap, we would like to retain one cluster (or, in other words, one list of class-words) and remove the other. We remove the cluster with the lower coherence score \(\eta\), which measures the closeness of the text in the cluster. This can be computed from the text representations provided by X-Class as
\[\eta=\frac{1}{|\mathbf{R}|}\sum_{\mathbf{r}\in\mathbf{R}}\cos\left(\mathbf{r},\bar{\mathbf{R}}\right), \tag{1}\]
where \(\mathbf{R}\) is the list of text representations belonging to the cluster and \(\bar{\mathbf{R}}\) is their mean. When overlap happens, we remove the cluster that has the lower coherence \(\eta\).
We also rerank the class words after removing the duplicated clusters and continue until no clusters require removal.
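A sketch of the removal step follows, reading the cosine term in eq. (1) as the similarity to the cluster mean; cluster ids are assumed to be integers and the helper names are our own.

```python
import torch
import torch.nn.functional as F

def coherence(reps):
    """reps: (n, d) tensor of X-Class text representations of one cluster."""
    center = reps.mean(dim=0, keepdim=True)
    return F.cosine_similarity(reps, center, dim=-1).mean().item()   # eq. (1)

def remove_overlapping(class_words, cluster_reps):
    """class_words[i]: top class-words of cluster i; cluster_reps[i]: (n_i, d)."""
    alive = set(class_words)
    for i in sorted(class_words):
        for j in sorted(class_words):
            if i < j and i in alive and j in alive \
                    and set(class_words[i]) & set(class_words[j]):
                # keep the more coherent cluster, drop the other
                drop = i if coherence(cluster_reps[i]) < coherence(cluster_reps[j]) else j
                alive.discard(drop)
    return alive
```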
### Iterative Framework and Final Classifier
The whole iterative framework simply applies the first process (class-word suggestion and cluster deduplication) and the second process (class-word-based text clustering) alternately, until the first process no longer removes clusters.
After exiting the iterative loop and obtaining the stable set of clusters, we follow the common approach in weakly supervised text classification (Han et al., 2015; He et al., 2016; He et al., 2017; He et al., 2018) and train a final text classifier based on the pseudo-labels assigned to each text. This usually improves the performance, as the fine-tuned classifier mitigates some of the noise in the pseudo-labels.
## 4. Experiments
### Datasets
We evaluate WOT-Class on 7 popular datasets of different textual sources and criteria of classes, including news article datasets 20News (He et al., 2017), NYT-Small (He et al., 2017), NYT-Topics (He et al., 2017), NYT-Locations (He et al., 2017) and AGNews (He et al., 2018), an internet question-answering dataset Yahoo (Zhou et al., 2018), and an ontology categorization dataset DBpedia (He et al., 2018) based on 14 ontology classes in DBpedia. Table 1 contains the detailed statistics of the 7 datasets.
Sentiment analysis is also popular in text classification. However, most weakly supervised sentiment analysis settings explored so far are coarse-grained (He et al., 2017; He et al., 2018) with too few classes (e.g.,
Figure 3. An overview of _Cluster \(\rightarrow\) Class-words._
positive and negative), which is not practical for open-world class detection.
### Compared Methods
We compare our method with 3 open-world classification methods, Rankstats+, ORCA and GCD.
**Rankstats** (Krishnan et al., 2017) (aka AutoNovel) is the first approach crafted for open-world classification without relying on any human annotation of the unseen classes. It tackles the task by jointly learning to transfer knowledge from labeled to unlabeled data with ranking statistics. Since its original setting requires the labeled and unlabeled classes to be disjoint, we follow the process in the GCD paper (Krishnan et al., 2017) to adapt it to our setting and name it **Rankstats+**. **ORCA** (Krishnan et al., 2017) is a general method for open-world semi-supervised classification, which further reduces the supervision of the seen classes. It utilizes an uncertainty adaptive margin to reduce the learning gap between seen and unseen classes. **GCD** (Krishnan et al., 2017) is also semi-supervised, utilizing contrastive representation learning and clustering to directly provide class labels, and improves Rankstats' method of estimating the number of unseen classes. To adapt these three methods to the text domain, we use BERT on top of their frameworks to obtain document representations as the training features. Since these three methods rely on self-supervised learning designed for visual data, we harness the SimCSE (Simonyan and Zisserman, 2014) approach in the text domain.
We also propose two other baselines. BERT is known to capture the domain information of a document well (Bahdan et al., 2015; He et al., 2016). We thus design **BERT+GMM**, which utilizes the CLS token representations, after fine-tuning on the partially labeled dataset, to fit a Gaussian Mixture Model over all classes. We also propose **BERT+SVM**, which first utilizes a Support Vector Machine to find all outlier documents based on CLS token representations, then classifies documents belonging to seen classes with a BERT classifier and clusters the outlier documents.
### Experimental Settings
For the basic experiments, we split the classes into half seen and half unseen. We set the most infrequent half of classes (i.e., classes with fewer documents) as unseen, which emphasizes the imbalanced and emerging nature of real-world scenarios. Among the seen classes, we provide 10-shot supervision, i.e., 10 labeled documents for each seen class, while the rest are unlabeled (Figure 4).
For WOT-Class and all compared methods in our experiments, we utilize the pre-trained bert-base-uncased model provided in Huggingface's Transformers library (Huggingface, 2016). All experiments are conducted using a 32-core processor and a single NVIDIA RTX A6000 GPU. For WOT-Class, the default hyper-parameter settings are \(K=100\), \(W=50\), and \(\beta=0.7\). The analysis of hyper-parameter sensitivity is presented in Section 4.4.
Since all compared methods require the total number of classes as input, we evaluate them in two ways.
* **Our Estimation (OE)**: To ensure the comparison is fair to the baseline methods, we also run all the baselines using our predicted number of classes (from 3 runs).
* **Baselines' Estimation**: Rankstats+, ORCA, and GCD can also start with an overestimated number of initial classes and provide their own prediction of the classes, so we also test these three methods starting with \(K=100\) classes, the same as our method. Since further experiments show their predictions do not work well on most of our datasets, we do not test the other baselines with their estimations.
**Evaluation.** Several evaluation criteria, such as accuracy or NMI-based clustering metrics, have been used in previous work (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). However, they were proposed in a balanced setting and are biased toward the popular classes when the classes are imbalanced (i.e., they impose a low penalty on misclassifying infrequent unseen classes). Since we argue that in open-world classification the new, emerging classes are naturally the minority classes, these metrics are not suitable.
Therefore, we propose a new evaluation criterion based on F1 Score to better demonstrate results. Since the final number of classes
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & **AGNews** & **20News** & **NYT-Small** & **NYT-Topics** & **NYT-Locations** & **Yahoo** & **DBpedia** \\ \hline Corpus Domain & News & News & News & News & News & QA & Wikipedia \\ Class Criterion & Topics & Topics & Topics & Topics & Locations & Topics & Ontology \\ \# of Classes & 4 & 5 & 5 & 9 & 10 & 10 & 14 \\ \# of Documents & 12,000 & 17,871 & 13,081 & 31,997 & 31,997 & 18,000 & 22,400 \\ Imbalance & 1.00 & 2.02 & 16.65 & 27.09 & 15.84 & 1.00 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1. An overview of our datasets. The imbalance factor refers to the ratio of sample sizes between the most frequent class and least frequent one in the dataset.
Figure 4. Schematic diagram of the corpus split. Only 10 samples in popular classes are provided as training labels.
produced by a method may not equal the ground truth, a mapping from the predictions to the actual classes is required. Given the clusters of documents provided by a method and the ground-truth classes of documents, we first perform a maximum bipartite matching between the method-provided clusters and the ground-truth classes, where the edge weights are the number of overlapping documents between the clusters and the ground-truth classes.2 The matched clusters are assigned to the corresponding classes. This step guarantees that all classes have some predictions. Each remaining cluster is simply assigned to the class with which it has the largest overlap.
Footnote 2: In the case when the number of predicted clusters is less than the ground truth, we create virtual clusters with no documents inside.
Formally, consider a matrix \(\mathcal{M}\), where \(\mathcal{M}_{i,j}\) denotes the number of texts in cluster \(i\) that belong to class \(j\). We use \(r_{i}\) to denote the assignment of each cluster:
\[r_{i}=\begin{cases}j,&\text{where $i,j$ are obtained by maximum matching on $\mathcal{M}$;}\\ \text{arg max}_{j}(\mathcal{M}_{i,j}),&\text{otherwise.}\end{cases}\]
After assigning all clusters to the classes, the F\({}_{1}\) score can be computed instance-wise on the text as the evaluation criterion for classification performance.
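The mapping and scoring can be made concrete with a few lines using SciPy's linear-sum-assignment solver; the sketch below assumes integer cluster and class ids per document and is only an illustration of the procedure, not the released evaluation script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import f1_score

def evaluate(pred_cluster, gold_class, n_clusters, n_classes):
    rows = max(n_clusters, n_classes)            # pad with virtual (empty) clusters
    M = np.zeros((rows, n_classes), dtype=int)
    for c, y in zip(pred_cluster, gold_class):
        M[c, y] += 1                             # overlap counts
    r, c = linear_sum_assignment(-M)             # maximum-weight bipartite matching
    assign = dict(zip(r, c))                     # every class receives one cluster
    for i in range(rows):
        assign.setdefault(i, int(M[i].argmax())) # remaining clusters: majority class
    mapped = np.array([assign[i] for i in pred_cluster])
    return (f1_score(gold_class, mapped, average="micro"),
            f1_score(gold_class, mapped, average="macro"))
```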
### Experimental Results
**WOT-Class Performance.** We assess the weakly supervised open-world performance of WOT-Class versus other baselines. Table 2 contains overall comparisons, Table 5 and 6 further provide performances on seen and unseen classes. Specifically, WOT-Class outperforms BERT+GMM and BERT+SVM across all 7 datasets for both seen and unseen classes, even though they are given the matching number of classes as input. This strengthens the need for our iterative refinement process since merely applying few-shot tuning does not bring as good performance as ours. Moreover, WOT-Class performs noticeably better than the general methods Rankstats+, ORCA, and GCD under the same circumstances. Even when the matching number is given as input to them, WOT-Class consistently outperforms them in all cases on all datasets, except for the seen part of DBpedia.
**Imbalance Tolerance.** As generic solutions, ORCA and GCD, when given our predicted number of classes, trail WOT-Class by only a small margin on the balanced DBpedia dataset, while underperforming more severely on the other, imbalanced datasets. To gain further insight, we conduct experiments on the tolerance to imbalance of WOT-Class and the compared methods. As shown in Table 4, we construct three imbalanced DBpedia datasets with different degrees of imbalance. This is achieved by reducing the number of samples in each class by a linearly increasing ratio \(\Delta\). For example, when \(\Delta=4\%\), the classes retain 100%, 96%, 92%,... of their original documents. We choose the ordering of classes randomly
\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Low** & **Medium** & **High** \\ \hline \(\Delta\) & 2\% & 4\% & 6\% \\ \# of Documents & 19,480 & 16,565 & 13,652 \\ Imbalance & 1.35 & 2.09 & 4.56 \\ \hline \hline \end{tabular}
\end{table}
Table 4. An overview of the imbalanced DBpedia datasets with 14 classes.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**Method** & **Extra Info** & **AGNews** & **20News** & **NYT-S** & **NYT-Top** & **NYT-Loc** & **Yahoo** & **DBpedia** & **Average** \\ \hline Rankstats+ & & 39.53/28.55 & 24.94/13.88 & 52.01/23.13 & 42.23/19.98 & 39.68/23.13 & 29.66/20.44 & 48.20/39.15 & 39.47/24.04 \\ ORCA & ✗ & 72.44/72.27 & 48.92/39.83 & 74.34/42.22 & 62.23/39.02 & 58.71/44.81 & 35.57/32.71 & 69.27/67.92 & 60.21/48.40 \\ GCD & & 66.37/66.51 & 51.75/42.96 & 82.59/63.35 & 66.36/39.69 & 70.25/53.41 & 36.73/35.39 & 75.81/72.97 & 64.27/53.47 \\ WOT-Class & & **79.42/79.75** & **79.07/79.29** & **94.78/88.46** & **78.67/69.48** & **80.94/79.55** & **54.46/56.23** & **85.15/84.87** & **78.93/76.80** \\ \hline Rankstats+ (OE) & & 61.44/57.50 & 53.65/38.12 & 40.82/31.67 & 19.93/15.07 & 21.96/16.81 & 32.79/26.94 & 50.03/44.31 & 40.09/32.92 \\ ORCA (OE) & & 64.38/64.50 & 51.85/40.04 & 70.44/46.21 & 59.42/38.29 & 42.99/33.08 & 43.87/41.43 & 82.54/81.30 & 59.35/49.26 \\ GCD (OE) & ✓ & 65.42/65.44 & 61.27/56.42 & 78.82/56.59 & 70.51/42.44 & 55.37/44.86 & 39.01/37.58 & 84.14/36.80 & 64.93/55.28 \\ BERT+GMM (OE) & & 38.25/37.14 & 29.32/52.51 & 58.79/24.79 & 26.88/14.08 & 11.64/9.47 & 14.11/36.64 & 14.74/14.20 & 27.68/19.79 \\ BERT+SVM (OE) & & 45.20/44.15 & 39.07/34.96 & 51.97/22.34 & 24.95/12.83 & 13.91/7.45 & 15.25/13.39 & 16.28/14.41 & 29.52/21.36 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Evaluations of compared methods and WOT-Class. The overall mean micro-/macro-F\({}_{1}\) scores over three runs are reported. We also report performances for seen and unseen classes separately in Table 5, 6.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline
**Method** & \multicolumn{3}{c}{**DBpedia**} & \multicolumn{3}{c}{**DBpedia-Low**} & \multicolumn{3}{c}{**DBpedia-Medium**} & \multicolumn{3}{c}{**DBpedia-High**} \\
**All** & **Seen** & **Unseen** & **All** & **Seen** & **Unseen** & **All** & **Seen** & **Unseen** & **All** & **Seen** & **Unseen** \\ \hline ORCA & 81.30 & 95.19 & 67.42 & 76.07 & 95.51 & 56.64 & 72.21 & 97.20 & 47.23 & 69.99 & 97.63 & 42.34 \\ GCD & 83.60 & 93.48 & 73.71 & 82.92 & 94.32 & 71.51 & 81.78 & 94.42 & 69.15 & 75.57 & 92.53 & 58.61 \\ WOT-Class & 84.87 & 87.16 & 82.59 & 85.81 & 91.82 & 79.97 & 84.97 & 93.31 & 76.63 & 79.35 & 88.81 & 69.90 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Performance of compared methods and WOT-Class on different imbalance degrees. We report macro-F\({}_{1}\) scores to more effectively demonstrate the results under an imbalanced data distribution. For all compared methods, we report their Pareto-optimal results, taken over starting with 100 classes and with our estimation.
but fixed across the Low, Medium, and High experiments, and by design, the classes with a larger number of documents are seen classes.
Figure 5 and Table 3 show the results of WOT-Class and the compared methods on the constructed DBpedia datasets. ORCA and GCD are sensitive to imbalanced classes, especially on the unseen part of the data. Even after reporting the Pareto-optimal results between their own predictions and our estimation for these two methods, their overall performance still drops by 11.31% and 8.03%, respectively, as the data distribution becomes more imbalanced, while our method experiences a relative drop of only 5.52%. This experiment shows that WOT-Class is more robust to the imbalanced classes of text datasets that are common in the real world (e.g., the imbalance ratio of NYT-Small, collected from NYT News, is 16.65).
**Prediction of the Number of Classes.** WOT-Class starts with an initial guess of the number of classes and removes redundant ones iteratively. The number of remaining classes is its prediction of the total number of classes. As shown in Table 7, in most cases WOT-Class's final predicted number of classes is around 2 to 4 times larger than the ground truth, which is affordable for human inspection. The estimation turns out to be reasonable: as shown in Table 8, WOT-Class overestimates because its predicted classes are fine-grained versions of the ground-truth classes. For example, DBpedia's _artist_ class can be split into two classes related to painting and performing, respectively. Moreover, our class-words are highly related to (or even the same as) the ground-truth class names and are human-understandable. So, based on our predicted classes with their class-words, users can simply identify the underlying sub-classes and decide whether to merge them.
As baselines, Rankstats+, ORCA, and GCD can also start with an overestimated number of classes and get a prediction of classes. However, given the same initial number of classes, Rankstats+ struggles to discover any classes beyond the seen classes, and ORCA hardly eliminates enough classes to provide the user with an intuitive understanding of the dataset's text distribution. GCD can provide more reasonable predictions, but compared to our approach,
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Method** & **Extra Info** & **AGNews** & **20News** & **NYT-S** & **NYT-Top** & **NYT-Loc** & **Yahoo** & **DBpedia** & **Average** \\ \hline Rankstats+ & & 0/0 & 0/0 & 13.35/13.35 & 0/0 & 0/0 & 18.69/8.49 & 4.58/3.12 \\ ORCA & ✗ & 66.26/66.43 & 32.98/28.42 & 52.95/90.82 & 17.33/15.32 & 31.20/27.03 & 24.84/28.28 & 62.28/87.41 & 41.45/38.52 \\ WOT-Class & & **81.29**/**82.22** & **73.65**/**74.91** & **86.45**/**86.31** & **62.07**/**95.52** & **72.45**/**70.99** & **57.00**/**56.34** & **82.51**/**82.59** & **73.63**/**73.27** \\ \hline Rankstats+ (OE) & & 64.44/62.17 & 35.56/29.93 & 16.17/17.68 & 8.48/3.92 & 7.60/6.59 & 23.07/11.97 & 11.38/6.97 & 24.78/19.89 \\ ORCA (OE) & & 55.26/54.84 & 24.70/20.27 & 26.47/24.66 & 20.34/16.32 & 15.58/12.04 & 34.43/30.70 & 69.67/67.42 & 35.22/32.32 \\ GCD (OE) & ✓ & 65.45/65.52 & 47.33/44.97 & 40.66/40.75 & 19.83/17.98 & 19.98/20.47 & 27.20/25.76 & 74.64/73.71 & 42.15/41.31 \\ BERT+GMM (OE) & & 43.74/43.28 & 20.03/18.84 & 14.12/13.46 & 6.80/6.28 & 7.54/6.76 & 13.52/13.10 & 12.96/12.70 & 16.96/16.35 \\ BERT+SVM (OE) & & 51.22/50.33 & 27.11/27.76 & 10.90/10.89 & 7.33/6.83 & 6.52/6.11 & 14.44/13.85 & 17.67/17.21 & 19.31/19.00 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Performance of unseen classes. The mean micro/macro-F\({}_{1}\) scores over three runs are reported.
Figure 5. Overall performance of compared methods and WOT-Class on different imbalance degrees.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Method** & **Extra Info** & **AGNews** & **20News** & **NYT-S** & **NYT-Top** & **NYT-Loc** & **Yahoo** & **DBpedia** & **Average** \\ \hline Rankstats+ & & 52.74/57.10 & 33.35/34.71 & 58.17/51.15 & 46.75/44.73 & 44.28/46.26 & 39.59/40.88 & 68.29/71.87 & 49.02/49.53 \\ ORCA & ✗ & 75.34/75.29 & 64.57/59.41 & 85.90/77.45 & 72.14/67.07 & 70.96/71.92 & 43.92/40.73 & **97.99**/**98.00** & 72.97/69.98 \\ GCD & & 66.47/66.59 & 66.80/67.97 & 90.93/82.15 & 78.56/72.09 & 82.00/79.78 & 48.04/48.83 & 91.36/92.23 & 74.88/72.81 \\ WOT-Class & & **77.72**/**77.76** & **85.60**/**85.86** & **97.33**/**91.68** & **83.04**/**81.92** & **84.13**/**88.10** & **52.50**/**56.14** & 87.69/87.16 & **81.14**/**81.23** \\ \hline Rankstats+ (OE) & & 57.93/52.83 & 53.33/50.40 & 50.82/52.64 & 24.75/29.01 & 26.84/27.03 & 43.14/41.92 & 76.49/81.64 & 47.62/47.92 \\ ORCA (OE) & & 74.15/74.15 & 68.60/69.71 & 84.93/78.52 & 71.19/65.74 & 55.70/54.12 & 53.13/36.77 & 95.18/95.19 & 71.84/67.74 \\ GCD (OE) & ✓ & 65.36/65.35 & 75.07/73.60 & 89.32/80.33 & 82.82/73.01 & 70.58/69.25 & 49.83/49.40 & 93.13/93.48 & 75.16/72.06 \\ BERT+GMM (OE) & & 31.67/31.01 & 36.14/34.77 & 69.11/41.76 & 34.54/23.84 & 13.73/12.18 & 14.57/14.17 & 16.28/15.70 & 30.86/24.78 \\ BERT+SVM (OE) & & 40.40/37.96 & 47.29/45.75 & 55.29/39.87 & 29.49/20.48 & 17.13/8.79 & 15.74/12.93 & 14.61/11.61 & 31.42/25.34 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Performance of seen classes. The mean micro/macro-F\({}_{1}\) scores over three runs are reported.
its prediction still deviates substantially from the ground truth and is much more unstable. This indicates these methods' ability to estimate the number of classes in the few-shot setting is not reliable.
**Hyper-parameter Sensitivity.** WOT-Class has 3 hyper-parameters, \(K\), \(W\), and \(\beta\), whose default values are given in Sec. 4.3. To further explore the stability and robustness of WOT-Class, we conduct a hyper-parameter sensitivity study on three datasets with different imbalance rates (20News, NYT-Small, and Yahoo) to examine how fluctuations in the hyper-parameters influence the performance of our method. The experiment is performed with a fixed random seed (42) for reproducibility. We report the overall macro-F\({}_{1}\) scores for 5 distinct values of each hyper-parameter. As illustrated in Figure 6, the performance fluctuations remain within reasonable margins, generally under 5%. Our method therefore does not require careful tuning of these hyper-parameters.
## 5. Related Work
**Open-world Learning.** Traditional open-world recognition methods (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018) aim to incrementally extend the set of seen classes with new unseen classes. These methods require human involvement to label new classes.
Recently, Rankstats (Kang et al., 2017) first defined open-world classification tasks with no supervision on unseen classes in the image domain and proposed a method with three stages. The model begins with self-supervised pre-training on all data to learn low-level representations. It is then trained under full supervision on labeled data to glean higher-level semantic insights. Finally, joint learning with ranking statistics is performed to transfer knowledge from labeled to unlabeled data. However, Rankstats still requires full supervision on the seen classes to achieve high performance.
Following that, ORCA (Chen et al., 2017) and GCD (Chen et al., 2018) defined open-world semi-supervised classification and proposed general solutions which further improved the framework of Rankstats and alleviated the burden of manual annotation. However, these methods' performance
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Method** & **AGNews** & **20News** & **NYT-S** & **NYT-Top** & **NYT-Loc** & **Yahoo** & **DBpedia** & **Average Offset** \\ \hline Rankstats+ & 2.00\({}_{0}\) & 2.00\({}_{0}\) & 2.33\({}_{0.58}\) & 4.33\({}_{0.58}\) & 5.00\({}_{0}\) & 5.00\({}_{0}\) & 10.33\({}_{0.06}\) & 3.71 \\ ORCA & 69.00\({}_{6.08}\) & 53.67\({}_{45.65}\) & 39.33\({}_{23.35}\) & 96.33\({}_{28.89}\) & 94.67\({}_{3.21}\) & 66.67\({}_{55.16}\) & 66.00\({}_{3.61}\) & 62.48 \\ GCD & 23.33\({}_{2.89}\) & 16.00\({}_{19.92}\) & 59.33\({}_{17.01}\) & 29.67\({}_{26.63}\) & 27.33\({}_{2.24}\) & 20.00\({}_{16.64}\) & 14.00\({}_{3.46}\) & 19.81 \\ WOT-Class & 19.67\({}_{1.15}\) & 20.67\({}_{0.58}\) & 27.00\({}_{2.89}\) & 18.673\({}_{3.61}\) & 11.67\({}_{0.58}\) & 21.00\({}_{2.00}\) & 17.67\({}_{2.89}\) & 11.33 \\ Ground Truth & 4 & 5 & 5 & 9 & 10 & 10 & 14 & - \\ \hline \hline \end{tabular}
\end{table}
Table 7. Predictions of the number of classes. The average and standard deviation over three runs are reported. The average offset refers to the average of the absolute discrepancies between each prediction and the ground truth value.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Dataset** & **Ground Truth** & **WOT-Class** \\ \hline \hline \multirow{4}{*}{NYT-Loc} & Russia & [Ukraine, Russia] \\ & Germany & [Germany] \\ & Canada & [Canada] \\ & France & [France] \\ & Italy & [Italy] \\ \hline \multirow{4}{*}{DBpedia} & athlete & [footballer], [Olympus] \\ & artist & [painting, painter, art], \\ & & [tv, theatre, television] \\ \cline{1-1} & company & [retail, company, business] \\ \cline{1-1} & school & [school, education, academic] \\ \cline{1-1} & politics & [politician] \\ \cline{1-1} & transportation & [aircraft, locomotive] \\ \cline{1-1} & building & [architecture, tower, church] \\ \hline \hline \end{tabular}
\end{table}
Table 8. Examples of the class-words. We use ’[]’ to split class-words belonging to different clusters in WOT-Class.
Figure 6. Hyper-parameter sensitivity study on 20News, NYT-Small and Yahoo. The overall macro-F\({}_{1}\) scores using a fixed random seed are reported.
is not robust enough for the few-shot task and the imbalanced data distributions in the text domain. In contrast, our work is applicable to infrequent classes and exploits the fact that the input is text, in which class-indicative words are available.
**Extremely Weak Supervision in NLP.** Aharoni and Goldberg (2018) showed that the average of BERT token representations can preserve documents' domain information. X-Class (Zhu et al., 2019) followed this idea to propose the extremely weak supervision setting, where text classification relies only on the names of the classes as supervision. However, such methods cannot be naively transferred to open-world classification, as they cannot detect unseen classes. Our method leverages such extremely weak supervision methods as a subroutine to help cluster documents. Importantly, we note that such methods cannot be applied straightforwardly, as they are also sensitive to noise and to overly similar classes. We show that our general idea of using class-words can further help an extremely weak supervision method obtain stable performance.
**Joint Clustering with Downstream Tasks.** In some sense, our method partly leverages an idea called joint clustering, with which some recent works (Beng et al., 2019; Chen et al., 2020) in the image domain achieved high performance by jointly performing clustering and image classification. Their main idea is to utilize clustering to extract the hidden information of image representations and generate pseudo-labels, which in turn provide supervision for classification training and ultimately guide the co-improvement of representation and clustering. However, the crucial difference is that their methods already know the predefined classes and depend heavily on strong assumptions, such as all classes sharing the same size, to obtain excellent performance. Conversely, WOT-Class utilizes the general idea of joint clustering in an open-world setting where the classes may be too fine-grained and noisy. We address these unique challenges via the class-words we propose and show that our methodology can not only estimate the precise number of classes but also tolerate imbalanced data distributions.
## 6. Conclusions and Future Work
In this paper, we introduce the challenging yet promising weakly supervised open-world text classification task. We have identified the key challenges and unique opportunities of this task and proposed WOT-Class that achieves quite decent performance with minimal human effort. Specifically, WOT-Class starts with an overestimated number of classes and constructs an iterative refinement framework that jointly performs the class-word ranking and document clustering, leading to iterative mutual enhancement. Consequently, WOT-Class can progressively extract the most informative classes and assemble similar documents, resulting in an effective and stable open-world classification system, which is validated by comprehensive experiments. In the future, we envision that open-world text classification can be conducted with even less manual annotation, for example, by only requiring user-provided hyper-concept (e.g., Topics, Locations) or custom instructions. This will further reduce the cost of classification systems and extend their applicability.
In summary, this paper presents an initial exploration of open-world text classification, including problem formulation, methodology, and empirical results. We hope this work can inspire more research in open-world learning for NLP. As an emerging field, open-world classification demands more algorithms, datasets, and evaluation metrics to truly unleash its potential.
|
2306.02342 | Deep Optimal Transport: A Practical Algorithm for Photo-realistic Image
Restoration | We propose an image restoration algorithm that can control the perceptual
quality and/or the mean square error (MSE) of any pre-trained model, trading
one over the other at test time. Our algorithm is few-shot: Given about a dozen
images restored by the model, it can significantly improve the perceptual
quality and/or the MSE of the model for newly restored images without further
training. Our approach is motivated by a recent theoretical result that links
between the minimum MSE (MMSE) predictor and the predictor that minimizes the
MSE under a perfect perceptual quality constraint. Specifically, it has been
shown that the latter can be obtained by optimally transporting the output of
the former, such that its distribution matches the source data. Thus, to
improve the perceptual quality of a predictor that was originally trained to
minimize MSE, we approximate the optimal transport by a linear transformation
in the latent space of a variational auto-encoder, which we compute in
closed-form using empirical means and covariances. Going beyond the theory, we
find that applying the same procedure on models that were initially trained to
achieve high perceptual quality, typically improves their perceptual quality
even further. And by interpolating the results with the original output of the
model, we can improve their MSE on the expense of perceptual quality. We
illustrate our method on a variety of degradations applied to general content
images of arbitrary dimensions. | Theo Adrai, Guy Ohayon, Tomer Michaeli, Michael Elad | 2023-06-04T12:21:53Z | http://arxiv.org/abs/2306.02342v2 | # Deep Optimal Transport: A Practical Algorithm for Photo-realistic Image Restoration
###### Abstract
We propose an image restoration algorithm that can control the perceptual quality and/or the mean square error (MSE) of any pre-trained model, trading one over the other at test time. Our algorithm is few-shot: Given about a dozen images restored by the model, it can significantly improve the perceptual quality and/or the MSE of the model for newly restored images without further training. Our approach is motivated by a recent theoretical result that links the minimum MSE (MMSE) predictor and the predictor that minimizes the MSE under a perfect perceptual quality constraint. Specifically, it has been shown that the latter can be obtained by optimally transporting the output of the former, such that its distribution matches the source data. Thus, to improve the perceptual quality of a predictor that was originally trained to minimize MSE, we approximate the optimal transport by a linear transformation in the latent space of a variational auto-encoder, which we compute in closed-form using empirical means and covariances. Going beyond the theory, we find that applying the same procedure on models that were initially trained to achieve high perceptual quality typically improves their perceptual quality even further. By interpolating the results with the original output of the model, we can improve their MSE at the expense of perceptual quality. We illustrate our method on a variety of degradations applied to general content images of arbitrary dimensions.
## 1 Introduction
Many image restoration algorithms aim to recover a clean source image from its degraded version. The performance of such algorithms is often evaluated in terms of their average _distortion_, which measures the discrepancy between restored images and their corresponding clean sources, as well as _perceptual quality_, which refers to the extent to which restored images resemble natural images. The work in [2] exposed a fundamental trade-off between distortion and perceptual quality, where the latter is measured using a _perceptual index_ that quantifies the statistical divergence between the distribution of restored images and the distribution of natural images. The
Figure 1: The \(\mathcal{W}_{2}\)-MSE trade-off [1].
trade-off curve reveals the predictor that achieves the lowest possible distortion, denoted as \(\mathbf{D_{max}}\), while maintaining perfect perceptual quality (refer to fig. 1).
Following the methodology introduced by [2], it has become common practice to compare restoration methods on the perception-distortion (PD) plane, with many methods aiming to reach the elusive \(\mathbf{D_{max}}\) point. In this paper, we present a practical approach to approximate the \(\mathbf{D_{max}}\) predictor, where distortion is measured using the MSE and perceptual quality is measured by the Wasserstein-\(2\) distance (\(\mathcal{W}_{2}\)) between the distributions of restored and real images. Our approach is based on the recent work [1], which demonstrated that the \(\mathbf{D_{max}}\) predictor can be obtained using _optimal transport_ (OT) from the output distribution of the MMSE predictor to the distribution of natural images. Thus, by applying such an optimal transport plan to the MMSE restoration of a degraded image, we can produce \(\mathbf{D_{max}}\) estimates.
Although progress has been made in finding OT plans between image distributions [4; 5; 6], it remains a challenging task, particularly for high-dimensional distributions. Therefore, we propose an approximation method for the \(\mathbf{D_{max}}\) estimator by performing transportation in the latent space of a pre-trained auto-encoder. A similar strategy was successfully employed in the context of image generation in [7], showing effectiveness in reducing complexity and preserving details.
Inspired by the style transfer literature [8; 9; 10], we assume that the latent representations follow a Multivariate Gaussian (MVG) distribution. Thus, by considering the first and second-order statistics of the embedded MMSE estimates and embedded natural images, we can compute the well-known closed form solution of the OT operator between two Gaussians. To further reduce complexity, we make additional assumptions about the structure of the latent covariance matrices, enabling the computation of the OT operator with as few as 10 _unpaired_ MMSE restored and clean samples. This approach leads to a few-shot algorithm that significantly enhances visual quality.
Interestingly, our method can even improve the visual quality of generative models that were trained to achieve high perceptual quality in the first place (see fig. 2). Furthermore, by adjusting a single interpolation parameter, we can trade off perception for distortion, resulting in marginal improvements in the distortion performance of some regression models that were trained to prioritize source fidelity. We demonstrate the improved photo-realism of our approach on a variety of tasks and models, including GAN and diffusion-based methods, using high-resolution (_e.g._, \(512^{2}\) px) general-content images with arbitrary aspect ratios. 1
Footnote 1: Our code is publicly available at [https://github.com/theoad/dot-dmax](https://github.com/theoad/dot-dmax).
## 2 Related Work
Throughout the paper, we distinguish between two kinds of restoration algorithms: distortion and perception focused. The former category includes traditional methods that minimize distortion (e.g.,
Figure 2: Our few-shot algorithm improves the visual quality of any estimator at test time. For example, we can improve the photo-realism of DDRM [3] even further.
MSE) [11; 12; 13; 14; 15]. The latter category includes more recent works that usually involve generative models like Generative Adversarial Networks (GANs) [16; 17; 18], or diffusion-based techniques [3; 19].
This paper searches for the theoretical \(\mathbf{D_{max}}\) estimator which minimizes the MSE under a perfect perceptual quality constraint. However, our method is not a stand-alone restoration algorithm, i.e., its input is not the degraded image. Rather, it can be applied on top of any _existing_ predictor. We provide a new way to potentially improve the performance (either MSE or perceptual) of any given estimator in a few-shot, plug-and-play fashion.
Provided with an image latent representation method (e.g., an auto-encoder), our algorithm applies a linear transformation on all the overlapping patches of its input (after encoding). In this regard, its functioning is not far from classical image restoration methods [14; 15; 20].
### Wasserstein-2 transport
While many successful approaches exist to compute the \(\mathcal{W}_{2}\) distance between discrete, low- or medium-dimensional densities [21; 22], it is far more challenging to determine an optimal transport plan in the continuous, high-dimensional setting. In fact, the task of even computing the Wasserstein distance (without its optimal plan) on empirical distributions has drawn significant attention with WGANs [23]. Thus, computing the transport operator requires optimizing the \(\mathcal{W}_{2}\) distance with an additional ordering constraint on the generator, which has proved to be a challenging task when dealing with real-world data sets [4; 5; 6]. An attempt to sidestep this difficulty would be to use the Gelbrich distance [1; 24], which lower bounds the \(\mathcal{W}_{2}\) distance and depends only on the first two moments of the distributions. Nevertheless, it is only a good estimate when the support of the distributions has elliptical level sets. To address this, one can find a low-dimensional embedding where (i) high dimensionality is no longer an issue, (ii) the distributions are not degenerate, and (iii) the Gelbrich distance applies. A possible option would be to use the bottleneck of an auto-encoder. This approach was adopted by style transfer works [8; 9; 10], and is also widely used by tools that compare image distributions, like the Frechet Inception Distance (FID) [25]. Both use a convolutional encoder and consider the pixels of the latent embedding (the vectors that span across the channel dimension) as an MVG distribution.
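As an illustration, the Gelbrich bound can be estimated from samples with a few lines of NumPy/SciPy; this is only a sketch of the quantity discussed above, using empirical means and covariances, and is not part of any released package.

```python
import numpy as np
from scipy.linalg import sqrtm

def gelbrich_distance(x1, x2):
    """x1, x2: (n, d) sample matrices. Returns the Gelbrich lower bound on W2,
    which depends only on the first two moments of the two distributions."""
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    S1, S2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    S1_half = sqrtm(S1).real
    cross = sqrtm(S1_half @ S2 @ S1_half).real
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))
```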
## 3 Background
### Optimal transport in Wasserstein Space
In this section, we briefly introduce key concepts of optimal transport theory that we draw from [26].
Let \(\mu\) and \(\nu\) be probability measures on \(\mathbb{R}^{m}\) and \(\mathbb{R}^{n}\), respectively. The set of all transport plans, which are probability measures \(\pi\) on \(\mathbb{R}^{m}\times\mathbb{R}^{n}\) with marginals \(\mu\) and \(\nu\), is denoted by \(\Pi(\mu,\nu)\). The _Wasserstein-2 distance_ between \(\mu\) and \(\nu\) is defined as follows:
\[\mathcal{W}_{2}^{2}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\mathbb{E}_{x,y\sim\pi} \left[\left\|x-y\right\|_{2}^{2}\right]. \tag{1}\]
A transport plan that achieves this infimum is called an _optimal transport plan_ between \(\mu\) and \(\nu\). If \(\pi\) is a degenerate distribution, there exists a measurable function \(\mathrm{T}_{\mu\longrightarrow\nu}:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}\) such that, if \(\mathbf{x}_{1}\sim\mu\) and \(\mathbf{x}_{2}\sim\nu\) are two random variables jointly distributed according to \(\pi\), then \(\mathbf{x}_{2}\overset{a.s.}{=}\mathrm{T}_{\mu\longrightarrow\nu}(\mathbf{x}_{1})\). We refer to \(\mathrm{T}_{\mu\longrightarrow\nu}\) as the _optimal transport operator_ between \(\mu\) and \(\nu\). Like in [5], we also abuse this notation even when \(\pi\) is non-degenerate, in which case \(\mathrm{T}_{\mu\longrightarrow\nu}\) represents a one-to-many (stochastic) mapping.
Additionally, when considering two Multivariate Gaussians (MVGs) \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) with \(\mathbf{x}_{1}\sim\mathcal{N}(\mu_{\mathbf{x}_{1}},\Sigma_{\mathbf{x}_{1}})\) and \(\mathbf{x}_{2}\sim\mathcal{N}(\mu_{\mathbf{x}_{2}},\Sigma_{\mathbf{x}_{2}})\), respectively, and assuming that \(\Sigma_{\mathbf{x}_{1}}\) and \(\Sigma_{\mathbf{x}_{2}}\) are non-singular, there exists a closed-form solution for the optimal transport operator, which is deterministic and linear:
\[\mathrm{T}_{p_{\mathbf{x}_{1}}\longrightarrow p_{\mathbf{x}_{2}}}^{\text{MVG}}(x_{1})=\Sigma_{\mathbf{x}_{1}}^{-\frac{1}{2}}\left(\Sigma_{\mathbf{x}_{1}}^{\frac{1}{2}}\Sigma_{\mathbf{x}_{2}}\Sigma_{\mathbf{x}_{1}}^{\frac{1}{2}}\right)^{\frac{1}{2}}\Sigma_{\mathbf{x}_{1}}^{-\frac{1}{2}}\cdot(x_{1}-\mu_{\mathbf{x}_{1}})+\mu_{\mathbf{x}_{2}}, \tag{2}\]
where a symmetric and positive definite square root of the matrices is chosen.
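For reference, eq. (2) can be evaluated directly from (empirical) means and covariances; the sketch below assumes both covariance matrices are symmetric positive definite and the function name is our own.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(mu1, S1, mu2, S2):
    """Closed-form W2-optimal map of eq. (2): returns T such that T(x1) is
    distributed as N(mu2, S2) when x1 ~ N(mu1, S1)."""
    S1_half = sqrtm(S1).real
    S1_half_inv = np.linalg.inv(S1_half)
    A = S1_half_inv @ sqrtm(S1_half @ S2 @ S1_half).real @ S1_half_inv
    return lambda x: A @ (x - mu1) + mu2
```

In our setting, the means and covariances would be the empirical statistics of latent patches, as described in Section 4.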
### Wasserstein-2 MSE tradeoff
We build upon the problem setting introduced in [1; 2] to establish our analysis. We consider the following scenario: \(\mathbf{x}\in\mathbb{R}^{n}\) represents a source natural image, \(\mathbf{y}\in\mathbb{R}^{m}\) represents its degraded version, and we assume that the posterior \(p_{\mathbf{x}|\mathbf{y}}(\cdot|y)\) is non-degenerate for almost any \(y\). Our objective is to construct an estimator \(\mathbf{\hat{x}}\) that predicts \(\mathbf{x}\) given \(\mathbf{y}\). A valid estimator \(\mathbf{\hat{x}}\) should be independent of \(\mathbf{x}\) given \(\mathbf{y}\).
Let \(\mathbf{x}^{*}\) denote the MMSE estimator that achieves the minimal MSE, _i.e._, \(\text{MSE}(\mathbf{x},\mathbf{x}^{*})=D_{\min}\). Additionally, let \(\mathbf{\hat{x}}_{0}\) denote the \(\mathbf{D}_{\text{max}}\) estimator, which, among all estimators satisfying \(\mathcal{W}_{2}(p_{\mathbf{x}},p_{\hat{\mathbf{x}}_{0}})=0\), attains the minimal MSE, namely, \(\text{MSE}(\mathbf{x},\mathbf{\hat{x}}_{0})=D_{\max}\) (refer to fig. 1). Notably, as discussed in [1], these estimators have a compelling property: their joint distribution \(p_{\hat{\mathbf{x}}_{0},\mathbf{x}^{*}}\) is an optimal transport plan between \(\mathbf{x}^{*}\) and \(\mathbf{x}\), characterized by the following optimization problem:
\[p_{\hat{\mathbf{x}}_{0},\mathbf{x}^{*}}\in\operatorname*{arg\,min}_{p_{\mathbf{x}_{1},\mathbf{x}_{2}}\in\mathbf{\Pi}(p_{\mathbf{x}},p_{\mathbf{x}^{*}})}\mathbb{E}\left[\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{2}^{2}\right]. \tag{3}\]
In other words, finding \(\mathbf{\hat{x}}_{0}\) is equivalent to finding an optimal transport plan from \(\mathbf{x}^{*}\) to \(\mathbf{x}\). Then, the \(\mathbf{D}_{\text{max}}\) estimator is simply \(\mathbf{\hat{x}}_{0}=\text{T}_{p_{\mathbf{x}^{*}}\rightarrow p_{\mathbf{x}}}(\mathbf{x}^{*})\).
This estimator is particularly useful as it allows to obtain any point on the perception-distortion function through a naive linear interpolation with the MMSE estimator. Specifically, we can define the interpolated estimator \(\mathbf{\hat{x}}_{P}\) as follows:
\[\mathbf{\hat{x}}_{P}=(1-\alpha)\mathbf{\hat{x}}_{0}+\alpha\mathbf{x}^{*}, \tag{4}\]
where \(0\leq\alpha\leq 1\) is an interpolation constant [1] that depends on the perceptual index of \(\mathbf{x}^{*}\) and the desired perceptual index \(0\leq P=\mathcal{W}_{2}(\mathbf{x},\mathbf{\hat{x}}_{P})\) (refer to fig. 1).
## 4 Method
We start by describing the general flow of our proposed algorithm, and then move to elaborate on each of its components.
Theoretically speaking, our algorithm, combined with any given MMSE estimator, is an approximation of the \(\mathbf{D}_{\text{max}}\) estimator. In practice, however, it can be combined with any type of estimator and potentially improve its perceptual quality. I.e., it can be combined with an estimator that optimizes distortion (e.g., SwinIR [11]) and improve its perceptual quality at the expense of distortion, or it can
Figure 3: Trading perception and distortion using out-of-the-box predictors, wrapped with our method. Using eq. (4) with \(\alpha\in[0,1]\) we interpolate a given predictor (orange) and our improved \(\mathbf{D}_{\text{max}}\) estimation (green), to approximate the PD FID-MSE function (blue curve). With \(\alpha\in[-1,0]\cup[1,2]\) we extrapolate outside of the PD curve (light gray), beyond the theory-inspired area, to further improve performance.
be combined with an estimator that optimizes perceptual quality (e.g., DDRM [3]) and improve its perceptual quality even further. As a result, our algorithm is agnostic to the type of degradation. To clarify, our algorithm is not really a restoration algorithm by itself, but rather a wrapper which can potentially improve the perceptual quality of any given estimator.
### The algorithm
The main goal of our algorithm is to approximate the optimal transport plan between \(p_{\hat{\mathbf{x}}}\) and \(p_{\mathbf{x}}\), namely, \(\mathrm{T}_{p_{\hat{\mathbf{x}}}\longrightarrow p_{\mathbf{x}}}\), where \(\hat{\mathbf{x}}\) is a given estimator. Theoretically, with such an operator, one could optimally transport \(\hat{\mathbf{x}}\) such that, with minimal loss in MSE performance, the transported estimator would attain perfect perceptual quality. Computing \(\mathrm{T}_{p_{\hat{\mathbf{x}}}\longrightarrow p_{\mathbf{x}}}\) for high-dimensional distributions is a difficult task, involving complex (often adversarial) optimization (see discussion in section 2). To solve this, we make several assumptions and approximations that allow us to efficiently compute a closed-form transport operator that approximates \(\mathrm{T}_{p_{\hat{\mathbf{x}}}\longrightarrow p_{\mathbf{x}}}\). The flow of the algorithm is presented in fig. 4, and goes as follows:
**Encoding**: In the training stage we encode \(N\) natural images \(\{x^{(i)}\}_{i=1}^{N}\) and \(N\) restored samples \(\{\hat{x}^{(i)}\}_{i=1}^{N}\) (unpaired) into their latent representations, \(\{x_{e}^{(i)}\}_{i=1}^{N}\) and \(\{\hat{x}_{e}^{(i)}\}_{i=1}^{N}\), respectively. The size of each image sample is denoted by \((3,H,W)\), where \(H\) and \(W\) are the height and width, respectively, and the size of their latent representation is denoted by \((c,H_{e},W_{e})\). In the inference stage we perform the same process but only on a single estimate \(\hat{x}\), resulting again in a latent representation of size \((c,H_{e},W_{e})\)
**Unfold**: From each latent representation we extract all the overlapping patches of size \((c,p,p)\), where \(p\) is the height and width of each patch.
**Flatten each patch and aggregate**: We flatten all the extracted patches to obtain 1-dimensional vectors of size \(cp^{2}\), which we assume to come from an MVG distribution. In the training stage we compute their empirical mean and covariance matrix (aggregating over the \(N\) dimension), and compute the optimal transport in closed form, \(\mathrm{T}^{\text{MVG}}_{p_{\hat{\mathbf{x}}_{e}}\longrightarrow p_{\mathbf{x}_{e}}}\), using eq. (2).
Figure 4: With a pre-trained VAE, we estimate the first and second order statistics of the latent patches of natural images and the restorations of some given estimator. At inference time, we use the closed-form OT eq. (2) operator between MVG distributions to transport the latent representation of a given restored sample, which, after decoding, increases the visual quality of the restored sample. For a fully detailed explanation of the algorithm, see section 4.
**Matmul and unflatten**: We apply the pre-computed transport operator using a simple matrix-vector multiplication on the flattened version of the patches extracted from the latent representation \(\hat{x}_{e}\). We then reshape each vector to the original patch size \((c,p,p)\) (unflatten).
**Fold**: We rearrange the transported patches back to the original size of the latent representation (reversing the unfold operation). Since the patches overlap, we simply average the shared pixels.
**Decoding**: To produce our final enhanced estimation \(\hat{x}_{0}\), we decode the transported latent image back to the pixel space using the decoder of the VAE.
Together, the inference steps form an end-to-end approximation of the desired transport operator \(\mathrm{T}_{p_{\hat{\mathbf{x}}}\longrightarrow p_{\mathbf{x}}}\). In appendix B we elaborate on the choices and practical considerations of our algorithm.
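To make these steps concrete, the sketch below strings together the unfold, flatten, transport, fold, and averaging operations in PyTorch. The empirical patch statistics, the padding choice, and the function names are assumptions standing in for the quantities estimated in the training stage; in the full flow, `z` would be the encoder output for \(\hat{x}\), and the returned latent would then be decoded back to pixel space.

```python
import torch
import torch.nn.functional as F

def psd_sqrt(mat):
    # symmetric PSD square root via eigendecomposition
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.T

def mvg_transport_matrix(cov_src, cov_tgt):
    # linear part of the closed-form Gaussian OT map in eq. (2)
    src_half = psd_sqrt(cov_src)
    src_inv_half = torch.linalg.inv(src_half)
    return src_inv_half @ psd_sqrt(src_half @ cov_tgt @ src_half) @ src_inv_half

def transport_latent(z, mu_src, cov_src, mu_tgt, cov_tgt, p=3):
    """z: (1, c, He, We) latent of a restored image; the statistics are over
    flattened (c*p*p)-dimensional latent patches, estimated at training time."""
    _, c, He, We = z.shape
    patches = F.unfold(z, kernel_size=p, padding=p // 2)          # (1, c*p*p, L)
    A = mvg_transport_matrix(cov_src, cov_tgt)                    # (c*p*p, c*p*p)
    moved = A @ (patches[0] - mu_src[:, None]) + mu_tgt[:, None]  # transport each patch
    # fold back and average the overlapping pixels
    summed = F.fold(moved[None], (He, We), kernel_size=p, padding=p // 2)
    counts = F.fold(torch.ones_like(patches), (He, We), kernel_size=p, padding=p // 2)
    return summed / counts
```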
## 5 Experiments
In all of our experiments we use the encoder and decoder of the VAE [27] from stable-diffusion [7].
**The pre-trained models we evaluate:** We apply our latent transport method (described in section 4) on SwinIR [11], Restormer [12] and Swin2SR [13], all of which attempted to minimize average pixel distortion using a supervised regression loss on paired image samples. Additionally, we apply our algorithm on models that are trained to achieve high perceptual quality, and show that we can improve their visual quality even further. As such, we tested two benchmark models in high perceptual quality image restoration: ESRGAN [16], a GAN-based method, and DDRM [3], a diffusion-based method. Beyond our original goal to improve perceptual quality, we demonstrate that we can also traverse the \(\mathcal{W}_{2}\) -MSE tradeoff using any restoration model (e.g., SwinIR, ESRGAN). To do so, we pick any of the aforementioned algorithms and apply our method to improve its perceptual quality, leading to a new estimator. We then interpolate the original algorithm and its improved version using eq. (4), adjusting \(\alpha\) to traverse the tradeoff. To clarify, we plug the original algorithm as \(\mathbf{x}^{*}\) in eq. (4) (instead of the theoretical MMSE estimator), and plug our improved version as \(\hat{\mathbf{x}}_{0}\) (instead of the theoretical \(\mathbf{D}_{\text{max}}\) estimator).
**Restoration tasks:** We showcase our algorithm on Single-Image Super-Resolution (SISR), denoising of Additive White Gaussian Noise (AWGN), JPEG [28] decompression, Noisy Super-Resolution (NSR) and Compressed Super-Resolution (CSR). Training and inference of our algorithm are performed on each restoration model separately, and the evaluation is performed on the restoration task that corresponds to the given model.
**Data sets:** The transport operator is computed using two disjoint sets of 10 randomly picked images from the ImageNet [29] dataset train split. The first set is used to approximate the predictor's latent statistics (\(\mu_{\hat{\mathbf{x}}_{e}}\), \(\Sigma_{\hat{\mathbf{x}}_{e}}\)): we degrade each image according to the restoration task the predictor is intended to solve, compute the 10 restored outputs, embed the results and compute the embeddings' statistics. The second set is embedded into the latent representation without further modification and serves to approximate the natural image latent statistics (\(\mu_{\mathbf{x}_{e}}\), \(\Sigma_{\mathbf{x}_{e}}\)). For all models except DDRM [3] and Swin2SR [13], we report the performance on the 50,000 validation samples of ImageNet [29] following [7]. Because of its computational complexity, DDRM [3] reported its performance on a subset of 1,000 ImageNet [29] validation samples. For Swin2SR [13], we use the official DIV2K [30] restored samples provided by the authors. Although our algorithm can be applied to images with arbitrary aspect ratios, all the tested models were trained on square images. Thus, we resize the samples to \(512\times 512\) pixels following the pre-processing procedure of DDRM [3].
**Metrics:** In addition to Peak Signal-to-Noise Ratio (PSNR), we evaluate distortion performance with the Structural Similarity Index Measure (SSIM) [31] and the Learned Perceptual Image Patch Similarity (LPIPS) [32], both of which are better suited for natural image comparison [32]. To evaluate perceptual quality, we use the Inception Score (IS) [33], the Fréchet Inception Distance (FID) [25] and the Kernel Inception Distance (KID) [34].
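Of the metrics above, PSNR is simple enough to state inline; the perceptual metrics (FID, KID, IS) rely on pre-trained feature extractors and are usually computed with existing libraries. The snippet below is a reference PSNR for images scaled to \([0,1]\), included only as an illustration.

```python
import torch

def psnr(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> float:
    # peak signal-to-noise ratio in dB for images in [0, max_val]
    mse = torch.mean((x - y) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))
```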
### Quantitative results
As reported in table 1, our algorithm can trade perceptual quality for distortion (and vice versa) at test time. We sometimes even manage to improve the pre-trained predictor's PSNR, even of regression models like SwinIR and Swin2SR. When using \(\alpha\leq 0\), we systematically improve the predictor's perceptual performance (FID, KID, IS), even for estimators which were designed to achieve photo
realism in the first place, e.g., ESRGAN and DDRM. On Non-Local-Means (NLM) [15], an older, non deep-learning denoising algorithm, our method marginally improves all metrics.
While, in theory, our procedure to traverse the perception-distortion tradeoff should only include values of \(\alpha\) in the range \([0,1]\) (see eq. (4)), we also tried to use values outside of this range. As shown in fig. 3, with \(\alpha\in[-1,0]\cup[1,2]\) we can obtain even better PD curves, and sometimes improve the perceptual quality and/or the distortion of the methods even further. For instance, the PD curve of Swin2SR obtained using \(\alpha\in[1,2]\) is strictly better than the one obtained using \(\alpha\in[0,1]\). This deviation from the theory can be explained by the implementation choices discussed in section 4: we perform the transport in the latent space, not the pixel space. Additionally, we use FID as the perceptual index to measure visual quality, whereas the theory presented in section 3.2 only concerns the Wasserstein-2 distance. In practice, the sharpened details that appear in \(\mathbf{\hat{x}}_{0}\) but not in \(\mathbf{x}^{*}\) are either amplified when \(\alpha\in[-1,0]\) or subtracted from (instead of added to) \(\mathbf{x}^{*}\) when \(\alpha\in[1,2]\). We leave the formal analysis of this interesting phenomenon for future research.
### Qualitative results
Qualitative results on arbitrary image sizes and aspect ratios are shown in fig. 5. We observe a consistent improvement in photo-realism when transporting existing restoration algorithms using our method. Hence, the qualitative results align with the quantitative perceptual performance gains.
### Training details & ablation study
All the results presented in figs. 3 and 5 and table 1 were obtained using the same hyper-parameters. We used the "f8-ft-MSE" fine-tuned version of stable-diffusion's VAE from Hugging-Face's diffusers library [35]. For the training stage we use 20 randomly-drawn images from the ImageNet training set
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{3}{c}{Distortion} & \multicolumn{3}{c}{Perception} \\ \cline{3-8} & Signal & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & FID \(\downarrow\) & IS \(\uparrow\) & KID\(\times 10^{3}\downarrow\) \\ \cline{2-8} Task & \(\mathbf{x}\) & \(\infty\) & 1 & 0 & 0 & \(240.53\pm 4.42\) & 0 \\ & \(\mathbf{D}(\mathbf{E}(\mathbf{x}))\) & 27.10 & 0.81 & 0.13 & 0.24 & \(234.71\pm 4.04\) & 0.02\(\pm 0.07\) \\ \hline \multirow{2}{*}{\(\text{SISR}_{\times 4}\)} & SwinIR [11] & 28.10 & **0.84** & **0.24** & 2.54 & \(201.52\pm 4.85\) & 1.24\(\pm 0.24\) \\ & \(\mathbf{\hat{x}}_{0.9}\) & **28.15** & **0.84** & **0.24** & 2.80 & \(198.69\pm 2.97\) & 1.38\(\pm 0.24\) \\ & \(\mathbf{\hat{x}}_{-0.2}\) & 25.08 & 0.77 & 0.25 & **1.19** & **216.74\(\pm 4.26\)** & **0.38\(\pm 0.89\)** \\ \hline \multirow{2}{*}{\(\text{JPEG}_{q=10}\)} & SwinIR [11] & **29.68** & **0.86** & **0.30** & 8.95 & \(161.73\pm 3.36\) & 6.52\(\pm 0.77\) \\ & \(\mathbf{\hat{x}}_{1.1}\) & 29.58 & **0.86** & **0.30** & 8.36 & \(166.50\pm 3.12\) & 6.08\(\pm 0.75\) \\ & \(\mathbf{\hat{x}}_{-0.2}\) & 23.74 & 0.76 & 0.31 & **7.56** & **166.65\(\pm 3.58\)** & **5.68\(\pm 0.83\)** \\ \hline \multirow{2}{*}{\(\text{AWGN}_{\sigma=50}\)} & Restormer [12] & **30.18** & **0.86** & 0.26 & 5.21 & \(178.62\pm 2.83\) & **3.29\(\pm 0.56\)** \\ & \(\mathbf{\hat{x}}_{1.1}\) & 30.09 & **0.86** & **0.25** & 4.63 & \(183.36\pm 3.20\) & 2.61\(\pm 1.53\) \\ & \(\mathbf{\hat{x}}_{1.7}\) & 27.26 & 0.82 & **0.25** & **2.73** & **198.93\(\pm 5.13\)** & **1.76\(\pm 1.58\)** \\ \hline \multirow{2}{*}{\(\text{SR}_{\times 4}\text{JPEG}_{q=10}\)} & Swin2SR [13] & 19.75 & **0.55** & 0.53 & \(205.00\) & \(5.95\pm 0.49\) & 40.68\(\pm 3.34\) \\ & \(\mathbf{\hat{x}}_{0.8}\) & **19.81** & **0.55** & 0.53 & \(209.82\) & \(5.91\pm 0.69\) & 43.28\(\pm 3.86\) \\ & \(\mathbf{\hat{x}}_{1.9}\) & 18.44 & 0.49 & **0.51** & **168.12** & **6.36\(\pm 0.69\)** & **19.95\(\pm 2.84\)** \\ \hline \multirow{2}{*}{\(\text{SISR}_{\times 4}\)} & ESRGAN [16] & 26.77 & 0.80 & **0.21** & 1.06 & \(221.68\pm 3.06\) & 0.43\(\pm 0.14\) \\ & \(\mathbf{\hat{x}}_{0.7}\) & **27.00** & **0.81** & **0.21** & 1.51 & \(215.87\pm 3.64\) & 0.56\(\pm 0.21\) \\ & \(\mathbf{\hat{x}}_{-0.2}\) & 24.84 & 0.74 & 0.23 & **0.80** & **221.89\(\pm 2.53\)** & **0.30\(\pm 0.20\)** \\ \hline \multirow{2}{*}{\(\text{SR}_{\times 4}\text{AWGN}_{\sigma=50}\)} & DDRM [3] & **26.10** & **0.75** & 0.34 & \(36.44\) & \(43.52\pm 3.33\) & 5.09 \\ & \(\mathbf{\hat{x}}_{1.2}\) & 25.91 & **0.75** & **0.33** & \(33.68\) & \(44.90\pm 4.06\) & 3.88 \\ & \(\mathbf{\hat{x}}_{1.7}\) & 24.48 & 0.70 & 0.35 & **29.05** & **47.91\(\pm 2.69\)** & **1.47** \\ \hline \multirow{2}{*}{\(\text{AWGN}_{\sigma=50}\)} & NLM [15] & 26.09 & 0.71 & 0.44 & \(12.84\) & \(148.71\pm 3.75\) & 8.73\(\pm 0.91\) \\ & \(\mathbf{\hat{x}}_{0.8}\) & **26.24** & **0.72** & **0.43** & **12.46** & \(148.78\pm 2.49\) & **8.60\(\pm 0.95\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Using eq. (4), our algorithm can trade-off perception and distortion at inference time on any predictor [3, 11, 12, 13, 15, 16] and image restoration task. For each task, we report choices of \(\alpha\) that optimize perception and distortion.
(10 images which we use as the natural images set, and 10 images which we degrade and then restore with the estimator). We used a patch-size of \(p=3\) in the latent space.
Thanks to its simplicity, for each restoration task, our few-shot algorithm requires just a single GPU, and a few seconds for both training and inference.
We turn to detail some considerations about practical aspects of our algorithm which we empirically evaluate on the popular SISR\({}_{\times 4}\) task for the ESRGAN estimator.
**Patch size**: We experiment with increasing patch-sizes when unfolding the latent image (see appendix B.2). \(p=\{3,5\}\) yielded the best PSNR and FID. Smaller patch size (\(p=1\)) resulted in worse FID and bigger size \(7\leq p\leq 15\) yielded slightly worse PSNR.
**Training size**: As discussed in appendix B.4, each image contributes thousands of samples to the computation of the OT operator. Still, we expect the empirical statistics estimation to benefit from a larger sample size \(S\). To confirm this, we repeated the visual enhancement experiments while varying the number of training samples. Surprisingly, we observe no change in the performance of the evaluated metrics for \(S=\{10^{5},10^{4},10^{3},10^{2}\}\), i.e., approximating the distribution statistics with
Figure 5: Our method notably improves the results of several benchmark predictors on various degradations.
100 samples is as good as using 100,000 samples, and this is true regardless of the chosen patch size. Moreover, with \(S=10\), we observe no performance drop compared to using a larger sample size only for small patch sizes of \(p\leq 5\). This suggests that our method can be successfully deployed in few-shot settings, where the number of available samples is small.
**Paired vs. unpaired samples**: Surprisingly, using paired images to compute the distribution parameters yielded better PSNR but worse FID. We suspect that using paired samples induces a bias which results in worse covariance estimation.
**Transporting the degraded measurement directly**: Applying our algorithm on the degraded input directly led to insufficient results as we see in appendix B.7.
## 6 Discussion
Finally, it has been recently shown that the posterior sampler is the only estimator attaining perfect perceptual quality while producing outputs that are perfectly consistent with the degraded input [37]. As such, the \(\mathbf{D}_{\text{max}}\) estimator cannot be expected to produce restorations that are consistent with the degraded input.
**Potential impact**: Instead of using sophisticated and data-hungry generative models, we show it is possible to obtain photo-realistic results using simple tools like MMSE estimators and VAEs. We hope our few-shot algorithm will inspire other simple and practical image restoration methods.
**Potential misuse**: Our algorithm aims at improving the perceptual quality of existing algorithms. However, when using a biased training set, this could potentially cause bias in the enhanced restoration as well. This could potentially harm the results of medical image diagnosis, for example.
|
2307.15517 | A Dataflow Compiler for Efficient LLM Inference using Custom
Microscaling Formats | Model quantization represents both parameters (weights) and intermediate
values (activations) in a more compact format, thereby directly reducing both
computational and memory cost in hardware. The quantization of recent large
language models (LLMs) faces challenges to achieve competitive memory density
compared to other models such as convolutional neural networks, since values in
LLMs require larger dynamic ranges.
Current hardware can expedite computation for LLMs using compact numerical
formats such as low-bitwidth integers or floating-point numbers. Each has
advantages: integer operations simplify circuit design, whereas floating-point
calculations can enhance accuracy when a wider dynamic range is required. In
this work, we seek an efficient data format that combines the best of both
worlds: Microscaling (MX) formats. MX formats are efficient data formats that
achieve both large dynamic ranges and high memory density.
In this paper, we propose a compiler named MASE for exploring mixed-precision
MX formats on dataflow hardware accelerators for LLM inference. Our main
contributions are twofold. First, we propose a novel orchestration abstraction
to explore both software and hardware optimizations with new data formats.
Second, MASE achieves LLM inference at an average precision of 4-bits, with
minimal to no accuracy degradation. To our knowledge, MASE represents the first
effort to harness fine-grain multi-precision MX formats in the design of LLM
hardware accelerators. Over a range of LLMs and datasets, MASE achieves an
average improvement of 24% in $\Delta$ accuracy with an overhead of only 3% in
energy efficiency compared to designs using 8-bit fixed-point numbers. | Jianyi Cheng, Cheng Zhang, Zhewen Yu, Christos-Savvas Bouganis, George A. Constantinides, Yiren Zhao | 2023-07-28T12:19:19Z | http://arxiv.org/abs/2307.15517v2 | Fast Prototyping Next-Generation Accelerators for New ML Models using MASE: ML Accelerator System Exploration
###### Abstract
Machine learning (ML) accelerators have been studied and used extensively to compute ML models with high performance and low power. However, designing such accelerators normally takes a long time and requires significant effort.
Unfortunately, the pace of development of ML software models is much faster than the accelerator design cycle, leading to frequent and drastic modifications in the model architecture, thus rendering many accelerators obsolete. Existing design tools and frameworks can provide quick accelerator prototyping, but only for a limited range of models that can fit into a single hardware device, such as an FPGA. Furthermore, with the emergence of large language models, such as GPT-3, there is an increased need for hardware prototyping of these large models within a many-accelerator system to ensure the hardware can scale with the ever-growing model sizes.
In this paper, we propose an efficient and scalable approach for exploring accelerator systems to compute large ML models. We developed a tool named MASE that can directly map large ML models onto an efficient streaming accelerator system. Over a set of ML models, we show that MASE can achieve better energy efficiency than GPUs when computing inference for recent transformer models. Our tool will be open-sourced upon publication.
## I Introduction
Machine Learning (ML) has demonstrated state-of-the-art performance in domains such as Computer Vision (CV) [35, 38] and Natural Language Processing (NLP) [7, 15]. ML has now been widely deployed in many applications, such as autonomous driving [23, 44], recommendation systems [2], healthcare [22], and more. Often, ML applications necessitate a substantial amount of energy for computation due to the craving for superior performance and the ever-growing model sizes. To facilitate meeting the demand for low-energy computing, ML hardware accelerators are proposed as they utilize a particular architecture designed for ML computation, consuming significantly less energy than general-purpose processors while attaining a similar performance level [20, 34].
Designing an efficient ML accelerator requires a tremendous amount of knowledge and effort from hardware experts. It could take years to implement an ML accelerator in an application-specific integrated circuit (ASIC) and months for prototyping on a reconfigurable device such as a field-programmable gate array (FPGA). The gap between the development cycles of software ML models and hardware acceleration has been rapidly widening due to the increasing size and complexity of models. Furthermore, new models may significantly alter their program behaviour, making the hardware architecture obsolete and necessitating a complete redesign from scratch.
In order to address these challenges, two traditional approaches have been studied. The first approach involves transforming the ML model using software compilers, enabling efficient mapping onto a pre-existing accelerator architecture [31]. However, the fixed accelerator architecture imposes a performance upper bound. The second approach focuses on fast prototyping of hardware accelerators using high-level synthesis (HLS) tools, targeting ASICs [8, 33] or FPGAs [6, 17]. However, these tools primarily keep the software model unchanged and aim to explore solely an efficient hardware accelerator architecture. This naturally raises the question of _how we can systematically explore the opportunities for concurrent co-optimization of software and hardware_?
Previous research has explored the co-optimization of software and hardware for Deep Neural Networks (DNNs) from an Automated Machine Learning (AutoML) perspective. The general idea is to include hardware design parameters in the canonical AutoML search space. _However, this approach is still relatively limited in terms of hardware architecture exploration._ Co-design AutoML typically focuses on a small set of parameters in a Processing Element (PE) array-based architecture [1, 9], such as buffer sizes and PE array dimensions. Additionally, existing Co-design AutoML strategies are constrained to a single type of network, such as Convolutional Neural Networks (CNNs), due to the hindrance posed by the underlying 'template' hardware accelerator when exploring alternative network architectures, like transformers [7]. Therefore, the aim of this work is to develop a system-level flow that facilitates the mapping of various networks to custom silicon.
Many prior DNN hardware accelerator design frameworks [6, 18, 36, 39] focus on automating the design of single-device accelerators. This implies that DNNs must either fit within the designated device or be reused on the same hardware.
However, the emergence of large language models such as GPT and OPT [7, 42], which consist of millions or even billions of parameters, has necessitated the exploration of multi-accelerator systems instead of traditional single-accelerator systems. Multi-accelerator systems are better suited to accommodate the ever-growing models, which raises the question of how we can automatically partition available hardware resources across multiple devices for different types of DNNs with varying runtime constraints.
In this paper, we are interested in exploiting custom hardware computing for a wide variety of ML models with optimization in both software and hardware on a many-accelerator system. Our work aims to solve the following challenges:
_1) Unified Abstraction for Software and Hardware_
The development flows for ML models and hardware design are currently separate. The ML models are treated as a 'read-only' input to the hardware synthesis tools. How can we lift this restriction by enabling ML model optimization in the hardware flow?
_2) Efficiency and Scalability_
Most new ML models are large and contain millions of operations and cannot be efficiently mapped onto a single hardware device. How can we efficiently exploit parallelism in a scalable hardware system?
_3) Hardware Reusability_
Hardware designers may have already implemented efficient hardware blocks that could be reused for computing new ML models. How can we efficiently explore hardware optimization with these user-defined blocks?
_4) Correctness_
The user-defined blocks might not work correctly or efficiently with the blocks synthesised by MASE. How can we test and verify the final hardware design by making use of the software ML model?
We propose a novel hardware architecture exploration tool termed MASE, which stands for ML Accelerator System Exploration. MASE enables fast prototyping of efficient hardware accelerator systems for state-of-the-art ML models. The main contributions are:
* A novel intermediate representation that describes _both_ software and hardware at module level;
* A scalable hardware synthesis flow with automated and cross-device design space exploration that maps the state-of-the-art of ML models into efficient many-accelerator systems;
* Two case studies demonstrating how to use MASE to automatically quantize and accelerate the latest large-language models using custom arithmetic; and
* Over a set of ML models, MASE can achieve comparable performance and energy efficiency to GPUs when accelerating recent and large ML models.
The rest of the paper is organised as follows. Section II provides an overview of the MASE framework. Section III provides the background of recent trends in ML model designs, design tools for streaming ML accelerator and AutoML methods. Section IV explains how MASE realises the four challenges explained before. Section V provides two case studies on how MASE can be used for prototyping efficient accelerators for new transformer models.
## II Overview
This section presents an overview of MASE. MASE is integrated into PyTorch and can directly interface with PyTorch modules without requiring any rewriting. The top right of Figure 1 illustrates an example of how MASE can be used. In MASE, an ML model can undergo analysis and transformation
Fig. 1: An overview of the MASE tool flow. We propose MASE IR as an intermediate representation that can exploit both software and hardware optimization at the same level.
in both software and hardware domains. The black edges in Figure 1 depict the fundamental tool flow of MASE.
1. A user-defined model in PyTorch with pre-trained weights is translated into an intermediate representation (IR) in MASE, named MASE IR. The MASE IR of an ML model encompasses both the software algorithm and its corresponding hardware implementation.
2. Similar to most compiler frameworks, a model in MASE IR can undergo software transformations such as post-training quantization using MASE compiler passes (explained in Section IV-A).
3. The MASE IR contains comprehensive model information and can be trained or fine-tuned after transformation (explained in Section IV-B).
4. MASE IR also describes the accelerator architecture for the model, enabling efficient and scalable hardware transformations using MASE compiler passes, akin to software development (explained in Section IV-B).
5. At the backend of MASE, the optimized hardware design can be generated for a scalable accelerator system.
6. MASE incorporates an efficient testing framework to verify the equivalence between the software model and the hardware implementation.
The tool flow depicted in Figure 1 serves as an example development flow in MASE. In MASE, all the MASE passes can be applied _out-of-order_, similar to other compiler frameworks like LLVM [24] and MLIR [25].
In this paper, we demonstrate how MASE enables rapid prototyping of next-generation ML accelerators for new ML models. The red dashed edges in Figure 1 illustrate the proposed techniques for software and hardware co-optimization. Firstly, we show the efficient transformation of new ML models by leveraging the implementations of existing models in both software and hardware. Secondly, we present how the hardware can be further customized in a scalable manner in interactive coordination with software transformations.
## III Background
In this section, we will first introduce the automated toolflows for ML hardware accelerators. Secondly, we compare the existing IR designs in the ML community. Finally, we review the related works on software and hardware co-design.
### _HLS for Hardware Accelerators_
Early research on hardware accelerators manually translated software models to HLS code [33, 8]. The results are often promising, but it takes a significant amount of effort and time to interpret the software model and perform manual hardware translation. Such engineering effort is usually repeated when accelerating a new model architecture.
There have been interests in fast prototyping hardware accelerators using HLS tools. High-level synthesis is a process that automatically translates high-level software programs into low-level hardware descriptions [12]. This enables software engineers without hardware background to customize their own hardware accelerators, or hardware engineers to efficiently and rapidly prototype hardware accelerators.
HLS4ML [18] is proposed for translating ML models into HLS code, resulting in custom hardware designs. AMD Xilinx FINN [6] supports hardware acceleration of CNN models. However, these tools keep the ML model unchanged and only exploit parallelism for hardware designs that produce the same results as the software ML model.
MASE aims to solve these problems as an efficient and scalable system-level HLS tool for ML. First, MASE utilises an efficient abstraction that enables software and hardware co-optimization. This allows the software model to be modified during hardware synthesis to generate better-quality designs. Second, MASE supports direct lowering from PyTorch to hardware accelerator systems, significantly reducing the development time for accelerator design.
### _Machine Learning and HLS IR_
There are a variety of tools for ML training and inference. These tools often use an intermediate representation (IR) to represent a model architecture. For instance, various IRs are widely used in software ML communities, including TorchScript [14], ONNX [4] and Torch FX [32]. Such an IR represents the model at the module level, where each operation is a predefined module or function in PyTorch. However, these IRs are designed specifically for processors and cannot describe custom hardware designs. MASE IR extends these IRs and enables custom hardware modelling for efficient hardware synthesis.
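As a point of reference for this module-level granularity, the snippet below uses PyTorch's own `torch.fx` tracer on a small model; each traced node corresponds to a module call or tensor method, which is the level of abstraction MASE IR builds on. The toy model itself is an arbitrary example, not one from the paper.

```python
import torch.nn as nn
from torch.fx import symbolic_trace

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.bn(self.conv(x)).relu()
        x = x.mean(dim=(2, 3))        # global average pooling
        return self.fc(x)

graph_module = symbolic_trace(TinyNet())
for node in graph_module.graph.nodes:
    print(node.op, node.target)       # e.g. call_module conv, call_method relu, ...
```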
In hardware acceleration, the software model is often represented in a target-independent software IR, such as MLIR [25] or LLVM [24]. This IR is then translated into a hardware design. MLIR is an IR that can express a program at different abstractions, also known as dialects, enabling efficient optimization at the appropriate level of abstraction. It supports both software and hardware abstractions [11]. LLVM, originally designed as a target-independent IR for software
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Features & TorchScript [14] & Torch FX [32] & ONNX [4] & MASE IR & MLIR [25] & LLVM [24] \\ \hline Granularity & module & module & module & module & tensor/vector/scalar & scalar \\ Executing in software & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Hardware mapping & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Training & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison between our IR and other existing IRs. MASE IR is the first _trainable_ IR that both describes the software model architecture and its hardware accelerator architecture.
programs, has also been used to express HLS programs [40]. However, these IRs represent an ML model at a lower granularity than the module level and cannot be easily transformed back to the original model. This poses challenges for training models at this level of abstraction. In contrast, MASE IR retains the model trainability, which creates opportunities for software and hardware co-design. A detailed comparison between MASE IR and existing IRs is presented in Table I.
### _Software and Hardware Co-Design_
The field of Automated Machine Learning (AutoML) has made significant progress in recent years, aiming to achieve high accuracy and hardware performance. From a software perspective, this involves discovering state-of-the-art models through Neural Architecture Search (NAS) [29, 45]. On the hardware side, the NAS process incorporates performance metrics like latency and power, and it also integrates the hardware metrics with the model architecture for a joint search [1].
HW-NAS-BENCH [26] investigated co-design for small datasets such as CIFAR-10 and CIFAR-100. The study measured hardware performance on edge devices and developed device-specific performance models. EDD [27] focused on searching for the optimal combination of parallelism, loop-tiling factors, and quantization strategies for a given model. The authors formulated the co-design problem as a differentiable loss function, which was then optimized using gradient descent. Similar works in this field include Auto-NBA [19], HAO [16], and Co-Exploration [21].
These related works primarily focused on mapping neural networks to a single device, and their evaluations often involved small or outdated models. In contrast, MASE offers node-level hardware generation and integration with arbitrary user-defined components across a multi-accelerator system. This capability enables MASE to support the deployment of large and state-of-the-art models. In Section V-A, we provide a case study to demonstrate how MASE efficiently searches for both efficient quantization and hardware mapping for these models. In the future, the MASE tool flow can be easily extended to other use cases, such as zero-shot NAS [28].
## IV Methodology
This section provides a detailed explanation of the MASE framework. Firstly, we present the specifications for the proposed MASE IR. Then, we demonstrate how software transformations can be performed on the MASE IR using MASE passes. Next, we elaborate on the process of hardware optimization and generation within the MASE IR. Finally, we describe the implementation of the testing framework, which enables co-simulations between the software model and the hardware design.
### _Unified IR for Software and Hardware_
We seek an efficient IR that provides a high-level description of ML models and can be transformed back into the PyTorch model. Designing an _efficient_ and _trainable_ IR for hardware accelerator design is crucial for effectively exploring both software and hardware aspects. By using the term "trainable", we mean that the IR can be converted back into a trainable module within a high-level framework like PyTorch, enabling the re-training of a modified IR.
In this work, we introduce a novel software and hardware co-design IR at the module level, called MASE IR. A MASE IR is a directed graph representation of an ML model and its accelerator architecture, consisting of multiple vertices connected by edges. Each vertex is referred to as a 'MASE node'. A MASE node represents the computation of an operation at the module level for software (an event) and a custom computation kernel for hardware (an instance). Each edge between two MASE nodes represents data dependence for software and data interface in wires for hardware. MASE IR preserves the original PyTorch model within the IR by
Fig. 2: An example of MASE IR being transformed by a sequence of transform passes. The changed parts between two consecutive IRs are highlighted in red. MASE passes can change an ML model in both IR and attributes. These passes can be run out of order because of the same abstraction, which opens up opportunities for software and hardware co-design.
associating it with the nodes and edges, enabling analysis, transformation, training, and execution of the model.
Figure 2 provides several examples of MASE IR. The leftmost part of the figure shows the initial representation of an input model. The IR consists of six nodes, where the orange nodes represent nodes that do not alter the tensor values, and the blue nodes compute tensors and generate new values. The colours of the nodes serve as annotations, and all the nodes are treated the same by the tool. The node types in MASE IR are a superset of the types supported by PyTorch built-in modules and functions, enabling a direct translation between MASE and PyTorch. A list of the supported node types can be found in the top left of Table II.
Each instance of a node in MASE IR is associated with a set of attributes that model both software and hardware behaviour. In the figure, we show the key attributes for the sake of simplicity. Attributes denoted by black text in the figure are general and applicable to every node, such as input and output types. Certain attributes are specific to individual node types, highlighted in blue text in the figure. These node-specific attributes correspond to parameters specific to different types of nodes, such as convolutional layers and linear layers as depicted in the figure. The information regarding edges in the IR is captured by the attributes of the preceding and succeeding nodes. A list of key attributes used in this work can be found in the bottom left of Table II.
MASE interacts with MASE IR through two forms: actions and passes. An action, also referred to as a pass manager, carries out coarse-grained transformations of MASE IR. It consists of a sequence of passes and can be directly utilised by general users, as depicted in Figure 1. A pass performs analysis or transformation at a finer grain and is employed during development. A comprehensive list of actions and passes is provided in Table II. In the following sections, we will delve into the process of transforming MASE IR in both software and hardware through a series of MASE passes.
### _Software & Hardware Transformation_
The software transformation passes can be classified into two categories: architecture-level and node-level. In Figure 2, a pipeline of transformations is demonstrated using a sequence of three MASE passes. The first pass (fusion pass) conducts software transformation on the model architecture, while the second pass (quantization pass) performs software transformation on the model parameters.
Firstly, the fusion pass merges a BatchNorm layer following a Conv2d layer into the Conv2d layer by updating the weights within the Conv2d layer. A key benefit of such a transformation is that it reduces the hardware area due to the removal of a BatchNorm kernel, at the cost of preprocessing the weights. The fusion pass is an architecture-level transformation that modifies the model architecture.
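The arithmetic behind this fusion is standard; the sketch below folds the (eval-mode) BatchNorm statistics into the convolution's weight and bias and checks the result on random data. It illustrates the transformation itself rather than MASE's actual pass, and ignores grouped or dilated convolutions for brevity.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)        # per output channel
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

# sanity check in eval mode on random data
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.running_mean.uniform_(-1, 1); bn.running_var.uniform_(0.5, 1.5)  # non-trivial statistics
bn.eval()
x = torch.randn(1, 3, 16, 16)
print(torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5))
```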
Secondly, the quantization pass replaces the Conv2d and Linear nodes in floating-points with the quantized ones in integer types. The forward and backward passes of each node are updated so the node is still trainable. A key benefit of such a transformation is that it reduces the area occupied by each hardware kernel by utilising efficient arithmetic for hardware computation, at the cost of potential accuracy loss. The quantization pass operates at the node level, as it solely modifies the node locally. As depicted in the second rightmost IR in Figure 2, the values are quantized from float32 to small integers. Our quantization pass has the capability to conduct a mixed-precision search for each parameter in the model, which allows for fine-grained optimization of parameters and maximizes the benefits of quantization at scale. A detailed case study on mixed-precision quantization is presented in Section V-A.
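One common way to keep a quantized node trainable is the straight-through estimator: round only in the forward pass and let gradients pass through unchanged in the backward pass. The toy fixed-point quantizer below illustrates that pattern; the bit widths and scaling are illustrative and do not reproduce MASE's exact quantization scheme.

```python
import torch

class FixedPointQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, width, frac_bits):
        # quantize to a signed fixed-point grid with `width` total bits
        scale = 2.0 ** frac_bits
        qmin, qmax = -(2 ** (width - 1)), 2 ** (width - 1) - 1
        return torch.clamp(torch.round(x * scale), qmin, qmax) / scale

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through estimator: gradient w.r.t. x only
        return grad_output, None, None

x = torch.randn(4, requires_grad=True)
y = FixedPointQuant.apply(x, 8, 4)   # "int8"-style: 8 bits, 4 fractional
y.sum().backward()
print(y, x.grad)                     # x.grad is all ones: training still works
```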
\begin{table}
\begin{tabular}{l l l} \hline \hline Categories & Types & Descriptions \\ \hline \multirow{6}{*}{Node/Ops} & Conv2d & Standard Pytorch operations \\ & Linear & Standard Pytorch operations \\ & RelU & Standard Pytorch operations \\ & Softmax & Standard Pytorch operations \\ & Layernorm & Standard Pytorch operations \\ & – & View & Memory transformation \\ & Size & Memory transformation \\ & Flatten & Memory transformation \\ & – & \\ \hline \multirow{3}{*}{Attributes} & args & Input data shapes and types \\ & results & Output data shapes and types \\ & interface & Data interface with its predecessor and successor, e.g. handshake or no back pressure. \\ & tiling & Tiling parameters for each data interface \\ & toolchain & Synthesis flow for this node, e.g., vanilla MLIR HLS, optimised HLS or RTL kernels, user-defined kernels, \\ & param\_map & Resource mapping of internal parameters. \\ & device\_id & ID of hardware accelerator to map. \\ & rate & Estimated input and output throughput of the node. \\ & area & Estimated hardware resources of the node. \\ \hline \hline \end{tabular}
\end{table} TABLE II: A model in MASE IR contains two constructs, operations and attributes. Each operation has a set of attributes for presenting both software and hardware parameters. MASE IR works with a set of actions and passes, which perform analysis and transformation. Actions are either a sequence of MASE passes or preprocess operations directly on torch.nn.Module.
The hardware transformation can also be conducted at both levels. In Figure 2, we show an example of a design space exploration (DSE) pass. This pass explores the hardware design metrics using the provided specifications, such as available hardware resources, and establishes efficient constraints for hardware mapping. There are various design constraints to be explored for each node. For simplicity, we will only highlight the key hardware attributes of a linear layer:
* The toolchain attribute specifies the hardware synthesis approach used, which currently employs pre-defined hardware components provided by MASE.
* MASE offers a range of IP cores, and the IP core attribute indicates the specific component to be utilised if a pre-defined hardware component is used.
* Tensors computed by this linear layer are large. To maintain high throughput, matrices are partitioned into tiles and processed in a pipelined manner. The size and depth attributes specify the tile size and the number of partitions for each parameter, input, and output.
* The order attribute defines the order of tiles fed into the hardware kernel across multiple dimensions.
* The interface attribute determines the hardware interface for each data port, with the current configuration mapping inputs and outputs to handshake interfaces.
* The internal parameters can be stored in BRAM or DRAM, depending on the use cases. The map attribute specifies how weights are stored for the linear layer.
A detailed explanation of how the DSE approach determines these attributes is given in Section IV-C2.
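Purely for illustration, the attributes listed above can be pictured as a dictionary attached to the linear node; the field names and values below are assumptions made for the example and do not reproduce MASE's actual syntax.

```python
# hypothetical attribute set for one linear node (illustrative values only)
linear_node_hw_attrs = {
    "toolchain": "internal",                  # use a pre-defined library component
    "ip_core": "fixed_point_linear",          # which template kernel to instantiate
    "size":  {"weight": (4, 4), "data_in": (1, 4), "data_out": (1, 4)},  # tile sizes
    "depth": {"weight": 64, "data_in": 16, "data_out": 16},              # tiles per tensor
    "order": "row_major",                     # order in which tiles are streamed
    "interface": {"data_in": "handshake", "data_out": "handshake"},
    "param_map": {"weight": "BRAM", "bias": "BRAM"},
    "device_id": 0,                           # accelerator the node is placed on
}
```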
As a software and hardware co-design tool, MASE facilitates the interleaving of software and hardware co-optimization by utilizing the same MASE IR for both abstractions. The passes shown in Figure 2 can be executed in any order, allowing for efficient transformation and creating opportunities for synergistic effects between different passes.
### _System Synthesis_
In order to synthesize an efficient accelerator system, MASE explores the hardware parameters within the MASE IR and optimizes the hardware design using several optimization passes. Firstly, we demonstrate three possible approaches for synthesizing ML accelerators in MASE: utilizing pre-defined parameterized blocks available in MASE (in either RTL or HLS code), incorporating user-defined blocks, or directly utilizing HLS-generated blocks through MLIR and Vivado HLS [40]. Secondly, we explain how MASE explores the hardware architecture using these blocks. Lastly, we illustrate how the exploration engine in MASE determines the optimized accelerator design.
#### Iv-C1 Hardware Back End
The hardware backend of MASE takes a model in MASE IR and translates it into a hardware system. In the MASE IR, each MASE node can be mapped into hardware using one of three approaches. First, MASE contains a set of predefined parameterized blocks in its internal hardware library. Each block has a set of inputs and a set of outputs with handshake interfaces. These blocks are parameterized and manually implemented as templates in RTL or HLS code. If an operation has been defined in the internal hardware library, it could be directly synthesized into one of the pre-defined components with custom parameters. An advantage of such a component is that these components have been optimized for high performance and area efficiency. However, the internal hardware library requires manpower to develop and takes a long time to support new operations in state-of-the-art ML models.
_1) Enabling reusability by supporting external blocks_
A second approach is to import user-defined hardware components. There have been a variety of ML accelerator designs that show high performance and energy efficiency for a particular ML model. These accelerators often contain reusable hardware blocks. To use these blocks, MASE enables the integration of external hardware blocks if each data port is implemented using a handshake interface. A user-defined hardware block is treated as a black box. With the parameters of the custom blocks provided, the parallelism of these blocks can be explored with other blocks in the DSE pass.
_2) Enabling synthesizability by supporting the MLIR-HLS back end_
However, new models often contain new or custom ops. Manual hardware development is slow and cannot catch up with the software model evolution. The absence of a single operation can fail the hardware synthesis of the entire model. To overcome this challenge, MASE supports the hardware synthesis of _arbitrary_ ML models by mapping new operations into hardware using the MLIR-HLS flow. Figure 3 illustrates the HLS flow from MASE IR to hardware. In MASE, each MASE node can be transformed by the Torch-MLIR front end [30] into MLIR, specifically the linalg abstraction at the tensor level. It can then be successively transformed into the affine abstraction using open-sourced MLIR passes and
Fig. 3: New models often contain new or custom ops. In order to solve the _synthesis ability_ challenge, MASE supports lowering MASE IR to MLIR so _arbitrary_ ML models can be mapped to hardware with the help of HLS. MASE has a set of MLIR passes shown on the left that automatically wraps the HLS hardware shown on the right for integration into a data flow architecture.
then translated into HLS code using MASE emit-c pass. These HLS blocks are modularised and connected to other hardware blocks using the handshake interface.
Here, we use AMD Xilinx Vivado HLS [40] as our HLS backend. By default, Vivado HLS generates hardware with a block interface, which is referred to as a latency-sensitive design. However, directives for synthesising a handshake interface for the input and output necessitate manual intervention. In MASE, we currently have two passes on the MLIR end for enabling hardware generation: 1) the automatic insertion of these handshake directives, and 2) the generation of HLS code and wrappers from MLIR.
The MLIR-HLS stage is considered as an exit point in the MASE flow, as it generates HLS code to enable the hardware synthesis capability of MASE. Ongoing research efforts on HLS optimization in MLIR [41, 43] can be applied in MASE for further improving the design quality of the HLS hardware. However, these optimizations are beyond the scope of this paper, as our focus is primarily on software and hardware optimization in the MASE IR. The hardware generated by vanilla MLIR-HLS still exhibits a significant performance gap compared to manually optimized RTL or HLS hardware. Nonetheless, during our experiments, we observed that the proportion of such hardware is minimal, as most operations are directly supported by MASE, as indicated in Table III. The performance overhead is limited and is expected to be further reduced with dedicated engineering efforts.
#### Iv-C2 Design Space Exploration
Here we introduce the DSE engine in MASE. The DSE problem can be formalised using the following terms:
* \(M=\{n_{0},n_{1},n_{2}...\}\) denotes the set of all the nodes;
* \(S:n\rightarrow\{s_{0},s_{1}...\}\) denotes the search space of node \(n\);
* \(O\subseteq M\times S\) denotes a single design point of \(M\);
MASE takes a model and explores all the search space \(S\) for each MASE node. Given a budget for hardware resources and cross-device memory bandwidth, the optimiser searches for an efficient \(O\) to maximise the system throughput in a greedy form.
As mentioned, MASE contains a set of pre-defined components. The throughput and area of nodes that use these components can be directly obtained using the internal regression model in MASE. The user-defined and HLS-generated components need to be analyzed from synthesis reports. The synthesis time overhead is relatively small because there were a small number of user-defined and HLS-generated components, as shown in Table III.
_1) Maximise throughput_
The DSE process can be divided into five steps. It starts with the fully parallelized design and aims to efficiently reduce the area to a size that can fit onto the given devices. Let \(\theta(n,s)\) represent the maximum achievable throughput of a node instance \((n,s)\). Then the fully parallelized design \(O\) satisfies the following conditions:
\[\theta_{n}=\max_{s\in S(n)}\theta(n,s) \tag{1}\] \[\forall n\in M\Rightarrow\theta(n,s)=\theta_{n}\wedge(n,s)\in O \tag{2}\]
These conditions ensure that all nodes are paired with the configuration that provides maximum throughput.
_2) Rate balancing_
The nodes are fully parallelized, but some of them may underperform, meaning that the actual throughput at runtime is significantly lower than the target throughput. To address this, we can reduce the parallelism of these nodes without affecting the overall throughput. This approach is known as rate balancing. We adopt the analysis provided by Venieris and Bouganis [36] for estimating the actual throughput in dataflow hardware. Let \(\theta_{r}(n,s)\) represent the actual throughput of a node instance \((n,s)\). After rate balancing, each node in the balanced design \(O^{\prime}\) is configured with a parallelism level close to its actual throughput.
\[\theta^{\prime}_{n}=\min\{\theta(n,s^{\prime})|\theta(n,s^{ \prime})\geq\theta_{r}(n,s)\} \tag{3}\] \[\forall n\in M\wedge(n,s)\in O\Rightarrow\theta(n,s^{\prime})= \theta^{\prime}_{n}\wedge(n,s^{\prime})\in O^{\prime} \tag{4}\]
Constraint 4 ensures that all nodes in the design are computing efficiently in a pipeline. We define such design points as Pareto-front hardware designs.
_3) Partitioning and bandwidth check_
However, these design points are target-independent. MASE searches among these points and identifies an efficient design that can be implemented on the target hardware system, taking into account two constraints: resource and bandwidth. Here are the definitions of the terms used:
* \(B\) denotes the maximum cross-device bandwidth.
* \(R\) denotes the area budget per device.
* \(N\) denotes the number of devices.
\begin{table}
\begin{tabular}{l c} \hline \hline Node Types & Errors \\ \hline Linear & 8.1\% \\ Conv2d & 11.3\% \\ ReLU & 2.7\% \\ Softmax & 4.1\% \\ Layernorm & 3.7\% \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Area estimation error for internal hardware blocks.
\begin{table}
\begin{tabular}{l c} \hline \hline Hardware Module Class & Percentage \\ \hline Pre-defined & 75.5\% \\ Directly through MLIR-HLS & 10.8\% \\ User-defined & 13.7\% \\ \hline \hline \end{tabular}
\end{table} TABLE III: Hardware block types in this work.
Firstly, MASE attempts to map a balanced design point onto the system by partitioning the MASE IR, represented as a graph, into \(N\) subgraphs. Our pass traverses from the input nodes towards the output nodes and constructs a set of subgraphs \(D\) from \(M\), where \(|D|=N\) and \(\forall d\in D.\,d\subseteq O^{\prime}\). Let \(A(n,s)\) denote the area of a node instance \((n,s)\). The resource constraint, where each subgraph must fit onto a single device, is expressed as follows:
\[\forall d\in D.\,(n,s)\in d\Rightarrow\sum_{n\in d}A(n,s)+\delta\leq R \tag{5}\]
Here, \(\delta\) represents the area of the cross-device interconnect components, which can be obtained from the regression model in MASE, along with other predefined components.
Secondly, MASE also verifies whether the cross-device communication bandwidth can accommodate the data throughput between subgraphs. Let \(I(d)\subseteq d\) and \(I^{\prime}(d)\subseteq d\) represent the sets of input and output nodes of a subgraph \(d\). The bandwidth constraint can be expressed as follows:
\[\forall d\in D.\,(n,s)\in I(d)\Rightarrow\sum_{n\in I(d)}\theta_{r}(n,s)\leq B \tag{6}\]
\[\forall d\in D.\,(n,s)\in I^{\prime}(d)\Rightarrow\sum_{n\in I^{\prime}(d)} \theta_{r}(n,s)\leq B \tag{7}\]
If both the resource and bandwidth constraints are satisfied, then the design point \(O^{\prime}\) is considered efficient.
_5) Decrement parameters_
If either of the constraints fails to be satisfied, MASE employs a strategy to gradually decrease the parallelism of the design. This process involves reducing the parallelism of an input node \(n\) in \(M\) by a small step. MASE then iterates through the re-balancing process described in step 2) until a valid design point is achieved. This iterative approach allows MASE to find a design that fulfills the resource and bandwidth requirements.
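The overall loop can be summarised in a few lines of Python. The toy below models each node only by a per-unit area and a maximum parallelism, uses a simple chain partitioner, and treats the balanced rate as the pipeline bottleneck; all of this is a deliberately simplified stand-in for MASE's regression models and graph passes, not the tool's real interface.

```python
def explore(unit_area, max_par, R, B, n_devices):
    """Greedy DSE toy covering steps 1-5; unit_area/max_par map node name -> number."""
    names = list(unit_area)
    par = {n: max_par[n] for n in names}                  # 1) fully parallelized design
    while par[names[0]] >= 1:
        bottleneck = min(par.values())
        balanced = {n: bottleneck for n in names}         # 2) rate balancing
        # 3) greedily cut the chain into subgraphs under the per-device area budget
        devices, current, used = [], [], 0.0
        for n in names:
            a = unit_area[n] * balanced[n]
            if used + a > R and current:
                devices.append(current); current, used = [], 0.0
            current.append(n); used += a
        devices.append(current)
        # 4) device-count and cross-device bandwidth checks
        fits = len(devices) <= n_devices and all(balanced[d[-1]] <= B for d in devices[:-1])
        if fits:
            return balanced, devices
        par[names[0]] -= 1                                # 5) decrement and re-balance
    raise RuntimeError("no feasible design under the given budgets")

design, placement = explore(
    unit_area={"conv1": 2.0, "linear1": 3.0, "softmax": 0.5},
    max_par={"conv1": 8, "linear1": 8, "softmax": 4},
    R=12.0, B=4, n_devices=2)
print(design, placement)   # e.g. parallelism 3 everywhere, split across two devices
```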
Our DSE approach allows us to thoroughly examine the entire design space across all numbers of devices and efficiently maps the model onto any number of devices for acceleration. We utilise QSFP28 as the interface for communication between FPGA devices, ensuring that the communication throughput and latency between every device are consistent and predictable. Figure 4 depicts the design spaces for three ML models. The blue points indicate the Pareto front design points defined by Constraint 4. The figure demonstrates that MASE can map these models onto hundreds of FPGAs for high-performance computing and onto a small number of FPGAs for area-efficient computing. The throughput of the model increases exponentially with the total area. However, a saturation point exists for the throughput when the area reaches a certain size, suggesting that the hardware system is memory-bound.
The accuracy of our regression model is high. Table IV shows the errors in the estimated area of the key components used for our experiments. This enables the design to be efficiently mapped onto each device based on the given resource budget. The throughput model is cycle accurate as there is no input-dependent control flow in the hardware, and all the hardware behaviour is predictable. Figure 5 shows an example of bert-base-uncased being mapped onto two U250 FPGA devices. The small blue blocks represent the QSFP28
Fig. 4: Design space exploration of each model over different resources. The x-axis represents area (in LUTs) and the y-axis represents performance (in FPS). The blue dots represent the Pareto front design points. The Pareto front designs are defined as the design where the throughput between every two consecutive nodes is balanced.
Fig. 5: An example of mapping a model onto two devices.
component for data transmission. The point-to-point cross-device link between two devices has a maximum bandwidth of 100 Gbps. The area of the first device is efficiently utilised, and the second device only uses 32% because all the kernels have been mapped.
### _Testing Framework_
Most HLS tools do not need a hardware testing framework if the tool is correctly implemented and the software model has been verified. However, MASE needs a testing framework because the correctness of the user-defined hardware block might cause errors. In order to test and verify the correctness of the generated hardware system, MASE runs software and hardware co-simulation to check the equivalence of software and hardware implementation.
Figure 6 illustrates the co-simulation flow in MASE. The left side of the figure shows the software testing flow, and the right side of the figure shows the hardware testing flow. For the software testing flow, the model in MASE IR takes a set of data from the validation dataset and runs inference on GPUs. For the hardware testing flow, the model in MASE IR is translated into a hardware system. The hardware testbench is also generated, taking the same test data as the software flow for hardware simulation. MASE calls an external hardware simulator (AMD Xilinx Vivado XSIM simulator [3]). Both the software model and the corresponding hardware system should return the same results or results with small discrepancies.1 MASE compares the results for correctness check. The hardware testbench tests the whole system, including _cross-device components_. MASE also provides a user option to simulate without these components for a fast behaviour simulation.
Footnote 1: Small discrepancies between the software and hardware results might happen when the model contains floating-point operations.
## V Experiments
We evaluated MASE on ten ML models, including classic CNN models and new transformer models. The detailed specifications of these models are shown in Table V. We obtained the total clock cycles from the MASE co-simulator, and the area results were extracted from the Post Synthesis report in Vivado. The FPGA family we used for the result measurements is Alveo U250, and the version of Xilinx software employed is 2020.2.
### _Mixed-Precision Quantization Co-design_
As mentioned in Section IV-B, MASE explores model quantization before deploying the models on hardware. We adopt the 8-bit fixed-point representation [10] as the default setup, referred to as "int8" in the rest of this paper. We report the accuracy of the quantized model in Table V, including both post-training and fine-tuning results.
It is well-known that applying a uniform bit width to the entire network may not be optimal. Previous literature [37] has highlighted that the dynamic range of data varies significantly across different layers of the same network. This observation motivates the exploration of mixed-precision arithmetic. MASE provides users with the ability to investigate mixed-precision arithmetic through our efficient IR.
In this example, we demonstrate how MASE can be utilized to automatically search for the optimal bit width of each layer in a hardware-aware manner. We employ the Tree-structured Parzen Estimator (TPE) [5] as the searching algorithm. For each explored design point, MASE automatically collects information on both the model accuracy and hardware resource utilisation.
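As an illustration of such a hardware-aware search, the sketch below drives a per-layer bit-width search with the TPE sampler from the Optuna library; `layers`, `evaluate_accuracy` and `estimate_area` are hypothetical stand-ins (not the real MASE API) for the quantizable layers, the accuracy evaluation, and the area regression model, and the trade-off weight is arbitrary.

```python
import optuna

# Hypothetical layer names used only to make the sketch runnable.
layers = ["encoder.0.q_proj", "encoder.0.k_proj", "encoder.0.fc1"]

def evaluate_accuracy(bit_widths):
    # Placeholder: quantize the model with these widths and run the validation set.
    return 1.0 - 0.01 * sum(8 - b for b in bit_widths.values())

def estimate_area(bit_widths):
    # Placeholder: query an area regression model for LUT usage.
    return sum(1000 * b for b in bit_widths.values())

def objective(trial):
    bit_widths = {name: trial.suggest_int(f"bits::{name}", 2, 8) for name in layers}
    acc = evaluate_accuracy(bit_widths)
    area = estimate_area(bit_widths)
    # Hardware-aware score: trade accuracy against estimated area.
    return acc - 1e-6 * area

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```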
Traditional mixed-precision quantization searches for small total numbers of bits with minimum loss. For this example, the analyser returns 23 optimal-like designs at minimal loss. However, these designs might not be all hardware efficient. Figure 7 illustrates the search results for facebook/opt-125m. From the figure, we observe the following:
TABLE V: The specifications for the ML models evaluated in this work. float32 = 32-bit floating-point. int8 = 8-bit fixed point. ft = fine-tuned. \({}^{*}\)The accuracy of models for language modelling was measured in perplexity (smaller is better).
Fig. 6: Software and Hardware Co-simulation.
* The loss of our design is worse than the GPU baseline because we only apply standard post-training quantization without fine-tuning.
* The total number of bits in the model may be the same across the searched results, yet the designs show different energy efficiency.
* The best design point, at the top right of the figure, has better energy efficiency than the GPU.
In particular, Figure 8 shows the distribution of bit widths over the different layers for the top-right design point in Figure 7. The layers repeat because there are 12 building blocks. More than half of the bit widths have been quantized to fewer than 4 bits, which significantly improves energy efficiency.
### _Exploring Efficient Custom Arithmetic_
Here we show how to use custom arithmetic operations as user-defined blocks and integrate them into MASE for hardware architecture exploration. We use Microsoft Floating Point (MSFP) [13] in this section. It is a floating point representation sharing an exponent across vector elements. An MSFP vector takes the form as follows:
\[2^{e_{shared}}[(-1)^{s_{0}}m_{0},(-1)^{s_{1}}m_{1},\ldots,(-1)^{s_{n-1}}m_{n-1}] \tag{8}\]
where the \(n\) signed mantissas \((-1)^{s_{i}}m_{i}\), \(i=0,1,\ldots,n-1\), are scaled by the same exponent \(e_{shared}\) (\(n\) is the so-called bounding box size) [13]. The shared exponent reduces the average bit width, enabling a compact data representation. It also reduces the number of bits involved when computing matrix multiplications, leading to potentially better energy efficiency. An example of an MSFP value with an 8-bit exponent and 4-bit elements is shown in Figure 9.
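To make the format concrete, the following sketch quantizes an array into MSFP-like blocks with one shared exponent per block; it is a software illustration under stated assumptions (the exponent is taken from the largest magnitude in each block), not the MASE hardware kernel.

```python
import numpy as np

def msfp_quantize(x, n=16, mantissa_bits=4):
    """Illustrative MSFP-style (block floating point) quantization: each block
    of `n` values shares one exponent; every value keeps a sign and a small
    mantissa of `mantissa_bits` bits."""
    x = np.asarray(x, dtype=np.float64)
    orig_len = x.size
    pad = (-orig_len) % n
    blocks = np.pad(x, (0, pad)).reshape(-1, n)

    # Shared exponent chosen so the largest magnitude in each block fits.
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    shared_exp = np.where(max_abs > 0,
                          np.floor(np.log2(np.where(max_abs > 0, max_abs, 1.0))) + 1,
                          0.0)

    # Signed mantissas on `mantissa_bits` bits, then reconstruct the values.
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    qmax = 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(blocks / scale), -qmax - 1, qmax)
    return (mantissas * scale).reshape(-1)[:orig_len]
```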
In this example, we seek to exploit MSFP operations in BERT using MASE. We provide similar tunable parameters as shown in Figure 10 so it can be explored in the MASE DSE. The sharing constraint is set to 16, and the bit width is configured to 4. That means every 16 4-bit values share an 8-bit exponent.
The results of MSFP integration for BERT are shown in Table VI. From the table, we made the following observations:
* Quantizing BERT to fixed-point loses significant accuracy, because the dynamic range of the data is large and cannot fit into the 8-bit range.
* The accuracy obtained with MSFP is comparable to the floating-point version, because MSFP still preserves a large range of data values thanks to its floating-point representation, even though the average bit width is smaller than 8 bits.
* The performance of the MSFP implementation is significantly higher than the GPU results and shows the best energy efficiency, because the low bit widths allow more data to be transferred to the FPGA, which was the performance bottleneck of the hardware system. In addition, the arbitrary precision enables better power efficiency than the fixed architecture of GPUs.
As shown above, MASE enables users to rapidly prototype new arithmetic components and integrate them into a large model for systematic exploration. This reduces the complexity
Fig. 8: The distribution of operation precision across different building blocks in facebook/opt-125m.
Fig. 7: Traditional mixed-precision quantization search only targets minimum loss. This returns a set of solutions for facebook/opt-125m, shown as green points, but they may not be efficient in hardware. MASE uses the model in DSE and determines the one (the top-right green point) that has comparable energy efficiency with GPU. The large loss between MASE and GPU is due to post-training quantization.
Fig. 9: An example of MSFP format, where multiple numbers in the same block share the same exponent.
of large hardware system design and significantly accelerates the hardware development process.
### _Overall Results_
Table VI shows the overall results for transformers. From the table, we made the following observations:
* The accuracy of the quantized facebook/opt-1.3b is significantly worse than the floating-point results. This is because the model was not fine-tuned, due to out-of-memory errors on our GPU server.
* Overall, MSFP has shown the best performance and energy efficiency. This could improve hardware performance if integrated into the next generation of ML accelerators.
* The results produced by vanilla MASE show high throughput with extremely large resources. This demonstrates that the multi-accelerator systems generated by MASE scale well with the number of devices, as the cross-device communication does not impose interconnect restrictions.
* The bit width used by vanilla MASE is 8 bits, which is inefficient for some models. This reduces the energy efficiency of the hardware. Quantization search could be used for further improvements, but it takes a long time for large models.
Table VII shows the overall results for CNN models. From the table, we made the following observations:
* The ML accelerators for CNN have been widely studied and have shown competitive results. Here we can show that MASE can still generate efficient hardware with competitive performance.
* The results for mobilenetv3 models are not as competitive as related works, because the model is relatively small, and there are fewer opportunities to exploit model parallelism.
## VI Conclusions
ML models today are evolving rapidly and growing significantly. This makes designing the next ML accelerators for new and large models challenging. In order to reduce the gap between the development speed of software ML models and hardware accelerators, we propose an efficient and scalable hardware system exploration tool named MASE.
We propose an efficient IR called MASE IR that allows the software model and the hardware architecture to be expressed at the same abstraction level. This work represents the initial step of the MASE framework, enabling ML accelerator system exploration within a software compiler flow. MASE presents various opportunities for next-generation ML accelerator exploration, including hardware-aware network architecture search (NAS) and custom quantization scheme search.
As examples of these use cases, we present two case studies demonstrating how to utilize MASE for exploring efficient quantization of ML models and incorporating newly proposed arithmetic operators to enhance the efficiency of an existing accelerator system. MASE facilitates end-to-end hardware synthesis flow for large models. Our aim is for MASE to contribute to accelerating the process of designing next-generation accelerators from months to weeks.
TABLE VI: Evaluation of MASE on transformers. \({}^{*}\)means perplexity. We highlight the best results in comparison.
TABLE VII: Evaluation of MASE on CNN models. We highlight the best results in each dimension.
2303.10198 | 3D structured Bessel beam polarization and its application to imprint
chiral optical properties in silica | Polarization plays crucial role in light-matter interactions; hence its
overall manipulation is an essential key to unlock the versatility of light
manufacturing, especially in femtosecond laser direct writing. Existing
polarization-shaping techniques, however, only focus on the manipulation in
transverse plane of a light beam, namely a two-dimensional control. In this
paper, we propose a novel passive strategy that exploits a class of femtosecond
laser written space varying birefringent elements, to shape the polarization
state along the optical path. As a demonstration, we generate a
three-dimensional structured Bessel beam whose linear polarization state is
slowly evolving along the focus (typ. 90 degrees within 60 lightwave periods).
Such a "helical polarized" Bessel beam allows imprinting "twisted nanogratings"
in SiO2 resulting in an extrinsic optical chirality at a micrometric scale,
which owns a high optical rotation. Our work brings new perspectives for
three-dimensional polarization manipulations and would find applications in
structured light, light-matter interaction and chiral device fabrication. | Jiafeng Lu, Mostafa Hassan, Francois Courvoisier, Enrique Garcia-Caurel, Francois Brisset, Razvigor Ossikovski, Xianglong Zeng, Bertrand Poumellec, Matthieu Lancry | 2023-03-17T18:19:45Z | http://arxiv.org/abs/2303.10198v1 | 3D structured Bessel beam polarization and its application to imprint chiral optical properties in silica
###### Abstract
Polarization plays a crucial role in light-matter interactions; hence its overall manipulation is an essential key to unlock the versatility of light manufacturing, especially in femtosecond laser direct writing. Existing polarization-shaping techniques, however, only focus on the manipulation in the transverse plane of a light beam, namely a two-dimensional control. In this paper, we propose a novel passive strategy that exploits a class of femtosecond laser written space varying birefringent elements to shape the polarization state along the optical path. As a demonstration, we generate a three-dimensional structured Bessel beam whose linear polarization state evolves slowly along the focus (typ. 90\({}^{\circ}\) within 60\(\lambda\)). Such a "helically polarized" Bessel beam allows imprinting "twisted nanogratings" in SiO\({}_{2}\), resulting in an extrinsic optical chirality at a micrometric scale with a high optical rotation. Our work brings new perspectives for three-dimensional polarization manipulation and could find applications in structured light, light-matter interaction and chiral device fabrication.
## I Introduction
Polarization, as a fundamental property of light, lies at the core of many important light-based applications in various domains, such as light-matter interaction [1], optical display [2, 3], light sensing [4], and data storage [5]. In light science, polarization refers to the vibration mode of the electric field vector of light and thus corresponds to the spin angular momentum of the photon. Hence, polarization manipulation can control the light-matter interaction in a desirable manner, unlocking the versatile functions of light-driven technologies, especially in ultrafast laser manufacturing.
Light-matter interaction at the femtosecond (fs) scale consists of absorption through a nonlinear photoionization mechanism, which enables precise energy deposition in any volume of a transparent material without surrounding damage [6]. Such a mechanism results in diverse modifications inside the material that depend on the laser parameters, such as three-dimensional (3D) refractive index profiling [7], anisotropic subwavelength nanogratings [8], or void formation [9]. Owing to their strong anisotropic optical properties as well as their outstanding thermal properties, nanogratings have attracted significant interest for birefringent element fabrication [10, 11], microfluidic channels [12] and high temperature sensing [13, 14]. Considerable research efforts have proven that, ideally (neglecting tilt effects, for example the pulse front tilt, PFT), the self-organization of the nanogratings coincides with the fs laser polarization [8]. This may be attributed to a multiple scattered wave interference mechanism [15, 16] and, on the other hand, illustrates that the fs laser polarization can dominantly structure the nanogratings orientation in a desirable way. Hence, polarization is a crucial parameter in determining the imprinted nanogratings distribution, which has been adopted as one dimension of 5D data storage [17]. Within this field, Kazansky et al. recently reported a new type of fs laser induced modification, which consists of random nanopores elongated perpendicularly to the laser polarization, namely the "Type X" modification [18], corresponding to the early birth of nanogratings. From an application point of view, these nanostructures exhibit an ultralow loss with controllable form
birefringence enabled by laser polarization.
Polarization is not only important in laser materials processing, but also essential for probing and imprinting chiral structures [19, 20]. Basically, an object that exhibits chirality possesses the geometric property of being incapable of coinciding with its mirror image through translations and rotations [21]. This feature predominantly results in different chiroptical responses to left- and right-handed circularly polarized light, which lays the foundation of diverse chiroptic technologies ranging from analytical chemistry to chiral switches in polarization optics [22, 23, 24]. Two decades ago, chiral gratings with double helix symmetry were proven to possess polarization-selective properties in the ground-breaking work reported by Kopp et al. [25]. Since then, chiral (or helical) long period fiber gratings (LPFGs) and fiber Bragg gratings (FBGs), fabricated on a twisted optical fiber passed through a miniature heat zone or by a helical refractive index modulation in the laser writing process, have gained increasing attention for torsion and torque sensing, orbital angular momentum mode converters and circular polarizers [26, 27, 28, 29, 30, 31]. The pioneering advance that a fs laser can directly induce chiral optical properties inside achiral transparent materials enriches the picture of light-matter interaction [19]. However, for now, the tailoring of fs laser generated chirality remains restricted to a 2D plane (a single layer of small thickness), which limits the level of the chiroptic effects that may be produced. 3D symmetry breaking has been proven to raise the chiral optical properties prominently compared to 2D asymmetry [32]. However, most of the existing polarization manipulations consist of 2D transformations through conventional birefringent optics (like waveplates) [33], digital holography (like spatial light modulators, SLMs) [34] or geometric phase optics [35]. These strategies share a common limitation: the polarization behavior and its spatial structuration are considered only in the transverse plane, which makes it difficult to imprint 3D symmetry breaking and the related chiral optical properties.
Figure 1: Schematic of the concept of 3D polarization structuring of a Bessel beam via a space varying birefringent element and its application to imprint "twisted nanogratings" for extrinsic chirality. (a) Schematic of generating a z-variable-polarized fs Bessel beam by employing a space varying half-wave plate. The space varying half-wave plate is written by a Gaussian fs laser and is then inserted into the Bessel focusing path to convert a linearly polarized Bessel beam into a "z-variable-polarized" Bessel beam. Namely, it is helically polarized in the central core: the polarization state of the central lobe of the Bessel beam is linear, with an orientation that evolves from 90\({}^{\circ}\) to 0\({}^{\circ}\) within the 50 \(\upmu\)m focus (along the optical path). The space varying half-wave plate is written at 1030 nm, 800 fs, 100 kHz, 0.16 NA, 1 mm/s, 2.5 \(\upmu\)J. The birefringence slow axis azimuth was varied from 0\({}^{\circ}\) at the outermost "ring" region to 45\({}^{\circ}\) at the central "ring" region. Eventually, such a z-variable-polarized fs Bessel beam is used to imprint "twisted nanogratings" leading to extrinsic chirality. "\(K\)" is the direction of the light wavevector. (b) Schematic of the polarization distribution along the propagation of a circularly polarized beam. "T" refers to the period of the light wave; "\(\lambda\)" refers to the wavelength of the light, i.e., 800 nm for the Bessel fs laser. Here, considering light propagating in free space (at a speed approximating the velocity of light, c), the propagation distance over a single period T is equal to \(\lambda\). (c) Schematic of the polarization distribution along the propagation of the z-variable-polarized beam within a single light wave period (above) and along the 50 \(\upmu\)m long focus (below).
In this paper we present a unified strategy that exploits homemade space varying birefringent optical elements to structure a fs Bessel beam with a polarization that evolves along its focus. This is a passive method for 3D polarization structuration, resulting in a z-variable linearly polarized fs Bessel beam. Specifically, we experimentally fabricated a space varying half-wave plate by Gaussian fs laser direct writing and thus created a linearly polarized Bessel beam whose orientation evolves along the propagation direction. Such a versatile longitudinal polarization shaping exhibits a strong capability of symmetry breaking along the optical path and can thus create a strong extrinsic chirality inside transparent materials (here in silica) upon a single scan, as depicted in **Figure 1(a)**. This is merely an easily obtained but instructive example of such an advanced direct writing strategy. Indeed, one can even shape the polarization in 3D to any polarization state (linear, circular or elliptical) at different positions by adjusting the design of the space varying birefringent element. To our knowledge, this is the first report using a z-variable-polarized laser beam for laser materials processing. We envision this work not only enabling the fabrication of chiral optical elements with high efficiency, but also inspiring new ground-breaking perspectives in 3D light-matter interaction, polarization switching and light control.
## II Results
### Mathematical description of the z-variable-polarized Bessel beam and definitions of anisotropic optical properties
Bessel beams, which possess a non-diffracting nature over long distances [36], can be seen as a coherent superposition of a set of infinite plane waves with their wavevectors along the generatrix of a cone of half angle \(\theta\). The infinite Bessel beam is a propagation-invariant solution of the Helmholtz equation, calculated by Durnin et al. [37]. Because of the conical flow of light in a Bessel beam, the polarization state of the central lobe on the optical axis at a distance z can be controlled, from the plane z=0, by the polarization state of the geometrical rays emerging from a circle of radius \(r=~{}z\cdot tan(\theta)\), which focus at (r=0, z) as shown in **Figure 1(a)**.
In our case, we choose to produce a linear polarization on the axis whose orientation rotates with propagation distance. In our concept, it can be created with an input polarization distribution in which the orientation of the polarization varies with the radius in the z=0 plane. (We note that the polarization state in the other lobes of the Bessel beam, outside the central core, is more complex, because it is created by the interference of waves with different linear polarization orientations.)
The \(x\) and \(y\) components of the electric field vector on the optical axis can be expressed as:
\[\mathbf{E}(r=0,z)=E_{0}\cdot\begin{bmatrix}\cos\bigl{(}\alpha(z)\bigr{)}\\ \sin\bigl{(}\alpha(z)\bigr{)}\end{bmatrix}\cdot e^{i(k_{z}z-\omega t)} \tag{1}\]
where \(E_{0}\) and \(k_{z}\) represent the on-axis Bessel beam amplitude and the longitudinal component of the wavevector, respectively. The linear polarization orientation is \(\alpha(z)=\frac{\pi}{2}-\rho\cdot z\), where \(\rho\) is the rotation speed parameter in rad/m. Therefore, the polarization direction evolves very slowly from \(90^{\circ}\) at the beginning of the focus (z=0 \(\mu\)m) to \(0^{\circ}\) at the end of the focus (z=50 \(\mu\)m), so \(\rho=\frac{\pi/2}{50~{}(\mu m)}=\pi\times 10^{4}~{}rad/m\) in our experiment. We have tentatively chosen a linear variation of the angle \(\alpha(z)\) in our work; however, we remark that the variation can be arbitrarily chosen.
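The sketch below evaluates the on-axis polarization program of Eq. (1) for the parameters quoted in the text (50 \(\upmu\)m focus, 90\({}^{\circ}\) total rotation, 12.5\({}^{\circ}\) cone half angle), together with the input-plane radius \(r=z\cdot\tan(\theta)\) that feeds each axial position; it is purely illustrative.

```python
import numpy as np

focus_len = 50e-6                      # focus length (m)
rho = (np.pi / 2) / focus_len          # rotation speed, ~pi x 10^4 rad/m
theta = np.deg2rad(12.5)               # Bessel cone half angle

z = np.linspace(0, focus_len, 6)
alpha = np.pi / 2 - rho * z            # linear polarization orientation on axis
jones = np.stack([np.cos(alpha), np.sin(alpha)], axis=1)  # Jones vector of Eq. (1)

# Each axial position z is fed by rays leaving the z = 0 plane at radius r:
r_input = z * np.tan(theta)

for zi, ai, ri in zip(z * 1e6, np.rad2deg(alpha), r_input * 1e6):
    print(f"z = {zi:5.1f} um -> polarization {ai:5.1f} deg, input radius {ri:5.2f} um")
```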
The polarization state we create remains linear, i.e., the field oscillates along a controlled direction, and can therefore manipulate the orientation of laser-written nanogratings, as we will see in Section D (a "twisted nanogratings" structure). This is in contrast with a circular polarization, for which the polarization vector rotates in time and cannot induce such "twisted nanogratings". This is expected because, at every point, the circular polarization traces a full circle: there is no preferential direction, as shown in **Figure 1(b)**. However, an elliptically polarized beam can imprint a strong linear birefringence due to anisotropic nanopores [38].
In contrast, as depicted in **Figure 1(c)**, if we consider the same period T (corresponding to a propagation distance of \(\lambda\), considering light propagating in free space, i.e., at a speed approximating the velocity of light, c), our z-variable-polarized fs Bessel beam remains linearly polarized and its orientation evolves slowly, by only 1.5\({}^{\circ}\). This kind of "helical polarization distribution" can be utilized to write chiral nanostructures (such as "twisted nanogratings") that result in a significant optical chirality, as illustrated in **Figure 2(a)**.
In addition, to better describe the subsequent results section, which involves many polarimetric properties, the definitions of both linear and circular optical properties are listed in **Table 1**. Moreover, since in experiments it is more convenient to consider the actual polarization effects that these anisotropic optical properties induce on a light beam, these are also included in the table together with their units.
\begin{table}
\begin{tabular}{c|c|c} Property & Definition & Commonly described in polarimetry \\ LB (linear birefringence) & \(\Delta n_{L}=\bigl{(}n_{x}-n_{y}\bigr{)}\) & \(LB=\frac{2\pi}{\lambda}\bigl{(}n_{x}-n_{y}\bigr{)}\cdot d\) \\ \end{tabular}
\end{table}
Table 1: Linear and circular optical properties
### Gaussian fs laser writing space varying birefringent elements
In order to impart a 3D polarization manipulation, the key point is to use a space varying birefringent element to shape, point by point, the state of polarization of the incident light. This 2D structured light beam with a transverse polarization shaping is then converted into a polarization variation along the optical path by using Bessel focusing, as depicted in **Figure 2(b)**, resulting in an on-axis shaping, i.e., "along a line".
Experimentally, this space varying birefringent element is fabricated by Gaussian fs laser direct writing, as shown in **Figure 1(a)**. The Gaussian fs laser beam is delivered by a system (Amplitude Systemes, Pessac, France) operating at \(\lambda=1030\) nm, 800 fs pulse duration and 100 kHz repetition rate, and is then focused 1.5 mm below the surface of a 3-mm thick silica glass wafer (Suprasil CG, Heraeus, Hanau, Germany) by an aspheric lens with a 0.16 numerical aperture (NA). Specifically, since the retardance of such a birefringent element is wavelength dependent [10, 39], we designed and fabricated a space varying half-wave plate providing a \(\lambda/2\) retardance at the targeted wavelength, i.e., for a fs Bessel beam at 800 nm. Therefore, the laser parameters were chosen to fall within the nanograting regime, resulting in a controllable form birefringence. That is, the homogeneous space varying half-wave plate with a diameter of 5 mm was written at 1 mm/s and 2.5 \(\upmu\)J/pulse with a spiral trajectory.
Here, we divided the space varying half-wave plate into 10 concentric "ring" regions, which leads to a simple fabrication process. This is enough to approximate continuous shaping for the further 3D polarization manipulation. The linear polarization orientation of the Gaussian fs laser is set different in each "ring" to imprint a form birefringence with a slow axis azimuth that evolves from 0\({}^{\circ}\) at the outside "ring" to 45\({}^{\circ}\) at the central "ring". This writing process is depicted by both slow axis color map and nanogratings orientation illustration in **Figure 1(a)**.
Then, when using this homemade half-wave plate with a slow axis azimuth "ring" distribution from 0\({}^{\circ}\) to 45\({}^{\circ}\), we obtain a linear polarization distribution of the output beam ranging from 0\({}^{\circ}\) at the outside to 90\({}^{\circ}\) at the center, as shown in **Figure 2(c)**. This polarization distribution of the beam cross-section is measured (using a Stokes imaging polarimeter) before the geometric building of the final Bessel beam. Clearly, one can observe a well-structured radial evolution of the linear polarization orientation across the laser beam profile, except at the center. In the current design it was indeed not possible to imprint birefringence at the chosen speed of 1 mm/s within a central part of about 100 \(\upmu\)m in diameter, because this leads to too high an acceleration for our 2D xy-scanning platform. However, this technical problem can be easily overcome either by using a 2D scanning platform with a higher translation speed, by a scanning mirror technique, or by filling the central part simply using a line-by-line scanning geometry instead of a spiral trajectory.
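The expected mapping from the slow-axis azimuth of each ring to the output polarization orientation (a rotation by twice the slow-axis angle for an ideal half-wave plate and a horizontally polarized input) can be checked with a short Jones-matrix sketch; the ring angles below simply follow the 0\({}^{\circ}\)-45\({}^{\circ}\) design described above.

```python
import numpy as np

def half_wave_plate(theta):
    """Jones matrix of an ideal half-wave plate with its slow axis at angle theta
    (global phase omitted)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

# Ten concentric rings: slow axis from 0 deg (outer ring) to 45 deg (central ring),
# illuminated by a horizontally polarized (0 deg) input beam.
E_in = np.array([1.0, 0.0])
for slow_axis_deg in np.linspace(0, 45, 10):
    E_out = half_wave_plate(np.deg2rad(slow_axis_deg)) @ E_in
    out_angle = np.rad2deg(np.arctan2(E_out[1], E_out[0]))
    print(f"slow axis {slow_axis_deg:4.1f} deg -> output polarization {out_angle:5.1f} deg")
```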
Our objective is to realize a z-variable-polarized fs Bessel beam that exhibits an evolving but still linear polarization state at different longitudinal z-axis positions, and then to apply it, for the first time, to structure a chosen optical material. To validate such a "helical polarization distribution" along the focus, we first convert a horizontally linearly polarized (0\({}^{\circ}\)) Gaussian fs laser beam (from an amplified Ti:Sa laser operating at \(\lambda=800\ nm\), 120 fs pulse duration and 1 kHz repetition rate) into a Bessel beam. This was performed by applying, using a spatial light modulator, a conical phase onto the input Gaussian beam: \(\Phi(r)=-2\pi r\ \sin(\theta)\ /\lambda\), which was imaged 1:1 by a first telescope consisting of two identical lenses (L1) positioned in a classical 4\(f\) configuration, as shown in **Figure 2(b)**.
Then our space varying half-wave plate was placed precisely at the relay image of the spatial light modulator to shape the 2D polarization (see **Figure 2(c)**). The polarization-shaped beam was demagnified by a combination of a 750 mm lens (L2) and a \(\times\)50 microscope objective, forming a telescope with a lateral magnification of \(\times\)1/278. Here, a spatial filter ensures that only the zero-order Bessel beam is transmitted to the following focusing stage. Owing to the long focal region (\(\sim\)50 \(\upmu\)m) of the Bessel beam, the polarization distribution in the transverse plane can be transformed so as to vary along the z axis, as shown conceptually in the polarization evolution schematics reported in **Figure 1(a)**. The produced z-variable-polarized Bessel beam has a cone half angle of \(\theta=12.5^{\circ}\) (central core of 0.7 \(\upmu\)m FWHM). In these conditions, owing to the use of a Bessel beam, we can shape the polarization state on the axis itself, i.e., along a line. Eventually, a second \(\times\)50 objective and a lens L3 are used to enable imaging onto a charge coupled device (CCD) camera. The cross-section intensity profile image of the middle of the structured light beam (z=25 \(\upmu\)m) is recorded in **Figure 2(d)**, which confirms the spatial profile of a Bessel beam. The intensity distribution and the polarization state distribution along the propagation of the shaped Bessel beam are also characterized experimentally, as shown in **Figure 2(e)**. Since
Figure 2: Generation of the z-variable-polarized fs Bessel beam using a space varying half-wave plate. (a) Schematics of the z-variable-polarized fs Bessel beam writing nanogratings in silica glass. \(k\): the direction of the light wavevector; \(v\): fs laser scanning direction. (b) Schematics of light propagation with z-variable polarization control (rotation). SLM: spatial light modulator; L1, L2, L3: focusing lenses; \(f_{1,2,3}\): focal distances of the corresponding lenses; \(\times\)50: 50\(\times\) microscope objective; CCD: charge coupled device. The dashed blue rectangle indicates the Stokes imaging polarimeter. \(k\): the direction of the light wavevector. (c) Polarization distribution (color map) of the cross-section of the generated z-variable-polarized fs Bessel beam (recorded at z=0 \(\upmu\)m). (d) Intensity profile of the cross-section of the generated z-variable-polarized fs Bessel beam (recorded at z=25 \(\upmu\)m). (e) Characterization of the generated z-variable-polarized fs Bessel beam along the propagation: intensity distribution (top panel), polarization distribution color map (middle panel) and Bessel core polarization distribution quantitative curve (bottom panel).
the Bessel beam has a focal region of about 50 \(\mu\)m, we move the CCD camera to obtain the intensity profile of the z-variable-polarized Bessel beam at each z-axis position. Besides, the polarization distribution along the propagation is measured similarly by moving the camera, combined with a rotating polarizer so as to build a Stokes imaging polarimeter [40].
It is worth mentioning that one can adjust the focusing conditions, for example the diameter of the incident fs Gaussian beam, the apex angle of the axicon imposed through the spatial light modulator, and the telescope demagnification, so as to obtain either a longer or a shorter focal region of the z-variable-polarized Bessel beam. Both the polarization spatial distribution (middle panel) and the quantitative curve (polarization of the Bessel central core, bottom panel) then show a good "helical polarization distribution" after \(z=9\ \mu m\), i.e., the polarization evolves from \(90^{\circ}\) at the beginning to \(0^{\circ}\) at the end of the fs Bessel beam.
### Longitudinal rotation of nanogratings fabricated by the z-variable-polarized fs Bessel beam
Since the nanogratings orientation is well controlled by the fs laser polarization (ideally perpendicular, considering no PFT and normal incidence), to gain insight into their manipulation during a single laser scan, we used the z-variable-polarized fs Bessel beam to tentatively imprint "twisted nanogratings".
Experimentally, the laser track is written by scanning the z-variable-polarized fs Bessel beam along the x axis inside the silica glass, as depicted in **Figure 2(a)**. The written line is then cut perpendicularly to the x axis to obtain the cross-section of the laser track, which lies in the y-z plane. Thus, we are able to see the rotation of the nanogratings orientation along the z axis in a single laser track, as shown in "Step 3" of **Figure 1(a)**. This schematic image shows the cross-section view of the fs laser written track where the nanogratings orientation is rotating. Note that this kind of "twisted nanogratings" is obtained in one single structure written by a single scan of the z-variable-polarized fs Bessel beam and is then captured by SEM (Field-Emission Gun Scanning Electron Microscope, ZEISS SUPRA 55 VP, 1 kV accelerating voltage) using secondary electron imaging, as seen in **Figure 3**. The SEM images clearly show a rotating morphology from head to tail of the laser-written track due to the linear polarization evolving along the fs Bessel laser beam. **Figure 3(a)** depicts the global view of the track induced by the z-variable-polarized fs Bessel beam. The head, middle and tail regions of the laser track are zoomed in the corresponding panels of **Figure 3(b)**, respectively. Note that because the polarization rotation plane is perpendicular to the SEM observation cross-section (making it difficult to describe the polarization rotation in a 2D drawing), the axial labels on the right of each detailed SEM image are slightly tilted so as to depict the different polarization directions at different positions from a 3D viewpoint. Here, in order to describe the SEM images related to the nanogratings distribution, we define the polarization configuration as "scanning direction + polarization direction", since the laser polarization is different at different z positions of the z-variable-polarized fs Bessel beam.
One can observe clear nanopores at the head of the laser track due to an "Xx" polarization configuration and well oriented nanolayers at the tail of the laser track due to an "Xy" polarization configuration. In the middle of the laser track, one can see the nanoporous layers due to an "X+45" configuration, since the layers are tilted out of the plane. The depicted nanopores located in the nanolayers are mainly attributed to the silica decomposition from \(SiO_{2}\) to \(SiO_{2(1-x)}+x\cdot O_{2}\). This subsequently results in an expulsion of ionized oxygen atoms towards the surrounding lattice assisted by a tensile stress, which has recently been revisited as a tensile stress assisted nano-cavitation process leading to the formation of a "self-assembly" of anisotropic nanopores [41, 42].
The experimental results show a kind of "twisted nanogratings" distribution. It is thus clear that the z-variable-polarized fs Bessel beam possesses different linear polarization orientations at different z positions, leading to different arrangements of the nanogratings (from head to tail). Hence, it can be expected that an arbitrary longitudinal orientation distribution of nanogratings can be enabled by designed polarization conversions of the laser beam through space varying birefringent elements, combined with Bessel focusing.
### Z-variable-polarized fs Bessel beam written chirality
Figure 3: SEM cross-section images of z-variable-polarized fs Bessel beam written nanogratings in SiO\({}_{2}\) with the parameters set at 800 nm, 120 fs, 1 kHz, 0.8 NA, 10 \(\mu\)m/s, 0.6 \(\mu\)J. (a) An overall view of the cleaved laser track. (b) Detailed pictures of the head, the middle and the tail of the laser track. The axial labels presented on the right correspond to each detailed SEM image, respectively. \(k\): the direction of the light wavevector; _v_: fs laser scanning direction; _E_: the electric field direction of the fs Bessel beam (polarization direction).
As a result of the varying orientation (here, a 3D rotation) of the nanogratings generated by the _z_-variable-polarized fs Bessel beam, strong symmetry breaking is obtained, leading to chiral optical properties. To highlight the achieved optical performance, a 45\({}^{\circ}\) linearly polarized fs Bessel beam was used for comparison. The latter choice is justified because 45\({}^{\circ}\) linearly polarized fs laser beams have been proven to imprint the highest chiral optical properties among all linearly polarized fs laser beams [19, 20].
First, the retardance induced by both the 45\({}^{\circ}\) linearly polarized fs Bessel beam and the _z_-variable-polarized fs Bessel beam was characterized by a polarizing microscope (Olympus BX51, Tokyo, Japan) equipped with a "de Senarmont" compensator. The "de Senarmont" arrangement uses a highly precise quarter-wave plate and a 180-degree-rotation analyzing polarizer to perform retardance measurements with a typical error range within \(\pm\)10 nm at the wavelength of 550 nm. Here, the retardance is the optical phase difference related to TLB; the relationship between these two quantities is given by \(TLB/\pi= retardance/\lambda\).
**Figure 4(a)** reports the retardance results of samples irradiated by both the 45\({}^{\circ}\) linearly polarized fs Bessel beam and the z-variable-polarized fs Bessel beam at different pulse energies. Clearly, the retardance induced by the _z_-variable-polarized fs Bessel beam is lower (by roughly one half) than that induced by the 45\({}^{\circ}\) linearly polarized fs Bessel beam. This is mainly attributed to the (partial) suppression of linear birefringence due to the different slow axes at different \(z\) positions, as investigated in our previous work [33]. In addition, the insets show the corresponding crossed-polarizer microscopic images of the irradiated samples at a pulse energy of 1.1 \(\upmu\)J as an illustration. The colors of the different samples are due to different retardance amplitudes, in agreement with the Michel-Lévy color chart, which also confirms a lower retardance when using the _z_-variable-polarized fs Bessel beam instead of the 45\({}^{\circ}\) linearly polarized fs Bessel beam. Further, since an object that possesses chiral optical properties can rotate the polarization plane of a linearly polarized probe light (a so-called optical rotation), one can measure this optical rotation (with an error of \(\pm\)1\({}^{\circ}\)) to unveil the chiral optical properties in a simple way, as shown in **Figure 4(b)**. The results clearly indicate higher (roughly twice as high) optical rotation values for the _z_-variable-polarized fs Bessel beam written samples compared to the 45\({}^{\circ}\) linearly polarized fs Bessel beam ones.
As confirmed experimentally, the _z_-variable-polarized fs Bessel beam can modify the nanogratings orientation along the optical path, thus forming a chiral structure at the micrometric scale. This appealing feature allows the fabrication of chiral waveplates (waveplates that exploit chiral optical properties to control light propagation, especially for polarization manipulation) with high circular birefringence and a rather small (ideally zero) linear birefringence. To quantitatively characterize the performance of imprinting extrinsic chirality in SiO\({}_{2}\) using this _z_-variable-polarized fs Bessel beam, a spectroscopic Mueller polarimeter (based on a modified Smart SE ellipsometer, JY HORIBA) was used to obtain the Mueller matrix of each sample, which was subsequently decomposed by the differential decomposition, thus providing the differential matrix of the irradiated sample [43, 44], from which all the anisotropic optical properties of the sample can be easily determined, as follows:
\[M_{diff}=\begin{bmatrix}0&LD&LD^{\prime}&CD\\ LD&0&CB&-LB^{\prime}\\ LD^{\prime}&-CB&0&LB\\ CD&LB^{\prime}&-LB&0\end{bmatrix} \tag{2}\]
As a result, the values (expressed in radians) of both the linear properties and the circular properties are obtained simultaneously. The TLB value can be calculated by \(TLB=\sqrt{LB^{2}+LB^{\prime 2}}\), as shown in **Table 1**.
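A minimal sketch of this differential decomposition is given below, assuming a normalized 4\(\times\)4 Mueller matrix `M` as input and a non- or weakly depolarizing sample; it uses the matrix logarithm and the standard splitting of the logarithm into its polarization part, from which the entries of Eq. (2) are read off.

```python
import numpy as np
from scipy.linalg import logm

def differential_properties(M):
    """Differential decomposition of a normalized Mueller matrix M (Eq. (2))."""
    L = np.real(logm(M))
    G = np.diag([1.0, -1.0, -1.0, -1.0])
    Lm = 0.5 * (L - G @ L.T @ G)      # polarization (non-depolarizing) part
    LD, LDp, CD = Lm[0, 1], Lm[0, 2], Lm[0, 3]
    CB, LB, LBp = Lm[1, 2], Lm[2, 3], Lm[3, 1]
    TLB = np.hypot(LB, LBp)           # total linear birefringence, sqrt(LB^2 + LB'^2)
    return {"LD": LD, "LD'": LDp, "CD": CD,
            "LB": LB, "LB'": LBp, "CB": CB, "TLB": TLB}
```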
Experimentally, homogeneous square-shaped samples with a size of 0.2 mm \(\times\)0.2 mm were written using either z-variable-polarized fs Bessel beam, 45\({}^{\circ}\) linearly polarized fs Bessel beam
Figure 4: Optical characterizations enabled by crossed-polarizer microscopy. SiO\({}_{2}\) samples are written at 800 nm, 120 fs, 1 kHz, 0.8 NA, 10 \(\upmu\)m/s. (a) Retardance measurements at different laser pulse energies written by both the 45\({}^{\circ}\) linearly polarized fs Bessel beam and the _z_-variable-polarized fs Bessel beam. (b) Optical rotation measurements with different laser pulse energies written by both the 45\({}^{\circ}\) linearly polarized fs Bessel beam and the _z_-variable-polarized fs Bessel beam. Measurements are done at 550 nm.
and a 0\({}^{\circ}\) linearly polarized fs Gaussian beam (for comparison). Each square is made of a set of lines with a pitch of 1 \(\upmu\)m, and the line scanning direction is always along the "+X" direction to avoid any non-reciprocal or "quill" writing [45, 46]. The samples were then measured with the spectroscopic Mueller polarimeter in transmission mode. **Figure 5** displays the spectroscopic results for both the TLB (solid lines) and the CB (dashed lines) of a \(z\)-variable-polarized fs Bessel beam (red), a 45\({}^{\circ}\) linearly polarized fs Bessel beam (blue) and a 0\({}^{\circ}\) linearly polarized fs Gaussian beam (gray). A large difference in both TLB and CB values can be observed between the different fs laser beam configurations. Clearly, the \(z\)-variable-polarized fs Bessel beam simultaneously induces a large CB yet a small TLB over the entire measuring wavelength range from 450 nm to 1000 nm. Note that both anisotropic optical properties exhibit large values at small wavelengths and decrease with increasing wavelength, due to their inherent 1/\(\lambda\) behavior according to the common descriptions given in **Table 1**.
To further explore the optical response and to tailor the optical properties of the \(z\)-variable-polarized fs Bessel beam written extrinsic chirality in silica glass, a set of pulse energies (all other laser parameters being fixed) was adopted for investigating both the TLB and CB properties, as shown in **Figure 6**. For the \(z\)-variable-polarized fs Bessel beam written samples, the CB value reaches -0.2 rad (which, in absolute value, is three times as much as the CB created by the 45\({}^{\circ}\) linearly polarized fs Bessel beam) over the laser pulse energy range of 0.7-1.0 \(\upmu\)J. Considering the simultaneously induced TLB, 0.7 \(\upmu\)J is clearly the optimal laser energy, since it exhibits a TLB value of only 0.2 rad, four times smaller than the TLB induced by the 45\({}^{\circ}\) linearly polarized fs Bessel beam. In similar conditions, a 0\({}^{\circ}\) linearly polarized fs Bessel (or Gaussian) beam writing would generate negligible circular properties [20]. These results explicitly provide a new pathway to fabricate chiral waveplates with high CB and almost suppressed TLB. This is a novel and more integrated approach, which differs from our previous strategy of utilizing multilayer configurations [33].
## III Discussion and Conclusion
_Controlling the nanogratings orientation by fs laser polarization:_ Since the first morphology observations performed by Shimotsuma et al. in 2003 [8], fs laser written nanogratings have attracted tremendous attention in the ultrafast light-matter interaction research field. However, the mechanism of nanogratings formation still remains under discussion. Several compelling hypotheses have emerged to explain the fundamentals of the nanogratings formation process. For instance, interference between the light field and a bulk plasma wave may account for the nanogratings formation, especially their periodicity, as proposed by Kazansky et al. [8], whereas other groups proposed a scattered waves interference model [15, 16]. Recently, our group has revealed the occurrence of elongated nanopores forming the nanolayers. Ultrafast silica decomposition through a nano-cavitation process was proposed as a tentative explanation [41]. Although direct evidence for this ultrafast process is difficult to obtain because of the limited frame rate and time resolution of cameras, the imprinted nanogratings orientation has been proven to be almost perpendicular to the laser polarization [8]. In addition, it has been discussed that PFT can lead to a small tilt in the nanogratings orientation due to spatiotemporal chirp and angular dispersion. When the light propagates to the focal plane, PFT-related effects grow exponentially and can induce a tilt of a few degrees in the nanogratings at 100 fs/mm [47].
Fig. 5: Spectroscopic Mueller polarimetry results for circular birefringence (dashed lines) and total linear birefringence (solid lines) of SiO\({}_{2}\) samples written by a 0\({}^{\circ}\) linearly polarized fs Gaussian beam (gray), a 45\({}^{\circ}\) linearly polarized fs Bessel beam (blue) and a \(z\)-variable-polarized fs Bessel beam (red). Laser parameters are 800 nm, 120 fs, 1 kHz, 0.8 NA, 10 \(\upmu\)m/s, 0.7 \(\upmu\)J.
Fig. 6: Mueller polarimetry results of circular birefringence (red) and total linear birefringence (blue) of the SiO\({}_{2}\) samples written at different laser pulse energies by both 45\({}^{\circ}\) linearly polarized fs Bessel beam (full symbols) and \(z\)-variable-polarized fs Bessel beam (hollow symbols). Measurements are done at 550 nm. SiO\({}_{2}\) samples are written at 800 nm, 120 fs, 1 kHz, 0.8 NA, 10 \(\upmu\)m/s.
Nevertheless, the tilt in question represents only a small deviation of the nanogratings orientation.
_Origin of the written chirality:_ Chirality originates from a chiral molecular arrangement at the nanoscale; that is, intrinsic chirality contributes to optical activity and thus to the manifestation of chiral optical properties. However, chiral optical properties can also be due to so-called extrinsic chirality, which is related to a chiral object constructed in an experimental arrangement that is not coincident with its mirror image [48]. Hence, we have recently advanced a novel conceptual view according to which the fs laser written extrinsic chirality arises from two contributions: a form-related birefringence and a stress-related birefringence with non-parallel and non-perpendicular neutral axes. We therefore deal with extrinsic chirality arising from the experimental arrangement of linear optical properties [20]. Compared to intrinsic chirality, extrinsic chirality can exhibit a larger optical rotation due to the strong symmetry breaking at a larger, typically micrometric, scale. In this paper, the orientation-rotated nanogratings structure driven by the z-variable-polarized fs Bessel beam is a form of experimental chiral arrangement corresponding to extrinsic chirality and is potentially useful for the fabrication of chiral devices with large and controllable optical rotation.
_Z-variable-polarized fs Bessel beam writing vs multilayer strategy:_ Concerning extrinsic chirality implantation, the multilayer strategy has proven to be a feasible approach for generating high CB with minimal TLB, thanks to the (partial) suppression of linear birefringence through the alignment of the fast/slow axes of the different layers [20, 33]. However, such a multilayer strategy suffers from limitations such as a low fabrication speed and the requirement for precise alignment between layers. Geometrically, the multilayer strategy amounts to fabricating a staircase-like chiral arrangement without a transition region between layers. Conversely, the z-variable-polarized fs Bessel beam writing approach can (1) perform the writing task in a single scan, which provides high efficiency and thus unlocks the versatility of fabricating chiral devices in a fast way; (2) more importantly, z-variable-polarized fs Bessel beam irradiation enables a quasi-gradual evolution of the polarization, that is, a smoother variation of the nanogratings orientation compared to the staircase-like structure written using the multilayer strategy. Note that by further engineering the space varying birefringent elements, one can tailor a continuous chiral arrangement by choosing the pitch and the length of the "twisted silica glass". However, the absolute CB value is not so large when compared, for example, to our best multilayer structure results (CB=-2.25 rad with a total thickness of 340 \(\upmu\)m, measured at 550 nm) [33]. Therefore, for the multilayer strategy, the CB per unit length is calculated to be -6.62 rad/mm, whereas for the z-variable-polarized fs Bessel beam writing it is around -5 rad/mm (i.e., -0.2 rad over a thickness of 40 \(\upmu\)m). This preliminary result is attributed to the relatively short extent of the Bessel beam used in this study, which could be enlarged, for example, by using a smaller demagnification factor of the imaging telescope. Nevertheless, when compared to 45\({}^{\circ}\) linearly polarized fs Bessel beam writing under identical conditions, our results clearly prove the potential of implanting a high CB by using such a controllable "helical polarization distribution".
Towards practical applications, one needs to consider the limitations of this strategy as well: (1) To date, the fs laser written space varying birefringent elements are based on self-organized nanogratings whose nanoporous structures can lead to strong scattering losses. The spectral signature possesses a \(1/\lambda^{4}\) dependence and corresponds to a strong anisotropic loss in the ultraviolet-visible range [49], exemplified by a 74% transmittance measured at 550 nm. However, this restriction is expected to be partly overcome by employing the "Type X" modification with its ultralow loss [18]. (2) It is difficult to use our shaping strategy in the mid-infrared range, which requires further work to imprint higher birefringence, for example in ZnO-Barium Gallo-Germanate glasses [50]. (3) The spatial resolution of the fs written space varying waveplate is on the order of a micrometer, which might restrict applications in sub-micrometer and nanometer scale integrated systems.
_Conclusion and potential applications:_ In summary, in light of the current state of light control, we proposed a novel concept to extend polarization manipulation along the optical path, which provides new perspectives in polarization transformation and, therefore, enriches the state of the art of light manipulation. A stable "helical polarization distribution" fs Bessel beam was generated for the first time by using a fs written space varying half-wave plate. Besides, to our knowledge such a z-variable-polarized laser beam was implemented for the first time in laser materials processing, namely in silica glass. This enables the experimental achievement of twice as large circular birefringence with, simultaneously, four times smaller linear birefringence, compared to a 45\({}^{\circ}\) linearly polarized fs Bessel beam or Gaussian beam.
Several applications would benefit from this 3D polarization structuring strategy: (1) this technique can be directly used for the fabrication of chiral waveplates towards linear polarization rotators with high optical rotation and ultimately no linear birefringence. This kind of optical rotator has the advantage of being alignment-free, compared to a conventional half-wave plate, and of potentially achromatic behavior. (2) Since polarization is an essential parameter in 5D data storage [5], our strategy of 3D control of polarization may be useful for increasing the writing and reading speed. The optical chirality could even become a new dimension for increasing the capacity of information storage. (3) This 3D polarization structuring approach will benefit the fundamental understanding of light-matter interaction and will provide ultrafast light manipulation with additional degrees of freedom. For example, it could provide a platform for laser-based production of cholesteric liquid crystal analogous optical devices using tiny lengths of inorganic glass, i.e., "twisted silica glass". In general, it will not only facilitate the implementation of polarization optics through the mediation of space varying optical devices but also underpin numerous potential chiroptic applications such as passive isolators, quantum emitters, optical tweezers, achromatic devices and beyond.
## Acknowledgments
This work is supported by Agence Nationale pour la Recherche, FLAG-IR project (ANR-18-CE08-0004-01) and European Research Council (ERC) 682032-PULSAR; Jiafeng Lu acknowledges the China Scholarship Council (CSC) for the funding No. 202006890077.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
M.L. supervised the project. J.L., F.C. and M.L. wrote the manuscript. J.L. carried out the design and fabrication of the space varying birefringent elements. M.H. and F.C. carried out the Bessel beam generation and laser writing. J.L. carried out the sample treatment and optical properties characterization. J.L. and E.G.-C. performed the Mueller polarimetry measurements. J.L., F.B. and M.L. performed the SEM measurements. J.L., M.H., F.C. and M.L. processed and analyzed the data. J.L., M.L., F.C., E.G.-C., R.O., B.P. and X.Z. discussed the experimental results. All authors commented on and discussed this work.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2305.14070 | Assessing Linguistic Generalisation in Language Models: A Dataset for
Brazilian Portuguese | Much recent effort has been devoted to creating large-scale language models.
Nowadays, the most prominent approaches are based on deep neural networks, such
as BERT. However, they lack transparency and interpretability, and are often
seen as black boxes. This affects not only their applicability in downstream
tasks but also the comparability of different architectures or even of the same
model trained using different corpora or hyperparameters. In this paper, we
propose a set of intrinsic evaluation tasks that inspect the linguistic
information encoded in models developed for Brazilian Portuguese. These tasks
are designed to evaluate how different language models generalise information
related to grammatical structures and multiword expressions (MWEs), thus
allowing for an assessment of whether the model has learned different
linguistic phenomena. The dataset that was developed for these tasks is
composed of a series of sentences with a single masked word and a cue phrase
that helps in narrowing down the context. This dataset is divided into MWEs and
grammatical structures, and the latter is subdivided into 6 tasks: impersonal
verbs, subject agreement, verb agreement, nominal agreement, passive and
connectors. The subset for MWEs was used to test BERTimbau Large, BERTimbau
Base and mBERT. For the grammatical structures, we used only BERTimbau Large,
because it yielded the best results in the MWE task. | Rodrigo Wilkens, Leonardo Zilio, Aline Villavicencio | 2023-05-23T13:49:14Z | http://arxiv.org/abs/2305.14070v2 | # Assessing Linguistic Generalisation in Language Models
###### Abstract
Much recent effort has been devoted to creating large-scale language models. Nowadays, the most prominent approaches are based on deep neural networks, such as BERT. However, they lack transparency and interpretability, and are often seen as blackboxes, which affects their applicability in downstream tasks as well as the comparison of different architectures or even of the same model trained on different corpora or hyperparameters. In this paper, we propose a set of intrinsic evaluation tasks that inspect the linguistic information encoded in models developed for Brazilian Portuguese. These tasks are designed to evaluate how different language models generalise information related to grammatical structures and multiword expressions (MWEs), thus allowing for an assessment of whether the model has learned different linguistic phenomena. The dataset that was developed for these tasks is composed of a series of sentences with a single masked word and a cue that narrows down the context. This dataset is divided into MWEs and grammatical structures, and the latter is subdivided into 6 tasks: impersonal verbs, subject agreement, verb agreement, nominal agreement, passive and connectors. The subset for MWEs was used to test BERTimbau Large, BERTimbau Base and mBERT. For the grammatical structures, we used only BERTimbau Large, because it yielded the best results in the MWE task. In both cases, we evaluated the results considering the best candidates and the top ten candidates. The evaluation was done both automatically (for MWEs) and manually (for grammatical structures). The results obtained for MWEs show that BERTimbau Large surpassed both of the other models in predicting the correct masked element. However, the average accuracy of the best model was only 52% when only the best candidates were considered for each sentence, going up to 66% when the top ten candidates were taken into account. As for the grammatical tasks, results showed better predictions, but also varied depending on the type of morphosyntactic agreement. Cases such as connectors and impersonal verbs, which do not require any agreement in the produced candidates, had precision of 100% and 98.78% among the best candidates, while other tasks that require morphosyntactic agreement to produce good candidates had results consistently below 90% overall precision, with the lowest scores being reported for nominal agreement and verb agreement, both having below 80% overall precision among the best candidates. Therefore, we identified that a critical and widely adopted resource for Brazilian Portuguese NLP, although mostly proficient in these tests, also presents issues concerning MWE vocabulary and morphosyntactic agreement. These models are the de-facto core component in current NLP systems, and our findings demonstrate the need for additional improvements in these models and the importance of widely evaluating computational representations of language.
Keywords: Grammatical Structures; Multiword Expression; Intrinsic Evaluation; Brazilian Portuguese; Contextualised Word Embeddings; Language Models
## 1 Introduction
Learning computational representations that accurately model language is a critical goal of Natural Language Processing (NLP) research. These representations, or word embeddings, are not only helpful for language technology tasks like machine translation, question answering and text adaptation. They can also improve our understanding of human cognition, for instance, correlating well with human predictions about words (Mandera et al., 2017; Schrimpf et al., 2020), and also helping to highlight the existence of explicit and implicit social constructs, including gender and racial bias (Bolukbasi et al., 2016; Kumar et al., 2020), or helping to detect misinformation (Oshikawa et al., 2018; Su et al., 2020) and cultural change (Savoldi et al., 2021; Dinan et al., 2020), among others. With the recent popularisation of NLP and the availability of large scale computing resources, we have witnessed a quick evolution of many new computational architecture proposals for language representation. As a consequence, there are currently pre-trained language models available on an unprecedented scale, for over a hundred languages (Devlin et al., 2018), including low-resourced ones. However, these models lack transparency and interpretability (Marcus, 2020) and are often seen as blackboxes, all of which limit their application in tasks that
require explainability. This fast-paced evolution raises the need for careful assessment of the kinds of information that a representation incorporates, both in terms of linguistic and common-sense or world information. Moreover, the evaluation procedures need to adopt standard protocols that are independent of the architecture proposed, and be applicable to classes of models in general, setting the standards for what we expect them to have learned and be proficient in.
In fact, much recent effort has been devoted to defining evaluation protocols that allow us to have an insight into the knowledge that the model incorporates (Ettinger, 2020; Warstadt et al., 2020; Vulic et al., 2020). These evaluations may target different phenomena, but their main distinction is between intrinsic and extrinsic evaluations. Intrinsic evaluations usually involve comparing predictions made by a model to human judgements, existing resources like WordNet (Miller et al., 1990) or psycholinguistic norms. Extrinsic evaluations are often based on applications (Bakarov, 2018) and examine the ability of embeddings to be used as the feature vectors of supervised machine learning algorithms for tasks like natural language entailment and question answering. The assumption is that better quality and accurate embeddings lead to better quality applications. Recently, the BERT (Devlin et al., 2018) architecture (and its variations, such as RoBERTa (Liu et al., 2019), DistilBERT (Sanh et al., 2019), XLNet (Yang et al., 2019) and ALBERT (Lan et al., 2019)) has been used as a language encoding component in several applications that enhanced the state-of-the-art in their several NLP fields. These developments can be seen as an extrinsic evaluation of the BERT architecture, as they seem to suggest that BERT more successfully encodes syntactic and semantic information of the language than alternative models. However, what these evaluations do not indicate is which specific information is encoded, and if it is used for a given task in ways that are compatible with human use of language. In this paper we focus on intrinsic evaluations with an aim at gaining a better understanding of how linguistically proficient these models really are.
For different languages, the stage of development and proficiency of the models varies considerably, and so do the datasets available for evaluation. For Portuguese, in particular, only a few evaluation initiatives are available, such as Souza et al. (2020); Abdaoui et al. (2020); Schneider et al. (2020), and, despite being valuable evaluation contributions to NLP in Portuguese, these correspond to only extrinsic evaluations and target very specific tasks. Overall, the Portuguese versions of BERT have not been subjected to much intrinsic evaluation in comparison to their English counterparts, and there is great uncertainty about the coverage and quality of the linguistic information about Portuguese encoded in these models. As a consequence, there is also uncertainty about the stability and quality of the output produced by systems built using these models. In this paper, we address some of these issues, assessing the linguistic generalisation of language models for Portuguese. In particular, we propose a model-agnostic test set to measure the ability of a
model to capture expected linguistic patterns found in Brazilian Portuguese. The main contributions of this study are:
* A dataset for testing generalisation both related to multiword expression (MWE) and grammatical information. The MWE dataset targets 33 non-compositional MWEs, while the set for testing grammatical information is divided into 6 different tasks.
* An analysis that generated a language profile of BERT models for Brazilian Portuguese.
* A comparison of the BERT models' quality considering only the best generated candidates and the ten best candidates, also highlighting cases where the model puts more confidence on wrong candidates.
The proposed dataset will be available on github.
The following sections are organised as follows: Section 2 summarises the most influential model-agnostic proposals for intrinsic evaluation, including those targeting Portuguese. Section 3 presents the BERT models for Portuguese, describing their characteristics, and the models used in this work. Then we detail the methodology employed to create seven tasks comprised on our test set in Section 4. The results of the different models and their analyses are presented in Section 5. Finally, in Section 6 we discuss the conclusions of our findings as well as future work.
## 2 Related Work
The performance that state-of-the-art models achieve on language tasks is seen as an indication of their ability to capture linguistic information. However, their lack of transparency and interpretability prevents an in-depth analysis of the linguistic patterns and generalisations that they capture. Therefore, considerable attention has been devoted to the development of intrinsic evaluation protocols for inspecting the information captured by these models.
Intrinsic evaluations usually involve experiments in which word embeddings are compared with human judgements about word relations, and may be divided into various categories. For example, Bakarov (2018) arranges them as follows:
* conscious evaluation including tasks like semantic similarity, analogy, thematic fit, concept categorisation, synonym detection, and outlier word detection;
* subconscious evaluation in tasks like semantic priming, neural activation patterns, and eye movement data;
* thesaurus-based evaluations, including thesaurus vectors, dictionary definition graph, cross-match test, semantic difference and semantic networks; and
* language-driven evaluations, such as phonosemantic analysis and bi-gram co-occurrence frequency.
In this paper, we concentrate mainly on the conscious evaluation tasks. These vary from testing morphosyntactic agreement patterns, like number and gender (Warstadt et al., 2020), to more semantic-related information, like the implications of negation (Ettinger, 2020; Kassner and Schutze, 2020). As an approach to evaluate sensitivity to syntactic structures in English, Linzen et al. (2016) proposed an evaluation focusing on number agreement for subject-verb dependencies, which is something that we also address in this paper. They generated a dataset of 1.35 million number prediction problems based on Wikipedia (9% for training, 1% for validation, and 90% for testing). These problems were used to evaluate four tasks:
* number prediction, with a binary prediction task for the number of a verb;
* verb inflection, with a variation of the number prediction when the singular form of the upcoming verb is also given to the model;
* grammaticality judgements, another classification task, in which the model must indicate the grammaticality of a given sentence; and
* language modelling, in which the model must assign a higher probability to the correct word than to a wrong word in a given sentence.
A related dataset, the "colorless green ideas" test set, was proposed by Gulordava et al. (2018) including four languages (Italian, English, Hebrew, Russian). The test items focus on long-distance number agreement evaluation, evaluating the accuracy in cases of subject-verb agreement with an intervening embedded clause and agreement between conjoined verbs separated by a complement of the first verb. Their test set is composed of syntactically correct sentences from a dependency treebank which are converted into nonce sentences by replacing all content words with random words with the same morphology, aiming to avoid semantic cues during the evaluation.
To avoid very implausible sentences that violate selectional restrictions (e.g., the apple laughs), Marvin and Linzen (2018) present an automatically constructed dataset for evaluating the grammaticality of the predictions of a language model. Marvin and Linzen (2018) use templates to automatically generate 350,000 English sentence pairs, consisting of pairs of grammatical and ungrammatical sentences, for examining subject-verb agreement, reflexive anaphora and negative polarity.
To evaluate the performance obtained by BERT in these datasets (Gulordava et al., 2018; Linzen et al., 2016; Marvin and Linzen, 2018), Goldberg (2019) fed a masked version of a complete sentence into BERT, then compared the score assigned to the original correct verb to the score assigned to the incorrect one.
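As an illustration of this scoring procedure, the sketch below (ours, not Goldberg's released code) uses the Hugging Face `transformers` API; the checkpoint name, the example sentence and the verb pair are placeholders, and each candidate verb is assumed to be a single token in the model's vocabulary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Generic English BERT checkpoint, used purely as a placeholder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def mask_score(sentence_with_mask, candidate):
    """Log-probability the model assigns to `candidate` at the [MASK] position."""
    inputs = tokenizer(sentence_with_mask, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return logits.log_softmax(dim=-1)[tokenizer.convert_tokens_to_ids(candidate)].item()

# Invented subject-verb agreement item: compare grammatical vs. ungrammatical verb forms.
sentence = "The keys to the cabinet [MASK] on the table."
print(mask_score(sentence, "are") > mask_score(sentence, "is"))  # True if the model prefers the correct form
```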
Mueller et al. (2020) evaluated LSTM language models and the monolingual and multilingual BERT versions on the subject-verb agreement challenge sets. They used CLAMS (Cross-Linguistic Assessment of Models on Syntax), which extends Marvin and Linzen (2018) to include English, French, German, Hebrew and Russian. To construct their challenge sets, Mueller et al. (2020) use a lightweight grammar engineering framework (attribute-varying grammars), aiming at more flexibility than the hard-coded templates of Marvin and
Linzen (2018). One may argue that this approach is related to ours because they use a grammar with attributes to guide the sentence generation while our work uses pre-selected seeds. Our seeds are related to their attributes, and our generation of grammatical sentences is the combination of seed and original target word while the ungrammatical sentences may be generated by a cross-combination of seed and original target word. Their experiments on English BERT and mBERT suggest that mBERT seems to learn syntactic generalisations in multiple languages, but not in the same way in all languages. In addition, its sensitivity to syntax is lower than that of monolingual BERT.
Regarding evaluations for Portuguese, Bacon and Regier (2019) evaluated BERT's sensitivity to four types of structure-dependent agreement relations for 26 languages, including Portuguese. They showed that both the monolingual and multilingual BERT models capture syntax-sensitive agreement patterns. Bacon and Regier (2019) evaluated the models in a cloze task, in which target words would share the morphosyntactic features of the word with which they agree. In the cloze task, the data comes from version 2.4 of the Universal Dependencies (UD) treebanks (Nivre et al., 2017), using the part-of-speech and dependency information to identify potential agreement relations. BERT responses are then annotated with morphosyntactic information from both the UD and the UniMorph projects (Sylak-Glassman, 2016) to be compared with the morphosyntactic features of the word with which they agree. With respect to Portuguese, Bacon and Regier (2019) evaluate 47,038 sentences in the cloze test and 2,107 in the feature bundles using mBERT, and the results obtained show that it performed remarkably well, achieving an accuracy close to 100%.
In another multilingual evaluation, Sahin et al. (2020) introduced 15 probing tasks at type level for 24 languages, finding that some probing tests correlate to downstream tasks, especially for morphologically rich languages. They performed an extrinsic evaluation using downstream tasks, assessing the performance in POS tagging, dependency parsing, semantic role labelling, named entity recognition, and natural language inference targeting German, Russian, Turkish, Spanish and Finnish. Moreover, they propose the contextless prediction of the morphological feature of a given word, aiming to identify the feature indicated in the UniMorph dictionary (Sylak-Glassman, 2016). For that, Sahin et al. (2020) removed ambiguous forms and partially filtered infrequent words. This evaluation included features like the following for Portuguese: number, gender, person, and tense. However, they did not report the models' performance for Portuguese.
## 3 BERT models for Portuguese
Large pre-trained language models are valuable assets for language processing. These models support technological and scientific improvements. For English, for example, we can easily find 2,795 models accessing Huggingface website1.
However, this abundance of models is not true for most languages. For example, there are only 67 models for Portuguese available in the same repository2, and most of these are for Machine Translation, Automatic Speech Recognition, and Text2Text Generation. In this section, we highlight three models dedicated to language processing: smaller mBERT and BERTimbau, which are based on general-purpose Portuguese, and BioBERTpt, which is based on Portuguese for specific purposes.
Footnote 2: [https://huggingface.co/models?filter=pt](https://huggingface.co/models?filter=pt)
Improving the mBERT model, Abdaoui et al. (2020) proposed to generate smaller models based on mBERT that are language-specific. In other words, they removed the parameters related to non-target languages. Then, they identified the vocabulary of each language and rebuilt the embedding layer to generate the corresponding models. Following the original mBERT, Abdaoui et al. (2020) started from the entire Wikipedia dump of each language, initially covering 15 languages, but since the work's publication, they have been extending it to other languages, including Portuguese3.
Footnote 3: [https://huggingface.co/Geotrend/bert-base-pt-cased](https://huggingface.co/Geotrend/bert-base-pt-cased)
In a different perspective, Souza et al. (2019, 2020) trained language-specific models for Brazilian Portuguese (nickname BERTimbau), using data from brWaC (Wagner Filho et al., 2018). Their models were trained over 1,000,000 steps using the BERT-Base and BERT-Large Cased variants architectures. In order to evaluate their models, they resorted to three probing tasks (sentence textual similarity, recognising textual entailment, and named entity recognition) in which BERTimbau improved the state-of-the-art compared to multilingual models and previous monolingual approaches.
Targeting clinical NLP, Schneider et al. (2020) trained BioBERTpt, a deep contextual embedding model for Portuguese, by fine-tuning mBERT. BioBERTpt is trained using clinical narratives and biomedical-scientific papers in Brazilian Portuguese. Schneider et al. (2020) trained three versions of BioBERTpt: BioBERTpt(bio) trained only on biomedical literature from scientific papers from Pubmed and Scielo; BioBERTpt(clin) trained on clinical narratives from electronic health records from Brazilian Hospitals; and BioBERTpt(all) trained on clinical narratives and biomedical literature in the Portuguese language.
Aiming to evaluate the BERT models for Brazilian Portuguese, we selected two models that were trained using brWaC (Wagner Filho et al., 2018) and multilingual BERT, allowing a comparison point with BERT models trained in other languages. We used both the BERTimbau Base and Large models (Souza et al., 2019, 2020), which have, respectively, 12 layers and 110M parameters, and 24 layers and 335M parameters. These models were chosen because they are uniquely trained in Brazilian Portuguese, thus avoiding issues from different variations, and they are also widely adopted, being the most popular models for Portuguese on the Huggingface webpage (BERTimbau Base was downloaded 17k times and BERTimbau Large was downloaded 11k times).
Therefore, the findings of this paper may have a wide impact in downstream applications for Brazilian Portuguese.
## 4 Methodology
In this section, we describe the steps performed for the development of the dataset proposed in this work. We first compiled the MWE test set (see Subsection 4.1) by selecting idiomatic MWEs and five context sentences for each of them. For the grammar tests (Subsection 4.2) we also selected context sentences, but we looked for specific grammar patterns in this step. Once we obtained the sentences for both tasks, we created seeds for the sentences, aiming to use them as cues to the specific target. Finally, we fed the BERT models with our evaluation sentences and manually evaluated the models' output (see Section 5). An overview of this methodology is presented in Figure 1.
### Multiword Expression Tests
Multiword Expressions (MWEs), like nominal compounds (NCs) and idioms, can be defined as linguistic structures that cross word boundaries (Sag et al., 2002). Due to their many potential linguistic and statistical idiosyncrasies (Sag et al., 2002), knowledge about MWEs is often considered a mark of proficiency for language learners.
Figure 1: Methodology for creating the test dataset.
In the MWE task, our goal is to assess the MWE inventory of a model by measuring the models' ability to identify an MWE given a context. For that, we use 33 highly idiomatic two-word NCs4 from LexSubNC (Wilkens et al., 2017), such as **pão duro** (stingy person; lit. _hard bread_) and **pé quente** (person who brings good luck; lit. _hot foot_), and then we select 5 sentences from Corpus Brasileiro (Sardinha, 2010) for each MWE. The sentence selection was done by shuffling concordance results on Sketch Engine (Kilgarriff et al., 2004) and selecting 5 full sentences of different lengths. This process resulted in 165 sentences for testing MWEs.
For assessing the models' capacity to retrieve the target NC given a compatible context, we adopt a masking protocol, in which we feed the model an input sentence using a mask to replace one word of the NC. The model output is then evaluated in terms of whether or not the model produces the correct word for replacing the mask. Although the mask expects only one possible answer, different words may also be used to replace the mask, and we analyse if the target word is among the responses produced by the model. Moreover, as the NCs are composed of two words (an adjective and a noun), for each NC we generated two test items, keeping one of the NC components as a cue for the model and masking the other one. For example, the sentence _Presidente, trago uma notícia em **primeira mão**_. (_President, I bring you first-hand news._), which contains the NC _primeira mão_ (_first-hand_), is processed twice:
* _Presidente, trago uma notícia em [MASK]_ **mão**_.
* _Presidente, trago uma notícia em **primeira** [MASK]_.
We expect that together, these sentences, along with one of the NC components, should provide enough context about the NC to trigger the model to produce the missing word. The output of this process resulted in 10 candidate words for each test item, in other words, 20 candidate words for each MWE, with 10 for the first component word of the compound and 10 for the second.
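A minimal sketch of this masking protocol, using the Hugging Face `fill-mask` pipeline, is shown below; the model identifier refers to the publicly released BERTimbau Base checkpoint on the Hugging Face Hub (an assumption on our part), and the two test items reproduce the example above.

```python
from transformers import pipeline

# BERTimbau Base checkpoint on the Hugging Face Hub (assumed); swap in the Large
# model or mBERT to reproduce the other experimental settings.
fill_mask = pipeline("fill-mask", model="neuralmind/bert-base-portuguese-cased")

# One NC ("primeira mão") yields two test items: mask the first word, then the second.
test_items = [
    ("Presidente, trago uma notícia em [MASK] mão.", "primeira"),
    ("Presidente, trago uma notícia em primeira [MASK].", "mão"),
]

for sentence, target in test_items:
    candidates = fill_mask(sentence, top_k=10)               # 10 candidate words per test item
    predicted = [c["token_str"].strip() for c in candidates]
    status = "found" if target in predicted else "missing"
    print(f"{target}: {status} | top-3: {predicted[:3]}")
```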
### Grammatical Tests
To assess the models' grammatical information, we developed a broad range of cloze tests, each one targeting a specific phenomenon, with several sub-phenomena, where each sub-phenomenon was tested using a set of sentences and a series of variations. The targets of these tests were: impersonal verbs, subject agreement, verb agreement, nominal agreement, passive and connectors. These tests were inspired by the work of Kurita et al. (2019), who propose a template-based method to quantify social bias in BERT. Each test sentence is composed of a mask and a set of seeds as parameters. Like in the MWE test, the mask is the "blank", and the seeds work as a linguistic cue to the expected answer. For example, in the nominal agreement test, the sentence _"É precisamente na revelação <SEED> [MASK] que reside o valor dessas cartas, diz."_ (_It is precisely in the revelation <SEED> [MASK] that lies the value of those letters, he/she says._) is used 40 times, each time with a different seed, such as _das relações_ (_of the relations_), _das capacidades_ (_of the capacities_), _da alma_ (_of the soul_), _da condição_ (_of the condition_), _dos seres_ (_of the beings_), _dos segredos_ (_of the secrets_), _do ser_ (_of the being_), and _do espírito_ (_of the spirit_). In this example, we use four types of seeds, each one targeting a different output in the masked word (masculine singular, masculine plural, feminine singular and feminine plural). The different seed sets are specific to each sentence, because they have to take into consideration its syntactic and semantic restrictions.
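A small sketch of how one such template is expanded into concrete test items is given below; the sentence and seeds are taken from the example above, the grouping of the seeds by the agreement they are expected to trigger is our reading of that example, and the helper name is ours.

```python
# One cloze template from the nominal agreement task; <SEED> is replaced by each seed in turn.
template = ("É precisamente na revelação <SEED> [MASK] "
            "que reside o valor dessas cartas, diz.")

# Seeds grouped by the agreement they are expected to trigger in the masked slot.
seeds = {
    "fem_sing":  ["da alma", "da condição"],
    "fem_plur":  ["das relações", "das capacidades"],
    "masc_sing": ["do ser", "do espírito"],
    "masc_plur": ["dos seres", "dos segredos"],
}

def expand(template, seed_groups):
    """Yield one test item per seed, keeping track of the expected agreement."""
    for expected, seed_list in seed_groups.items():
        for seed in seed_list:
            yield {"sentence": template.replace("<SEED>", seed), "expected": expected}

test_items = list(expand(template, seeds))
print(len(test_items), test_items[0]["sentence"])
```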
For nominal agreement, we automatically generated seed sets using a mask where the original seed would be, which allowed us to have more extensive coverage. In this process, we selected the top 10 candidates from base-BERT and then we manually evaluated the candidates to eliminate bad fits (i.e., we removed the candidates with agreement errors or a different meaning). For verb and subject agreement, and for impersonal verbs, we automatically generated a set of seeds based on the UNITEX-PB dictionary (Muniz et al., 2005; Vale and Baptista, 2015), and then also validated each seed in context. In the case of passive voice, due to the complexity of the seeds, we had to manually generate sets of seeds for each sentence. Finally, in the case of connectors, we could not use any type of seed, because this would require us to generate full clauses, so we only used five sentences per connector that was tested.
Using the manually validated templates composed of sentences and their seeds, we generated 10 candidate answers for each masked word using BERTimbau Large5. These candidates were also annotated with part-of-speech using the Unitex-PB dictionary (Muniz et al., 2005; Vale and Baptista, 2015).6 Then, all candidate words from BERTimbau Large were evaluated by a linguist according to their syntactic and semantic suitability in context. Although we used seeds as cues for the expected type of answer (for instance, we used generic passive structures to induce the model to produce a nominal participle as candidate in the passive voice tests), the actual answer of the system could be from a different category, and still the sentence would be grammatically and semantically correct. As such, the evaluation took into account any answer that would fit in the context, not necessarily only the ones that were expected. The ones that were correct, but deviated from the expected target answer, were identified, and are further discussed in the results.
Footnote 5: Given the size of the manual evaluation task, with more than 12k items, we only used the BERTimbau Large model, as it performed best in the MWE task, as will be discussed in the next section.
Footnote 6: We do not use a parser in this step for simplicity.
In total, the dataset for grammatical evaluation consists of 6 dimensions that contain 1,231 tests and 688 seeds.
## 5 Results
This section presents the evaluation results considering the two different subsets of tests: MWEs and grammatical information. The MWE results consider three models: BERTimbau Large, BERTimbau Base and mBERT. These models were evaluated in terms of accuracy of the best prediction and accuracy among the top ten candidates. The grammatical evaluation was done on
BERTimbau Large, and considered results in terms of precision of the best candidate (or accuracy) and precision at the top ten candidates.
### MWE Tests
Aiming to assess the model generalisation concerning MWE vocabulary, we calculate the accuracy of the output. However, this information might not be fully representative of its generalisation capacities. Thus, for each masked sentence we looked for the correct word in the top ten predictions of the model. In the evaluation step, we resorted to accuracy at ten (acc@10). In other words, we evaluated the presence or absence of the expected word in the top ten predictions.
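Concretely, both measures can be computed as in the sketch below (ours); the toy `items` list is invented purely to illustrate the calculation, with each entry pairing the expected word with the model's ranked candidates.

```python
def accuracy_at_k(items, k):
    """items: list of (target_word, ranked_candidates); a hit means the target is among the top k."""
    hits = sum(1 for target, ranked in items if target in ranked[:k])
    return hits / len(items)

# Invented example: three test items with their ranked predictions.
items = [
    ("mão",      ["mão", "casa", "linha"]),      # correct word is the top prediction
    ("primeira", ["única", "primeira", "boa"]),  # correct word appears lower in the ranking
    ("curto",    ["longo", "fino", "grande"]),   # correct word missing from the candidates
]
print(accuracy_at_k(items, 1))   # 0.33... (acc)
print(accuracy_at_k(items, 10))  # 0.66... (acc@10)
```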
The multilingual model, mBERT, performed consistently worse than the dedicated models trained for Portuguese. Moreover, BERTimbau Large performs better than the Base version. In both of these cases, it was the larger models that performed better, suggesting that the ability to learn MWEs is related to the size of the model. In terms of the difficulty of the task, as shown in Table 1, apart from the multilingual BERT, the models were able to predict the missing MWE component as the first alternative in 40-52.73% of the cases. However, in this stricter evaluation scenario, where only the top choice is considered, the quality of the prediction is poor. In a more lenient scenario, when we analyse the ability of a model to predict the missing word among the top 10 most likely words, the results substantially improve. While mBERT has an average increase of 9% accuracy in comparison with only the top prediction, the BERTimbau models have a much more substantial increase: BERTimbau Base has an improvement of 17.58% in relation to the top candidate and BERTimbau Large of 15.75%, from 51.52% to 67.27% accuracy. Although mBERT shows good capacities in different NLP tasks, the model captured little to no information to predict idiomatic items that are specific to Brazilian Portuguese. Additionally, we observed that the gain of including more candidate words is mainly restricted to the top two candidates.
The changes from BERTimbau Base to BERTimbau Large allowed the model to learn a larger MWE inventory, considering the target MWEs, and to make more accurate predictions given the clues in the masking task (Table 2). However, there were two cases in which performance decreased using the larger model: _sangue azul_ (blue blood) and _pão duro_ (stingy person; lit. _hard bread_); and other cases that are still not accurately represented by either model (Table 3). Overall, the difference in performance between BERTimbau Large and Base was not as big as the difference between them and mBERT, but BERTimbau Large was able to learn more MWEs and displayed more confidence in correct predictions. However, we also noticed that the accuracy at ten values for BERTimbau Base were similar to those of BERTimbau Large, indicating that exploring more outputs from the Base model might match the performance of BERTimbau Large, which requires more processing power.
### Grammatical Tests
This section presents the results of the grammatical tests to which BERTimbau Large was submitted. As there were six very different grammatical tests, we report and discuss their results individually as precision at 1 (or accuracy) and precision at 10, aiming to analyse the model's proficiency. At the end of this section, we make a brief comment on the overall performance of the model.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Model** & **Word** & **ACC** & **ACC@10** \\ \hline BERTimbau Base & 1 & 40.00\% & 57.58\% \\ \hline BERTimbau Base & 2 & 45.45\% & 64.24\% \\ \hline BERTimbau Large & 1 & 52.73\% & 65.45\% \\ \hline BERTimbau Large & 2 & 51.52\% & 67.27\% \\ \hline mBERT & 1 & 6.67\% & 16.97\% \\ \hline mBERT & 2 & 3.64\% & 11.52\% \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy of different sentences for Word1 and Word2. acc@10 means the accuracy considering the 10 most likely candidates. For example, if we consider only the word with the highest probability from BERTimbau Base, it achieves an accuracy of 40%; however, if we consider all 10 candidates, evaluating only whether the correct one is on the list, it goes up to 57.58%.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Word 1** & **Word 2** \\ \hline livro aberto & livro aberto \\ \hline montanha russa & montanha russa \\ \hline nó cego & olho mágico \\ \hline olho mágico & pau mandado \\ \hline pão duro & pavio curto \\ \hline pé frio & pé frio \\ \hline pé quente & pé quente \\ \hline peso morto & peso morto \\ \hline planta baixa & planta baixa \\ \hline sangue azul & saia justa \\ \hline sangue frio & sangue frio \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparison between BERTimbau Large and BERTimbau Base. MWEs which were predicted by the BERTimbau Large, but not by BERTimbau Base.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Word 1** & **Word 2** \\ \hline bode expiatório & bode expiatório \\ \hline cheiro verde & gato pingado \\ \hline elefante branco & longa metragem \\ \hline ovelha negra & nó cego \\ \hline pavio curto & olho gordo \\ \hline pente fino & pente fino \\ \hline roleta russa & vista grossa \\ \hline saia justa & \\ \hline \end{tabular}
\end{table}
Table 3: Performance comparison between BERTimbau Large and BERTimbau Base: MWEs were not predicted by either model.
#### 5.2.1 Impersonal Verbs
The set of sentences with impersonal verbs tested whether the model would produce candidates that were not subjects. The verbs used as cues represented meteorological verbs (such as to rain, to snow) and existential verbs (such as to exist, to have been), which do not accept any subject, and are therefore defective in their conjugation. As such, we expected the model to produce answers that were not nouns or pronouns.
As we can see in Table 4, the models perform well in this task, with meteorological verbs having slightly worse results, as the model still produced some answers that did not fit. Results considering the top candidate were near 100%, and the average precision considering the top 10 candidates decreases to 76.04%. For existential verbs, results were much higher, with a precision of 97.50% among the top 10 candidates and 100% for the top 1. The model was able to generate punctuation marks, such as parenthesis or quotation marks, in most of the templates, which were a good fit for the test sentences.
Looking at the different tenses, to check whether there is any impact of the verb form used as a cue for the model, we see that only the pluperfect form produced results below 100% for the top candidate, and it was also the worst cue on the precision at 10 evaluation, with 65.33%. This could be a reflection of the fact that the pluperfect tense in Brazilian Portuguese is usually written in a compound form, using "ter" (have) as auxiliary verb, conjugated in the imperfect tense, associated with the past participle of the main verb, so that the pluperfect simple form, which was the form used in the test sentences, is not widely used anymore. Assuming that lower frequencies have an impact on the quality of the representation, these forms might not have been learned well by the model. Nonetheless, the performance confirms the proficiency of the model with impersonal verbs.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Meteorological Verbs & 97.56\% & 76.04\% \\ \hline Existential Verbs & 100.00\% & 97.50\% \\ \hline \end{tabular}
\end{table}
Table 4: Impersonal verbs: Verb types
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Past Indicative & 100.00\% & 82.00\% \\ \hline Present Indicative & 100.00\% & 85.63\% \\ \hline Future Indicative & 100.00\% & 80.00\% \\ \hline Imperfect Indicative & 100.00\% & 95.71\% \\ \hline Pluperfect Indicative & 86.67\% & 65.33\% \\ \hline Future Subjunctive & 100.00\% & 79.00\% \\ \hline \end{tabular}
\end{table}
Table 5: Impersonal verbs: Results by tense and mode
#### 5.2.2 Subject Agreement
For the subject agreement task, the model was expected to produce a subject that would agree in person and number with the provided verb seeds. Given that Portuguese allows for hidden or indeterminate subject, we expected that the system would produce some results that would not fit as subjects, but that would fit in the sentences.
Considering the type of subject that was expected given the verb conjugation, as seen in Table 6, results both in the top 1 and in the top 10 were varied, ranging from 64.10% for the second person singular to 100% for the third person singular. The model had a hard time to produce good candidates when the expected subject should fit a second person singular or plural, which are less commonly used conjugations, but it is interesting to see that the model has higher confidence in wrong answers when we look at the results for the third person plural.
When we look at the tense and mode of the conjugated seed (Table 7), results also varied, and it is interesting to notice that the tense that yielded the worst results in the top 1 candidates (barely above 75% precision) was the present indicative, which is one of the most common tenses, while the future indicative was the tense with the best results (above 92% precision). Among the top 10 results, the pluperfect indicative was the worst, with 71.78%, and the best result was again the future indicative, with above 90% precision.
To sum up, the model has a fairly precise capacity of generating subjects that fit in the sentences, but some verb conjugations (second persons, and third person plural) and tenses (present and pluperfect indicative) proved to be a challenge.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline First Person Singular & 97.06\% & 93.46\% \\ \hline Second Person Singular & 64.81\% & 61.73\% \\ \hline Third Person Singular & 100.00\% & 95.85\% \\ \hline First Person Plural & 94.59\% & 92.33\% \\ \hline Second Person Plural & 66.67\% & 63.70\% \\ \hline Third Person Plural & 85.00\% & 93.52\% \\ \hline \end{tabular}
\end{table}
Table 6: Subject agreement: Results by expected person and number
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Present Indicative & 75.61\% & 75.13\% \\ \hline Past Indicative & 78.72\% & 87.10\% \\ \hline Pluperfect Indicative & 81.25\% & 71.78\% \\ \hline Future Indicative & 92.86\% & 90.46\% \\ \hline Past Tense Future Indicative & 82.05\% & 79.78\% \\ \hline Imperfect Indicative & 79.49\% & 75.28\% \\ \hline \end{tabular}
\end{table}
Table 7: Subject agreement: Results by tense and mode
#### 5.2.3 Nominal Agreement
For the task of nominal agreement, we looked into the adjective agreement using a noun as cue. Since Portuguese adjectives agree with nouns in gender and number, we had four different test categories: masculine singular, masculine plural, feminine singular and feminine plural.
The results (Table 8) show that performance was good across all categories, reaching up to 93.22% for masculine plural when all 10 candidates were considered for each mask. Here we see that the suggestion in which the model has most confidence is not always the best, as the results at p@1 were consistently worse than in the top 10, especially for feminine singular, which achieved only 78% precision on the top 1 candidates. Nonetheless, the model displays fairly good proficiency in nominal agreement.
#### 5.2.4 Verb Agreement
The task of verb agreement was designed to check whether the model can produce verb candidates that correctly agree with the cue. The sentences used for this test had temporal cues or specific language patterns that would induce or require certain verb conjugations. As seeds, we used pronouns and nouns in singular and plural, but also used some verb structures to check for the production of infinitives and gerunds.
Table 9 shows the results for each expected verb form. Indicative forms (first two rows) had much better results than subjunctive forms (last three rows), while the non-conjugated forms (rows 3 and 4) had the best results overall, reaching up to 100% precision considering the top 1 candidate.
In this specific case, it was observed that some cues were not as effective as others, so we investigated the results based on the different pronouns and nouns that were used as cues. In Table 10, pronouns such as "tu" (_you singular_), "nós" (_we_) and "vós" (_you plural_) proved to be much weaker cues, as the model did not seem to be able to produce forms that agree with them. Even when considering only the top 1 candidates, "tu" had a result barely above 50%, while "vós", a very formal and infrequent pronoun, did not reach 10% precision. While "tu" and "vós" are pronouns that are less commonly used in Brazilian Portuguese, being frequently substituted, respectively, by "você" and "vocês", it is hard to explain why the model does not produce good candidates for a cue like "nós". One possibility is that this form may be replaced by a more informal option (_a gente_, we; lit. _the people_), which uses a conjugation that is a homograph of the third person singular.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Feminine Singular & 78.00\% & 92.00\% \\ \hline Masculine Singular & 86.84\% & 91.58\% \\ \hline Feminine Plural & 87.18\% & 88.46\% \\ \hline Masculine Plural & 91.89\% & 93.22\% \\ \hline \end{tabular}
\end{table}
Table 8: Nominal agreement: Gender and number
Although responses with the expected verb tenses were induced in most cases, 5.00% of the correct responses provided by the model fitted well in the sentence despite not being verbs or not agreeing with the cue provided.
#### 5.2.5 Connectors
In this task the goal was to check whether the model could produce cohesive elements to link sentences together. We used specific connectors, which are shown in Table 11, as seeds for selecting the original set of five sentences for each connector. In this task we had no cues for an expected connector, as usually it is the connectors that establish a meaningful relation between clauses, either by coordination or subordination. This means that, although we had an original connector in the test sentences, the evaluation accepted as correct other forms of cohesion that could change the semantics of the sentence, as long as they produced a sentence with meaning.
As Table 11 shows, the model was able to predict connectors with very good precision, reaching 100% in all cases among the top 1 candidates, and then varying in precision among the top 10 candidates. In terms of correct candidates, 10.77% were not conjunctions in the traditional sense, but most of these were textual connectors, such as "finalmente" (_finally_) and "também" (_also_). Some of the sentences had a very specific requirement in terms of connector, such as the ones with "ora" (now), which is a dual connector ("ora..., ora" \(\sim\)_now..., now_), and thus had no margin for many other connector options, which explains the poor precision among the top 10 candidates in some cases.
The results obtained suggest that these models can proficiently use connectors.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Eu (_I_) & 80.00\% & 52.67\% \\ \hline Tu (_You singular_) & 51.61\% & 34.84\% \\ \hline Ele/Ela (_He/She_) & 79.41\% & 60.59\% \\ \hline Nós (_We_) & 30.77\% & 13.08\% \\ \hline Vós (_You plural_) & 7.69\% & 6.92\% \\ \hline Eles/Elas (_They_) & 85.71\% & 44.29\% \\ \hline \end{tabular}
\end{table}
Table 10: Verb Agreement: Breakdown of pronouns
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Past Indicative & 87.51\% & 78.44\% \\ \hline Present or Future Indicative & 94.63\% & 68.59\% \\ \hline Infinitive & 100.00\% & 96.10\% \\ \hline Gerunds & 94.81\% & 74.88\% \\ \hline Present Subjunctive & 58.00\% & 33.40\% \\ \hline Past Subjunctive & 62.78\% & 25.08\% \\ \hline Future Subjunctive/Conditional & 68.15\% & 57.10\% \\ \hline \end{tabular}
\end{table}
Table 9: Verb agreement: Different expected tenses and modes
#### 5.2.6 Passive
Passive voice in Portuguese has an important characteristic, shared with other Romance languages, which is a somewhat long distance agreement of the nominal participle with the subject. This agreement is illustrated in the following example: _A escolha foi feita_. (_The choice was made_.; lit. _The\({}_{FemSing}\) choice\({}_{FemSing}\) was made\({}_{FemSing}\)_). This is different from what happens, for instance, with compound verb tenses, such as the compound pluperfect, where the verbal participle of the main verb is used and thus there is no requirement for agreement. An example of this can be seen here: _Ela tinha escolhido aquela cor_. (_She had chosen that colour_.; lit. _She\({}_{FemSing}\) had chosen\({}_{NoAgreement}\) that\({}_{FemSing}\) colour\({}_{FemSing}\)_).
To test this case, we used 13 sentences with a varied number of cues that were made up by verbal constructions with the verb "ser" (_to be_). Results in Table 12 show that the model was able to produce correct candidates most of the time, with 86.65% precision among the top 1 candidates and 78.38% among the top 10 candidates.
Interestingly, among the candidates that were not correct, 44.99% were a participle, but had incorrect nominal agreement with the subject of the sentence. The results in the table also point to the model having trouble producing good candidates for the feminine cases, in particular for the singular form, since for the plural it had better confidence in the correct candidates.
Finally, within the list of correctly generated candidates, not many deviated from the expected word form, as only 5.07% of the candidates were adjectives that fit the context, resulting in a grammatically correct option, instead of the target nominal participle.
For this test too, the model is proficient in generating the agreement for the passive form in most cases, apart from the feminine singular.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Caso (_If_) & 100.00\% & 26.67\% \\ \hline Conforme (_According to_) & 100.00\% & 46.67\% \\ \hline Contudo (_However_) & 100.00\% & 86.67\% \\ \hline Enquanto (_While_) & 100.00\% & 66.67\% \\ \hline Nem (_Nor_) & 100.00\% & 50.00\% \\ \hline Ora (_Now_) & 100.00\% & 20.00\% \\ \hline Pois (_Because_) & 100.00\% & 50.00\% \\ \hline Porque (_Because_) & 100.00\% & 40.00\% \\ \hline Portanto (_Therefore_) & 100.00\% & 90.00\% \\ \hline Quando (_When_) & 100.00\% & 26.67\% \\ \hline Se (_If_) & 100.00\% & 50.00\% \\ \hline Todavia (_However_) & 100.00\% & 96.67\% \\ \hline \end{tabular}
\end{table}
Table 11: Connector Prediction
#### 5.2.7 Discussion
Summing up the results of the grammatical tests, we can see in Table 13 that tasks that require no agreement had the best results, with 100% precision for connectors and 98.78% for impersonal verbs. Where morphosyntactic characteristics of the language played a role, we see that the results fall below 90% even considering only the candidate in which the model has the most confidence.
Interestingly, the nominal agreement test was the only one that shows better results among the top 10 candidates in comparison to the top 1. This could possibly mean that, for selecting adjectives, lexical cues on the context are stronger than the morphosyntactic information in the cue word. Considering the results reported by Bacon and Regier (2019) for Portuguese on mBERT, we see that the model here had much worse performance for nominal agreement. This could be because we evaluated all candidates, and not only the ones that had the same part-of-speech as the masked word.
As the only closed-class task, we expected the largest difference between the top 1 and the top 10 evaluations for connectors, as the model would eventually run out of options that fit the context. However, the worst performance overall was seen in the task of verb agreement, where the model had problems finding good candidates for a few personal pronouns, both at the top 1 and at the top 10.
Moreover, frequency also seems to play a role in model performance, and prediction accuracy decreases for less frequent forms, including very formal pronouns or those that are frequently omitted.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Test** & **P@1** & **P@10** \\ \hline Nominal Agreement & 77.53\% & 89.67\% \\ \hline Verb Agreement & 79.56\% & 59.55\% \\ \hline Subject Agreement & 83.38\% & 75.70\% \\ \hline Connectors & 100.00\% & 54.17\% \\ \hline Impersonal Verbs & 98.78\% & 86.77\% \\ \hline Passive & 86.65\% & 78.38\% \\ \hline \end{tabular}
\end{table}
Table 13: Summary of Grammatical Proficiency Tests
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Case name** & **P@1** & **P@10** \\ \hline Feminine Singular & 17.86\% & 50.71\% \\ \hline Masculine Singular & 95.89\% & 89.32\% \\ \hline Feminine Plural & 100.00\% & 55.00\% \\ \hline Masculine Plural & 98.53\% & 81.47\% \\ \hline \end{tabular}
\end{table}
Table 12: Passive: Gender and number breakdown
## 6 Conclusions and Future Work
In this work we addressed the problem of intrinsic evaluation of large scale language models and proposed a battery of model-agnostic tests to assess linguistic proficiency in Portuguese. This paper focused on some widely adopted neural language models of Brazilian Portuguese, and evaluated their performance in lexical, syntactic and semantic tasks. We developed a dataset with evaluation items that cover two particular areas: MWE inventory of idiomatic items, and proficiency in 6 grammatical tasks.
Overall, the larger and language-specific models performed better in the MWE task. Although mBERT shows good capacities in different NLP tasks, the model captured little to no information to predict idiomatic items of Brazilian Portuguese. Despite the small difference in performance between BERTimbau Large and Base, the larger version presented a better recognition of MWE. However, exploring more outputs from the BERTimbau Base might have the same performance as BERTimbau Large.
The grammatical tests showed that BERTimbau Large has a good overall precision in the generation of candidates for masked items, especially when we looked only at the top 1 candidates, going up to (or very close to) 100% precision both for connectors and impersonal verbs. Even so, for tasks that required morphosyntactic agreement, there was a fall in precision, with the worst results (below 80% among the top 1 candidates) being reported for nominal and verb agreement. The case of verb agreement was especially challenging for the model, because it consistently failed to produce good results for certain personal pronouns (first person plural, and second person singular and plural), which could be a sign of poor morphosyntactic generalisation, or a side effect of the training corpus.
We adopted two evaluation criteria, one more strict, considering only the best candidate, and a more lenient one, which includes the top 10 candidates. Moreover, by considering not only the expected target word forms during the evaluation, but also considering alternative, grammatically correct outputs, we were able to detect the capacity of the model to produce different types of word forms for different contexts. Although deviant word forms did not represent much of the correct responses, amounting to around 5% in a few tasks, they showed that a syntactic cue might not be as strong as the overall context of the sentence, as argued by Gulordava et al. (2018). The results obtained confirm that the model achieves proficiency levels in tasks that do not require morphosyntactic agreement. However, it still lacks quality in certain items, in particular related to feminine singular forms. We also observed that there are instances (e.g. nominal agreement) in which the model has higher confidence (i.e. higher probability) in inadequate responses. All these evaluations led to a profile of the model's linguistic information.
As future work, we intend to extend the battery of tests to other linguistic aspects, such as selectional preferences for verbs, and inventory of collocations and terminology. We also intend to investigate whether possible biases in the distribution of the training data can affect the performance of these patterns.
Finally, we plan to develop a multilingual version of the test, adapting it to closely related languages that share some of these linguistic patterns, and assessing if language proximity can be beneficial for few-shot learning scenarios.
###### Acknowledgements.
This work has been developed in the framework of the project COURAGE (no. 95567), funded by the Volkswagen Foundation in the topic Artificial Intelligence and the Society of the Future. It is also partly funded by the EPSRC project MIA: Modeling Idiomaticity in Human and Artificial Language Processing (EP/T02450X/1) and by Research England, in the form of the Expanding Excellence in England (E3) programme.
## Conflict of interest
The authors declare that they have no conflict of interest.
|
2301.06183 | On abstract results of operator representation of frames in Hilbert
spaces | In this paper, we give a multiplication operator representation of bounded
self-adjoint operators T on a Hilbert space H such that -- is a frame for H,
for some -- . We state a necessary condition in order for a frame -- to have a
representation of the form -- . Frame sequence -- with the synthesis operator
U, have also been characterized in terms of the behavior spectrum of -- . Next,
using the operator response of an element with respect to a unit vector in H,
frames -- of the form -- are characterized. We also consider stability frames
as -- . Finally, we conclude this note by raising a conjecture connecting frame
theory and operator theory | Jahangir Cheshmavar, Ayyaneh Dallaki, Javad Baradaran | 2023-01-15T20:46:31Z | http://arxiv.org/abs/2301.06183v1 | # On abstract results of operator representation of frames in Hilbert spaces
Jahangir Cheshmavar\({}^{*}\), Ayyaneh Dallaki,
Javad Baradaran
###### Abstract.
In this paper, we give a multiplication operator representation of bounded self-adjoint operators \(T\) on a Hilbert space \(\mathcal{H}\) such that \(\{T^{k}\varphi\}_{k=0}^{\infty}\) is a frame for \(\mathcal{H}\), for some \(\varphi\in\mathcal{H}\). We state a necessary condition in order for a frame \(\{f_{k}\}_{k=1}^{\infty}\) to have a representation of the form \(\{T^{k}f_{1}\}_{k=0}^{\infty}\). Frame sequences \(\{T^{k}\varphi\}_{k=0}^{\infty}\) with synthesis operator \(U\) have also been characterized in terms of the behavior of the spectrum of \(U^{*}U|_{N(U)^{\perp}}\). Next, using the operator response of an element with respect to a unit vector in \(\mathcal{H}\), frames \(\{f_{k}\}_{k=1}^{\infty}\) of the form \(\{T^{n}f_{1}\}_{n=0}^{\infty}\) are characterized. We also consider the stability of frames of the form \(\{T^{k}\varphi\}_{k=0}^{\infty}\). Finally, we conclude this note by raising a conjecture connecting frame theory and operator theory.
Key words and phrases:Frame; iterated action; operator representation; Riesz representation theorem; Hahn-Banach theorem; spectrum of an operator. \({}^{*}\) Corresponding author. 2010 Mathematics Subject Classification: Primary 42C15; Secondary 47B99.
## 1. **Introduction**
**Definition 1.1**.: _A sequence \(\{f_{k}\}_{k=1}^{\infty}\) in a Hilbert space \(\mathcal{H}\) is called a frame for \(\mathcal{H}\) if there exist constants \(A,B>0\) such that_
\[A\|f\|^{2}\leq\sum_{k=1}^{\infty}|\langle f,f_{k}\rangle|^{2}\leq B\|f\|^{2},\ \ \text{for all}\ f\in\mathcal{H}.\]
_The numbers \(A,B\) are called frame bounds._
Every frame is in particular a Bessel sequence, so the synthesis operator
\[U:\ell^{2}(\mathbb{N})\rightarrow\mathcal{H};\ \ U(\{c_{k}\}_{k=1}^{\infty})=\sum_{k=1}^{\infty}c_{k}f_{k},\]
is well-defined and bounded,
and the frame operator is defined by
\[S:=UU^{*}:\mathcal{H}\rightarrow\mathcal{H};\ \ Sf=\sum_{k=1}^{\infty}\langle f,f_{ k}\rangle f_{k}\.\]
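To fix ideas, here is a small finite-dimensional illustration (ours, not part of the paper) of these objects: for the Mercedes-Benz frame of three unit vectors in \(\mathbb{R}^{2}\), the synthesis matrix, the frame operator and the optimal frame bounds can be computed directly.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 with mutual angles of 120 degrees.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.vstack([np.cos(angles), np.sin(angles)])   # synthesis operator U as a 2x3 matrix

S = F @ F.T                                       # frame operator S = U U*
A, B = np.linalg.eigvalsh(S)                      # optimal frame bounds = extreme eigenvalues of S
print(A, B)                                       # both equal 3/2, so this frame is tight

# frame inequality  A*||f||^2 <= sum_k |<f, f_k>|^2 <= B*||f||^2
rng = np.random.default_rng(0)
for f in rng.standard_normal((5, 2)):
    s = np.sum((F.T @ f) ** 2)
    assert A * (f @ f) - 1e-12 <= s <= B * (f @ f) + 1e-12
```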
The availability of the representation \(\{f_{k}\}_{k=1}^{\infty}=\{T^{k}f_{1}\}_{k=0}^{\infty}\) is characterized in [9]:
**Proposition 1.2**.: _Consider any sequence \(\{f_{k}\}_{k=1}^{\infty}\) in \(\mathcal{H}\) for which \(span\{f_{k}\}_{k=1}^{\infty}\) is infinite-dimensional. Then the following are equivalent:_
1. \(\{f_{k}\}_{k=1}^{\infty}\) _is linearly independent._
2. _There exists a linear operator_ \(T:span\{f_{k}\}_{k=1}^{\infty}\rightarrow\mathcal{H}\) _such that_ \(\{f_{k}\}_{k=1}^{\infty}=\{T^{k}f_{1}\}_{k=0}^{\infty}\) _._
We need the following results in the sequel.
**Proposition 1.3**.: _[_9_]_ _If \(\{T^{k}\varphi\}_{k=0}^{\infty}\) is a frame for some operator \(T\in B(\mathcal{H})\) and some \(\varphi\in\mathcal{H}\), then \(T\) has closed range._
**Theorem 1.4**.: _[_1_]_ _Let \(T\in B(\mathcal{H})\) and \(f\in\mathcal{H}\) such that \(\{T^{n}f\}_{n=0}^{\infty}\) is a frame for \(\mathcal{H}\). Then for every \(\varphi\in\mathcal{H},(T^{*})^{n}\varphi\to 0\) as \(n\rightarrow\infty\)._
**Theorem 1.5**.: _[_6_]_ _Let \(\{f_{k}\}_{k=1}^{\infty}\) be a frame for \(\mathcal{H}\) with bounds \(A,B\). Let \(\{g_{k}\}_{k=1}^{\infty}\subseteq\mathcal{H}\) and assume that there exist constants \(\lambda_{1},\lambda_{2},\mu\geq 0\) such that \(\max(\lambda_{1}+\frac{\mu}{\sqrt{A}},\lambda_{2})<1\) and_
\[\|\sum_{k=1}^{n}c_{k}(f_{k}-g_{k})\|\leq\lambda_{1}\|\sum_{k=1}^{n}c_{k}f_{k} \|+\lambda_{2}\|\sum_{k=1}^{n}c_{k}g_{k}\|+\mu(\sum_{k=1}^{n}|c_{k}|^{2})^{1/2},\]
_for all \(c_{1},\cdots,c_{n}(n\in\mathbb{N})\). Then \(\{g_{k}\}_{k=1}^{\infty}\) is a frame with bounds_
\[A(1-\frac{\lambda_{1}+\lambda_{2}+\frac{\mu}{\sqrt{A}}}{1+\lambda_{2}})^{2},\ \ B(1+\frac{\lambda_{1}+\lambda_{2}+\frac{\mu}{\sqrt{B}}}{1-\lambda_{2}})^{2}.\]
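As a rough illustration of how Theorem 1.5 is typically applied (this toy computation is ours; the frame, the perturbation and the choice \(\lambda_{1}=\lambda_{2}=0\), \(\mu=\|F-G\|_{\mathrm{op}}\) are ad hoc), one can check numerically that the optimal frame bounds of a perturbed finite frame lie between the bounds predicted by the theorem.

```python
import numpy as np

# {f_k}: union of two orthonormal bases of R^2, a tight frame with bounds A = B = 2.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
F = np.hstack([np.eye(2), R])
A = B = 2.0

# perturbed family {g_k}
G = F + 0.1 * np.array([[1.0, 0.0, -1.0, 1.0], [0.0, 1.0, 1.0, -1.0]])

# ||sum_k c_k (f_k - g_k)|| <= mu * (sum |c_k|^2)^{1/2} holds with mu = ||F - G||_op,
# so the hypothesis of Theorem 1.5 is satisfied with lambda_1 = lambda_2 = 0.
mu = np.linalg.norm(F - G, 2)
assert mu / np.sqrt(A) < 1

lower = A * (1 - mu / np.sqrt(A)) ** 2        # bounds guaranteed by the theorem
upper = B * (1 + mu / np.sqrt(B)) ** 2
opt = np.linalg.eigvalsh(G @ G.T)             # optimal frame bounds of {g_k}
print(lower, opt.min(), opt.max(), upper)
assert lower <= opt.min() and opt.max() <= upper
```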
## 2. **The results**
For a given operator \(T\in B(\mathcal{H})\), we define
\[\mathcal{V}(T):=\left\{\varphi\in\mathcal{H}:\{T^{k}\varphi\}_{k=0}^{\infty} \text{ is a frame for }\mathcal{H}\right\},\]
and
\[E(\mathcal{H}):=\left\{T\in B(\mathcal{H}):\ \{T^{k}\varphi\}_{k=0}^{\infty} \text{ is a frame for }\mathcal{H},\text{ for some }\varphi\in\mathcal{H}\right\}.\]
The following proposition is a multiplication operator representation of self-adjoint operators in \(E(\mathcal{H})\). Recall that \(C(X)\) denotes the set
of all continuous functions on a locally compact space \(X\) and \(C_{0}(X)\) contains all functions that vanish at infinity. If \(X\) is compact, then \(C_{0}(X)=C(X)\).
**Proposition 2.1**.: _Let \(T\in E(\mathcal{H})\) be self-adjoint and the sequence \(\{T^{k}\varphi\}_{k=0}^{\infty}\) be a frame in \(\mathcal{H}\) for some \(\varphi\in\mathcal{H}\). Then there exists a unitary operator \(V\) from \(\mathcal{H}\) to \(L^{2}(\sigma(T),\mu_{\varphi})\) such that_
\[(VTV^{*}f)(x)=xf(x),\;\text{for all}\;f\in L^{2}(\sigma(T),\mu_{\varphi}).\]
Proof.: It is easy to see that the linear functional defined by
\[\Lambda:C_{0}(\sigma(T))\longrightarrow\mathbb{C},\;\;f{\longmapsto}\Lambda( f)=\langle\varphi,f(T)\varphi\rangle,\]
is continuous, so by Riesz representation theorem there exists a positive measure \(\mu_{\varphi}\) with \(\mu_{\varphi}(\sigma(T))=||\varphi||^{2}\) such that for any \(f\in C_{0}(\sigma(T))\), we have
\[\Lambda(f)=\int_{\sigma(T)}f(\lambda)d\mu_{\varphi}(\lambda).\]
Now, for each \(f\in C_{0}(\sigma(T))\), we define \(V(f(T)\varphi):=f\). Clearly, \(V\) is well-defined and we obtain
\[||f(T)\varphi||^{2} = \langle\varphi,f(T)^{*}f(T)\varphi\rangle=\langle\varphi,( \overline{f}f)(T)\varphi\rangle\] \[= \int_{\sigma(T)}|f(\lambda)|^{2}d\mu_{\varphi}(\lambda)=||f||^{2}.\]
This shows that \(V\) is an isometry on \(C_{0}(\sigma(T))\), so it is continuous. Hence, we can extend \(V\) to the closure of \(\{f(T)\varphi;f\in C_{0}(\sigma(T))\}\) which is equal to \(\mathcal{H}\); because on the one hand
\[\{P(T)\varphi:\;P\;is\;a\;polynomial\}=span\{T^{k}\varphi,k=0,1,..\},\]
is dense in \(\mathcal{H}\) and on the other hand, we have
\[\{P(T)\varphi:\;P\;is\;a\;polynomial\}\subseteq\{f(T)\varphi:f\in C_{0}(\sigma( T))\}.\]
Moreover, the extension is an isometry as well. Since \(V\) is an isometry, \(ranV\) is closed and contains \(C_{0}(\sigma(T))\). But \(C_{0}(\sigma(T))\) is dense in \(L^{2}(\sigma(T),\mu_{\varphi})\), so it follows that \(ranV=L^{2}(\sigma(T),\mu_{\varphi})\). Therefore, \(V\) is unitary and \(V^{*}f=f(T)\varphi\), for all \(f\in L^{2}(\sigma(T),\mu_{\varphi})\). Now, using the definition of \(V\) and the functional calculus for bounded self-adjoint operators, for any \(f\in L^{2}(\sigma(T),\mu_{\varphi})\), we get
\[(VTV^{*}f)(x)=(VTf(T)\varphi)(x)=xf(x),\]
which completes the proof.
By Proposition 1.3, if \(\{f_{k}\}_{k=1}^{\infty}\) has a representation of the form \(\{T^{k}\varphi\}_{k=0}^{\infty}\), for some \(T\in B(\mathcal{H})\) and some \(\varphi\in\mathcal{H}\), then \(T\) has closed range. The following result provides a necessary condition for a frame \(\{f_{k}\}_{k=1}^{\infty}\) to have a representation of the form \(\{T^{k}f_{1}\}_{k=0}^{\infty}\):
**Proposition 2.2**.: _Let \(\{f_{k}\}_{k=1}^{\infty}\) be a frame for \(\mathcal{H}\) of the form \(\{T^{k}f_{1}\}_{k=0}^{\infty}\), where \(T\in B(\mathcal{H})\). Then for all \(\lambda\in\mathbb{C}\), the range of \(T-\lambda I\) is dense in \(\mathcal{H}\)._
Proof.: Since \(\{T^{k}f_{1}\}_{k=0}^{\infty}\) is a frame for \(\mathcal{H}\), \(T\in E(\mathcal{H})\). If for \(\lambda\in\mathbb{C}\) the range of \(T-\lambda I\) is not dense in \(\mathcal{H}\), then pick \(\varphi_{0}\in\mathcal{H}\backslash\overline{(T-\lambda I)\mathcal{H}}\) such that \(\overline{span}\{T^{k}\varphi_{0}\}_{k=0}^{\infty}=\mathcal{H}\). Therefore, there exists a \(c>0\) such that
\[\|\varphi-\varphi_{0}\|>c,\ \ \text{for all}\ \,\varphi\in(T-\lambda I)\mathcal{H}\.\]
Let \(\mathcal{M}\) be the subspace generated by \((T-\lambda I)\mathcal{H}\) and \(\varphi_{0}\). We define
\[\Lambda:\mathcal{M}\longrightarrow\mathbb{C},\ \ \Lambda(\varphi+\alpha \varphi_{0})=\alpha\,\]
for any \(\varphi\in(T-\lambda I)\mathcal{H}\) and for all \(\alpha\in\mathbb{C}\). Then since
\[c|\alpha|<|\alpha|\|\varphi_{0}+\alpha^{-1}\varphi\|=\|\alpha \varphi_{0}+\varphi\|\,\]
it is easy to see that \(\Lambda\) is a continuous linear functional on \(\mathcal{M}\) and its norm is at most \(c^{-1}\). Also, \(\Lambda=0\) on \((T-\lambda I)\mathcal{H}\) and \(\Lambda(\varphi_{0})=1\). Hence by Hahn-Banach theorem it can be extend to \(\mathcal{H}\), i.e., \(\Lambda\) is a continuous linear functional on \(\mathcal{H}\) with \(\Lambda((T-\lambda I)\mathcal{H})=\{0\}\) and \(\Lambda(\varphi_{0})\neq 0\). Therefore, we obtain
\[\Lambda(T\varphi)=\lambda\Lambda(\varphi),\ \text{for all}\ \,\varphi\in \mathcal{H}\,\]
and then
\[\Lambda(T^{n}\varphi)=\lambda^{n}\Lambda(\varphi),\ \text{for all}\ n\in \mathbb{N}\ \ \text{and}\ \varphi\in\mathcal{H}.\]
In particular, \(\Lambda(T^{n}\varphi_{0})=\lambda^{n}\Lambda(\varphi_{0})\).
Because \(0,\varphi_{0}\in\mathcal{H}\) and \(\overline{span}\{T^{k}\varphi_{0}\}_{k=0}^{\infty}=\mathcal{H}\), there exist two sequences of positive integers \(\{n_{k}\}_{k=1}^{\infty}\) and \(\{m_{k}\}_{k=1}^{\infty}\) which tend to \(\infty\) as \(k\longrightarrow\infty\) such that
\[\sum_{i=1}^{n_{k}}c_{n_{i}}T^{n_{i}}\varphi_{0}\longrightarrow 0,\ \ \text{as}\ \,k \longrightarrow\infty\,\]
and
\[\sum_{i=1}^{m_{k}}d_{m_{i}}T^{m_{i}}\varphi_{0}\longrightarrow\varphi_{0},\ \ \text{as}\ \,k \longrightarrow\infty\.\]
Then the continuity of \(\Lambda\) implies that
\[\sum_{i=1}^{n_{k}}c_{n_{i}}\lambda^{n_{i}}\Lambda(\varphi_{0})\longrightarrow 0,\ \ \text{as}\ \,k\longrightarrow\infty\,\]
and
\[\sum_{i=1}^{m_{k}}d_{m_{i}}\lambda^{m_{i}}\Lambda(\varphi_{0})\longrightarrow \Lambda(\varphi_{0}),\ \ \text{as}\ \,k\longrightarrow\infty\.\]
Therefore, it follows that \(\Lambda(\varphi_{0})=0\), and this contradiction proves the claim.
We give a characterization of the frame sequence \(\{T^{k}\varphi\}_{k=0}^{\infty}\) in terms of the behavior of the spectrum of \(U^{*}U|_{N(U)^{\perp}}\):
**Proposition 2.3**.: _Let \(0\neq T\in B(\mathcal{H})\) and suppose that, for some \(\varphi\in\mathcal{H}\), the series \(\sum_{k=0}^{\infty}c_{k}T^{k}\varphi\) converges for all \(\{c_{k}\}_{k=0}^{\infty}\in\ell^{2}(\mathbb{N}_{0})\). Then \(\{T^{k}\varphi\}_{k=0}^{\infty}\) is a frame sequence for \(\mathcal{H}\) if and only if the spectrum of \(U^{*}U|_{N(U)^{\perp}}\) is contained in the half-open interval \(]0,\|U\|^{2}]\), where \((N(U))^{\perp}\) denotes the orthogonal complement of the null space \(N(U)\). In this case, we have_
\[\sigma(U^{*}U)\subseteq[0,\|U\|^{2}].\]
Proof.: Because \(\sum_{k=0}^{\infty}c_{k}T^{k}\varphi\) is well-defined for all \(\{c_{k}\}_{k=0}^{\infty}\in\ell^{2}(\mathbb{N}_{0})\), \(\{T^{k}\varphi\}_{k=0}^{\infty}\) is a Bessel sequence for \(\mathcal{H}\), and its synthesis operator, defined by
\[U(\{c_{k}\}_{k=0}^{\infty})=\sum_{k=0}^{\infty}c_{k}T^{k}\varphi,\]
is well-defined and bounded on \(\ell^{2}(\mathbb{N}_{0})\). We first recall two well-known results for a bounded linear operator \(\Lambda:\mathcal{H}\longrightarrow\mathcal{K}\), where \(\mathcal{K}\) denotes a Hilbert space:
* \(R(\Lambda)\) is closed in \(\mathcal{K}\) if and only if \(R(\Lambda^{*}\Lambda)\) is closed in \(\mathcal{H}\), and \[\overline{R(\Lambda^{*})}=(N(\Lambda))^{\perp}=(N(\Lambda^{*}\Lambda))^{ \perp}=\overline{R(\Lambda^{*}\Lambda)}.\]
* \(N(\Lambda)\) and \((N(\Lambda))^{\perp}\) are invariant under \(\Lambda^{*}\Lambda\), and \[\sigma(\Lambda^{*}\Lambda)=\sigma(\Lambda^{*}\Lambda|_{N(\Lambda)})\cup\sigma (\Lambda^{*}\Lambda|_{N(\Lambda)^{\perp}})\subseteq[0,\|\Lambda\|^{2}].\]
We now consider positive operator:
\[U^{*}U|_{N(U)^{\perp}}:(N(U))^{\perp}\rightarrow(N(U))^{\perp}.\]
Clearly, it is injective and its range \(R(U^{*}U|_{N(U)^{\perp}})=R(U^{*}U)\) is dense in \((N(U))^{\perp}\). Now, we have
\[R(U)\ \,\text{is closed in}\ \,\mathcal{H}\ \ \Longleftrightarrow\ \ R(U^{*}U)\,\ \text{is closed in}\ \,\mathcal{H},\]
\[TST^{*}f = \sum_{k=0}^{\infty}\langle T^{*}f,T^{k}f_{1}\rangle T^{k+1}f_{1}\] \[= \sum_{k=0}^{\infty}\langle f,T^{k+1}f_{1}\rangle T^{k+1}f_{1}\] \[= \sum_{k=0}^{\infty}\langle f,T^{k}f_{1}\rangle T^{k}f_{1}-\langle f,f_{1}\rangle f_{1}\]
\[= Sf-\|f_{1}\|\Lambda^{e}_{f_{1}}f\] \[= (S-\|f_{1}\|\Lambda^{e}_{f_{1}})f,\]
which proves (ii). Conversely, suppose that (i) and (ii) are satisfied and pick \(\Lambda:=\|f_{1}\|\Lambda^{e}_{f_{1}}\). Then we obtain
\[T^{2}S(T^{*})^{2}=T(TST^{*})T^{*}=T(S-\Lambda)T^{*}\] \[=TST^{*}-T\Lambda T^{*}=S-(\Lambda+T\Lambda T^{*}).\]
By induction, we get
\[T^{n}S(T^{*})^{n}=S-\sum_{k=0}^{n-1}T^{k}\Lambda(T^{*})^{k}. \tag{4}\]
On the other hand, for any \(f\in\mathcal{H}\) we get
\[\Lambda(T^{*})^{k}f=\|f_{1}\|\Lambda^{e}_{f_{1}}(T^{*})^{k}f = \|f_{1}\|\langle(T^{*})^{k}f,f_{1}\rangle e\] \[= \langle(T^{*})^{k}f,f_{1}\rangle f_{1}. \tag{5}\]
Therefore, by (4) and (5) we have
\[T^{n}S(T^{*})^{n}f=Sf-\sum_{k=0}^{n-1}\langle f,T^{k}f_{1}\rangle T^{k}f_{1}.\]
Hence, for any \(f\in\mathcal{H}\) we obtain
\[\langle S(T^{*})^{n}f,(T^{*})^{n}f\rangle=\langle Sf,f\rangle-\sum_{k=0}^{n-1} |\langle f,T^{k}f_{1}\rangle|^{2}.\]
Applying (i) for any \(f\in\mathcal{H}\), we get
\[\sum_{k=0}^{\infty}|\langle f,T^{k}f_{1}\rangle|^{2}=\langle Sf,f\rangle-\lim _{n\to\infty}\langle S(T^{*})^{n}f,(T^{*})^{n}f\rangle=\langle Sf,f\rangle.\]
Therefore, \(S\) is a self-adjoint and nonnegative operator. Now, if \(S_{1}\) denotes the frame operator of the Bessel sequence \(\{f_{k}\}_{k=1}^{\infty}=\{T^{n}f_{1}\}_{n=0}^{\infty}\), then we have
\[\langle S_{1}f,f\rangle=\sum_{k=0}^{\infty}|\langle f,T^{k}f_{1}\rangle|^{2}.\]
Therefore, for any \(f\in\mathcal{H}\), \(\langle Sf,f\rangle=\langle S_{1}f,f\rangle\), which implies \(S=S_{1}\). Now the bounded invertibility of the operator \(S\) concludes the proof.
In the following lemma, we give a characterization of frames \(\{f_{k}\}_{k=1}^{\infty}\) in terms of the frame coefficients \(\{\langle f,f_{k}\rangle\}_{k=1}^{\infty}\):
**Lemma 2.5**.: _The sequence \(\{f_{k}\}_{k=1}^{\infty}\subset\mathcal{H}\) is a frame for \(\mathcal{H}\) if and only if for any \(0\neq f\in\mathcal{H}\), \(\{\langle f,f_{k}\rangle\}_{k=1}^{\infty}\) is a frame for \(\mathbb{C}\)._
Proof.: Let \(0\neq f\in\mathcal{H}\), and let \(\{f_{k}\}_{k=1}^{\infty}\) be a frame for \(\mathcal{H}\) with bounds \(A,B\). For any \(\alpha,\beta\in\mathbb{C}\), we have \(\langle\alpha,\beta\rangle=\alpha\overline{\beta}\). Therefore, we have
\[\sum_{k=1}^{\infty}|\left\langle\left\langle f,f_{k}\right\rangle,g\right\rangle |^{2}=\sum_{k=1}^{\infty}|\left\langle f,f_{k}\right\rangle\overline{g}|^{2}=| g|^{2}\sum_{k=1}^{\infty}|\left\langle f,f_{k}\right\rangle|^{2},\ \ \forall g\in\mathbb{C}.\]
Using lower and upper frame inequalities of the frame \(\{f_{k}\}_{k=1}^{\infty}\), we get
\[A|g|^{2}\|f\|^{2}\leq\sum_{k=1}^{\infty}|\left\langle\left\langle f,f_{k} \right\rangle,g\right\rangle|^{2}\leq B|g|^{2}\|f\|^{2},\ \ \forall g\in\mathbb{C}.\]
That is, \(\{\langle f,f_{k}\rangle\}_{k=1}^{\infty}\) is a frame for \(\mathbb{C}\) with bounds \(A\|f\|^{2}\) and \(B\|f\|^{2}\). Conversely, let for any \(0\neq f\in\mathcal{H},\{\langle f,f_{k}\rangle\}_{k=1}^{\infty}\) be a frame for \(\mathbb{C}\) with bounds \(A\) and \(B\). Then for any \(0\neq g\in\mathbb{C}\) we have
\[A|g|^{2}\leq\sum_{k=1}^{\infty}|\left\langle\left\langle f,f_{k}\right\rangle,g \right\rangle|^{2}=\sum_{k=1}^{\infty}|\left\langle f,f_{k}\right\rangle \overline{g}|^{2}\leq B|g|^{2}.\]
Therefore, we obtain
\[A\leq\sum_{k=1}^{\infty}|\left\langle f,f_{k}\right\rangle|^{2}\leq B,\ \ \forall f\in\mathcal{H}\setminus\{0\}. \tag{6}\]
If we replace \(f\) by \(\frac{f}{\|f\|}\) in (6), then for any \(f\in\mathcal{H}\setminus\{0\}\), we get
\[A\|f\|^{2}\leq\sum_{k=1}^{\infty}|\left\langle f,f_{k}\right\rangle|^{2}\leq B \|f\|^{2},\]
that is, \(\{f_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathcal{H}\).
If we consider frames with indexing over the set of integer numbers \(\mathbb{Z}\) instead of \(\mathbb{N}\), then we have the following result.
**Proposition 2.6**.: _If \(\{T^{k}f_{0}\}_{k\in\mathbb{Z}}\) is a tight frame in \(\mathcal{H}\) for some invertible operator \(T\in B(\mathcal{H})\), then \(T\) is unitary._
Proof.: Since \(\{T^{k}f_{0}\}_{k\in\mathbb{Z}}\) is a tight frame, using Theorem 2.3 in [8], for any \(f\in\mathcal{H}\), we have
\[\|f\| = \|T^{-1}Tf\|\leq\|T^{-1}\|\|Tf\| = \|Tf\|\leq\|T\|\|f\|=\|f\|. \tag{7}\]
Hence, \(\|Tf\|=\|f\|\) for all \(f\in\mathcal{H}\), so \(T\) is an isometry, i.e., \(T^{*}T=I\). Similarly, using that \(f=TT^{-1}f\) in (7), we conclude that \(T^{-1}\) is an isometry i.e, \((T^{-1})^{*}T^{-1}=I\).
Therefore, it follows that \(TT^{*}=T^{*}T=I\) and \(T\) is a unitary.
**Corollary 2.7**.: _Consider a tight frame having a structure form \(\{T^{k}f_{0}\}_{k\in\mathbb{Z}}\), for some invertible operator \(T\in B(\mathcal{H})\). If \(\{U^{k}g_{0}\}_{k\in\mathbb{Z}}\) is a dual frame of \(\{T^{k}f_{0}\}_{k\in\mathbb{Z}}\) for some \(U\in B(\mathcal{H})\) and for some \(g_{0}\in\mathcal{H}\), then \(U=T\)._
Proof.: By the previous Proposition, we have \(T^{*}=T^{-1}\). On the other hand, by the definition, for any \(f\in\mathcal{H}\), we get
\[f=\sum_{k\in\mathbb{Z}}\langle f,\;U^{k}g_{0}\rangle T^{k}f_{0}=T\left(\sum_{k \in\mathbb{Z}}\langle U^{*}f,\;U^{k}g_{0}\rangle T^{k}f_{0}\right)=TU^{*}f,\]
i.e., \(TU^{*}=I\). Since \(T\) is invertible and \(T^{*}=T^{-1}\), we obtain \(U^{*}=T^{-1}=T^{*}\), and hence \(U=T\).
The final result states a perturbation condition that preserves operator representation structure of a frame.
**Proposition 2.8**.: _Assume that \(\{f_{k}\}_{k=1}^{\infty}=\{T^{k}f_{1}\}_{k=0}^{\infty}\) is a frame for \(\mathcal{H}\) for some operator \(T\) on \(\mathcal{H}\) and \(\{g_{k}\}_{k=1}^{\infty}\) is a sequence in \(\mathcal{H}\). If for \(0<\lambda_{1},\;\lambda_{2}<1\) and any \(f\in\mathcal{H}\), we have_
\[\|\sum_{k=1}^{\infty}c_{k}\langle f_{k}-g_{k},f\rangle\|\leq\lambda_{1}\|\sum_ {k=1}^{\infty}c_{k}\langle f_{k},f\rangle\|+\lambda_{2}\|\sum_{k=1}^{\infty}c_ {k}\langle g_{k},f\rangle\|, \tag{8}\]
_for all finite sequences \(\{c_{k}\}_{k=1}^{\infty}\). Then \(\{g_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathcal{H}\) which has an operator representation form._
Proof.: Take \(h_{k}:=\langle f_{k},\;f\rangle\) and \(t_{k}:=\langle g_{k},\;f\rangle\) for all \(f\in\mathcal{H}\). Then the perturbation condition (8) implies that
\[\|\sum_{k=1}^{\infty}c_{k}(h_{k}-t_{k})\|\leq\lambda_{1}\|\sum_{k=1}^{\infty}c _{k}h_{k}\|+\lambda_{2}\|\sum_{k=1}^{\infty}c_{k}t_{k}\|. \tag{9}\]
Since \(\{f_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathcal{H}\), by Lemma 2.5, for all \(0\neq f\in\mathcal{H}\), \(\{h_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathbb{C}\). Now, by Theorem 1.5, \(\{t_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathbb{C}\). Again, by Lemma 2.5, \(\{g_{k}\}_{k=1}^{\infty}\) is a frame for \(\mathcal{H}\).
Suppose that \(\sum_{k\in J}c_{k}g_{k}=0\) for some finite subset \(J\subset\mathbb{N}\) and scalars \(c_{k}\). Applying inequality (8), we get
\[\|\sum_{k=1}^{\infty}c_{k}\langle f_{k},f\rangle\|\leq\lambda_{1}\|\sum_{k=1}^ {\infty}c_{k}\langle f_{k},f\rangle\|,\;\;\forall f\in\mathcal{H}.\]
Because \(0<\lambda_{1}<1\), we obtain \(\sum_{k\in J}c_{k}\langle f_{k},f\rangle=0\), i.e.,
\[\langle\sum_{k\in J}c_{k}f_{k},f\rangle=0,\ \ \forall f\in\mathcal{H}.\]
Thus, \(\sum_{k\in J}c_{k}f_{k}=0\). Now, by Proposition 1.2, for any \(k\in J\), we get \(c_{k}=0\). Therefore, \(\{g_{k}\}_{k=1}^{\infty}\) is linearly independent and, again applying Proposition 1.2, it has an operator representation of the form \(\{V^{n}g_{1}\}_{n=0}^{\infty}\) for some linear operator \(V\).
We close this work by raising the following conjecture:
**Conjecture.** Let \(T\in E(\mathcal{H})\). The Hilbert space \(\mathcal{H}\) can be decomposed into a direct sum of Hilbert spaces
\[\mathcal{H}=\bigoplus_{n\in\mathbb{N}}\mathcal{H}_{n}\,\]
where
1. \(T\mathcal{H}_{n}\subset\mathcal{H}_{n}\).
2. For any \(n\in\mathbb{N}\), there exists \(\phi_{n}\in\mathcal{H}_{n}\) such that, \(\{T^{k}\phi_{n}\}_{k=0}^{\infty}\) is a frame for \(\mathcal{H}_{n}\).
|
2303.06922 | Analytic aspects of generalized central trinomial coefficients | The divisibility and congruence of usual and generalized central trinomial
coefficients have been extensively investigated. The present paper is devoted
to analytic properties of these numbers. We show that usual central trinomial
polynomials $T_n(x)$ have only real roots, and roots of $T_n(x)$ interlace
those of $T_{n+1}(x)$, as well as those of $T_{n+2}(x)$, which gives an
affirmative answer to an open question of Fisk. We establish necessary and
sufficient conditions such that the generalized central trinomial coefficients
$T_n(b,c)$ form a log-convex sequence or a Stieltjes moment sequence. | Huyile Liang, Yaling Wang, Yi Wang | 2023-03-13T08:39:45Z | http://arxiv.org/abs/2303.06922v2 | # Analytic aspects of generalized central trinomial coefficients
###### Abstract
The divisibility and congruence of usual and generalized central trinomial coefficients have been extensively investigated. The present paper is devoted to analytic properties of these numbers. We show that usual central trinomial polynomials \(\mathscr{T}_{n}(x)\) have only real roots, and roots of \(\mathscr{T}_{n}(x)\) interlace those of \(\mathscr{T}_{n+1}(x)\), as well as those of \(\mathscr{T}_{n+2}(x)\), which gives an affirmative answer to an open question of Fisk. We establish necessary and sufficient conditions such that the generalized central trinomial coefficients \(T_{n}(b,c)\) form a log-convex sequence or a Stieltjes moment sequence.
keywords: central trinomial coefficient, generalized central trinomial coefficient, recursive matrix, Riordan array Msc: [2010] 26C10, 60F05, 05A15, 15B48 +
Footnote †: journal: JMAA
## 1 Introduction
The central trinomial coefficient \(T_{n}\) is the coefficient of \(x^{n}\) in the expansion \((x^{2}+x+1)^{n}\). By the trinomial theorem, it is clear that \(T_{n}=\sum_{k=0}^{\lfloor n/2\rfloor}T(n,k)\), where
\[T(n,k)=\frac{n!}{k!k!(n-2k)!}=\binom{n}{2k}\binom{2k}{k}. \tag{1.1}\]
Although the study of central trinomial coefficients can be traced back to Euler's work, there are occasional references to them in combinatorics books until Andrews [3]. In recent years, there has been a lot of research work devoted to the study of the central trinomial coefficients and their generalizations. For example, the generalized central trinomial coefficient \(T_{n}(b,c)\) is defined as the coefficient of \(x^{n}\) in the expansion \((x^{2}+bx+c)^{n}\). Clearly, \(T_{n}(1,1)=T_{n}\). It is known [13; 21; 34] that
\[T_{n}(b,c)=\sum_{k=0}^{\lfloor n/2\rfloor}T(n,k)b^{n-2k}c^{k}. \tag{1.2}\]
Also, the generalized central trinomial coefficients have the generating function
\[\mathscr{T}(b,c;x)=\sum_{n\geq 0}T_{n}(b,c)x^{n}=\frac{1}{\sqrt{1-2bx+(b^{2}-4c) x^{2}}}, \tag{1.3}\]
and satisfy the recurrence relation
\[(n+1)T_{n+1}(b,c)=(2n+1)bT_{n}(b,c)-n(b^{2}-4c)T_{n-1}(b,c) \tag{1.4}\]
with \(T_{0}(b,c)=1\) and \(T_{1}(b,c)=b\).
The generalized central trinomial coefficients are generalizations of many well-known combinatorial numbers [21]. For example, \(T_{n}(2,1)\) is the central binomial coefficient \(\binom{2n}{n}\) and \(T_{n}(3,2)\) is the central Delannoy number \(D_{n}=\sum_{k=0}^{n}\binom{n}{k}\binom{n+k}{k}\). On the other hand, \(T_{n}(1,x)\) is the central trinomial polynomial, \(T_{n}(x+1,x)\) is the Narayana polynomial of type B, \(T_{n}(2x+1,x(x+1))\) is the Delannoy polynomial and \(T_{n}(x,(x^{2}-1)/4)\) is the Legendre polynomial.
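These facts are easy to test numerically; the following Python sketch (ours, purely illustrative, using exact integer arithmetic and arbitrarily chosen values of \(b\) and \(c\)) checks the expansion (1.2), the specializations just mentioned and the recurrence (1.4).

```python
from math import comb

def T(n, b, c):
    """Generalized central trinomial coefficient T_n(b,c), via (1.2)."""
    return sum(comb(n, 2 * k) * comb(2 * k, k) * b ** (n - 2 * k) * c ** k
               for k in range(n // 2 + 1))

# usual central trinomial coefficients T_n = T_n(1,1)
assert [T(n, 1, 1) for n in range(7)] == [1, 1, 3, 7, 19, 51, 141]
# central binomial coefficients and central Delannoy numbers
assert all(T(n, 2, 1) == comb(2 * n, n) for n in range(12))
assert all(T(n, 3, 2) == sum(comb(n, k) * comb(n + k, k) for k in range(n + 1))
           for n in range(12))
# three-term recurrence (1.4), for a sample pair (b, c)
b, c = 4, 7
assert all((n + 1) * T(n + 1, b, c)
           == (2 * n + 1) * b * T(n, b, c) - n * (b * b - 4 * c) * T(n - 1, b, c)
           for n in range(1, 15))
```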
There have been quite a few papers concerned with the divisibility and congruence of generalized central trinomial coefficients [6; 21; 28; 29]. The objective of the present paper is to investigate analytic properties of usual and generalized central trinomial coefficients.
The paper is organized as follows. In the next section, we show that the central trinomial polynomials \(\mathscr{T}_{n}(x)=\sum_{k=0}^{\lfloor n/2\rfloor}T(n,k)x^{k}\) have only real roots, and roots of \(\mathscr{T}_{n}(x)\) interlace those of \(\mathscr{T}_{n+1}(x)\), as well as those of \(\mathscr{T}_{n+2}(x)\), which gives an affirmative answer to an open question of Fisk. We also show that the matrix \(T=[T(n,k)]_{n,k\geq 0}\) is totally positive and the numbers \(T(n,k)\) are asymptotically normal (by central and local limit theorems). In SS3, we investigate the generalized central trinomial coefficients \(T_{n}(b,c)\) by setting them in a broader context (the leftmost column of the matrix \(T^{+}(b,c)\)). We show that the matrix \(T^{+}(b,c)\) is both a Riordan array and a recursive matrix of Aigner. As applications, we give the binomial and Hankel transforms of \((T_{n}(b,c))_{n\geq 0}\), and establish necessary and sufficient conditions such that \((T_{n}(b,c))_{n\geq 0}\) form a log-convex sequence or a Stieltjes moment sequence. Furthermore, we show that \(T_{n-1}(b,c)T_{n+1}(b,c)-T_{n}^{2}(b,c)\) are polynomials in \(c\) and \(b^{2}-2c\) with nonnegative integer coefficients for \(n\geq 1\). Finally in SS4, we point out that the generalized Motzkin numbers are closely related to the generalized central trinomial coefficients and they possess many similar properties.
## 2 Usual central trinomial coefficients
In this section, we investigate analytic properties of the usual central trinomial coefficients \(T_{n}\). Define the matrix \(T=[T(n,k)]_{n,k\geq 0}\). Then
\[T=\left(\begin{array}{cccc}1&&&\\ 1&&\\ 1&2&&\\ 1&6&&\\ 1&12&6&\\ 1&20&30&\\ \vdots&&&\ddots\end{array}\right). \tag{2.1}\]
Let \(\mathscr{T}_{n}(x)=\sum_{k=0}^{\lfloor n/2\rfloor}T(n,k)x^{k}\) be the \(n\)th row generating function of \(T\). Then \(\deg\mathscr{T}_{n}(x)=\lfloor n/2\rfloor\), \(\mathscr{T}_{n}(x)=T_{n}(1,x)\) and \(T_{n}=\sum_{k}T(n,k)=\mathscr{T}_{n}(1)\). It is clear from (1.4) that
\[(n+1)\mathscr{T}_{n+1}(x)=(2n+1)\mathscr{T}_{n}(x)+n(4x-1)\mathscr{T}_{n-1}(x) \tag{2.2}\]
and
\[(n+1)T_{n+1}=(2n+1)T_{n}+3nT_{n-1}. \tag{2.3}\]
Let \(f(x)\) be a _real-rooted_ polynomial, i.e., a real polynomial with only real roots. Denote by \(r_{i}(f)\) the roots of \(f\) sorted in non-increasing order: \(r_{1}(f)\geq r_{2}(f)\geq r_{3}(f)\geq\cdots\). Let \(f,g\) be two real-rooted polynomials and \(\deg g\leq\deg f\leq\deg g+1\). We say that \(g\)_interlaces_\(f\), denoted by \(g\preccurlyeq f\), if
\[r_{1}(f)\geq r_{1}(g)\geq r_{2}(f)\geq r_{2}(g)\geq\cdots. \tag{2.4}\]
If all inequalities in (2.4) are strict, then we say that \(g\)_strictly interlaces_\(f\) and denote it by \(g\prec f\). For notational convenience, we say that a real number \(a\) is real-rooted and \(a\prec bx+c\) for \(a,b+c>0\) and \(b,c\geq 0\).
It is known that \(\mathscr{T}_{n}(x)\) is real-rooted (see [11, Corollary 13.15] for instance). Fisk posed a question that roots of \(\mathscr{T}_{n}(x)\) interlace those of \(\mathscr{T}_{n+1}(x)\), as well as those of \(\mathscr{T}_{n+2}(x)\)[11, Question 4, P. 721]. We next give an affirmative answer of this question. For completeness, we also include a proof of the real-rootedness of \(\mathscr{T}_{n}(x)\).
**Theorem 2.1**.: _The polynomials \(\mathscr{T}_{n}(x)=\sum_{k=0}^{\lfloor n/2\rfloor}T(n,k)x^{k}\) are real-rooted. Furthermore, \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n+1}(x)\) for \(n\geq 1\) and \(\mathscr{T}_{n}(x)\prec\mathscr{T}_{n+1}(x)\) for \(n\geq 0\)._
Proof.: We first show that all polynomials \(\mathscr{T}_{n}(x)\) are real-rooted and \(\mathscr{T}_{n}(x)\prec\mathscr{T}_{n+1}(x)\). We proceed by induction on \(n\). The case is clear for \(n\leq 2\). Assume that \(\mathscr{T}_{n-1}(x),\mathscr{T}_{n}(x)\) are real-rooted and \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n}(x)\). We need to show that \(\mathscr{T}_{n+1}(x)\) is real-rooted and \(\mathscr{T}_{n}(x)\prec\mathscr{T}_{n+1}(x)\).
Denote by \(\operatorname{sgn}\left(t\right)\) the sign of a real number \(t\), i.e.,
\[\operatorname{sgn}\left(t\right)=\left\{\begin{array}{rl}+1,&\text{for }t>0; \\ 0,&\text{for }t=0;\\ -1,&\text{for }t<0.\end{array}\right.\]
Let \(r_{1}^{(n)}>r_{2}^{(n)}>r_{3}^{(n)}>\cdots\) be roots of \(\mathscr{T}_{n}(x)\). By \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n}(x)\), we have
\[\operatorname{sgn}\mathscr{T}_{n-1}(r_{i}^{(n)})=(-1)^{i-1},\quad i=1,2, \ldots,\lfloor n/2\rfloor.\]
By substituting \(x=r_{i}^{(n)}\) into the equation (2.2), we obtain
\[\operatorname{sgn}\mathscr{T}_{n+1}(r_{i}^{(n)})=-\operatorname{sgn}\mathscr{ T}_{n-1}(r_{i}^{(n)})=(-1)^{i},\quad i=1,2,\ldots,\lfloor n/2\rfloor.\]
It follows that \(\mathscr{T}_{n+1}(x)\) is real-rooted and \(\mathscr{T}_{n}(x)\prec\mathscr{T}_{n+1}(x)\), as required.
We then show that \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n+1}(x)\). It suffices to consider the case \(n\geq 3\). Now \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n}(x)\), which implies that
\[\operatorname{sgn}\mathscr{T}_{n}(r_{i}^{(n-1)})=(-1)^{i},\quad i=1,2,\ldots, \lfloor(n-1)/2\rfloor.\]
Again by (2.2),
\[\operatorname{sgn}\mathscr{T}_{n+1}(r_{i}^{(n-1)})=\operatorname{sgn}\mathscr{ T}_{n}(r_{i}^{(n-1)})=(-1)^{i},\quad i=1,2,\ldots,\lfloor(n-1)/2\rfloor.\]
Thus \(\mathscr{T}_{n-1}(x)\prec\mathscr{T}_{n+1}(x)\). This completes the proof of the theorem.
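Theorem 2.1 can be probed numerically by repeating the sign-alternation argument of the proof in floating point; the short sketch below (ours) does this for small \(n\).

```python
import numpy as np
from math import comb

def t_coeffs(n):
    """Coefficients of the central trinomial polynomial 𝒯_n(x), highest power first."""
    return np.array([comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2, -1, -1)],
                    dtype=float)

for n in range(3, 13):
    roots = np.roots(t_coeffs(n))
    assert np.max(np.abs(roots.imag)) < 1e-6      # 𝒯_n is (numerically) real-rooted
    roots = np.sort(roots.real)[::-1]             # r_1 > r_2 > ... (all negative)
    # As in the proof: 𝒯_{n+1} alternates in sign at the ordered roots of 𝒯_n,
    # which forces 𝒯_n ≺ 𝒯_{n+1}.
    values = np.polyval(t_coeffs(n + 1), roots)
    assert all(np.sign(values[i]) == (-1) ** (i + 1) for i in range(len(roots)))
```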
By the same argument as in the proof of Theorem 2.1, many well-known combinatorial polynomials can be shown to be real-rooted, including the Narayana polynomials \(\operatorname{NB}_{n}(x)=T_{n}(x+1,x)\) of type B, the Delannoy polynomials \(D_{n}(x)=T_{n}(2x+1,x(x+1))\), and the Legendre polynomials \(P_{n}(x)=T_{n}(x,(x^{2}-1)/4)\). We refer the reader to [9, 17, 19, 31].
The real-rootedness of polynomials with nonnegative coefficients is closely related to the total positivity of matrices [10, 11, 15, 23]. Following Karlin [15], we say that a (finite or infinite) matrix \(A\) is _totally positive of order \(r\)_ (TP\({}_{r}\) for short), if its minors of all orders \(\leq r\) are nonnegative. The matrix is called _totally positive_ (TP for short) if its minors of all orders are nonnegative. Let \((a_{k})_{k\geq 0}\) be a (finite or infinite) sequence of nonnegative numbers (we identify a finite sequence
with the infinite sequence \(a_{0},a_{1},\ldots,a_{n},0,0,\ldots\)). We say that the sequence is a _Polya frequency_ (PF for short) sequence if the corresponding infinite Toeplitz matrix
\[[a_{n-k}]_{n,k\geq 0}=\left(\begin{array}{cccccc}a_{0}&0&0&0&\cdots\\ a_{1}&a_{0}&0&0&\\ a_{2}&a_{1}&a_{0}&0&\\ a_{3}&a_{2}&a_{1}&a_{0}&\\ \vdots&&&\ddots\end{array}\right)\]
is TP. The following is a fundamental characterization for PF sequences (see [15, p. 412] for instance).
**Schoenberg-Edrei Theorem**.: _A sequence \((a_{k})_{k\geq 0}\) of nonnegative numbers is PF if and only if its generating function has the form_
\[\sum_{k\geq 0}a_{k}x^{k}=ax^{m}e^{\gamma x}\frac{\prod_{j\geq 0}(1+\alpha_{j}x )}{\prod_{j\geq 0}(1-\beta_{j}x)},\]
_where \(a>0,m\in\mathbb{N},\alpha_{j},\beta_{j},\gamma\geq 0\) and \(\sum_{j\geq 0}(\alpha_{j}+\beta_{j})<+\infty\)._
The following special case of Schoenberg-Edrei Theorem establishes the link between finite PF sequences and real-rooted polynomials.
**Aissen-Schoenberg-Whitney Theorem**.: _A finite sequence \((a_{0},a_{1},\ldots,a_{n})\) of nonnegative numbers is PF if and only if its generating function \(a_{0}+a_{1}x+\cdots+a_{n}x^{n}\) is real-rooted._
**Theorem 2.2**.: _The matrix \(T=[T(n,k)]_{n,k\geq 0}\) is totally positive._
Proof.: Clearly, the sequence \(\left(1/k!\right)_{k\geq 0}\) is PF by the Schoenberg-Edrei Theorem. Thus the corresponding Toeplitz matrix \([1/(n-k)!]_{n,k\geq 0}\) is TP, and so is the submatrix \([1/(n-2k)!]_{n,k\geq 0}\) consisting of even columns of this Toeplitz matrix. It is obvious that if the matrix \([a_{n,k}]_{n,k\geq 0}\) is TP and the sequences \(b_{n}\) and \(c_{k}\) are positive, then the matrix \([a_{n,k}b_{n}c_{k}]_{n,k\geq 0}\) is still TP. The matrix
\[T=\left[\frac{n!}{k!k!(n-2k)!}\right]_{n,k\geq 0}\]
is therefore TP.
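Total positivity of the infinite matrix cannot be verified exhaustively, but the following brute-force check (ours, purely illustrative) confirms that every minor of order at most \(3\) of a leading \(8\times 8\) block of \(T\) is nonnegative, consistent with Theorem 2.2.

```python
from math import comb
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion; exact for the small integer matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

N = 7
T = [[comb(n, 2 * k) * comb(2 * k, k) for k in range(N + 1)] for n in range(N + 1)]

for order in (1, 2, 3):
    for rows in combinations(range(N + 1), order):
        for cols in combinations(range(N + 1), order):
            assert det([[T[i][j] for j in cols] for i in rows]) >= 0
```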
We next consider the asymptotic normality of the numbers \(T(n,k)\). Let \(a(n,k)\) be a double-indexed sequence of nonnegative numbers and let
\[p(n,k)=\frac{a(n,k)}{\sum_{j=0}^{n}a(n,j)}\]
denote the probabilities. Following Bender [4], we say that the sequence \(a(n,k)\) is _asymptotically normal by a central limit theorem_, if
\[\lim_{n\to\infty}\sup_{x\in\mathbb{R}}\left|\sum_{k\leq\mu_{n}+x\sigma_{n}}p( n,k)-\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}dt\right|=0, \tag{2.5}\]
where \(\mu_{n}\) and \(\sigma_{n}^{2}\) are the mean and variance of \(p(n,k)\), respectively. We say that \(a(n,k)\) is _asymptotically normal by a local limit theorem_ on \(\mathbb{R}\) if
\[\lim_{n\to\infty}\sup_{x\in\mathbb{R}}\left|\sigma_{n}p(n,\lfloor\mu_{n}+x \sigma_{n}\rfloor)-\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\right|=0. \tag{2.6}\]
In this case,
\[a(n,k)\sim\frac{e^{-x^{2}/2}\sum_{j=0}^{n}a(n,j)}{\sigma_{n}\sqrt{2\pi}}\text{ as }n \rightarrow\infty, \tag{2.7}\]
where \(k=\mu_{n}+x\sigma_{n}\) and \(x=O(1)\). Clearly, the validity of (2.6) implies that of (2.5).
Many well-known combinatorial sequences enjoy central and local limit theorems. For example, the famous de Moivre-Laplace theorem states that the binomial coefficients \(\binom{n}{k}\) are asymptotically normal (by central and local limit theorems). Other examples include the signless Stirling numbers \(c(n,k)\) of the first kind, the Stirling numbers \(S(n,k)\) of the second kind, and the Eulerian numbers \(A(n,k)\)(see [5] for instance). A standard approach to demonstrating asymptotic normality is the following criterion (see [4, Theorem 2] for instance).
**Lemma 2.3**.: _Suppose that \(\mathscr{A}_{n}(x)=\sum_{k=0}^{n}a(n,k)x^{k}\) have only real roots, where all \(a(n,k)\) are non-negative. Let_
\[\mu_{n}=\frac{\mathscr{A}_{n}^{\prime}(1)}{\mathscr{A}_{n}(1)} \tag{2.8}\]
_and_
\[\sigma_{n}^{2}=\frac{\mathscr{A}_{n}^{\prime\prime}(1)}{\mathscr{A}_{n}(1)}+ \mu_{n}-\mu_{n}^{2}. \tag{2.9}\]
_Then if \(\sigma_{n}^{2}\rightarrow+\infty\), the numbers \(a(n,k)\) are asymptotically normal (by central and local limit theorems) with the mean \(\mu_{n}\) and variance \(\sigma_{n}^{2}\)._
**Theorem 2.4**.: _The numbers \(T(n,k)\) are asymptotically normal._
Proof.: It suffices to show that \(\sigma_{n}^{2}\rightarrow+\infty\) by Lemma 2.3. Recall that
\[\mathscr{T}_{n}(x)=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\binom{2k}{k}x ^{k}\]
and \(\mathscr{T}_{n}(1)=T_{n}\). We have
\[\mathscr{T}_{n}^{\prime}(1) = \sum_{k}k\binom{n}{2k}\binom{2k}{k}=\frac{n}{2}\sum_{k}\binom{n- 1}{2k-1}\binom{2k}{k}\] \[= \frac{n}{2}\sum_{k}\left[\binom{n}{2k}-\binom{n-1}{2k}\right] \binom{2k}{k}\] \[= \frac{n}{2}(T_{n}-T_{n-1}).\]
It is known [32, Corollary 2.6] that \(T_{n-1}T_{n+1}\geq T_{n}^{2}\) for \(n\geq 5\). The sequence \(T_{n-1}/T_{n}\) is therefore eventually non-increasing and, being bounded below, convergent. Furthermore, \(T_{n-1}/T_{n}\to 1/3\) by the difference equation (2.3). It follows that
\[\mu_{n}=\frac{\mathscr{T}_{n}^{\prime}(1)}{\mathscr{T}_{n}(1)}=\frac{n(T_{n}- T_{n-1})}{2T_{n}}=n\left(\frac{1}{3}+\mathrm{o}(1)\right).\]
On the other hand,
\[\mathscr{T}_{n}^{\prime\prime}(1)+\mathscr{T}_{n}^{\prime}(1)=\sum_{k}k^{2} \binom{n}{2k}\binom{2k}{k}=\sum_{k}\frac{n!}{(k-1)!(k-1)!(n-2k)!}=n(n-1)T_{n- 2}. \tag{2.10}\]
By (2.9), we have
\[\sigma_{n}^{2}=\frac{\mathscr{T}_{n}^{\prime\prime}(1)+\mathscr{T}_{n}^{ \prime}(1)}{\mathscr{T}_{n}(1)}-\left[\frac{n(T_{n}-T_{n-1})}{2T_{n}}\right]^ {2}=\frac{n[4(n-1)T_{n-2}T_{n}-nT_{n}^{2}+2nT_{n-1}T_{n}-nT_{n-1}^{2}]}{4T_{n} ^{2}},\]
and so
\[\sigma_{n}^{2}\geq\frac{n[(3n-4)T_{n-2}T_{n}-nT_{n}^{2}+2nT_{n-1}T_{n}]}{4T_{n}^ {2}}=\frac{n[(3n-4)T_{n-2}-nT_{n}+2nT_{n-1}]}{4T_{n}}=\frac{n(T_{n-1}-T_{n-2})}{4 T_{n}}\]
by the inequality \(T_{n-1}^{2}\leq T_{n-2}T_{n}\) and the difference equation \(nT_{n}=(2n-1)T_{n-1}+3(n-1)T_{n-2}\). Note that
\[\frac{n(T_{n-1}-T_{n-2})}{4T_{n}}=\frac{n}{4}\cdot\frac{T_{n-1}}{T_{n}}\left(1 -\frac{T_{n-2}}{T_{n-1}}\right)=\frac{n}{4}\left(\frac{2}{9}+o(1)\right)= \frac{n}{18}+o(n).\]
Thus \(\sigma_{n}^{2}\to+\infty\), as required.
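The local limit theorem can also be observed numerically: the sketch below (ours; the value \(n=500\) and the tolerance are arbitrary) compares \(T(n,k)/T_{n}\) at \(k\approx\mu_{n}\) with the Gaussian prediction coming from (2.7).

```python
from math import comb, exp, pi, sqrt

n = 500
row = [comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1)]   # T(n,k), exact
total = sum(row)                                                     # T_n = sum_k T(n,k)

mu = sum(k * v for k, v in enumerate(row)) / total                   # mean of p(n,k)
sigma = sqrt(sum((k - mu) ** 2 * v for k, v in enumerate(row)) / total)

k0 = round(mu)
x = (k0 - mu) / sigma
gauss = exp(-x * x / 2) / (sigma * sqrt(2 * pi))   # normal approximation from (2.7), divided by T_n
exact = row[k0] / total
print(exact, gauss)                                # the two values agree to within a few percent
assert abs(exact - gauss) / gauss < 0.05
```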
## 3 Generalized central trinomial coefficients
Let
\[\left(x+b+\frac{c}{x}\right)^{n}=\sum_{k=-n}^{n}T_{n,k}(b,c)x^{k}. \tag{3.1}\]
Then by the trinomial theorem, we have
\[T_{n,k}(b,c)=\sum_{j}\frac{n!}{j!(j+k)!(n-k-2j)!}b^{n-k-2j}c^{j}. \tag{3.2}\]
In particular, the constant term \(T_{n,0}(b,c)\) is precisely the generalized central trinomial coefficient \(T_{n}(b,c)\), and \(T_{n,-k}(b,c)=c^{k}T_{n,k}(b,c)\).
Define an infinity lower triangular matrix \(T^{+}(b,c)=[T_{n,k}(b,c)]_{n,k\geq 0}\). Then
\[T^{+}(b,c)=\left(\begin{array}{ccccccccc}1&&&&&&&\\ b&1&&&&\\ b^{2}+2c&2b&1&&&\\ b^{3}+6bc&3b^{2}+3c&3b&1&&\\ b^{4}+12b^{2}c+6c^{2}&4b^{3}+12bc&6b^{2}+4c&4b&1&\\ \vdots&&\cdots&&&\ddots\end{array}\right). \tag{3.3}\]
In this section, we first show that \(T^{+}(b,c)\) is both a recursive matrix and a Riordan array, and then apply the total positivity of matrices to study analytic properties of generalized central trinomial coefficients \(T_{n}(b,c)\).
Let \(\beta=(b_{k})_{k\geq 0}\) and \(\gamma=(c_{k})_{k\geq 1}\) be two sequences of nonnegative numbers and define an infinite lower triangular matrix \(R=[r_{n,k}]_{n,k\geq 0}\) by the recurrence relation
\[r_{0,0}=1,\qquad r_{n+1,k}=r_{n,k-1}+b_{k}r_{n,k}+c_{k+1}r_{n,k+1}, \tag{3.4}\]
where \(r_{n,k}=0\) unless \(n\geq k\geq 0\). Following Aigner [1; 2], we say that \(R\) is the _recursive matrix_ and \(r_{n,0}\) are the _Catalan-like numbers_ corresponding to \((\beta,\gamma)\) respectively. The entry \(r_{n,k}\) in \(R\) counts weighted Motzkin lattice paths. The following result is known as the fundamental theorem of recursive matrices [2, Fundamental Theorem].
**Lemma 3.1**.: _Let \(\delta_{0}=1\) and \(\delta_{n}=c_{1}c_{2}\cdots c_{n}\). Then_
\[\sum_{k}r_{m,k}r_{n,k}\delta_{k}=r_{m+n,0}. \tag{3.5}\]
_Or equivalently, the Hankel matrix \(H=[r_{i+j,0}]_{i,j\geq 0}\) of Catalan-like numbers \(r_{n,0}\) has the decomposition_
\[H=RDR^{t}, \tag{3.6}\]
_where \(D=\operatorname{diag}(\delta_{0},\delta_{1},\delta_{2},\ldots)\) is a diagonal matrix and \(R^{t}\) is the transpose of the matrix \(R\)._
The following corollary follows immediately from (3.6).
**Corollary 3.2**.: _The \(n\)th Hankel determinant of the sequence \((r_{k,0})_{k\geq 0}\) is given by_
\[\det[r_{i+j,0}]_{0\leq i,j\leq n}=\delta_{0}\delta_{1}\cdots\delta_{n}.\]
Let \(f(x)=\sum_{n\geq 0}f_{n}x^{n}\) and \(g(x)=\sum_{n\geq 0}g_{n}x^{n}\) be two formal power series. A _Riordan array_, denoted by \(\mathcal{R}(g(x),f(x))\), is an infinite matrix whose generating function of the \(k\)th column is \(x^{k}f^{k}(x)g(x)\) for \(k\geq 0\). We say that a Riordan array \(\mathcal{R}(g(x),f(x))\) is _proper_ if \(g_{0}=1\) and \(f_{0}\neq 0\). In this case, \(\mathcal{R}(g(x),f(x))\) is an infinite lower triangular matrix. Riordan arrays play an important unifying role in enumerative combinatorics, especially in dealing with combinatorial sums [24, 27]. The following result is known as the fundamental theorem of Riordan arrays by Shapiro [25].
**Lemma 3.3**.: _Let \(R=\mathcal{R}(g(x),f(x))=[r_{n,k}]_{n,k\geq 0}\) be a Riordan array and \(h(x)=\sum_{k\geq 0}h_{k}x^{k}\). Then_
\[\sum_{k=0}^{n}r_{n,k}h_{k}=[x^{n}]g(x)h(xf(x)). \tag{3.7}\]
A proper Riordan array \(R=[r_{n,k}]_{n,k\geq 0}\) can be characterized by two sequences \((a_{k})_{k\geq 0}\) and \((z_{k})_{k\geq 0}\) such that
\[r_{0,0}=1,\quad r_{n+1,0}=\sum_{j\geq 0}z_{j}r_{n,j},\quad r_{n+1,k+1}=\sum_{j \geq 0}a_{j}r_{n,k+j} \tag{3.8}\]
for \(n,k\geq 0\). Let \(A(x)=\sum_{n\geq 0}a_{n}x^{n}\) and \(Z(x)=\sum_{n\geq 0}z_{n}x^{n}\). Then
\[g(x)=\frac{1}{1-xZ(xf(x))} \tag{3.9}\]
and
\[f(x)=A(xf(x)). \tag{3.10}\]
We refer the reader to [14] for details.
**Example 3.4**.: Let \(\beta=(b,b,b,\ldots)\) and \(\gamma=(c,c,c,\ldots)\). Then the corresponding Catalan-like numbers are the generalized Motzkin numbers
\[M_{n}(b,c)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{n!}{k!(k+1)!(n-2k)!}b^{n-2k}c ^{k}=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\frac{1}{k+1}\binom{2k}{k}b^ {n-2k}c^{k}. \tag{3.11}\]
Let \(M(b,c)\) denote the corresponding recursive matrix. Then \(M(b,c)\) is also a Riordan array with \(A(x)=1+bx+cx^{2}\) and \(Z(x)=b+cx\). Furthermore, \(M(b,c)=\mathcal{R}(\mathscr{M}(b,c;x),\mathscr{M}(b,c;x))\), where
\[\mathscr{M}(b,c;x)=\sum_{n\geq 0}M_{n}(b,c)x^{n}=\frac{1-bx-\sqrt{1-2bx+(b^{2}-4 c)x^{2}}}{2cx^{2}}. \tag{3.12}\]
See [30] for details.
**Theorem 3.5**.:
1. _The triangle_ \(T^{+}(b,c)\) _is the recursive matrix corresponding to_ \(\beta=(b,b,b,\ldots)\) _and_ \(\gamma=(2c,c,c,\ldots)\) _and the generalized central trinomial coefficients_ \(T_{n}(b,c)\) _are the associated Catalan-like numbers._
2. _The triangle_ \(T^{+}(b,c)\) _is a Riordan array and_ \(T^{+}(b,c)=\mathcal{R}(\mathscr{T}(b,c;x),\mathscr{M}(b,c;x))\)
Proof.: (i) By the definition (3.1), we have \(T_{0,0}(b,c)=1\),
\[T_{n+1,0}(b,c)=T_{n,-1}(b,c)+bT_{n,0}(b,c)+cT_{n,1}(b,c)=bT_{n,0}(b,c)+2cT_{n,1}(b,c) \tag{3.13}\]
since \(T_{n,-1}(b,c)=cT_{n,1}(b,c)\), and
\[T_{n+1,k+1}(b,c)=T_{n,k}(b,c)+bT_{n,k+1}(b,c)+cT_{n,k+2}(b,c) \tag{3.14}\]
for \(n,k\geq 0\). We conclude that \(T^{+}(b,c)=[T_{n,k}(b,c)]_{n,k\geq 0}\) is the recursive matrix corresponding to \(\beta=(b,b,b,\ldots)\) and \(\gamma=(2c,c,c,\ldots)\) and the generalized trinomial coefficients \(T_{n}(b,c)=T_{n,0}(b,c)\) are the associated Catalan-like numbers.
(ii) Clearly, the triangle \(T^{+}(b,c)\) is a Riordan array with \(A(x)=1+bx+cx^{2}\) and \(Z(x)=b+2cx\). Let \(T^{+}(b,c)=\mathcal{R}(g(x),f(x))\). Then by (3.10), we have \(f(x)=1+bxf(x)+cx^{2}f^{2}(x)\), and so
\[cx^{2}f^{2}(x)-(1-bx)f(x)+1=0. \tag{3.15}\]
Solve (3.15) to obtain
\[f(x)=\frac{1-bx-\sqrt{(1-bx)^{2}-4cx^{2}}}{2cx^{2}}=\mathscr{M}(b,c;x).\]
By (3.9), we have
\[g(x)=\frac{1}{1-x[b+2cx\cdot f(x)]}=\frac{1}{\sqrt{(1-bx)^{2}-4cx^{2}}}=\frac {1}{\sqrt{1-2bx+(b^{2}-4c)x^{2}}}=\mathscr{T}(b,c;x).\]
Thus \(T^{+}(b,c)=\mathcal{R}(g(x),f(x))=\mathcal{R}(\mathscr{T}(b,c;x),\mathscr{M}( b,c;x))\).
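Both descriptions are easy to test for a concrete choice of \((b,c)\). The sketch below (ours; the values \(b=3\), \(c=2\) are arbitrary) builds the recursive matrices of Example 3.4 and Theorem 3.5 directly from the recurrence (3.4) and compares their leftmost columns with the closed forms (3.11) and (1.2).

```python
from math import comb

def recursive_matrix(b, gamma1, gamma_rest, N):
    """Recursive matrix of (3.4) with b_k = b and gamma = (gamma1, gamma_rest, gamma_rest, ...)."""
    R = [[0] * (N + 2) for _ in range(N + 1)]
    R[0][0] = 1
    c = lambda k: gamma1 if k == 1 else gamma_rest
    for n in range(N):
        for k in range(n + 2):
            R[n + 1][k] = ((R[n][k - 1] if k >= 1 else 0)
                           + b * R[n][k]
                           + c(k + 1) * R[n][k + 1])
    return R

b, cc, N = 3, 2, 10
M = recursive_matrix(b, cc, cc, N)        # Example 3.4: gamma = (c, c, c, ...)
Tp = recursive_matrix(b, 2 * cc, cc, N)   # Theorem 3.5: gamma = (2c, c, c, ...)

motzkin = lambda n: sum(comb(n, 2*k) * comb(2*k, k) // (k + 1) * b**(n - 2*k) * cc**k
                        for k in range(n // 2 + 1))            # closed form (3.11)
trinomial = lambda n: sum(comb(n, 2*k) * comb(2*k, k) * b**(n - 2*k) * cc**k
                          for k in range(n // 2 + 1))          # closed form (1.2)

assert all(M[n][0] == motzkin(n) for n in range(N + 1))
assert all(Tp[n][0] == trinomial(n) for n in range(N + 1))
```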
The following result has already appeared in [22, Theorem 4], and is clear from Corollary 3.2.
**Corollary 3.6**.: _The \(n\)th Hankel determinant of the sequence \((T_{k}(b,c))_{k\geq 0}\) is given by_
\[\det[T_{i+j}(b,c)]_{0\leq i,j\leq n}=2^{n}c^{\binom{n+1}{2}}.\]
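A symbolic spot-check of Corollary 3.6 for the first few values of \(n\) (ours, using SymPy):

```python
import sympy as sp

b, c, x = sp.symbols('b c x')

def T(n):
    # T_n(b,c) = coefficient of x^n in (x^2 + b*x + c)^n
    return sp.expand((x**2 + b*x + c)**n).coeff(x, n)

for n in range(4):
    H = sp.Matrix(n + 1, n + 1, lambda i, j: T(i + j))
    assert sp.expand(H.det() - 2**n * c**(n * (n + 1) // 2)) == 0
```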
**Theorem 3.7**.: _We have_
\[\sum_{k=0}^{n}\binom{n}{k}T_{k}(b,c)a^{n-k}=T_{n}(a+b,c). \tag{3.16}\]
Proof.: Let
\[P_{a}=\left[\binom{n}{k}a^{n-k}\right]_{n,k\geq 0}\]
be the generalized Pascal triangle. Then the generating function of the \(k\)th column of \(P_{a}\) is
\[\sum_{n\geq 0}\binom{n}{k}a^{n-k}x^{n}=\frac{x^{k}}{(1-ax)^{k+1}}.\]
Thus \(P_{a}\) is a Riordan array:
\[P_{a}=\mathcal{R}\left(\frac{1}{1-ax},\frac{1}{1-ax}\right).\]
By (3.7) and (1.3), we have
\[\sum_{k=0}^{n}\binom{n}{k}a^{n-k}\cdot T_{k}(b,c) = [x^{n}]\frac{1}{1-ax}\frac{1}{\sqrt{(1-b\frac{x}{1-ax})^{2}-\frac{ 4cx^{2}}{(1-ax)^{2}}}}\] \[= [x^{n}]\frac{1}{\sqrt{[1-(a+b)x]^{2}-4cx^{2}}}\] \[= T_{n}(a+b,c).\]
This proves (3.16).
The _Hankel transform_\((y_{n})_{n\geq 0}\) and the _binomial transform_\((z_{n})_{n\geq 0}\) of a sequence \((x_{n})_{n\geq 0}\) are defined by \(y_{n}=\det[x_{i+j}]_{0\leq i,j\leq n}\) and \(z_{n}=\sum_{k=0}^{n}\binom{n}{k}x_{k}\) respectively. We obtain the Hankel transform of the sequence \((T_{n}(b,c))_{n\geq 0}\) from Corollary 3.6. Also, we can obtain the binomial transform \(\sum_{k=0}^{n}\binom{n}{k}T_{k}(b,c)=T_{n}(b+1,c)\) by taking \(a=1\) in Theorem 3.7.
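Similarly, the identity (3.16) can be spot-checked in exact arithmetic (the values of \(a\), \(b\), \(c\) below are arbitrary):

```python
from math import comb

def T(n, b, c):
    # generalized central trinomial coefficient, via (1.2)
    return sum(comb(n, 2 * k) * comb(2 * k, k) * b ** (n - 2 * k) * c ** k
               for k in range(n // 2 + 1))

a, b, c = 2, 3, 5
for n in range(12):
    assert sum(comb(n, k) * T(k, b, c) * a ** (n - k) for k in range(n + 1)) == T(n, a + b, c)
```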
Let \(\alpha=(a_{k})_{k\geq 0}\) be an infinite sequence of nonnegative numbers. We say that the sequence is _log-convex_ if \(a_{i}a_{j+1}\geq a_{i+1}a_{j}\) for \(0\leq i<j\). If there exists a nonnegative Borel measure \(\mu\) on \([0,+\infty)\) such that
\[a_{k}=\int_{0}^{+\infty}x^{k}d\mu(x), \tag{3.17}\]
then we say that \((a_{k})_{k\geq 0}\) is a _Stieltjes moment_ (SM for short) sequence. Let \(H(\alpha)=[a_{i+j}]_{i,j\geq 0}\) be the _Hankel matrix_ of the sequence \(\alpha\). Then
\[H(\alpha)=\left[\begin{array}{ccccc}a_{0}&a_{1}&a_{2}&a_{3}&\cdots\\ a_{1}&a_{2}&a_{3}&a_{4}&\cdots\\ a_{2}&a_{3}&a_{4}&a_{5}&\cdots\\ a_{3}&a_{4}&a_{5}&a_{6}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right].\]
Clearly, \(\alpha\) is log-convex if and only if \(H(\alpha)\) is TP\({}_{2}\). It is well known that the sequence \(\alpha\) is SM if and only if the corresponding Hankel matrix \(H(\alpha)\) is totally positive (see [23, Theorem 4.4] for instance). We refer the reader to [12, 16, 18, 33].
**Theorem 3.8**.: _Let \(b,c>0\)._
* _The sequence_ \((T_{n}(b,c))_{n\geq 0}\) _is log-convex if and only if_ \(b^{2}\geq 2c\)_._
* _The sequence_ \((T_{n}(b,c))_{n\geq 0}\) _is SM if and only if_ \(b^{2}\geq 4c\)_._
Proof.: (i) Consider the tridiagonal matrix
\[J=\left(\begin{array}{ccccc}b&1&&&\\ 2c&b&1&&\\ &c&b&1&\\ &&c&b&\ddots\\ &&&\ddots&\ddots\end{array}\right). \tag{3.18}\]
It is known [8, Theorem 2.3] that if the matrix \(J\) is TP\({}_{2}\), then the sequence \((T_{n}(b,c))_{n\geq 0}\) is log-convex. On the other hand, the tridiagonal matrix \(J\) is TP\({}_{2}\) if and only if \(b^{2}\geq 2c\)[7, Proposition 2.6 (i)]. Thus \(b^{2}\geq 2c\) implies that \((T_{n}(b,c))_{n\geq 0}\) is log-convex.
Conversely, if \((T_{n}(b,c))_{n\geq 0}\) is log-convex, then in particular, we have \(T_{1}(b,c)T_{3}(b,c)\geq T_{2}^{2}(b,c)\), which is equivalent to \(b^{2}\geq 2c\).
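For concreteness, the computation behind the last equivalence is
\[T_{1}(b,c)T_{3}(b,c)-T_{2}^{2}(b,c)=b(b^{3}+6bc)-(b^{2}+2c)^{2}=2b^{2}c-4c^{2}=2c(b^{2}-2c),\]
which, for \(c>0\), is nonnegative exactly when \(b^{2}\geq 2c\).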
(ii) It is well known that a sequence \((a_{k})_{k\geq 0}\) is SM if and only if there exists an integer \(m\geq 0\) such that
\[\det[a_{i+j}]_{0\leq i,j\leq n}>0,\qquad\det[a_{i+j+1}]_{0\leq i,j\leq n}>0\]
for \(0\leq n<m\), and
\[\det[a_{i+j}]_{0\leq i,j\leq n}=\det[a_{i+j+1}]_{0\leq i,j\leq n}=0\]
for \(n\geq m\) (see [26, Theorem 1.3] for instance). Now \(\det[T_{i+j}(b,c)]_{0\leq i,j\leq n}=2^{n}c^{\binom{n+1}{2}}\) and
\[\det[T_{i+j+1}(b,c)]_{0\leq i,j\leq n}=2^{n}c^{\binom{n+1}{2}}u_{n},\]
where \(u_{n}\) is the \((n+1)\)th principal leading minor of the matrix \(J\) (see [2, SS8, Result 2] for instance). It follows that \((T_{n}(b,c))_{n\geq 0}\) is SM if and only if \(u_{n}\) are positive for all \(n\geq 0\). On the other hand, an irreducible nonnegative tridiagonal matrix is TP if and only if all its leading principal minors are positive (see [20, Example 2.2, p. 149] for instance), and the tridiagonal matrix \(J\) is TP if and only if \(b^{2}\geq 4c\)[7, Proposition 2.6 (ii)]. Thus \((T_{n}(b,c))_{n\geq 0}\) is SM if and only if \(b^{2}\geq 4c\).
**Remark 3.9**.: When \(b^{2}\geq 4c\), the sequence \((T_{n}(b,c))_{n\geq 0}\) is SM. However, we do not know how to obtain the associated measure and whether it is discrete or continuous.
In particular, the central binomial coefficients \(\binom{2n}{n}=T_{n}(2,1)\) and the central Delannoy numbers \(D_{n}=T_{n}(3,2)\) are SM. See also [16, 33, 35].
We have seen that if \(b,c>0\) and \(b^{2}\geq 2c\), then \((T_{n}(b,c))_{n\geq 0}\) is log-convex, i.e., \(T_{i-1}(b,c)T_{j+1}(b,c)-T_{i}(b,c)T_{j}(b,c)\) are nonnegative for \(j\geq i\geq 1\). Actually, we have the following stronger result.
**Theorem 3.10**.: _Let \(j\geq i\geq 1\). Then_
\[T_{i-1}(b,c)T_{j+1}(b,c)-T_{i}(b,c)T_{j}(b,c)=b^{\frac{1-(-1)^{i+j}}{2}}f_{ij}( b^{2}-2c,c),\]
_where \(f_{ij}(x,y)\) is a polynomial in \(x\) and \(y\) with nonnegative integer coefficients._
To prove Theorem 3.10, we introduce some notations for convenience. Let \(A=[a_{ij}]_{i,j\geq 0}\) be a (finite or infinite) matrix. We use \(A\left(\begin{array}{c}i_{0},i_{1}\\ j_{0},j_{1}\end{array}\right)\) to denote the minor of order \(2\) of \(A\) determined by the rows indexed \(i_{1}>i_{0}\geq 0\) and columns indexed \(j_{1}>j_{0}\geq 0\). Suppose that all entries of \(A\) are polynomials in \(b\) and \(c\). We say that \(A\) is NICE if there exist two polynomials \(f(x,y)\) and \(g(x,y)\) in \(x\) and \(y\) with nonnegative integer coefficients such that
\[A\left(\begin{array}{c}i_{0},i_{1}\\ j_{0},j_{1}\end{array}\right)=\left\{\begin{array}{rl}f(b^{2}-2c,c),&\text{if }i_{0}+i_{1}+j_{0}+j_{1}\equiv 0 \pmod{2};\\ b\cdot g(b^{2}-2c,c),&\text{if }i_{0}+i_{1}+j_{0}+j_{1}\equiv 1\pmod{2}. \end{array}\right.\]
**Lemma 3.11**.: _The product of an arbitrary number of NICE matrices is still NICE._
Proof.: By the associativity of the matrix product, it suffices to show that the product of two NICE matrices is still NICE. Recall the classical Cauchy-Binet formula
\[(AB)\binom{i_{0},i_{1}}{j_{0},j_{1}}=\sum_{k_{0}<k_{1}}\left[A\binom{i_{0},i_{1}}{k_{0},k_{1}}\cdot B\binom{k_{0},k_{1}}{j_{0},j_{1}}\right]. \tag{3.19}\]
If \(i_{0}+i_{1}+j_{0}+j_{1}\equiv 0\pmod{2}\), then \(i_{0}+i_{1}+k_{0}+k_{1}\equiv k_{0}+k_{1}+j_{0}+j_{1}\pmod{2}\). Suppose that both \(A\) and \(B\) are NICE. Then each term in the sum on the right hand side of (3.19)
\[A\binom{i_{0},i_{1}}{k_{0},k_{1}}\cdot B\binom{k_{0},k_{1}}{j_{0},j_{1}}= \left\{\begin{array}{rl}f(b^{2}-2c,c),&\text{if }i_{0}+i_{1}+k_{0}+k_{1}\equiv 0 \pmod{2};\\ b^{2}\cdot g(b^{2}-2c,c),&\text{if }i_{0}+i_{1}+k_{0}+k_{1}\equiv 1\pmod{2}, \end{array}\right.\]
which is a polynomial in \(b^{2}-2c\) and \(c\) with nonnegative integer coefficients, and so is the whole sum on the right hand side of (3.19).
Similarly, if \(i_{0}+i_{1}+j_{0}+j_{1}\equiv 1\pmod{2}\), then each term in the sum on the right hand side of (3.19) is of the form \(b\cdot g(b^{2}-2c,c)\), and so is the whole sum, i.e., the left hand side of (3.19). Thus we conclude that the product \(AB\) is NICE by the definition.
**Lemma 3.12**.: _The tridiagonal matrix \(J\) is NICE._
Proof.: The statement follows from the definition by checking the nonzero minors of order \(2\) directly.
**Lemma 3.13**.: _The matrix \(T^{+}(b,c)\) is NICE._
Proof.: Clearly, an infinite matrix is NICE if and only if all leading principal submatrices are NICE. Let \(L_{n}=[T_{n,k}(b,c)]_{0\leq i,j\leq n}\) be the \(n\)th leading principal submatrix of \(T^{+}(b,c)\). Then it suffices to show that \(L_{n}\) are NICE for all \(n\geq 0\). We proceed by induction on \(n\). We have \(|L_{0}|=|L_{1}|=1\). Assume that \(L_{n}\) is NICE. Denote
\[L_{n}^{*}=\left[\begin{array}{cc}1&O\\ O&L_{n}\end{array}\right]\]
and
\[J_{n}^{*}=\left[\begin{array}{c}e_{n}\\ J_{n}\end{array}\right],\]
where \(e_{n}=(1,0,\ldots,0)\) and \(J_{n}\) is the \(n\)th leading principal submatrix of \(J\). Then
\[J_{n}^{*}=\left[\begin{array}{ccccc}1&&&&\\ b&1&&&&&\\ 2c&b&1&&&&&\\ &c&b&\ddots&&\\ &&\ddots&\ddots&1&\\ &&&c&b&1\end{array}\right],\]
and we can rewrite (3.13) and (3.14) as
\[L_{n+1}=L_{n}^{*}\cdot J_{n}^{*}.\]
By the assumption, \(L_{n}\) is NICE, and so \(L_{n}^{*}\) is NICE. By Lemma 3.12, \(J\) is NICE, and so is the leading principal submatrix \(J_{n}\), as well as the matrix \(J_{n}^{*}\). The product \(L_{n+1}\) is therefore NICE. Thus all leading principal submatrices of \(T^{+}(b,c)\) are NICE, as required.
We are now in a position to prove Theorem 3.10.
Proof of Theorem 3.10.: Let \(H=[T_{n+k}(b,c)]_{n,k\geq 0}\) be the Hankel matrix of the Catalan-like numbers \((T_{n}(b,c))_{n\geq 0}\). Then we need to show that the submatrix of \(H\) determined by the first two rows is NICE. We do this by showing that the matrix \(H\) is NICE.
We have shown that \(T_{n}(b,c)\) are the Catalan-like numbers corresponding to \(\beta=(b,b,b,\ldots)\) and \(\gamma=(2c,c,c,\ldots)\). Hence the Hankel matrix
\[H=T^{+}(b,c)\cdot D\cdot[T^{+}(b,c)]^{t} \tag{3.20}\]
by (3.6). Clearly, the diagonal matrix \(D=\operatorname{diag}(1,2c,2c^{2},2c^{3},\ldots)\) is NICE. By Lemma 3.13, the matrix \(T^{+}(b,c)\) is NICE, and so is its transpose \([T^{+}(b,c)]^{t}\). Thus the product \(H\) is NICE by Lemma 3.11, as desired.
**Corollary 3.14**.: _These exists a monic polynomial \(f_{n}(x)\) of degree \(n\) with nonnegative integer coefficients such that_
\[T_{n}(b,c)T_{n+2}(b,c)-T_{n+1}^{2}(b,c)=2c^{n+1}f_{n}\left(\frac{b^{2}-2c}{c} \right).\]
Proof.: Denote \(\Delta_{n}=T_{n}(b,c)T_{n+2}(b,c)-T_{n+1}^{2}(b,c)\). Then
\[\Delta_{n}=H\left(\begin{array}{cc}0,&1\\ n,&n+1\end{array}\right),\]
where \(H=[T_{n+k}(b,c)]_{n,k\geq 0}\). By (3.20) and the Cauchy-Binet formula,
\[\Delta_{n}=2c\cdot T^{+}(b,c)\left(\begin{array}{cc}n,&n+1\\ 0,&1\end{array}\right)=2c\left[T_{n,0}(b,c)T_{n+1,1}(b,c)-T_{n,1}(b,c)T_{n+1,0}( b,c)\right].\]
By (3.2), we have
\[\Delta_{n}=2c\sum_{k=0}^{n}a_{k}b^{2n-2k}c^{k},\]
where all \(a_{k}\) are integers. In particular, \(a_{0}=1\). Thus
\[\Delta_{n}=2c\sum_{k=0}^{n}a^{\prime}_{k}(b^{2}-2c)^{n-k}c^{k},\]
where all \(a^{\prime}_{k}\) are nonnegative integers by Theorem 3.10, and \(a^{\prime}_{0}=1\). Define \(f_{n}(x)=\sum_{k=0}^{n}a^{\prime}_{k}x^{n-k}\). Then
\[\Delta_{n}=2c^{n+1}f_{n}\left(\frac{b^{2}-2c}{c}\right),\]
as desired.
## 4 Remarks
The generalized Motzkin numbers \(M_{n}(b,c)\) are closely related to the generalized central trinomial coefficients \(T_{n}(b,c)\) and they possess many similar properties. Recall that
\[M_{n}(b,c)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{n!}{k!(k+1)!(n-2k)!}b^{n-2k}c ^{k}\]
and
\[T_{n}(b,c)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{n!}{k!k!(n-2k)!}b^{n-2k}c^{k}.\]
Then we have
1. \(\frac{\partial(cM_{n}(b,c))}{\partial c}=T_{n}(b,c)\);
2. \(\frac{\partial T_{n}(b,c)}{\partial b}=nT_{n-1}(b,c)\) and \(\frac{\partial M_{n}(b,c)}{\partial b}=nM_{n-1}(b,c)\);
3. \(\sum_{k=0}^{n}\binom{n}{k}M_{k}(b,c)a^{n-k}=M_{n}(a+b,c)\);
4. \(M_{i-1}(b,c)M_{j+1}(b,c)-M_{i}(b,c)M_{j}(b,c)=b^{\frac{1-(-1)^{i+j}}{2}}g_{ij}(b^{2}-c,c)\) for \(j\geq i\geq 1\), where \(g_{ij}(x,y)\) are polynomials in \(x\) and \(y\) with nonnegative integer coefficients;
5. \(M_{n}(b,c)M_{n+2}(b,c)-M_{n+1}^{2}(b,c)=c^{n+1}g_{n}\left(\frac{b^{2}-c}{c}\right)\), where \(g_{n}(x)\) is a monic polynomial of degree \(n\) with nonnegative integer coefficients.
We omit the details for brevity.
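As a minimal sanity check (ours, not part of the original paper), one can verify Corollary 3.14 symbolically for small \(n\), assuming only the explicit sum for \(T_{n}(b,c)\) quoted above; the variable `u` below stands for \(b^{2}-2c\).

```python
import sympy as sp

b, c, u = sp.symbols('b c u')

def T(n):
    # generalized central trinomial coefficient T_n(b,c), from the explicit sum above
    return sum(sp.binomial(n, 2*k) * sp.binomial(2*k, k) * b**(n - 2*k) * c**k
               for k in range(n // 2 + 1))

for n in range(1, 7):
    delta = sp.expand(T(n) * T(n + 2) - T(n + 1)**2)
    # delta is even in b, so we may substitute b^2 = u + 2c
    P = sp.expand(delta.subs(b**2, u + 2*c))
    poly = sp.Poly(P, u, c)
    assert all(coef >= 0 for coef in poly.coeffs())      # nonnegative integer coefficients
    assert sp.expand(P.coeff(u, n) - 2*c) == 0           # leading term 2c*u^n, so f_n is monic
print("Corollary 3.14 verified symbolically for n = 1..6")
```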
**Declaration of competing interest**
The authors declare no conflict of interest.
## Acknowledgements
The authors thank the anonymous referee for valuable suggestions on improving the presentation of this paper. This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 12001301 and 12171068).
|
2302.09938 | SkillRec: A Data-Driven Approach to Job Skill Recommendation for Career
Insights | Understanding the skill sets and knowledge required for any career is of
utmost importance, but it is increasingly challenging in today's dynamic world
with rapid changes in terms of the tools and techniques used. Thus, it is
especially important to be able to accurately identify the required skill sets
for any job for better career insights and development. In this paper, we
propose and develop the Skill Recommendation (SkillRec) system for recommending
the relevant job skills required for a given job based on the job title.
SkillRec collects and identify the skill set required for a job based on the
job descriptions published by companies hiring for these roles. In addition to
the data collection and pre-processing capabilities, SkillRec also utilises
word/sentence embedding techniques for job title representation, alongside a
feed-forward neural network for job skill recommendation based on the job title
representation. Based on our preliminary experiments on a dataset of 6,000 job
titles and descriptions, SkillRec shows a promising performance in terms of
accuracy and F1-score. | Xiang Qian Ong, Kwan Hui Lim | 2023-02-20T12:07:57Z | http://arxiv.org/abs/2302.09938v1 | # SkillRec: A Data-Driven Approach to Job Skill Recommendation for Career Insights
###### Abstract
Understanding the skill sets and knowledge required for any career is of utmost importance, but it is increasingly challenging in today's dynamic world with rapid changes in terms of the tools and techniques used. Thus, it is especially important to be able to accurately identify the required skill sets for any job for better career insights and development. In this paper, we propose and develop the Skill Recommendation (SkillRec) system for recommending the relevant job skills required for a given job based on the job title. SkillRec collects and identify the skill set required for a job based on the job descriptions published by companies hiring for these roles. In addition to the data collection and pre-processing capabilities, SkillRec also utilises word/sentence embedding techniques for job title representation, alongside a feed-forward neural network for job skill recommendation based on the job title representation. Based on our preliminary experiments on a dataset of 6,000 job titles and descriptions, SkillRec shows a promising performance in terms of accuracy and F1-score.
Recommendation Systems, Skill Recommendation, Occupational Analysis, Natural Language Processing, Neural Networks
## I Introduction
Technological advancements and industrial changes are constantly evolving at a rapid pace [1]. With this rapid change, new job scopes are constantly being generated at companies along with a dynamic list of skill sets to meet these ever-growing changes. As such, individuals that are looking to change their career or seeking a new job often face the problem of skill mismatch due to those constant changes. This problem is further highlighted by the recent COVID-19 pandemic which has caused drastic changes in many industries, such as aviation and tourism. During this period, there was a spike in the number of retrenched workers seeking to find employment in other fields [2] and potentially facing skill mismatch.
On top of that, jobs are tightly coupled with skill sets, which are essential for fulfilling the job requirements. Individuals who are transiting into a new career and industry would need to conduct extensive research or seek advice on the latest job requirements and the relevant skills. This process is commonly performed to improve the chances of applicants securing a new job, especially in a new industry. This is necessary due to the huge barrier of entry for individuals transiting over to new jobs or new industries, given their existing skill sets. Different industry fields and even roles within the same industry require different sets of skills to carry out tasks associated with the job. Likewise, hiring managers also look out for those skill sets when assessing suitable candidates for an open role. As such, identifying the skill sets required for a given job is especially important for potential job seekers, even for those staying in the same industry.
### _Related Work_
In terms of academic research, this problem and research area have also garnered keen interest in recent years due to these various developments [1, 3, 4, 5, 6, 7]. For example, there are various streams of research that investigate diverse but related topics such as: (i) recommending job candidates to companies based on their suitability [8, 9, 10]; (ii) recommending jobs to candidates based on the candidates skill sets and career history [11, 12]; (iii) modelling the career trajectories of individuals [13, 14]; (iv) predicting the future career transitions and movements of workers [15, 16].
Among these works, the most closely related works to our research are those on job skill recommendations [17, 18, 19] and those using various embedding techniques [4, 20, 21, 22] for occupational analysis and mining. For example, Dave et al. proposed a representation learning model using job transitions, job-skills and skill co-occurrence networks for recommending both jobs and skills [17]. Others have used reinforcement learning in the form of a Deep-Q Network for skill recommendation and better interpretation [19]. Liu et al. developed a bi-directional language model for contextualised job title embeddings that was demonstrated on a downstream occupational named entity recognition task [4]. Skills2Graph [21] used FastText embeddings to better maintain semantic similarity between taxonomic components of job postings. Also closely related to our research are various works that have developed systems and implemented approaches for related occupational analysis and mining tasks [20, 23, 24].
### _Motivation and Problem Statement_
The fundamental problem is that identifying the relevant skill sets required for a particular job title is both time-consuming and a complex process. However, this process is essential to job seekers looking to enter new jobs and yet very challenging as they are new to both the field and industry.
The problem can be broken down into two sub-problems: the lack of knowledge to learn the relevant skills and the lack of proficiency in the relevant skills, which will be elaborated next.
#### I-B1 Lack of knowledge to learn the relevant skills
To learn the relevant skills required for the particular job, there is a need to understand the current job workflow, the tools that are used on the job and knowledge in that field of interest. The root of the problem is the lack of domain expertise in the new industry to truly understand the skill sets required for the job [25].
There are various different ways in which the problem can be mitigated. The first is to seek advice from experts who have worked in the field for a long time. They understand the job requirements and the necessary skill sets needed, from their extensive years of experience. Alternatively, governments' skills recommendations serve as a good indicator of possible general relevant skill sets that are required in the industry for the near future [26, 27]. Lastly, individuals could perform online research on the industry, given that there is an abundance of articles and forums that share insights and recommendations.
This step is crucial as it serves as the basis for the next step. Only after knowing the skills that need to be learned, then the individual can embark on the next step.
#### I-B2 Lack of proficiency in the relevant skills
After identifying the required skill sets to transit to a new industry or role, it is crucial for individuals to have access to resources to learn those skills. Proficiency in these relevant skills is an essential requirement that leads to successful employment and subsequent progression. The resources could come in many different forms, both physical and digital, which we discuss next.
A classroom setting is a common structure whereby individuals will attend classes and undergo a structured curriculum to gain knowledge and experience from the instructors. At the end of such courses, there are formal certificates that act as proof of completion of the course. Classroom formats could also be virtual, ranging from online courses such as Udemy and Coursera. Similarly, these classes provide both the content and practical lessons, with the added bonus of online certificates upon completion. Openly available resources are unstructured curricula that could comprise different resources from both online and offline. These give the individual flexibility in time and learning opportunities, but typically do not result in a formal certification upon completion.
### _Problem Definition_
In this paper, our main focus is on the first sub-problem, which is the lack of knowledge to learn the relevant skills required for a job role. This problem is significantly more crucial than the second sub-problem, as the former's outcome will determine the skill sets that an individual is required to learn. Failing at this step will render useless any future efforts at either an industry transition or career progression. In addition, there are multiple open-source and affordable resources online to address the problem of a lack of proficiency, so this second sub-problem is not as crucial as the first sub-problem that we focus on. More formally, we investigate the problem of identifying and recommending the required job skill sets for a role given the job title.
### _Main Contributions_
The current ways to address the problems are mainly human-driven insights, such as seeking the advice of current practitioners in the new industry or through self-reading of openly available resources. However, we could tap into data-driven insights that perform the analysis and recommend the right skill sets to pick up for the industry or role we are interested in. This could reduce the time taken or the cost incurred in the process.
In this paper, we make the following contributions:
1. We investigate the problem of job skill recommendation, which is to recommend a list of required skill sets for a given role in the form of a job title.
2. To address the above problem, we propose and develop the Skill Recommendation (SkillRec) system. SkillRec comprises the capability to collect openly available job listing information, process these data to extract job title and required skill sets, and subsequently recommend skill sets given a job title.
3. As part of SkillRec, we also develop two approaches to skill recommendation, utilising word/sentence embedding techniques (using BERT and FastText) for job title representation, before inputting this job title representation to a simple but effective feed-forward neural network for job skill recommendation.
4. Using a dataset of 6,000 job listings, we perform preliminary experiments to evaluate the performance of our proposed SkillRec system. Our preliminary results show promising performance of SkillRec in terms of accuracy and F1-score.
## II Proposed Skill Recommendation System
Our proposed Skill Recommendation (SkillRec) system comprises four main components, including a web collection component, data pre-processing component, job title representation component and job skill recommendation component. Figure 1 shows an overview of our proposed system. Next, we provide a brief summary of each component, followed by a more detailed description in the later sections.
1. Web Collection Component. This component retrieves job listings from various web sources, such as online job posting platforms.
2. Data Pre-processing Component. This component performs the necessary data pre-processing and cleaning to extract relevant information from the job descriptions, such as the job title, job description, required skills, company name, etc.
3. Job Title Representation Component. This component utilises word/sentence embedding techniques, such as BERT and FastText, to model job titles in term of its vector representations.
4. Job Skill Recommendation Component. The job title representation from the previous step is then fed into this component, which comprises a feed-forward neural network that outputs a set of recommended skills given this job title.
### _Web Collection Component_
The web collection component retrieves various types of web-based information such as job descriptions from popular online job posting platforms, as well as skill sets from massive open online courses platforms.
The job description data was retrieved over a period of six months at a weekly interval to ensure that the dataset is kept up-to-date. A total of 6,000 job descriptions were retrieved from three popular online job posting platforms.
Massive open online courses platforms provide a good source of information on popular educational courses and the skill sets they aim to impart. Using two popular massive open online courses platforms, we collected and identified 589 unique skill sets that are relevant for this work.
### _Data Pre-processing Component_
The data pre-processing component works on the job title and job description data collected by the web collection component. This component first parses the collected webpage to extract the relevant information, i.e., the job title and job description. Thereafter, the standard text pre-processing steps, such as conversion to lowercase, removal of non-textual symbols and tokenisation, are performed on these two fields. The skill sets for a job listing are derived from its corresponding job description based on an exact match of a mentioned skill with our earlier list of 589 unique skill sets.
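As an illustration (ours, not the authors' code) of this exact-match extraction, a minimal sketch could look as follows; the skill list shown is a small hypothetical subset of the 589 skills.

```python
import re

# hypothetical subset of the 589 skills collected from the course platforms
SKILLS = ["python", "java", "machine learning", "project management"]

def preprocess(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # remove non-textual symbols
    return re.sub(r"\s+", " ", text).strip()

def extract_skills(job_description: str) -> list[str]:
    cleaned = preprocess(job_description)
    return [skill for skill in SKILLS if skill in cleaned]  # exact match against the skill list

print(extract_skills("Seeking a Java/Python developer with Machine Learning experience."))
# ['python', 'java', 'machine learning']
```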
### _Job Title Representation Component_
As our main task is to determine the required skill sets for a given job title, we proceed to model the job titles using various embedding techniques. In this study, we experiment with two popular word/sentence representation techniques:
* BERT Embedding [28]. The Bidirectional Encoder Representation of Transformer (BERT) is a transformer-based language model comprising multiple layers of bidirectional Transformer encoders, along with a self-attention mechanism. BERT is then pre-trained with the Masked Language Model task of predicting hidden tokens using other non-hidden tokens, and/or the Next Sentence Prediction task of predicting the next sentence given an input sentence.
* FastText Embedding [29, 30]. FastText builds upon the skip-gram approach that is popularly used by the Word2Vec model [31, 32]. Using FastText, each word is represented by a bag-of-character n-grams, and in turn each character n-gram is represented by a vector. The vector representation of each word is then derived from the sum of these character n-gram vector representations.
Fig. 1: System Architecture of Our Proposed Skill Recommendation (SkillRec) System
BERT was chosen due to its strong performance in various language modelling tasks, such as text classification, question answering, among others. In addition, BERT has been frequently used in a variety of other recommendation and prediction tasks, such as location prediction [33, 34] and tourism recommendation [35, 36].
After extracting the job title in the previous step, either BERT or FastText is then used to obtain the vector representation of the job title. This job title representation is subsequently used as input to our neural network for job skill recommendation, which we describe next.
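As a hedged illustration of this embedding step (the checkpoint name and the mean-pooling choice are our assumptions, not details given in the paper), a job-title vector could be obtained with a pre-trained BERT model via the `transformers` library as follows.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed_title(title: str) -> torch.Tensor:
    inputs = tokenizer(title, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    # mean-pool the token embeddings into a single job-title vector
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

vec = embed_title("software engineer")
print(vec.shape)  # torch.Size([768])
```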
### _Job Skill Recommendation Component_
Using the vector representation of the job title from the previous step, this job title representation is then fed into a feed-forward neural network for job skill recommendation based on the job title. We utilise a feed-forward neural network with two fully-connected hidden layers, with the first hidden layer comprising 1280 nodes, the second hidden layer comprising 640 nodes and a final output layer of 589 nodes corresponding to the 589 unique skill sets. We employ Sigmoid as our activation function, AdamW as our optimizer and a dropout of 0.5 to reduce overfitting.
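A minimal PyTorch sketch of this recommender is given below; the 768-dimensional input, the learning rate and the binary cross-entropy objective are our assumptions, while the layer sizes, Sigmoid activation, AdamW optimizer and dropout of 0.5 follow the description above.

```python
import torch
import torch.nn as nn

class SkillRecommender(nn.Module):
    def __init__(self, emb_dim: int = 768, n_skills: int = 589):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 1280), nn.Sigmoid(), nn.Dropout(0.5),
            nn.Linear(1280, 640), nn.Sigmoid(), nn.Dropout(0.5),
            nn.Linear(640, n_skills), nn.Sigmoid(),   # one score per skill in the vocabulary
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SkillRecommender()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)   # learning rate is our assumption
loss_fn = nn.BCELoss()                                       # multi-label objective (our assumption)

# one dummy training step on a random batch, just to show the wiring
titles = torch.randn(8, 768)                  # stand-in for 8 job-title embeddings
targets = torch.randint(0, 2, (8, 589)).float()
optimizer.zero_grad()
loss = loss_fn(model(titles), targets)
loss.backward()
optimizer.step()
```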
## III Experiments and Results
Using our collected dataset, we use 80% for training our model as described in Section II and use the remaining 20% for evaluation. For evaluation purposes, we use the standard metrics of accuracy and F1-score, based on the recommended job skill against the ground truth job skill in the original job description.
We experimented with the following algorithms and baselines:
* Bag-of-Words. A classical bag-of-words model that is traditionally used for classification tasks involving text-based data. In our application, we utilise a frequency-based approach to model the frequency with which a particular skill is mentioned in relation to a specific job. This model is used as a simple baseline against which to compare our two proposed approaches.
* FastText+NN. As described earlier in our SkillRec system, this model uses the FastText embedding for job title representation (Section II-C), followed by a neural network (NN) for job skill recommendation (Section II-D).
* BERT+NN. Similar to the earlier FastText+NN model, this model uses the BERT embedding for job title representation (Section II-C), followed by the same neural network (NN) for job skill recommendation (Section II-D).
Table I shows the average results of our experiments for the three models in terms of accuracy and F1-score. The results show that our proposed SkillRec system using BERT+NN is the best performing model, with FastText+NN being an extremely close second. BERT has been shown to perform well for a range of natural language processing tasks, from text classification to question answering. Similarly for this task of occupational skill recommendation, BERT as well as FastText perform well with the add-on of a straightforward neural network. Unsurprisingly, Bag-of-Words performs the worst by a large margin due to its simple job title representation approach, which is unable to capture similarity between job titles.
In addition to the experiments measuring accuracy and F1-score of the job skill recommendations, we also performed a qualitative check on the job title representations learnt and the similar job titles and skill sets based on these representations. Table II shows a sample of some job titles and the related job titles and skill sets. This simple analysis shows the effectiveness of our approach as occupations such as developer and software engineer result in relevant skills like Java, Python, fullstack, embedded, etc. Similarly, non-technical roles such as senior director also resulted in similar roles like partner and planner and relevant skills like strategy. For certain roles like chef, the results are more mixed with relevant terms like pastry and kitchen being mentioned but also non-relevant ones like chemist and accountant. This issue highlights a limitation due to our preliminary experiments and limited dataset, which contains more technical roles than non-technical ones.
\begin{table}
\begin{tabular}{l l l} \hline \hline Algorithm & Accuracy & F1-score \\ \hline Bag-of-Words & 0.4768 & 0.2887 \\ FastText+NN & 0.9728 & 0.4931 \\ BERT+NN & 0.9870 & 0.4973 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Experimental Results
\begin{table}
\begin{tabular}{l l} \hline \hline Job Title & Related Jobs and Skills \\ \hline developer & java, python, fullstack, android, software \\ software & stack, java, python, fullstack, android \\ chef & pastry, kitchen, hair, chemist, accountant \\ senior director & director, partner, pacific, planner, strategy \\ software engineer & software, engineer, python, java, embedded \\ \hline \hline \end{tabular}
\end{table} TABLE II: Example Job Title and Relevant Skills
## IV Conclusion and Future Work
In this paper, we proposed the SkillRec system to address the problem of job skill recommendation given a job title. SkillRec comprises four main components for web collection, data pre-processing, job title representation and job skill recommendation. In particular, our preliminary experiments show that an approach that combines job title representation using BERT, along with job skill recommendation using a straightforward feed-forward neural network, works effectively in terms of accuracy and F1-score.
Future enhancements to SkillRec can involve a more dynamic recommendation of job skills to account for new skill sets that may emerge in future or existing skill sets that evolve over time. Similarly, we can also include more context in addition to the job titles, such as the industry, country, job applicant details, etc, for making more contextualised and personalised job skill recommendations.
## V Acknowledgment
This research is funded in part by the Singapore University of Technology and Design under grant RS-MEFAI-00005-R0201.
|
2306.12151 | Light dark matter search with DarkSide-50 | We present the latest results from the search for light dark matter particle
interactions with the DarkSide-50 dual-phase liquid argon time projection
chamber. This analysis, based on the ionization signal only, improves the
existing limits for spin-independent WIMP-nucleon interactions in the $[1.2,
3.6]$ GeV/c$^2$ mass range. The sensitivity is extended down to 40 MeV/c$^2$ by
assuming the Migdal effect, responsible for an additional ionization signal
from the recoiling atom. Finally, we set new constraints to interactions of
dark matter particles with electrons in the final state, namely WIMPs, galactic
axions, dark photons, and sterile neutrinos. | D. Franco | 2023-06-21T09:54:23Z | http://arxiv.org/abs/2306.12151v1 | # Light dark matter search with DarkSide-50
###### Abstract
We present the latest results from the search for light dark matter particle interactions with the DarkSide-50 dual-phase liquid argon time projection chamber. This analysis, based on the ionization signal only, improves the existing limits for spin-independent WIMP-nucleon interactions in the \([1.2,3.6]\) GeV/c\({}^{2}\) mass range. The sensitivity is extended down to 40 MeV/c\({}^{2}\) by assuming the Migdal effect, responsible for an additional ionization signal from the recoiling atom. Finally, we set new constraints to interactions of dark matter particles with electrons in the final state, namely WIMPs, galactic axions, dark photons, and sterile neutrinos.
## 1 Introduction
DarkSide-50 was a dual-phase liquid argon (LAr) time projection chamber (TPC), operational between 2013 and 2019 in the Hall C of the Gran Sasso National Laboratory (LNGS) in Italy, primarily designed to search for spin-independent interactions between nucleons and high-mass WIMPs (\(>\)10 GeV/c\({}^{2}\)). The first data-taking campaign ran from November 2013 to April 2015 with an atmospheric argon (AAr) target, contaminated with cosmogenic \({}^{39}\)Ar with a specific activity of \(\sim\)1 Bq/kg. The target was then replaced with low-radioactivity argon extracted from deep underground (UAr), where activation of cosmogenic isotopes is highly suppressed.
The active mass, 46.4\(\pm\)0.7 kg, is enclosed in a PTFE cylinder on the side and by two arrays of nineteen 3" photomultiplier tubes (PMTs) on the top and bottom. The PMTs observe scintillation and electroluminescence light pulses, induced by particle interactions in LAr. The scintillation pulse originates from atomic excitation and recombination of ion-electron pairs produced by ionization. The electroluminescence one is induced by ionization electrons, which survive the recombination. These are drifted upwards, across the liquid bulk, thanks to a uniform 200 V/cm electric field, shaped by copper rings surrounding the PTFE sidewalls. Ionization electrons are then extracted and drifted in gaseous argon at the top of the TPC, where they produce, on average, \(\sim\)23 photoelectrons per electron extracted in gas.
All the inner surfaces of the TPC are coated with tetraphenyl butadiene (TPB), a wavelength shifter that absorbs 128 nm photons from argon de-excitation and re-emits photons, whose wavelengths are peaked at 420 nm. The TPC, hosted inside a 120 l cryostat, is shielded against neutrons and cosmic muons by surrounding liquid scintillator and water Cherenkov detectors.
The discovery potential of dual-phase LAr TPCs in searching for high-mass (\(>\)10 GeV/c\({}^{2}\)) WIMPs relies on their extraordinary background discrimination. This was proved by DarkSide-50 with the AAr campaign, by rejecting 1.5\(\times\)10\({}^{7}\) electronic recoils from \({}^{39}\)Ar decays with the scintillation pulse shape discrimination [1]. The potential was later confirmed with the UAr dataset, where no event was observed in the acceptance region for WIMPs over 532 days of live search [2].
In 2018, DarkSide-50 extended its physics case to lighter dark matter particles [3, 4] by lowering the energy threshold from \(\sim\)20 keV\({}_{nr}\) (nuclear recoil equivalent) to few hundreds of eV\({}_{nr}\). This
was achieved by using the ionization-based event energy estimator instead of the scintillation one. Single ionization electrons are detected with 27% resolution and with nearly 100% extraction efficiency in gas, to be compared with 16%, the efficiency in detecting a scintillation photon [5]. The drawback is the reduced background discrimination power due to the absence of scintillation pulse shape and volume fiducialization along the electric field, as the drift time is no longer measurable. Nevertheless, this analysis technique, applied for the first time to a liquid argon experiment, led in 2018 to the improvement of existing limits on WIMP-nucleon and WIMP-electron interactions in the few GeV/c\({}^{2}\) and sub-GeV/c\({}^{2}\) mass ranges, respectively [3, 4].
The analysis presented here is based on 653.1 live-days from the UAr campaign from December 12, 2015, to February 24, 2018, nearly twice the exposure used in the analysis published in 2018 [3]. The new analysis profits from an improved data selection and a more accurate background model. Moreover, it benefits from the extensive effort devoted in 2021 to accurately determine the LAr ionization response to nuclear (NR) and electronic (ER) recoils, down to the sub-keV range [6]. The ER ionization response was measured down to \(\sim\)180 eV\({}_{er}\), exploiting \({}^{37}\)Ar and \({}^{39}\)Ar decays naturally present in the early LAr dataset, and extrapolated down to 3 ionization electrons with the Thomas-Imel box model [7]. The ionization response to NRs was measured down to \(\sim\)500 eV\({}_{nr}\), the lowest ever achieved in liquid argon, using _in situ_ neutron calibration sources and external datasets from neutron beam experiments, like ARIS [8] and SCENE [9]. Other elements of the re-analysis are an improved detector response model and a refined treatment of systematics in the statistical analysis.
## 2 Data selection and background model
Events are required for this analysis to be single-scatter, hence selected with a single reconstructed ionization pulse (S2). A set of quality cuts are applied in order to reject unresolved pile-ups of scintillation-ionization or multiple-ionization pulses. Quality cuts rely on pulse shape estimators like the peak and the FWHM of the S2 time profile, and the pulse start time with respect to the offset of the trigger, which requires at least two PMTs above a threshold of 0.6 PE. Single scatter events detected by the outermost ring of PMTs, mostly due to external radioactive contamination, are discarded, reducing the signal acceptance to 41.9%.
The resulting data sample is contaminated by two classes of background events that can be rejected on an event-by-event basis. The first is associated with spurious ionization electrons. These are trapped along their drift by trace impurities or at the liquid surface and released with delays of up to hundreds of milliseconds. This background component is partially suppressed by applying a veto of 20 ms after each DAQ trigger. The second class is associated with events with a large scintillation component and an anomalously low S2 pulse. Their origin is associated with \(\alpha\) particles generated at the TPC walls, where ionization electrons are suppressed, being absorbed by the walls themselves. Some of the scintillation photons may extract electrons from the cathode by the photoelectric effect, inducing low S2 pulses. These events are rejected on the basis of the ionization-to-scintillation ratio, a cut tuned on ER and NR calibration datasets. The final dataset corresponds to an exposure of (12,306 \(\pm\)184) kg d. It contains about 300,000 events in [4, 170]\(N_{e}\) (number of electrons), the region of interest (RoI) for this analysis, equivalent to [0.06, 21] keV\({}_{er}\) ([0.64, 288] keV\({}_{nr}\)) in the ER (NR) energy scale [10]. The impact of each step of the data selection is shown in Figure 1.
The selected data sample is dominated by background events from radioactive contamination in LAr (internal) and in the detector components surrounding the active target (external). Internal background is mainly due to first forbidden unique beta decays of \({}^{39}\)Ar and \({}^{85}\)Kr, whose activities in the RoI and in the fiducial volume are estimated to be (6.5\(\pm\)0.9)\(\times\)10\({}^{-4}\) Bq and (1.7\(\pm\)0.1)\(\times\)10\({}^{-3}\) Bq, respectively. The associated theoretical energy spectra used in this analysis take into account recent calculations of atomic exchange and screening effects [12]. The external background is mostly due to \(\gamma\)s and X-rays from detector components, whose specific
activities were determined via a comprehensive material screening campaign. As the spectral shapes of backgrounds originating from nearby positions resemble each other in the RoI, they were combined into two main components, here labelled as PMTs and cryostat. Backgrounds from radiogenic and cosmogenic neutrons, as well as coherent elastic neutrino-nucleus scattering from solar and atmospheric neutrinos, are negligible and hence not included in this analysis.
The background model is built by simulating each isotope, uniformly distributed in the associated component materials. Decaying particles are tracked over the DarkSide-50 geometry with G4DS, the DarkSide Geant4-based Monte Carlo [5]. The simulation of the detector response accounts for, among other effects, electron captures by impurities along the drift, the dependence of the S2 response on the event radial position, and the energy dependent inefficiencies from the quality cuts.
Figure 1: Ionization electron spectra at different steps of the data selection, after rejection of events outside the fiducial volume and with multiple interactions [11].
Data are analyzed with a binned Profile Likelihood Ratio approach, which accounts for systematics by means of 11 nuisance parameters, classified as "amplitude", _i.e._, acting on the normalization of the background components, and as "shape", accounting for spectral distortions from the ionization response and from uncertainties on \({}^{39}\)Ar and \({}^{85}\)Kr \(\beta\)-decay spectral shapes [10]. The fit of the data with the background-only model is shown in Fig. 2. The excess below the analysis threshold of 4 e\({}^{-}\) is associated with the presence of spurious electrons.
Figure 2: Fit of the background-only model to the DarkSide-50 data, after selection cuts in the [4, 170] Ne range. The associated uncertainty (shaded area), residuals, and the individual contributions from the internal (\({}^{39}\)Ar and \({}^{85}\)Kr) and external radioactive backgrounds (cryostat and PMTs) are also shown [11].
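To make the statistical procedure concrete, the following is a schematic, self-contained sketch (ours, with placeholder numbers rather than DarkSide-50 data) of a binned profile-likelihood-ratio test with a single "amplitude" nuisance parameter scaling the background normalization.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

data = np.array([120, 95, 80, 60, 45])            # observed counts per Ne bin (placeholder)
bkg  = np.array([118., 96., 78., 62., 44.])       # nominal background prediction (placeholder)
sig  = np.array([25., 12., 5., 2., 1.])           # signal template for one WIMP mass (placeholder)

def nll(mu, theta):
    """Poisson likelihood per bin plus a 5% Gaussian constraint on the background scale theta."""
    expected = mu * sig + theta * bkg
    return -poisson.logpmf(data, expected).sum() - norm.logpdf(theta, loc=1.0, scale=0.05)

def profiled_nll(mu):
    # profile (minimize) over the nuisance parameter theta at fixed signal strength mu
    return minimize_scalar(lambda t: nll(mu, t), bounds=(0.5, 1.5), method="bounded").fun

mus = np.linspace(0.0, 5.0, 51)
scan = np.array([profiled_nll(m) for m in mus])
q = 2.0 * (scan - scan.min())                     # profile-likelihood-ratio curve vs. mu
best = mus[scan.argmin()]
above = mus[(q > 2.71) & (mus > best)]            # 2.71 ~ one-sided 90% C.L. threshold
limit = above[0] if above.size else float("nan")
print(f"best-fit mu = {best:.2f}; approximate 90% C.L. upper limit: mu < {limit:.2f}")
```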
## 3 Results and conclusions
The NR signals from WIMP interactions via elastic scattering are modeled with the same Monte Carlo approach used for the background components. The systematic sources affecting the signal model arise from the NR ionization response uncertainty and from the fiducial volume one (1.5%), accounted for in the likelihood by _shape_ and _amplitude_ nuisance parameters, respectively. The only model uncertainty that cannot be parameterized in the likelihood, due to the lack of an accurate description of the process, is related to the fluctuation induced by the ionization quenching effect. This plays a major role as, in addition to fluctuations resulting from the excitons and ionization electrons partition and from the ion-electron recombination, it increases the probability of observing events above the analysis threshold. We therefore assumed two cases. The first is the suppression of quenching fluctuations (NQ), which, although not physical, represents the most conservative modelling with respect to the WIMP search. In the second, fluctuations are assumed to be governed by binomial statistics (QF), between detectable (ionization electrons and excitons) and undetectable quanta (e.g. phonons).
Exclusion limits above 1.2 GeV/c\({}^{2}\) WIMP mass are shown in Fig. 3 (left). Assuming NQ fluctuations, the most conservative model, DarkSide-50 establishes the current best 90% C.L. limits for WIMPs with masses in the range [1.2, 3.6] GeV/c\({}^{2}\) and improves by a factor of \(\sim\)10 at 3 GeV/c\({}^{2}\) the results obtained in 2018 [10]. The limit was then extended down to 40 MeV/c\({}^{2}\) by taking into account the Migdal effect [11], responsible for extra ionization or excitation components emitted because of the instantaneous momentum change, with respect to the atomic electrons, of the nucleus elastically scattered at rest. As shown in Fig. 3 (right), the limit is entirely driven by the Migdal effect for WIMP masses below 0.5 GeV/c\({}^{2}\). This result was confirmed by an alternative analysis based on Bayesian Networks [14].
As for _leptophilic_ dark matter [13], _i.e._, interacting with an electron in the final state, we have established the best direct-detection 90% C.L. limits on dark matter-electron scattering in the [16, 56] MeV/c\({}^{2}\) mass range for a heavy mediator and above 80 MeV/c\({}^{2}\) for a light mediator. We have also placed the first constraints on galactic axion-like particles and dark photons. Moreover, DarkSide-50 is the first direct dark matter direct-detection experiment to set limits on the sterile neutrino mixing angle. All the leptophilic limits are shown in Fig. 4.
These results may be improved in the future by better defining the LAr ionization response and the stochastic model underlying the NR quenching. In addition, they demonstrate the strong potential of dual-phase LAr TPC technology, with promising prospects from DarkSide-20k, the next generation 50-ton LAr TPC under construction at LNGS [15].
Figure 3: Left. DarkSide-50 limits with (QF) and without (NQ) quenching fluctuations, compared to existing 90% C.L. exclusion limits and claimed discovery, and to the neutrino floor for LAr experiments [11]. Right. DarkSide-50 limits with (QF) and without (NQ) quenching fluctuations, extended down to 40 MeV/c\({}^{2}\) WIMP mass by exploiting the Migdal effect [13].
Figure 4: Exclusion limits at 90% C.L. on DM particle interactions with electron final states [10].
## Acknowledgments
This work was partially supported by the UnivEarthS LabEx program (Grants No. ANR-10-LABX-0023 and No. ANR-18-IDEX-0001).
|
2305.04263 | Conformal super Virasoro algebra: matrix model and quantum deformed
algebra | In this paper, we construct the super Virasoro algebra with an arbitrary
conformal dimension $\Delta$ from the generalized $\mathcal{R}(p,q)$-deformed
quantum algebra and investigate the $\mathcal{R}(p,q)$-deformed super Virasoro
algebra with the particular conformal dimension $\Delta=1$. Furthermore, we
perform the R(p,q)-conformal Virasoro n-algebra, the
$\mathcal{R}(p,q)$-conformal super Virasoro n-algebra ($n$ even) and discuss a
toy model for the $\mathcal{R}(p,q)$-conformal Virasoro constraints and
R(p,q)-conformal super Virasoro constraints. Besides, we generalized the notion
of the $\mathcal{R}(p,q)$-elliptic hermitian matrix model with an arbitrary
conformal dimension $\Delta$. Finally, we deduce relevant particular cases
generated by quantum algebras known in the literature. | Fridolin Melong | 2023-05-07T13:00:58Z | http://arxiv.org/abs/2305.04263v1 | # Conformal super Virasoro algebra: matrix model and quantum deformed algebra
###### Abstract.
In this paper, we construct the super Virasoro algebra with an arbitrary conformal dimension \(\Delta\) from the generalized \(\mathcal{R}(p,q)\)-deformed quantum algebra and investigate the \(\mathcal{R}(p,q)\)-deformed super Virasoro algebra with the particular conformal dimension \(\Delta=1.\) Furthermore, we perform the \(\mathcal{R}(p,q)\)-conformal Virasoro \(n\)-algebra, the \(\mathcal{R}(p,q)\)-conformal super Virasoro \(n\)-algebra (\(n\) even) and discuss a toy model for the \(\mathcal{R}(p,q)\)-conformal Virasoro constraints and \(\mathcal{R}(p,q)\)-conformal super Virasoro constraints. Moreover, we generalize the notion of the \(\mathcal{R}(p,q)\)-elliptic hermitian matrix model with an arbitrary conformal dimension \(\Delta\). Finally, we deduce relevant particular cases generated by quantum algebras known in the literature.
Key words and phrases:\(\mathcal{R}(p,q)\)-calculus; quantum algebra; Super Virasoro algebra; conformal; super-Virasoro constraints 2020 Mathematics Subject Classification: 17B37, 17B68, 81R10
###### Contents
* 1 \(\mathcal{R}(p,q)\)-conformal super Virasoro algebra (\(\Delta\neq 0,1\))
* 1.1 Constuction of the \(\mathcal{R}(p,q)\)-conformal generators
* 1.2 Generalized super Virasoro algebra with \(\Delta=1\)
* 2 \(\mathcal{R}(p,q)\)-Virasoro \(n\)-algebra with conformal dimension (\(\Delta\neq 0,1\))
* 2.1 \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra
* 2.2 Another \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra
* 2.3 Conformal Witt \(n\)-algebra and quantum algebras
* 2.4 A toy model for \(\mathcal{R}(p,q)\)-conformal Virasoro constraints
* 3 \(\mathcal{R}(p,q)\)-super Virasoro \(n\)-algebra with conformal dimension (\(\Delta\neq 0,1\))
* 3.1 Another \(\mathcal{R}(p,q)\)-super Witt \(n\)-algebra with conformal dimension (\(\Delta\neq 0,1\))
* 3.2 Conformal super Witt \(n\)-algebra and quantum algebras
* 3.3 A toy model for \(\mathcal{R}(p,q)\)-super Virasoro constraints with conformal dimension (\(\Delta\neq 0,1\))
* 4 Generalized matrix model with conformal dimension (\(\Delta\neq 0,1\))
* 5 Concluding remarks
The \(\mathcal{R}(p,q)\)-deformed numbers [13]:
\[[n]_{\mathcal{R}(p,q)}:=\mathcal{R}(p^{n},q^{n}),\quad n\in\mathbb{N}\cup\{0\}, \tag{2}\]
the \(\mathcal{R}(p,q)\)-deformed factorials
\[[n]!_{\mathcal{R}(p,q)}:=\left\{\begin{array}{ll}1\quad\mbox{for}\quad n=0\\ \\ \mathcal{R}(p,q)\cdots\mathcal{R}(p^{n},q^{n})\quad\mbox{for}\quad n\geq 1, \end{array}\right.\]
and the \(\mathcal{R}(p,q)\)-deformed binomial coefficients
\[\left[\begin{array}{c}m\\ n\end{array}\right]_{\mathcal{R}(p,q)}:=\frac{[m]!_{\mathcal{R}(p,q)}}{[n]!_{ \mathcal{R}(p,q)}[m-n]!_{\mathcal{R}(p,q)}},\quad m,n\in\mathbb{N}\cup\{0\}, \quad m\geq n.\]
Consider the following linear operators defined on \(\mathcal{O}(\mathbb{D}_{R})\) by [14]:
\[Q:\psi \longmapsto Q\psi(z): =\psi(qz),\] \[P:\psi \longmapsto P\psi(z): =\psi(pz),\] \[\mathcal{D}_{p,q}:\psi \longmapsto\mathcal{D}_{p,q}\psi(z): =\frac{\psi(pz)-\psi(qz)}{z(p-q)},\]
and the \(\mathcal{R}(p,q)\)-derivative
\[\mathcal{D}_{\mathcal{R}(p,q)}:=\mathcal{D}_{p,q}\frac{p-q}{P-Q}\mathcal{R}( P,Q)\]
or
\[\mathcal{D}_{\mathcal{R}(p,q)}\psi(z):=\frac{p-q}{p^{P}-q^{Q}}\mathcal{R}(p^{ P},q^{Q})\frac{\psi(pz)-\psi(qz)}{z(p-q)}. \tag{3}\]
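As a small illustrative check (ours, not from the paper), for the particular choice \(\mathcal{R}(x,y)=\frac{x-y}{p-q}\) the derivative above reduces to the \((p,q)\)-derivative, and one can verify symbolically that it acts on monomials as \(\mathcal{D}_{p,q}\,z^{n}=[n]_{p,q}\,z^{n-1}\).

```python
import sympy as sp

p, q, z = sp.symbols('p q z', positive=True)

def pq_number(n):
    # the (p,q)-number [n]_{p,q} = (p^n - q^n)/(p - q)
    return (p**n - q**n) / (p - q)

def D_pq(f):
    # (p,q)-derivative: (f(pz) - f(qz)) / (z (p - q))
    return sp.simplify((f.subs(z, p*z) - f.subs(z, q*z)) / (z * (p - q)))

for n in range(1, 6):
    assert sp.simplify(D_pq(z**n) - pq_number(n) * z**(n - 1)) == 0
print("D_{p,q} z^n = [n]_{p,q} z^(n-1) verified for n = 1..5")
```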
The algebra associated with the \(\mathcal{R}(p,q)-\) deformation is a quantum algebra, denoted \(\mathcal{A}_{\mathcal{R}(p,q)}\), generated by the set of operators \(\{1,A,A^{\dagger},N\}\) satisfying the following commutation relations[14]:
\[AA^{\dagger}=[N+1]_{\mathcal{R}(p,q)},\qquad A^{\dagger}A=[N]_{\mathcal{R}(p,q)},\qquad\left[N,\;A\right]=-A,\qquad\left[N,\;A^{\dagger}\right]=A^{\dagger},\]
with the realization on \(\mathcal{O}(\mathbb{D}_{R})\) given by:
\[A^{\dagger}:=z,\qquad A:=\partial_{\mathcal{R}(p,q)},\qquad N:=z\partial_{z},\]
where \(\partial_{z}:=\frac{\partial}{\partial z}\) is the derivative on \(\mathbb{C}\).
Moreover, the generalizations of the two parameters deformations (\((p,q)\)-deformed) Heisenberg algebras, also called \(\mathcal{R}(p,q)\)-deformed quantum algebras were investigated in [14]. The importance of this work is to generalize \(q\)-and multi-parameter oscillator algebras well known in the physics literature. The \(\mathcal{R}(p,q)\)-deformation has the advantage to the construction of quantum groups and algebras.[5, 11, 12, 30]. In the same idea, the \(\mathcal{R}(p,q)\)-deformed conformal Virasoro algebra was
constructed, the \(\mathcal{R}(p,q)\)-deformed Korteweg-de Vries equation for a conformal dimension \(\Delta=1\) was derived, and the energy-momentum tensor from the \(\mathcal{R}(p,q)\)-deformed quantum algebra for the conformal dimension \(\Delta=2\) was presented by Hounkonnou and Melong [16].
Recently, the generalizations of Witt and Virasoro algebras were performed, and the corresponding Korteweg-de Vries equations from the \(\mathcal{R}(p,q)\)-deformed quantum algebras were derived. Related relevant properties were investigated and discussed. Also, the \(\mathcal{R}(p,q)\)-deformed Witt \(n\)-algebra was constructed, and the Virasoro constraints for a toy model, which plays an important role in the study of matrix models, was presented [17]. Following these works, the super Virasoro \(n\)-algebras from the generalized quantum deformed algebras were constructed by Melong [25]. The generalization of the Heisenberg-Virasoro algebra and matrix models from the \(\mathcal{R}(p,q)\)-deformed quantum algebra were investigated by Melong and Wulkenhaar [26]. Section 4 describes the characterization of the \(\mathcal{R}(p,q)\)-conformal super Virasoro algebra, the \(\mathcal{R}(p,q)\)-conformal super Virasoro \(n\)-algebra and a toy model for the \(\mathcal{R}(p,q)\)-conformal super Virasoro constraints. Particular cases are deduced. In Section 5, the conformal elliptic hermitian matrix model from the \(\mathcal{R}(p,q)\)-quantum algebra is obtained. We end with concluding remarks in Section 6.
## 1. \(\mathcal{R}(p,q)\)-conformal super Virasoro algebra \((\Delta\neq 0,1)\)
In this section, we derive an \(\mathcal{R}(p,q)\)-extension of a conformal algebra with an arbitrary conformal dimension \(\Delta\)[9]. The \(\mathcal{R}(p,q)\)-generators are computed using the analogue of the \(\mathcal{R}(p,q)\)-Leibniz rule. The \(\mathcal{R}(p,q)\)-super Witt and super Virasoro algebras are built and discussed. A result deduced by Chaichian _et al_ in [4] is recovered in a specific case.
### Construction of the \(\mathcal{R}(p,q)\)-conformal generators
Let \(\mathcal{B}=\mathcal{B}_{0}\oplus\mathcal{B}_{1}\) be the super-commutative associative superalgebra such that \(\mathcal{B}_{0}=\mathbb{C}\big{[}z,z^{-1}\big{]}\) and \(\mathcal{B}_{1}=\theta\,\mathcal{B}_{0}\), where \(\theta\) is the Grassmann variable with \(\theta^{2}=0\) [31].
We define the algebra endomorphism \(\sigma\) on \(\mathcal{B}\) as follows [25]:
\[\sigma(t^{n}):=\tau_{2}^{n}\,t^{n}\quad\text{and}\quad\sigma(\theta):=\tau_{2} \,\theta, \tag{4}\]
where \((\tau_{i})_{i\in\{1,2\}}\) are functions depending on the parameters \(p\) and \(q\).
The two linear maps \(\partial_{t}\) and \(\partial_{\theta}\) on \(\mathcal{B}\) are defined by:
\[\left\{\begin{array}{l}\partial_{t}(t^{n}):=[n]_{\mathcal{R}(p,q)}\,t^{n}, \ \ \partial_{t}(\theta\,t^{n}):=[n]_{\mathcal{R}(p,q)}\,\theta\,t^{n},\\ \\ \partial_{\theta}(t^{n}):=0,\ \ \ \partial_{\theta}(\theta\,t^{n}):=\tau_{2}^{n}\,t ^{n}.\end{array}\right.\]
We assume that the linear map \(\bar{\Delta}=\partial_{t}+\theta\partial_{\theta}\) on \(\mathcal{B}\) is an even \(\sigma\)-derivation. Then:
\[\bar{\Delta}(x\,y) =\bar{\Delta}(x)\,y+\sigma(x)\bar{\Delta}(y),\] \[\bar{\Delta}(t^{n}) =[n]_{\mathcal{R}(p,q)}\,t^{n}\quad\text{and}\quad\bar{\Delta}( \theta\,t^{n})=\left([n]_{\mathcal{R}(p,q)}+\tau_{2}^{n}\right)\theta\,t^{n}. \tag{5}\]
We consider \(\phi_{\Delta}(z)\) be an arbitrary field with conformal dimension \(\Delta.\) Thus, \(\phi_{\Delta}(z)\) has the transformation property:
\[\phi_{\Delta}(z)\longrightarrow\big{(}\psi^{{}^{\prime}}(z)\big{)}^{\Delta}\, \phi_{\Delta}\big{(}\psi(z)\big{)}, \tag{6}\]
where \(\psi(z)\) is a function of the conformal transformation, "\({}^{\prime}\)" is the derivative and \(\Delta\) is an arbitrary number.
The relation (6) takes the infinitesimal form:
\[\psi(z) =z+\epsilon(z),\] \[\delta\big{(}\phi_{\Delta}(z)\big{)} =\epsilon^{1-\Delta}\,\partial\,\epsilon^{\Delta}\,\phi_{\Delta}( z),\]
where \(\partial\) is the classical derivative.
Taking \(\epsilon=-z^{n+1}\), we obtain:
**Definition 1**.: _The \(\mathcal{R}(p,q)\)-conformal super algebra is generated by bosonic and fermionic operators \(\mathcal{L}_{n}^{\Delta}\) of parity \(0\) and \(\mathcal{G}_{n}^{\Delta}\) of parity \(1\) defined as follows:_
\[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z) :=-z^{(n+1)(1-\Delta)}\,\bar{\Delta}(z^{(n+1)\Delta}\phi_{\Delta}( z)),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z) :=-\theta z^{(n+1)(1-\Delta)}\,\bar{\Delta}(z^{(n+1)\Delta}\phi_{ \Delta}(z)).\]
Note that, taking \(\Delta=0\), we recover the \(\mathcal{R}(p,q)\)-generators given in [25]:
\[\mathcal{L}_{n}\,\phi(z)=-z^{n}\,\bar{\Delta}\,\phi(z)\quad\text{and}\quad \mathcal{G}_{n}\,\phi(z)=-\theta z^{n}\,\bar{\Delta}\,\phi(z).\]
We assume that for the field \(\phi_{\Delta}(z)\), the linear \(\bar{\Delta}\) can be written in the form of the \(\mathcal{R}(p,q)\)-number (2) as follows:
\[\bar{\Delta}\,\phi_{\Delta}(z):=\frac{1}{z}[z\partial_{z}]_{\mathcal{R}(p,q)} \,\phi_{\Delta}(z). \tag{7}\]
Then,
**Definition 2**.: _The \(\mathcal{R}(p,q)\)-conformal super generators \(\mathcal{L}_{n}^{(\Delta)}\) and \(\mathcal{G}_{n}^{(\Delta)}\) are taking the following form:_
\[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z) =-[z\partial_{z}+\Delta(n+1)-n]_{\mathcal{R}(p,q)}\,z^{n}\,\phi_ {\Delta}(z), \tag{8}\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z) =-\theta\,[z\partial_{z}+\Delta(n+1)-n]_{\mathcal{R}(p,q)}\,z^{n }\,\phi_{\Delta}(z). \tag{9}\]
**Proposition 3**.: _For the conformal dimenson \((\Delta\neq 0,1),\) the \(\mathcal{R}(p,q)\)-conformal super generators \(\mathcal{L}_{n}^{(\Delta)}\) and \(\mathcal{G}_{n}^{(\Delta)}\) satisfies the following algebraic structures:_
\[[\mathcal{L}_{n}^{(\Delta)},\mathcal{L}_{m}^{(\Delta)}]_{X_{ \Delta},Y_{\Delta}} =(p-q)^{-1}\bigg{\{}p^{N_{\Delta}}(X_{\Delta}p^{-n}-Y_{\Delta}p^{-m})\] \[-q^{N_{\Delta}}(X_{\Delta}q^{-n}-Y_{\Delta}q^{-m})\bigg{\}} \mathcal{L}_{n+m}^{(\Delta)}, \tag{10}\] \[[\mathcal{L}_{n}^{(\Delta)},\mathcal{G}_{m}^{(\Delta)}]_{\tilde{X}_ {\Delta},\tilde{Y}_{\Delta}} =(p-q)^{-1}\bigg{\{}p^{N_{\Delta}}(\tilde{X}_{\Delta}p^{-n}- \tilde{Y}_{\Delta}p^{-m})\] \[-q^{N_{\Delta}}(\tilde{X}_{\Delta}q^{-n}-\tilde{Y}_{\Delta}q^{-m })\bigg{\}}\mathcal{G}_{n+m}^{(\Delta)}, \tag{11}\]
_and other commutators are zeros, with_
\[\left\{\begin{array}{l}X_{\Delta}=(pq)^{n}\frac{[n(\Delta-1)]_{p,q}[\Delta m ]_{p,q}}{[n]_{p,q}[m]_{p,q}}\frac{f_{n+m}^{\Delta}(p,q)}{f_{n}^{\Delta}(p,q)f_ {m}^{\Delta}(p,q)}\\ \tilde{X}_{\Delta}=X_{\Delta}\\ Y_{\Delta}=(pq)^{m}\frac{[m(\Delta-1)]_{p,q}[\Delta n]_{p,q}}{[n]_{p,q}[m]_{p,q }}\frac{f_{n+m}^{\Delta}(p,q)}{f_{n}^{\Delta}(p,q)f_{m}^{\Delta}(p,q)}\\ \\ \tilde{Y}_{\Delta}=(pq)^{m+1}\frac{[(m+1)(\Delta-1)]_{p,q}[\Delta n]_{p,q}}{[n ]_{p,q}[m+1]_{p,q}}\frac{f_{n+m+1}^{\Delta}(p,q)}{f_{n}^{\Delta}(p,q)f_{m+1}^ {\Delta}(p,q)}\\ \\ f_{n}^{\Delta}(p,q)=\frac{p-q}{p^{\Delta(n+1)}-q^{\Delta(n+1)}}\mathcal{R}(p^ {\Delta(n+1)},q^{\Delta(n+1)})\\ N_{\Delta}=z\partial_{z}+\Delta.\end{array}\right. \tag{12}\]
Proof.: By a straightforward calculation.
**Remark 4**.: _The conformal super generators and algebraic strucutres corresponding to quantum algebras in the literature are deduced as follows:_
* _Taking_ \(\mathcal{R}(x)=\frac{x-x^{-1}}{q-q^{-1}},\) _we obtain the_ \(q\)_-conformal super generators associated to the_ **Biedenharn-Macfarlane algebra**_ _[_3, 24_]_ \[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z)=-[z\partial_{z}+\Delta(n+1)-n]_{q} \,z^{n}\,\phi_{\Delta}(z),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)=-\theta\,[z\partial_{z}+\Delta(n+ 1)-n]_{q}\,z^{n}\,\phi_{\Delta}(z),\] _with_ \[[n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}}\] _obeying the algebraic structures(_10_) and (_11_) with the coefficients_ \[X_{\Delta}=\frac{[n(\Delta-1)]_{q}[\Delta m]_{q}}{[n]_{q}[m]_{q}},\quad\tilde {X}_{\Delta}=X_{\Delta}\] (13) _and_ \[Y_{\Delta}=\frac{[m(\Delta-1)]_{q}[\Delta n]_{q}}{[n]_{q}[m]_{q}},\quad\tilde {Y}_{\Delta}=\frac{[(m+1)(\Delta-1)]_{q}[\Delta n]_{q}}{[n]_{q}[m+1]_{q}}.\] (14)
2. _Putting_ \(\mathcal{R}(x,y)=\frac{x-y}{p-q},\) _we obtain the_ \((p,q)\)_-super conformal generators corresponding to the deformed_ **Jagannathan-Srinivasa algebra** _[_21_]__:_ \[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z)=-[z\partial_{z}+\Delta(n+1)-n]_{p,q }\,z^{n}\,\phi_{\Delta}(z),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)=-\theta\,[z\partial_{z}+\Delta(n+1)-n] _{p,q}\,z^{n}\,\phi_{\Delta}(z),\] _satisfying the following algebraic structures (_10_) and (_11_) with the coefficients_ \[X_{\Delta}=(pq)^{n}\frac{[n(\Delta-1)]_{p,q}[\Delta m]_{p,q}}{[n]_{p,q}[m]_{p,q }},\quad\tilde{X}_{\Delta}=X_{\Delta}\] (15) _and_ \[Y_{\Delta}=\frac{[m(\Delta-1)]_{p,q}[\Delta n]_{p,q}}{(pq)^{-m}[n]_{p,q}[m]_{p,q}},\quad\tilde{Y}_{\Delta}=\frac{[(m+1)(\Delta-1)]_{p,q}[\Delta n]_{p,q}}{( pq)^{-m-1}[n]_{p,q}[m+1]_{p,q}}.\] (16)
3. _For the choice_ \(\mathcal{R}(u,v)=(p^{-1}-q)^{-1}u^{-1}(1-uv),\) _we deduce the conformal super generators induced by the_ **Chakrabarti-Jagannathan algebra** _[_10_]__:_ \[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z)=-[z\partial_{z}+\Delta(n+1)-n]_{p^ {-1},q}\,z^{n}\,\phi_{\Delta}(z),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)=-\theta\,[z\partial_{z}+\Delta(n+1 )-n]_{p^{-1},q}\,z^{n}\,\phi_{\Delta}(z),\] _obeying the algebraic structures (_10_) and (_11_) with_ \[X_{\Delta}=q^{n}\frac{[n(\Delta-1)]_{p^{-1},q}[\Delta m]_{p^{-1},q}}{p^{n}\,[ n]_{p^{-1},q}[m]_{p^{-1},q}},\quad\tilde{X}_{\Delta}=X_{\Delta},\] (17) \[Y_{\Delta}=\frac{q^{m}\,[m(\Delta-1)]_{p^{-1},q}[\Delta n]_{p^{-1},q}}{p^{m}\,[ n]_{p^{-1},q}[m]_{p^{-1},q}},\] (18) _and_ \[\tilde{Y}_{\Delta}=\frac{q^{m+1}[(m+1)(\Delta-1)]_{p^{-1},q}[\Delta n]_{p^{-1},q}}{p^{m+1}[n]_{p^{-1},q}[m+1]_{p^{-1},q}}.\] (19)
4. _Putting_ \(\mathcal{R}(s,t)=((q-p^{-1})t)^{-1}(st-1)\)_, we obtain the conformal super generators associated to the_ **generalized \(q\)-Quesne** _deformed algebra_ _[_18_]__:_ \[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z)=-[z\partial_{z}+\Delta(n+1 )-n]_{p,q}^{Q}\,z^{n}\,\phi_{\Delta}(z),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)=-\theta\,[z\partial_{z}+\Delta(n+1 )-n]_{p,q}^{Q}\,z^{n}\,\phi_{\Delta}(z),\] _satisfying the commutation relations (_10_) and (_11_) with_ \[X_{\Delta}=p^{n}\frac{[n(\Delta-1)]_{p,q}^{Q}[\Delta m]_{p,q}^{Q}}{q^{n}\,[n]_ {p,q}^{Q}[m]_{p,q}^{Q}},\quad Y_{\Delta}=\frac{p^{m}\,[m(\Delta-1)]_{p,q}^{Q}[ \Delta n]_{p,q}^{Q}}{q^{m}[n]_{p,q}^{Q}[m]_{p,q}^{Q}}\] (20) _and_ \[\tilde{X}_{\Delta}=X_{\Delta},\quad\tilde{Y}_{\Delta}=\frac{p^{m+1}[(m+1)( \Delta-1)]_{p,q}^{Q}[\Delta n]_{p,q}^{Q}}{q^{m+1}[n]_{p,q}^{Q}[m+1]_{p,q}^{Q}}.\] (21)
* _Taking_ \(\mathcal{R}(x,y)=g(p,q)\frac{y^{\nu}}{x^{\mu}}\frac{xy-1}{(q-p^{-1})y}\)_, we obtain the conformal super generators induced by the_ _Hounkonnou-Ngompe generalized algebra__[_19_]__:_ \[\mathcal{L}_{n}^{(\Delta)}\phi_{\Delta}(z)=-[z\partial_{z}+ \Delta(n+1)-n]_{p,q,g}^{\mu,\nu}\,z^{n}\,\phi_{\Delta}(z),\] \[\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)=-\theta\,[z\partial_{z}+ \Delta(n+1)-n]_{p,q,g}^{\mu,\nu}\,z^{n}\,\phi_{\Delta}(z),\] _obeying the commutation relations (_10_) and (_11_) with_ \[X_{\Delta}=(pq)^{n}\frac{[n(\Delta-1)]_{p,q,g}^{\mu,\nu}[\Delta m]_{p,q,g}^{ \mu,\nu}}{[n]_{p,q,g}^{\mu,\nu}[m]_{p,q,g}^{\mu,\nu}},\quad\tilde{X}_{\Delta} =X_{\Delta}\] (22) _and_ \[Y_{\Delta}=\frac{[m(\Delta-1)]_{p,q,g}^{\mu,\nu}[\Delta n]_{p,q,g}^{\mu,\nu} }{(pq)^{-m}[n]_{p,q,g}^{\mu,\nu}[m]_{p,q,g}^{\mu,\nu}},\quad\tilde{Y}_{\Delta }=\frac{[(m+1)(\Delta-1)]_{p,q,g}^{\mu,\nu}[\Delta n]_{p,q,g}^{\mu,\nu}}{(pq)^ {-m-1}[n]_{p,q,g}^{\mu,\nu}[m+1]_{p,q,g}^{\mu,\nu}}.\] (23)
We redefine the parameters \(p\) and \(q\) as follows:
\[\Theta=\sqrt{\frac{p}{q}}\quad\text{and}\quad\lambda=\sqrt{\frac{1}{pq}},\]
with
\[[n]=\lambda^{1-n}[n]_{\Theta}\quad\text{and}\quad[n]_{\Theta}=\big{(}\Theta- \Theta^{-1}\big{)}^{-1}\big{(}\Theta^{n}-\Theta^{-n}\big{)}.\]
Then, the algebraic structures (10) and (11) may be mapped to the \(\mathcal{R}(\Theta)\)-super Virasoro algebra, which is the generalization of the \(\Theta\)-Virasoro algebra presented by Chaichian et _al_[4].
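A quick symbolic check (ours) of this reparametrization, for the Jagannathan-Srinivasa case, confirms that \(\lambda^{1-n}[n]_{\Theta}\) reproduces \([n]_{p,q}=\frac{p^{n}-q^{n}}{p-q}\); writing \(p=s^{2}\), \(q=t^{2}\) keeps the computation radical-free.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
p, q = s**2, t**2                 # write p, q as squares so Theta and lambda are radical-free
Theta, lam = s / t, 1 / (s * t)   # Theta = sqrt(p/q), lambda = sqrt(1/(p q))

def theta_number(n):
    return (Theta**n - Theta**(-n)) / (Theta - Theta**(-1))

for n in range(1, 7):
    lhs = lam**(1 - n) * theta_number(n)
    rhs = (p**n - q**n) / (p - q)
    assert sp.simplify(lhs - rhs) == 0
print("lambda^(1-n) [n]_Theta = [n]_{p,q} verified for n = 1..6")
```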
**Proposition 5**.: _The super generators given as follows:_
\[\mathbb{L}_{n}\phi_{\Delta}(z)=\lambda^{N_{\Delta}-1}\,\mathcal{L}_{n}^{( \Delta)}\phi_{\Delta}(z)\quad\text{and}\quad\mathbb{G}_{n}\phi_{\Delta}(z)= \lambda^{N_{\Delta}-1}\,\mathcal{G}_{n}^{(\Delta)}\phi_{\Delta}(z)\]
_satisfies the following commutation relations:_
\[[\mathbb{L}_{n},\mathbb{L}_{m}]_{X,Y} =K\left\{\Theta^{N_{\Delta}}(X\Theta^{-n}-Y\Theta^{-m})-\Theta^{-N_{\Delta}}(X\Theta^{n}-Y\Theta^{m})\right\}\mathbb{L}_{n+m},\] \[[\mathbb{L}_{n},\mathbb{G}_{m}]_{\tilde{X},\tilde{Y}} =K\left\{\Theta^{N_{\Delta}}(\tilde{X}\Theta^{-n}-\tilde{Y}\Theta^{-m})-\Theta^{-N_{\Delta}}(\tilde{X}\Theta^{n}-\tilde{Y}\Theta^{m})\right\}\mathbb{G}_{n+m},\]
_and other commutation relations are zeros, where_
\[\left\{\begin{array}{l}X=\frac{[n(\Delta-1)]_{\Theta}[\Delta m]_{\Theta}}{[n ]_{\Theta}[m]_{\Theta}}\frac{f_{n+m}^{\Delta}(\Theta)}{f_{n}^{\Delta}(\Theta) f_{m}^{\Delta}(\Theta)}\\ \tilde{X}=X\\ Y=\frac{[m(\Delta-1)]_{\Theta}[\Delta n]_{\Theta}}{[n]_{\Theta}[m]_{\Theta}} \frac{f_{n+m}^{\Delta}(\Theta)}{f_{n}^{\Delta}(\Theta)f_{m}^{\Delta}(\Theta)} \\ \tilde{Y}=\frac{[(m+1)(\Delta-1)]_{\Theta}[\Delta n]_{\Theta}}{[n]_{\Theta}[m+1 ]_{\Theta}}\frac{f_{n+m+1}^{\Delta}(\Theta)}{f_{n}^{\Delta}(\Theta)f_{m+1}^{ \Delta}(\Theta)}\\ K=\big{(}\Theta-\Theta^{-1}\big{)}^{-1}.\end{array}\right.\]
Now, we determine the \(\mathcal{R}(p,q)\)-conformal super Witt algebra. The result is presented in the lemma bellow:
**Lemma 6**.: _The \(\mathcal{R}(p,q)\)-super Witt algebra with conformal dimension \((\Delta\neq 0,1)\) is driven by the following commutation relations:_
\[\big{[}\mathcal{L}_{n}^{(\Delta)},\mathcal{L}_{m}^{(\Delta)}\big{]} _{x,y}\phi_{\Delta}(z) =[m-n]_{\mathcal{R}(p,q)}\,\mathcal{L}_{n+m}^{(\Delta)}\phi_{ \Delta}(z), \tag{24}\] \[\big{[}\mathcal{L}_{n}^{(\Delta)},\mathcal{G}_{m}^{(\Delta)} \big{]}_{\tilde{x},\tilde{y}}\phi_{\Delta}(z) =\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)} \,\mathcal{G}_{n+m}^{(\Delta)}\phi_{\Delta}(z),\] (25) \[\big{[}\mathcal{G}_{n}^{(\Delta)},\mathcal{G}_{m}^{(\Delta)}\big{]} =0, \tag{26}\]
_where_
\[\left\{\begin{array}{l}x=(p-q)\,[n-m]_{\mathcal{R}(p,q)}\,\chi_{ nm}(p,q),\\ y=(p-q)\,[n-m]_{\mathcal{R}(p,q)}\,\chi_{mn}(p,q),\\ \tilde{x}=(p-q)\,\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)} \,\Lambda_{nm}(p,q),\\ \tilde{y}=(p-q)\,\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)} \,\Lambda_{mn}(p,q),\end{array}\right. \tag{27}\]
\[\chi_{mn}(p,q) =\left\{p^{N_{\Delta}}(p^{-n}-\frac{[m(\Delta-1)]_{p,q}[\Delta n] _{p,q}}{[n(\Delta-1)]_{p,q}[\Delta m]_{p,q}}\frac{q^{m-n}}{p^{n}})\right.\] \[-q^{N_{\Delta}}(q^{-n}-\frac{[m(\Delta-1)]_{p,q}[\Delta n]_{p,q}} {[n(\Delta-1)]_{p,q}[\Delta m]_{p,q}}\frac{p^{m-n}}{q^{n}})\bigg{\}}^{-1}\]
_and_
\[\Lambda_{mn}(p,q) =\left\{p^{N_{\Delta}}\big{(}p^{-n}-\frac{[(m+1)(\Delta-1)]_{p,q} [\Delta n]_{p,q}[m]_{p,q}}{[m+1]_{p,q}[n(\Delta-1)]_{p,q}[\Delta m]_{p,q}} \frac{pq^{m+1}}{(pq)^{n}}\big{)}\right.\] \[-q^{N_{\Delta}}(q^{-n}-\frac{[(m+1)(\Delta-1)]_{p,q}[\Delta n]_{ p,q}[m]_{p,q}}{[m+1]_{p,q}[n(\Delta-1)]_{p,q}[\Delta m]_{p,q}}\frac{qp^{m+1}}{(pq) ^{n}})\bigg{\}}^{-1}.\]
Proof.: We use the algebraic structure (10) and (11), setting
\[\chi_{mn}(p,q) =\left\{p^{N_{\Delta}}(p^{-n}-\frac{Y_{\Delta}}{X_{\Delta}}p^{-m} )-q^{N_{\Delta}}(q^{-n}-\frac{Y_{\Delta}}{X_{\Delta}}q^{-m})\right\}^{-1}\]
and
\[\Lambda_{mn}(p,q) =\left\{p^{N_{\Delta}}(p^{-n}-\frac{\tilde{Y}_{\Delta}}{\tilde{X }_{\Delta}}p^{-m})-q^{N_{\Delta}}(q^{-n}-\frac{\tilde{Y}_{\Delta}}{\tilde{X }_{\Delta}}q^{-m})\right\}^{-1}\]
Setting
\[\rho(\mathcal{L}_{n}^{(\Delta)})=\frac{[2\,n]_{\mathcal{R}(p,q)}}{[n]_{ \mathcal{R}(p,q)}}\,\mathcal{L}_{n}^{(\Delta)}\quad\text{and}\quad\rho( \mathcal{G}_{n}^{(\Delta)})=\frac{[2(n+1)]_{\mathcal{R}(p,q)}}{[n+1]_{\mathcal{ R}(p,q)}}\,\mathcal{G}_{n}^{(\Delta)}, \tag{28}\]
the \(\mathcal{R}(p,q)\)-deformed super Jacobi identity takes the form:
\[\sum_{(i,j,l)\in\mathcal{C}(n,m,k)}(-1)^{|B_{i}||B_{l}|}\Big{[}\rho(B_{i}), \Big{[}B_{j},B_{l}\Big{]}_{\mathcal{R}(p,q)}\Big{]}_{\mathcal{R}(p,q)}=0, \tag{29}\]
where \(\mathcal{C}(n,m,k)\) denotes the cyclic permutation of \((n,m,k)\) and the symbol \(|B|\) appearing in the exponent of \((-1)\) is to be understood as the parity of \(B\).
**Proposition 7**.: _The \(\mathcal{R}(p,q)\)-super Virasoro algebra with conformal dimension \((\Delta\neq 0,1)\) is generated by the following commutation relations:_
\[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{\mathcal{R}(p,q)} \,L_{n+m}^{(\Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(p,q), \tag{30}\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1] _{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+ m+1,0}\,C_{n}^{\Delta}(p,q),\] (31) \[[G_{n}^{(\Delta)},G_{m}^{(\Delta)}]=0, \tag{32}\]
_where \(C_{n}^{\Delta}(p,q)\) is the central charge provided by [16]:_
\[C_{n}^{\Delta}(p,q)=C(p,q)\frac{[n]_{\mathcal{R}(p,q)}}{[2n]_{\mathcal{R}(p,q )}}(pq)^{\frac{N_{\Delta}}{2}+n}[n-1]_{\mathcal{R}(p,q)}[n]_{\mathcal{R}(p,q) }[n+1]_{\mathcal{R}(p,q)}, \tag{33}\]
\(x,\,y,\,\tilde{x},\,\tilde{y}\) _are given by (_27_), and \(C(p,q)\) is a function of \((p,q)\)._
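The prefactor \([n]_{\mathcal{R}(p,q)}/[2n]_{\mathcal{R}(p,q)}\) appearing in (33) reduces, in the \((p,q)\)-case, to \((p^{n}+q^{n})^{-1}\), which is the form used in (34) below. A minimal SymPy sketch of this reduction (an illustration, not taken from the source):

```python
import sympy as sp

p, q, n = sp.symbols('p q n', positive=True)

# numerator identity behind [2n]_{p,q} = (p^n + q^n) [n]_{p,q}:
check = sp.expand(p**(2*n) - q**(2*n) - (p**n + q**n) * (p**n - q**n))
print(check)   # prints 0, hence [n]_{p,q}/[2n]_{p,q} = 1/(p^n + q^n)
```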
**Remark 8**.: _Particular cases also deserve attention. We deduce the conformal super Virasoro algebras induced by quantum algebras known in the literature._
1. _The conformal super Virasoro algebra corresponding to the_ **Biedenharn-Macfarlane algebra**_[_3, 24_]__:_ \[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{q}\,L_{n+m}^{(\Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(q),\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1]_{q}-[n]_{q}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+m+1,0}\,C_{n}^{\Delta}(q),\] _and other commutator is zero, with_ \[\left\{\begin{array}{l}x=\left(q^{n-m}-q^{m-n}\right)\chi_{nm}(q),\\ y=\left(q^{n-m}-q^{m-n}\right)\chi_{mn}(q),\\ \tilde{x}=\left(q^{m+1}-q^{n}+q^{-n}-q^{-m-1}\right)\Lambda_{nm}(q),\\ \tilde{y}=\left(q^{m+1}-q^{n}+q^{-n}-q^{-m-1}\right)\Lambda_{mn}(q),\end{array}\right.\] \[C_{n}^{\Delta}(q)=C(q)(q^{n}+q^{-n})^{-1}[n-1]_{q}[n]_{q}[n+1]_{q}.\]
2. _The conformal super Virasoro algebra from the_ _Jagannathan-Srinivasa algebra_ _[21]_ _is driven by:_ \[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{p,q}\,L_{n+m}^{( \Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(p,q),\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1]_{p,q}-[ n]_{p,q}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+m+1,0}\,C_{n}^{\Delta}(p,q),\] _and other commutator is zero, with_ \[\left\{\begin{array}{l}x=\left(p^{n-m}-q^{n-m}\right)\chi_{nm}(p,q),\\ y=\left(p^{n-m}-q^{n-m}\right)\chi_{mn}(p,q),\\ \tilde{x}=\left(p^{m+1}-p^{n}+q^{n}-q^{m+1}\right)\Lambda_{nm}(p,q),\\ \tilde{y}=\left(p^{m+1}-p^{n}+q^{n}-q^{m+1}\right)\Lambda_{mn}(p,q),\end{array}\right.\] \[C_{n}^{\Delta}(p,q)=C(p,q)(pq)^{\frac{N_{\Delta}}{2}+n}(p^{n}+q^{n })^{-1}[n-1]_{p,q}[n]_{p,q}[n+1]_{p,q}.\] (34)
3. _The conformal super Virasoro algebra induced by the_ _Chakrabarti-Jagannathan algebra_ \([10]\)_:_ \[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{p^{-1},q}\,L_{n+m}^{(\Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(p^{-1},q),\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1]_{p^{-1},q}-[n]_{p^{-1},q}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+m+1,0}\,C_{n}^{\Delta}(p^{-1},q),\] _and other commutator is zero, with_ \[\left\{\begin{array}{l}x=(p-q)\,[n-m]_{p^{-1},q}\,\chi_{nm}(p^{-1},q),\\ y=(p-q)\,[n-m]_{p^{-1},q}\,\chi_{mn}(p^{-1},q),\\ \tilde{x}=(p-q)\,\big{(}[m+1]_{p^{-1},q}-[n]_{p^{-1},q}\big{)}\,\Lambda_{nm}(p^{-1},q),\\ \tilde{y}=(p-q)\,\big{(}[m+1]_{p^{-1},q}-[n]_{p^{-1},q}\big{)}\,\Lambda_{mn}(p^{-1},q),\end{array}\right.\] \[C_{n}^{\Delta}(p^{-1},q)=C(p,q)(\frac{q}{p})^{\frac{N_{\Delta}}{2}+n}(p^{-n}+q^{n})^{-1}[n-1]_{p^{-1},q}[n]_{p^{-1},q}[n+1]_{p^{-1},q}.\]
4. _The conformal super Virasoro algebra induced by the_ _Generalized \(q\)-Quesne algebra_ \([18]\)_:_ \[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{p,q}^{Q}\,L_{n+m}^{(\Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(p,q),\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1]_{p,q}^{Q}-[n]_{p,q}^{Q}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+m+1,0}\,C_{n}^{\Delta}(p,q),\] _and other commutator is zero, with_ \[\left\{\begin{array}{l}x=(p-q)\,[n-m]_{p,q}^{Q}\,\chi_{nm}^{Q}(p,q),\\ y=(p-q)\,[n-m]_{p,q}^{Q}\,\chi_{mn}^{Q}(p,q),\\ \tilde{x}=(p-q)\,\big{(}[m+1]_{p,q}^{Q}-[n]_{p,q}^{Q}\big{)}\,\Lambda_{nm}^{Q}(p,q),\\ \tilde{y}=(p-q)\,\big{(}[m+1]_{p,q}^{Q}-[n]_{p,q}^{Q}\big{)}\,\Lambda_{mn}^{Q}(p,q),\end{array}\right.\] \[C_{n}^{\Delta}(p,q)=C(p,q)(\frac{p}{q})^{\frac{N_{\Delta}}{2}+n}(p^{n}+q^{-n})^{-1}[n-1]_{p,q}^{Q}[n]_{p,q}^{Q}[n+1]_{p,q}^{Q}.\] (35)
5. _The conformal super Virasoro algebra generated by the_ _Hounkonnou-Ngompe generalized algebra_ \([19]\)_:_ \[[L_{n}^{(\Delta)},L_{m}^{(\Delta)}]_{x,y}=[n-m]_{p,q,g}^{\mu,\nu}\,L_{n+m}^{(\Delta)}+\delta_{n+m,0}\,C_{n}^{\Delta}(p,q),\] \[[L_{n}^{(\Delta)},G_{m}^{(\Delta)}]_{\hat{x},\hat{y}}=\big{(}[m+1]_{p,q,g}^{\mu,\nu}-[n]_{p,q,g}^{\mu,\nu}\big{)}G_{n+m}^{(\Delta)}+\delta_{n+m+1,0}\,C_{n}^{\Delta}(p,q),\] _and other commutator is zero, with_ \[\left\{\begin{array}{l}x=(p-q)\,[n-m]_{p,q,g}^{\mu,\nu}\,\chi_{nm}^{\mu,\nu}(p,q,g),\\ y=(p-q)\,[n-m]_{p,q,g}^{\mu,\nu}\,\chi_{mn}^{\mu,\nu}(p,q,g),\\ \tilde{x}=(p-q)\,\big{(}[m+1]_{p,q,g}^{\mu,\nu}-[n]_{p,q,g}^{\mu,\nu}\big{)}\,\Lambda_{nm}^{\mu,\nu}(p,q,g),\\ \tilde{y}=(p-q)\,\big{(}[m+1]_{p,q,g}^{\mu,\nu}-[n]_{p,q,g}^{\mu,\nu}\big{)}\,\Lambda_{mn}^{\mu,\nu}(p,q,g),\end{array}\right.\] \[C_{n}^{\Delta}(p,q)=C(p,q)(pq)^{\frac{N_{\Delta}}{2}+n}(p^{n}+q^{n})^{-1}[n-1]_{p,q,g}^{\mu,\nu}[n]_{p,q,g}^{\mu,\nu}[n+1]_{p,q,g}^{\mu,\nu}.\] (36)
### Generalized super Virasoro algebra with \(\Delta=1\)
In this section, we consider the algebra given by the relation (10) and (11) for \(\Delta=1\), and derive the \(\mathcal{R}(p,q)\)-deformed super Virasoro algebra. Then, the \(\mathcal{R}(p,q)\)-deformed super generators \(\mathcal{L}^{1}_{n}\) and \(\mathcal{G}^{1}_{n}:\)
\[\mathcal{L}^{1}_{n}\phi(z)=-[z\partial_{z}+1]_{\mathcal{R}(p,q)} \,z^{n}\,\phi(z),\] \[\mathcal{G}^{1}_{n}\phi(z)=-\theta\,[z\partial_{z}+1]_{\mathcal{R} (p,q)}\,z^{n}\,\phi(z)\]
define a superalgebra obeying the following relations:
\[[\mathcal{L}^{1}_{n},\mathcal{L}^{1}_{m}]_{\hat{u},\hat{v}}\phi(z) =[n-m]_{\mathcal{R}(p,q)}q^{N_{1}-m}\mathcal{L}^{1}_{n+m}\phi(z), \tag{37}\] \[[\mathcal{L}^{1}_{n},\mathcal{G}^{1}_{m}]_{\hat{u},\hat{v}}\phi(z) =\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)}q^{ N_{1}-m}\mathcal{G}^{1}_{n+m}\phi(z),\] (38) \[[\mathcal{G}^{1}_{n},\mathcal{G}^{1}_{m}] =0,\]
with
\[\left\{\begin{array}{l}\hat{u}=\hat{\Lambda}_{nm}(p,q),\quad\tilde{u}=\tilde {\Lambda}_{nm}(p,q),\\ \\ \hat{v}=p^{m-n}\hat{\Lambda}_{nm}(p,q),\quad\tilde{v}=p^{m-n}\,\tilde{\Lambda}_ {nm}(p,q)\\ \\ \end{array}\right. \tag{39}\]
\[\hat{\Lambda}_{nm}(p,q)=\frac{[n-m]_{\mathcal{R}(p,q)}}{[m-n]_{p,q}}\,\frac{f ^{1}_{n+m}(p,q)}{f^{1}_{n}(p,q)f^{1}_{m}(p,q)},\]
and
\[\tilde{\Lambda}_{nm}(p,q)=\frac{[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q )}}{[m-n]_{p,q}}\frac{f^{1}_{n+m}(p,q)}{f^{1}_{n}(p,q)f^{1}_{m}(p,q)}.\]
Now, we rewrite the \(\mathcal{R}(p,q)\)-deformed super generators in the following way:
\[L^{1}_{n}\phi(z)=q^{-N_{1}}\,\mathcal{L}^{1}_{n}\phi(z)\quad\text{and}\quad G ^{1}_{n}\phi(z)=q^{-N_{1}}\,\mathcal{G}^{1}_{n}\phi(z).\]
Then, they obey the following commutation relations:
\[[L^{1}_{n},L^{1}_{m}]_{u,v}\phi(z) =[n-m]_{\mathcal{R}(p,q)}L^{1}_{n+m}\phi(z), \tag{40}\] \[[L^{1}_{n},G^{1}_{m}]_{a,b}\phi(z) =\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)}\,G^{1}_{n+m}\phi(z), \tag{41}\] \[[G^{1}_{n},G^{1}_{m}] =0, \tag{42}\]
where
\[u=q^{m-n}\,\hat{\Lambda}_{nm}(p,q),\quad v=p^{m-n}\,\hat{\Lambda}_{ mn}(p,q), \tag{43}\] \[a=q^{m-n}\,\tilde{\Lambda}_{nm}(p,q),\quad b=p^{m-n}\,\tilde{ \Lambda}_{nm}(p,q), \tag{44}\]
equivalently,
\[[L^{1}_{n},L^{1}_{m}]\phi(z) =[n-m]_{\mathcal{R}(p,q)}\,p^{N_{1}-m}q^{-N_{1}+n}\,\hat{\Lambda}_{nm}(p,q)L^{1}_{n+m}\phi(z)\] \[[L^{1}_{n},G^{1}_{m}]\phi(z) =\big{(}[m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\big{)}p^{N_{1}-m}q^{-N_{1}+n}\,\tilde{\Lambda}_{nm}(p,q)G^{1}_{n+m}\phi(z)\] \[[G^{1}_{n},G^{1}_{m}] =0.\]
**Remark 9**.: _From the particular cases of the intergers \(n\) and \(m,\) we deduce the \(\mathcal{R}(p,q)\)-deformed super \(su(1,1)\) subalgebra is generated by the commutation relations:_
\[[L_{0}^{1},L_{1}^{1}]_{u,v}\phi(z) =[-1]_{\mathcal{R}(p,q)}L_{1}^{1}\phi(z)\] \[[L_{0}^{1},G_{1}^{1}]_{a,b}\phi(z) =\left([2]_{\mathcal{R}(p,q)}\right)G_{1}^{1}\phi(z)\] \[\quad[G_{0}^{1},G_{1}^{1}] =0,\]
_where_
\[u =q\,\hat{\Lambda}_{01}(p,q),\quad v=p\,\hat{\Lambda}_{10}(p,q),\] \[a =q\,\tilde{\Lambda}_{01}(p,q),\quad b=p\,\tilde{\Lambda}_{10}(p,q),\]
\[[L_{-1}^{1},L_{0}^{1}]_{u,v}\phi(z) =[-1]_{\mathcal{R}(p,q)}L_{-1}^{1}\phi(z)\] \[[L_{-1}^{1},G_{0}^{1}]_{a,b}\phi(z) =\left([1]_{\mathcal{R}(p,q)}-[-1]_{\mathcal{R}(p,q)}\right)G_{- 1}^{1}\phi(z)\] \[\quad[G_{-1}^{1},G_{0}^{1}] =0,\]
_where_
\[u =q\,\hat{\Lambda}_{-10}(p,q),\quad v=p\,\hat{\Lambda}_{0(-1)}(p,q),\] \[a =q\,\tilde{\Lambda}_{-10}(p,q),\quad b=p\,\tilde{\Lambda}_{0(-1)} (p,q),\]
_and_
\[[L_{-1}^{1},L_{1}^{1}]\phi(z) =[2]p^{N_{1}-1}q^{-N_{1}-1}\,\hat{\Lambda}_{-10}(p,q)L_{0}^{1}\phi(z)\] \[[L_{-1}^{1},G_{1}^{1}]\phi(z) =\left([2]_{\mathcal{R}(p,q)}-[-1]_{\mathcal{R}(p,q)}\right)p^{N_{1}-1}q^{-N_{1}-1}\,\tilde{\Lambda}_{-11}(p,q)G_{0}^{1}\phi(z)\] \[[G_{-1}^{1},G_{1}^{1}]\phi(z) =0.\]
The central extension of the algebra (40), (41), and (42) is generated by the commutation relations:
\[[L_{n}^{1},L_{m}^{1}]_{u,v}\phi(z) =[n-m]_{\mathcal{R}(p,q)}L_{n+m}^{1}\phi(z)+\delta_{n+m,0}\,C_{n}^{R}(p,q), \tag{45}\] \[[L_{n}^{1},G_{m}^{1}]_{a,b}\phi(z) =\left([m+1]_{\mathcal{R}(p,q)}-[n]_{\mathcal{R}(p,q)}\right)G_{n+m}^{1}\phi(z)+\delta_{n+m+1,0}\,C_{n}^{R}(p,q), \tag{46}\] \[[G_{n}^{1},G_{m}^{1}] =0, \tag{47}\]
where \(u,\)\(v,\)\(a,\) and \(b\) are given by (43).
**Remark 10**.: _The \((p,q)\)-deformed super generators \(\mathcal{L}_{n}^{1}\) and \(\mathcal{G}_{n}^{1}:\)_
\[\mathcal{L}_{n}^{1}\phi(z) =-[z\partial_{z}+1]_{p,q}\,z^{n}\,\phi(z),\] \[\mathcal{G}_{n}^{1}\phi(z) =-\theta\,[z\partial_{z}+1]_{p,q}\,z^{n}\,\phi(z)\]
_defined the superalgebra satisfying the following relations:_
\[[\mathcal{L}^{1}_{n},\mathcal{L}^{1}_{m}]_{\hat{u},\hat{v}}\phi(z) =[n-m]_{p,q}\,q^{N_{1}-m}\mathcal{L}^{1}_{n+m}\phi(z),\] \[[\mathcal{L}^{1}_{n},\mathcal{G}^{1}_{m}]_{\tilde{u},\tilde{v}}\phi(z) =\big{(}[m+1]_{p,q}-[n]_{p,q}\big{)}q^{N_{1}-m}\mathcal{G}^{1}_{n+m}\phi(z),\] \[[\mathcal{G}^{1}_{n},\mathcal{G}^{1}_{m}]=0,\]
_with_
\[\left\{\begin{array}{l}\hat{u}=\hat{\Lambda}_{nm}(p,q),\quad\tilde{u}=\tilde {\Lambda}_{nm}(p,q),\\ \\ \hat{v}=p^{m-n}\hat{\Lambda}_{nm}(p,q),\quad\tilde{v}=p^{m-n}\,\tilde{\Lambda}_ {nm}(p,q)\\,\end{array}\right.\]
\[\hat{\Lambda}_{nm}(p,q)=\frac{[n-m]_{p,q}}{[m-n]_{p,q}}\frac{f^{1}_{n+m}(p,q)} {f^{1}_{n}(p,q)f^{1}_{m}(p,q)},\]
_and_
\[\tilde{\Lambda}_{nm}(p,q)=\frac{[m+1]_{p,q}-[n]_{p,q}}{[m-n]_{p,q}}\frac{f^{1} _{n+m}(p,q)}{f^{1}_{n}(p,q)f^{1}_{m}(p,q)}.\]
_Now, we rewrite the \((p,q)\)-deformed super generators in the following way:_
\[L^{1}_{n}\phi(z)=q^{-N_{1}}\,\mathcal{L}^{1}_{n}\phi(z)\quad\text{and}\quad G ^{1}_{n}\phi(z)=q^{-N_{1}}\,\mathcal{G}^{1}_{n}\phi(z).\]
_Then, they obey the following commutation relations:_
\[[L^{1}_{n},L^{1}_{m}]_{u,v}\phi(z) =[n-m]_{p,q}L^{1}_{n+m}\phi(z),\] \[[L^{1}_{n},G^{1}_{m}]_{a,b}\phi(z) =\big{(}[m+1]_{p,q}-[n]_{p,q}\big{)}\,G^{1}_{n+m}\phi(z),\] \[[G^{1}_{n},G^{1}_{m}] =0,\]
_where_
\[u =q^{m-n}\,\hat{\Lambda}_{nm}(p,q),\quad v =p^{m-n}\,\hat{\Lambda}_{mn}(p,q),\] \[a =q^{m-n}\,\tilde{\Lambda}_{nm}(p,q),\quad b =p^{m-n}\,\tilde{\Lambda}_{nm}(p,q),\]
_equivalently,_
\[[L^{1}_{n},L^{1}_{m}]\phi(z) =[n-m]_{p,q}\,p^{N_{1}-m}q^{-N_{1}+n}\,\hat{\Lambda}_{nm}(p,q)L^{ 1}_{n+m}\phi(z)\] \[[L^{1}_{n},G^{1}_{m}]\phi(z) =\big{(}[m+1]_{p,q}-[n]_{p,q}\big{)}p^{N_{1}-m}q^{-N_{1}+n}\,\tilde {\Lambda}_{nm}(p,q)G^{1}_{n+m}\phi(z)\] \[[G^{1}_{n},G^{1}_{m}] =0.\]
_Furthermore, the \((p,q)\)-super \(su(1,1)\) subalgebra is generated by the following commutation relations:_
\[[L^{1}_{0},L^{1}_{1}]_{u,v}\phi(z) =[-1]_{p,q}L^{1}_{1}\phi(z)\] \[[L^{1}_{0},G^{1}_{1}]_{a,b}\phi(z) =\big{(}[2]_{p,q}\big{)}\,G^{1}_{1}\phi(z)\] \[[G^{1}_{0},G^{1}_{1}] =0,\]
_where_
\[u =q\,\hat{\Lambda}_{01}(p,q),\quad v =p\,\hat{\Lambda}_{10}(p,q),\] \[a =q\,\tilde{\Lambda}_{01}(p,q),\quad b =p\,\tilde{\Lambda}_{10}(p,q),\]
\[[L^{1}_{-1},L^{1}_{0}]_{u,v}\phi(z) =[-1]_{p,q}L^{1}_{-1}\phi(z)\] \[[L^{1}_{-1},G^{1}_{0}]_{a,b}\phi(z) =\left([1]_{p,q}-[-1]_{p,q}\right)G^{1}_{-1}\phi(z)\] \[[G^{1}_{-1},G^{1}_{0}] =0,\]
_where_
\[u =q\,\hat{\Lambda}_{-10}(p,q),\quad v =p\,\hat{\Lambda}_{0(-1)}(p,q),\] \[a =q\,\tilde{\Lambda}_{-10}(p,q),\quad b =p\,\tilde{\Lambda}_{0(-1)}(p,q),\]
_and_
\[[L^{1}_{-1},L^{1}_{1}]\phi(z) =[2]p^{N_{1}-1}q^{-N_{1}-1}\,\hat{\Lambda}_{-10}(p,q)L^{1}_{0} \phi(z)\] \[[L^{1}_{-1},G^{1}_{1}]\phi(z) =\left([2]_{p,q}-[-1]_{p,q}\right)p^{N_{1}-1}q^{-N_{1}-1}\,\tilde {\Lambda}_{-11}(p,q)G^{1}_{0}\phi(z)\] \[[G^{1}_{-1},G^{1}_{1}]\phi(z) =0.\]
## 2. \(\mathcal{R}(p,q)\)-Virasoro \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\)
In this section, we construct the quantum conformal Virasoro \(n\)-algebra from the \(\mathcal{R}(p,q)\)-quantum algebra. Besides, particular cases induced by quantum algebras in the literature are deduced.
### \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra
The \(\mathcal{R}(p,q)\)-Witt \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\) is investigated. Moreover, several examples are presented.
**Definition 11**.: _[_31_]_ _The Levi-Civita symbol is defined by:_
\[\epsilon^{j_{1}\cdots j_{p}}_{i_{1}\cdots i_{p}}=det\left(\begin{array}{ccc} \delta^{j_{1}}_{i_{1}}&\cdots&\delta^{j_{1}}_{i_{p}}\\ \vdots&&\vdots\\ \delta^{j_{p}}_{i_{1}}&\cdots&\delta^{j_{p}}_{i_{p}}\end{array}\right). \tag{48}\]
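For concreteness, the determinant definition (48) can be implemented directly; the following Python sketch (an illustration with hypothetical index choices) evaluates the generalized Levi-Civita symbol used in the \(n\)-brackets below:

```python
import sympy as sp

def levi_civita(upper, lower):
    """Generalized Levi-Civita symbol of (48): determinant of the matrix with entries delta^{j_a}_{i_b}."""
    p = len(upper)
    M = sp.Matrix(p, p, lambda a, b: 1 if upper[a] == lower[b] else 0)
    return M.det()

print(levi_civita((1, 2, 3), (1, 2, 3)))   #  1
print(levi_civita((1, 2, 3), (2, 1, 3)))   # -1 (odd permutation)
print(levi_civita((1, 2, 3), (1, 1, 3)))   #  0 (repeated index)
```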
Let us first introduce the \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra (\(n\) even). The result is contained in the following lemma.
**Lemma 12**.: _The \(\mathcal{R}(p,q)\)-Witt \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\) is given by:_
\[\left[\mathcal{L}^{\Delta}_{m_{1}},\mathcal{L}^{\Delta}_{m_{2}}, \ldots,\mathcal{L}^{\Delta}_{m_{n}}\right] =\frac{\left(q-p\right)^{\binom{n-1}{2}}}{\left(\tau_{1}\tau_{2} \right)^{\lfloor\frac{n-1}{2}\rfloor\bar{m}}}\Bigg{(}\frac{[-2\bar{m}]_{ \mathcal{R}(p,q)}}{2[-\bar{m}]_{\mathcal{R}(p,q)}}\Bigg{)}^{\beta}\] \[\times\prod_{j=1}^{n}\left(\tau_{1}^{\Delta(\bar{m}-1)}+(-1)^{n} \tau_{2}^{\Delta(t\partial_{t}-\bar{m})}\right)\] \[\times\prod_{1\leq i<j\leq n}\Big{(}[m_{i}]_{\mathcal{R}(p,q)}-[m _{j}]_{\mathcal{R}(p,q)}\Big{)}\mathcal{L}^{\Delta}_{\bar{m}}, \tag{49}\]
_where \(\bar{m}=\sum_{l=1}^{n}m_{l}\)._
Proof.: The generalization of the generators of the conformal Witt algebra from the \(\mathcal{R}(p,q)\)-quantum algebras was first introduced by Hounkonnou and Melong as follows [16]:
\[\mathcal{L}^{\Delta}_{m}\phi(t)=[t\partial_{t}+\Delta(m+1)-m]_{ \mathcal{R}(p,q)}\,t^{m}\phi(t) \tag{50}\]
and recently, the generators of the Witt algebra were constructed in the form [17]:
\[l_{m}\phi(t)=[t\partial_{t}-m]_{\mathcal{R}(p,q)}\,t^{m}\phi(t). \tag{51}\]
By using the relation
\[[u+v]_{\mathcal{R}(p,q)} =\tau_{1}^{v}\,[u]_{\mathcal{R}(p,q)}+\tau_{2}^{u}\,[v]_{ \mathcal{R}(p,q)}\] \[=\tau_{2}^{v}\,[u]_{\mathcal{R}(p,q)}+\tau_{1}^{u}\,[v]_{ \mathcal{R}(p,q)}, \tag{52}\]
where \(\big{(}\tau_{i}\big{)}_{i\in\{1,2\}}\) are parameters depending on \(p\) and \(q\), the operators (50) can be re-written in the simpler form:
\[\mathcal{L}^{\Delta}_{m}\phi(t)=l_{m}^{\Delta}\phi(t)+S_{m}^{ \Delta}\,\phi(t), \tag{53}\]
where
\[l_{m}^{\Delta}\phi(t)=\tau_{1}^{\Delta(m+1)}\,[t\partial_{t}-m] _{\mathcal{R}(p,q)}\,t^{m}\phi(t) \tag{54}\]
and
\[S_{m}^{\Delta}\phi(t)=\tau_{2}^{(t\partial_{t}-m)}\,[\Delta(m+1) ]_{\mathcal{R}(p,q)}\,t^{m}\phi(t). \tag{55}\]
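The splitting used in (52)-(55) can be checked symbolically once \([x]_{\mathcal{R}(p,q)}\) is written in the form (80) below, \([x]=(\tau_{1}^{x}-\tau_{2}^{x})/(\tau_{1}-\tau_{2})\); on a monomial \(t^{k}\) one then takes \(u=k-m\) and \(v=\Delta(m+1)\). A short SymPy sketch (an illustration under this assumption):

```python
import sympy as sp

t1, t2, u, v = sp.symbols('tau1 tau2 u v', positive=True)

def num(x):                     # [x] in the form (80)
    return (t1**x - t2**x) / (t1 - t2)

# splitting rule (52): [u+v] = tau1^v [u] + tau2^u [v]
lhs = num(u + v)
rhs = t1**v * num(u) + t2**u * num(v)
print(sp.expand((lhs - rhs) * (t1 - t2)))   # prints 0
```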
From our previous works [17, 25] and Wang et al. [31], we can consider the \(n\)-brackets (\(n\geq 3\)) defined as follows:
\[\left[l_{m_{1}}^{\Delta},l_{m_{2}}^{\Delta},\cdots,l_{m_{n}}^{ \Delta}\right] :=\left(\frac{[-2\bar{m}]_{\mathcal{R}(p,q)}}{2[-\bar{m}]_{ \mathcal{R}(p,q)}}\right)^{\beta}\epsilon_{12\cdots n}^{i_{1}i_{2}\cdots i_{n} }\prod_{j=1}^{n}\tau_{1}^{\Delta(m_{i_{j}}-1)}\] \[\times(\tau_{1}\tau_{2})^{\sum_{j=1}^{n}\left(\lfloor\frac{n}{2} \rfloor-j+1\right)m_{i_{j}}}l_{m_{i_{1}}}^{\Delta}\cdots l_{m_{i_{n}}}^{ \Delta}, \tag{56}\]
and
\[\left[S_{m_{1}}^{\Delta},S_{m_{2}}^{\Delta},\cdots,S_{m_{n}}^{\Delta}\right] :=\left(\frac{[-2\bar{m}]_{\mathcal{R}(p,q)}}{2[-\bar{m}]_{\mathcal{ R}(p,q)}}\right)^{\beta}\epsilon_{12\cdots n}^{i_{1}i_{2}\cdots i_{n}}\prod_{j=1}^{n} \tau_{2}^{\Delta(t\partial_{t}-m_{i_{j}})}\] \[\times(\tau_{1}\tau_{2})^{\sum_{j=1}^{n}\left(\lfloor\frac{n}{2} \rfloor-j+1\right)m_{i_{j}}}S_{m_{i_{1}}}^{\Delta}\cdots S_{m_{i_{n}}}^{\Delta}, \tag{57}\]
where \(\beta=\frac{1+(-1)^{n}}{2}\) and \(\lfloor n\rfloor=\max\{m\in\mathbb{Z}\mid m\leq n\}\) is the floor function. Putting the operators (54) and (55) in the relations (56) and (57), we obtain the generators \(\mathcal{L}_{m}^{\Delta}\) satisfying the \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra (49).
**Remark 13**.: _Some examples of \(\mathcal{R}(p,q)\)-deformed conformal Witt \(n\)-algebra are deduced as:_
1. _For_ \(n=3,\) _we obtain the_ \(\mathcal{R}(p,q)\)_-conformal Witt_ \(3\)_-algebra as follows:_ \[\left[\mathcal{L}_{m_{1}}^{\Delta},\mathcal{L}_{m_{2}}^{\Delta}, \mathcal{L}_{m_{3}}^{\Delta}\right] =\frac{q-p}{(\tau_{1}\tau_{2})^{m_{1}+m_{2}+m_{3}}}\] \[\times\prod_{j=1}^{3}\left(\tau_{1}^{\Delta(m_{1}+m_{2}+m_{3}-1)} -\tau_{2}^{\Delta(t\partial_{t}-m_{1}-m_{2}-m_{3})}\right)\] \[\times\prod_{1\leq i<j\leq 3}\left([m_{i}]_{\mathcal{R}(p,q)}-[m_{ j}]_{\mathcal{R}(p,q)}\right)\mathcal{L}_{m_{1}+m_{2}+m_{3}}^{\Delta}.\]
2. _The_ \(\mathcal{R}(p,q)\)_-conformal Witt_ \(4\)_-algebra is deduced by taking_ \(n=4:\)__ \[\left[\mathcal{L}_{m_{1}}^{\Delta},\mathcal{L}_{m_{2}}^{\Delta}, \mathcal{L}_{m_{3}}^{\Delta},\mathcal{L}_{m_{4}}^{\Delta}\right] =\frac{\left(q-p\right)^{3}}{(\tau_{1}\tau_{2})^{\bar{m}}}\left( \frac{[-2\bar{m}]_{\mathcal{R}(p,q)}}{2[-\bar{m}]_{\mathcal{R}(p,q)}}\right)\] \[\times\prod_{j=1}^{4}\left(\tau_{1}^{\Delta(\bar{m}-1)}+\tau_{2}^ {\Delta(t\partial_{t}-\bar{m})}\right)\] \[\times\prod_{1\leq i<j\leq 4}\left([m_{i}]_{\mathcal{R}(p,q)}-[m_{ j}]_{\mathcal{R}(p,q)}\right)\mathcal{L}_{\bar{m}}^{\Delta},\] _where_ \(\bar{m}=\sum_{l=1}^{4}m_{l}.\)__
3. _In the limit_ \(p\longrightarrow q\)_, the relation (_49_) reduces to the null_ \(n\)_-algebra for_ \(n\geq 3\) _(see the sketch below)._
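A quick numerical illustration of item 3 (a sketch assuming the realization \(\tau_{1}=p\), \(\tau_{2}=q\)): the structure constant of (49) carries the factor \((q-p)^{\binom{n-1}{2}}\), while the products of differences of \((p,q)\)-numbers stay finite, so the bracket collapses as \(p\to q\) for \(n\geq 3\).

```python
import sympy as sp

p, q = sp.symbols('p q', positive=True)

def pq_number(k):
    return (p**k - q**k) / (p - q)

n, modes = 3, (1, 2, 4)                      # sample 3-bracket modes (hypothetical)
pref = (q - p)**sp.binomial(n - 1, 2)
for i in range(n):
    for j in range(i + 1, n):
        pref *= pq_number(modes[i]) - pq_number(modes[j])

pref = sp.cancel(pref)                       # remove the removable (p - q) denominators
print(sp.expand(pref.subs(q, p)))            # prints 0: the 3-bracket prefactor vanishes as p -> q
```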
We can prove that the relation (49) satisfies the conformal generalized Jacobi identity (GJI)
\[\epsilon_{n_{1},\ldots,n_{2n-1}}^{i_{1},\ldots,i_{2n-1}}\left[\left[\mathcal{L}_{i_{1}}^{\Delta},\ldots,\mathcal{L}_{i_{n}}^{\Delta}\right]_{\mathcal{R}(p,q)},\mathcal{L}_{i_{n+1}}^{\Delta},\ldots,\mathcal{L}_{i_{2n-1}}^{\Delta}\right]_{\mathcal{R}(p,q)}=0,\]
and that the skew-symmetry holds for (49). Thus, the \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra (49) is a higher-order Lie algebra.
Now, we investigate the \(\mathcal{R}(p,q)\)-Virasoro \(2n\)-algebra (\(n\geq 2\)) with conformal dimension \((\Delta\neq 0,1)\). It is the central extension of the \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra (49). Then,
**Proposition 14**.: _For \((\Delta\neq 0,1),\) the \(\mathcal{R}(p,q)\)-conformal Virasoro \(2n\)-algebra (\(n\) even) is determined by the following commutation relation:_
\[\big{[}L^{\Delta}_{m_{1}},\ldots,L^{\Delta}_{m_{2n}}\big{]}=f^{\Delta}_{\mathcal{ R}(p,q)}(m_{1},\ldots,m_{2n})L^{\Delta}_{\tilde{m}}+\tilde{C}^{\Delta}_{ \mathcal{R}(p,q)}(m_{1},\ldots,m_{2n}), \tag{58}\]
_where_
\[f^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{2n}) =\frac{\big{(}q-p\big{)}^{\binom{2n-1}{2}}}{(\tau_{1}\tau_{2})^{ \lfloor\frac{2n-1}{2}\rfloor\tilde{m}}}\Bigg{(}\frac{[-2\tilde{m}]_{\mathcal{R }(p,q)}}{2[-\tilde{m}]_{\mathcal{R}(p,q)}}\Bigg{)}^{\beta}\prod_{j=1}^{2n} \Bigg{(}\tau_{1}^{\Delta(\tilde{m}-1)}\] \[+\tau_{2}^{\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i <j\leq 2n}\Big{(}[m_{i}]_{\mathcal{R}(p,q)}-[m_{j}]_{\mathcal{R}(p,q)}\Big{)}, \tag{59}\]
_\(\tilde{m}=\sum_{l=1}^{2n}m_{l}\) and_
\[\tilde{C}^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\cdots,m_{2n})=\frac{c(p,q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2^{n}\times n!}\prod_{l=1}^{n}\Big{(}\frac{q}{p}\Big{)}^{i\,N_{\Delta/2}}\frac{[m_{i_{2l-1}}-1]_{\mathcal{R}(p,q)}}{\big{(}\tau_{1}\tau_{2}\big{)}^{m_{2l-1}}}\] \[\qquad\qquad\qquad\times\frac{[m_{2l-1}]_{\mathcal{R}(p,q)}}{[2m_{2l-1}]_{\mathcal{R}(p,q)}}[m_{i_{2l-1}}]_{\mathcal{R}(p,q)}[m_{i_{2l-1}}+1]_{\mathcal{R}(p,q)}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}. \tag{60}\]
Proof.: From the relation (49), we deduce (59). Moreover, the conformal central extension \(\tilde{C}^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{2n})\) is provided by the factorization
\[\tilde{C}^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{2n})=\prod_{i=1}^{n} \Gamma^{i}(N_{\Delta})\,C^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{2n}),\]
where \(C^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{2n})\) is given in [25]:
\[C^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\cdots,m_{2n}) =\frac{c(p,q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2 ^{n}\times n!}\prod_{l=1}^{n}\frac{[m_{i_{2l-1}}-1]_{\mathcal{R}(p,q)}}{ \big{(}\tau_{1}\tau_{2}\big{)}^{m_{2l-1}}}\frac{[m_{2l-1}]_{\mathcal{R}(p,q)}} {[2m_{2l-1}]_{\mathcal{R}(p,q)}}\] \[\times[m_{i_{2l-1}}]_{\mathcal{R}(p,q)}[m_{i_{2l-1}}+1]_{\mathcal{ R}(p,q)}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}\]
and \(\Gamma^{i}(N_{\Delta})\) by [9, 16]:
\[\Gamma(N_{\Delta})=\Big{(}\frac{q}{p}\Big{)}^{N_{\Delta/2}}.\]
Thus, the relation (33) follows. Finally, the \(\mathcal{R}(p,q)\)-conformal Virasoro \(2n\)-algebra (\(n\)-even) is generated by the relations (58),(59), and (60).
**Remark 15**.: _The particular cases of the conformal Virasoro \(2n\)-algebra (\(n\) even) from the quantum algebras existing in the literature are deduced as:_
* _The conformal Virasoro_ \(2n\)_-algebra (_\(n\) _even) associated to the_ **Biedenharn-Macfarlane algebra**___[_3, 24_]__:_ \[\big{[}L^{\Delta}_{m_{1}},\ldots,L^{\Delta}_{m_{2n}}\big{]}=f^{\Delta}_{q}(m_{1},\ldots,m_{2n})L^{\Delta}_{\tilde{m}}+\tilde{C}^{\Delta}_{q}(m_{1},\ldots,m_{2n }),\] (61)
_where_
\[f_{q}^{\Delta}(m_{1},\ldots,m_{2n}) =\big{(}q^{-1}-q\big{)}^{\binom{2n-1}{2}}\Bigg{(}\frac{[-2\tilde{m} ]_{q}}{2[-\tilde{m}]_{q}}\Bigg{)}^{\beta}\prod_{j=1}^{2n}\Bigg{(}q^{\Delta( \tilde{m}-1)}\] \[+q^{-\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i<j\leq 2 n}\Big{(}[m_{i}]_{q}-[m_{j}]_{q}\Big{)}, \tag{62}\]
\(\tilde{m}=\sum_{l=1}^{2n}\,m_{l}\) _and_
\[\tilde{C}_{q}^{\Delta}(m_{1},\cdots,m_{2n}) =\frac{c(q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2^{n} \times n!}\prod_{l=1}^{n}q^{-2i\,N_{\Delta/2}}\frac{[m_{2l-1}]_{q}}{[2m_{2l-1} ]_{q}}\] \[\times[m_{i_{2l-1}}-1]_{q}[m_{i_{2l-1}}]_{q}[m_{i_{2l-1}}+1]_{q} \delta_{m_{i_{2l-1}}+m_{i_{2l}},0}. \tag{63}\]
2. _The conformal Virasoro_ \(2n\)_-algebra (_\(n\) _even) from the_ _Jagannathan-__Srinivasa algebra_ _[_21_]__:_ \[\big{[}L^{\Delta}_{m_{1}},\ldots,L^{\Delta}_{m_{2n}}\big{]}=f_{p,q}^{\Delta}(m _{1},\ldots,m_{2n})L^{\Delta}_{\tilde{m}}+\tilde{C}^{\Delta}_{p,q}(m_{1}, \ldots,m_{2n}),\] (64) _where_ \[f_{p,q}^{\Delta}(m_{1},\ldots,m_{2n}) =\frac{\big{(}q-p\big{)}^{\binom{2n-1}{2}}}{(p\,q)^{\lfloor\frac {2n-1}{2}\rfloor\tilde{m}}}\Bigg{(}\frac{[-2\tilde{m}]_{p,q}}{2[-\tilde{m}]_{p,q}}\Bigg{)}^{\beta}\prod_{j=1}^{2n}\Bigg{(}p^{\Delta(\tilde{m}-1)}\] \[+q^{\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i<j\leq 2 n}\Big{(}[m_{i}]_{p,q}-[m_{j}]_{p,q}\Big{)},\] (65) \(\tilde{m}=\sum_{l=1}^{2n}\,m_{l}\) _and_ \[\tilde{C}^{\Delta}_{p,q}(m_{1},\cdots,m_{2n}) =\frac{c(p,q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6 \times 2^{n}\times n!}\prod_{l=1}^{n}\Big{(}\frac{q}{p}\Big{)}^{i\,N_{\Delta/2}} \frac{[m_{i_{2l-1}}-1]_{p,q}}{\big{(}p\,q\big{)}^{m_{2l-1}}}\] \[\times\frac{[m_{2l-1}]_{p,q}}{[2m_{2l-1}]_{p,q}}[m_{i_{2l-1}}]_{p,q }[m_{i_{2l-1}}+1]_{p,q}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}.\] (66)
3. _The conformal Virasoro_ \(2n\)_-algebra (_\(n\) _even) induced by the_ _Chakrabarti-__Jagannathan algebra_[10]_:_ \[\big{[}L^{\Delta}_{m_{1}},\ldots,L^{\Delta}_{m_{2n}}\big{]}=f_{p^{-1},q}^{\Delta} (m_{1},\ldots,m_{2n})L^{\Delta}_{\tilde{m}}+\tilde{C}^{\Delta}_{p^{-1},q}(m_{1 },\ldots,m_{2n}),\] (67) _where_ \[f_{p^{-1},q}^{\Delta}(m_{1},\ldots,m_{2n}) =\frac{\big{(}q-p\big{)}^{\binom{2n-1}{2}}}{\big{(}\frac{q}{p} \big{)}^{\lfloor\frac{2n-1}{2}\rfloor\tilde{m}}}\Bigg{(}\frac{[-2\tilde{m}]_{p ^{-1},q}}{2[-\tilde{m}]_{p^{-1},q}}\Bigg{)}^{\beta}\prod_{j=1}^{2n}\Bigg{(}p^ {-\Delta(\tilde{m}-1)}\] \[+q^{\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i<j\leq 2 n}\Big{(}[m_{i}]_{p^{-1},q}-[m_{j}]_{p^{-1},q}\Big{)},\] (68)
\[\tilde{m}=\sum_{l=1}^{2n}\,m_{l}\text{ and }\] \[\tilde{C}_{p^{-1},q}^{\Delta}(m_{1},\cdots,m_{2n})=\frac{c(p^{-1},q) \epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2^{n}\times n!}\prod_{l=1}^{n} \big{(}q\,p\big{)}^{i\,N_{\Delta/2}}\frac{[m_{i_{2l-1}}-1]_{p^{-1},q}}{\big{(} \frac{q}{p}\big{)}^{m_{2l-1}}}\] \[\times\frac{[m_{2l-1}]_{p^{-1},q}}{[2m_{2l-1}]_{p^{-1},q}}[m_{i_{ 2l-1}}]_{p^{-1},q}[m_{i_{2l-1}}+1]_{p^{-1},q}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}. \tag{69}\]
4. _The conformal Virasoro_ \(2n\)_-algebra (_\(n\) _even) generated by the_ _Generalized \(q\)-Quesne algebra_ \([18]\)_:_ \[\big{[}L_{m_{1}}^{\Delta},\dots,L_{m_{2n}}^{\Delta}\big{]}=f_{p,q}^{\Delta}(m_{1},\dots,m_{2n})L_{\tilde{m}}^{\Delta}+\tilde{C}_{p,q}^{\Delta}(m_{1},\dots,m_{2n}),\] (70) _where_ \[f_{p,q}^{\Delta}(m_{1},\dots,m_{2n}) =\frac{\big{(}q^{-1}-p\big{)}^{\binom{2n-1}{2}}}{\big{(}\frac{p}{q}\big{)}^{\lfloor\frac{2n-1}{2}\rfloor\tilde{m}}}\Bigg{(}\frac{[-2\tilde{m}]_{p,q}^{Q}}{2[-\tilde{m}]_{p,q}^{Q}}\Bigg{)}^{\beta}\prod_{j=1}^{2n}\Bigg{(}p^{\Delta(\tilde{m}-1)}\] \[+q^{-\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i<j\leq 2n}\Big{(}[m_{i}]_{p,q}^{Q}-[m_{j}]_{p,q}^{Q}\Big{)},\] (71) \[\tilde{m}=\sum_{l=1}^{2n}\,m_{l}\text{ and }\] \[\tilde{C}_{p,q}^{\Delta}(m_{1},\cdots,m_{2n}) =\frac{c(p,q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2^{n}\times n!}\prod_{l=1}^{n}(q\,p)^{-i\,N_{\Delta/2}}\frac{[m_{i_{2l-1}}-1]_{p,q}^{Q}}{\big{(}\frac{p}{q}\big{)}^{m_{2l-1}}}\] \[\times\frac{[m_{2l-1}]_{p,q}^{Q}}{[2m_{2l-1}]_{p,q}^{Q}}[m_{i_{2l-1}}]_{p,q}^{Q}[m_{i_{2l-1}}+1]_{p,q}^{Q}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}.\] (72)
5. _The conformal Virasoro_ \(2n\)_-algebra (_\(n\) _even) generated by the_ _Hounkonnou-Ngompe generalized algebra_ \([19]\)_:_ \[\big{[}L_{m_{1}}^{\Delta},\dots,L_{m_{2n}}^{\Delta}\big{]}=f_{p,q}^{\Delta}(m_{1},\dots,m_{2n})L_{\tilde{m}}^{\Delta}+\tilde{C}_{p,q}^{\Delta}(m_{1},\dots,m_{2n}),\] (73) _where_ \[f_{p,q}^{\Delta}(m_{1},\dots,m_{2n}) =\frac{\big{(}q-p\big{)}^{\binom{2n-1}{2}}}{(p\,q)^{\lfloor\frac{2n-1}{2}\rfloor\tilde{m}}}\Bigg{(}\frac{[-2\tilde{m}]_{p,q,g}^{\mu,\nu}}{2[-\tilde{m}]_{p,q,g}^{\mu,\nu}}\Bigg{)}^{\beta}\prod_{j=1}^{2n}\Bigg{(}p^{\Delta(\tilde{m}-1)}\] \[+q^{\Delta(t\partial_{t}-\tilde{m})}\Bigg{)}\prod_{1\leq i<j\leq 2n}\Big{(}[m_{i}]_{p,q,g}^{\mu,\nu}-[m_{j}]_{p,q,g}^{\mu,\nu}\Big{)},\] (74) \[\tilde{m}=\sum_{l=1}^{2n}\,m_{l}\text{ and }\] \[\tilde{C}_{p,q}^{\Delta}(m_{1},\cdots,m_{2n}) =\frac{c(p,q)\epsilon_{1\cdots 2n}^{i_{1}\cdots i_{2n}}}{6\times 2^{n}\times n!}\prod_{l=1}^{n}\Big{(}\frac{q}{p}\Big{)}^{i\,N_{\Delta/2}}\frac{[m_{i_{2l-1}}-1]_{p,q,g}^{\mu,\nu}}{\big{(}p\,q\big{)}^{m_{2l-1}}}\] \[\times\frac{[m_{2l-1}]_{p,q,g}^{\mu,\nu}}{[2m_{2l-1}]_{p,q,g}^{\mu,\nu}}[m_{i_{2l-1}}]_{p,q,g}^{\mu,\nu}[m_{i_{2l-1}}+1]_{p,q,g}^{\mu,\nu}\delta_{m_{i_{2l-1}}+m_{i_{2l}},0}.\] (75)
### Another \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra
In this section, we determine another \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra. Some examples for particular values of \(n\) are deduced, and particular cases induced from quantum algebras are derived.
**Definition 16**.: _For \((\Delta\neq 0,1),\) the \(\mathcal{R}(p,q)\)-conformal operator is defined by:_
\[\mathcal{T}_{m}^{a\Delta}:=x^{(1-\Delta)(m+1)}\mathcal{D}_{\mathcal{R}(p^{a},q ^{a})}\,x^{\Delta(m+1)}, \tag{76}\]
where \(\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\) is the \(\mathcal{R}(p,q)\)-derivative presented by:
\[\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\big{(}\phi(z)\big{)}=\frac{p^{a}-q^{a}} {p^{a\,P}-q^{a\,Q}}\mathcal{R}(p^{a\,P},q^{a\,Q})\,\mathcal{D}_{p^{a},q^{a}} \phi(z).\]
Then, after computation, the relation (76) reduces to:

\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{\mathcal{R}(p^{a},q^{a})}\,x^{m}. \tag{77}\]
**Proposition 17**.: _The \(\mathcal{R}(p,q)\)-conformal operators (76) satisfy the product relation given by:_
\[\mathcal{T}_{m}^{a\Delta}.\mathcal{T}_{n}^{b\Delta} =\frac{\big{(}\tau_{1}^{a+b}-\tau_{2}^{a+b}\big{)}\tau_{1}^{an(1- \Delta)}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}\big{(}\tau_{1}^{b}-\tau_{2} ^{b}\big{)}\tau_{1}^{b\Delta\,m}}\,\mathcal{T}_{m+n}^{(a+b)\Delta}+f_{ \mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)\] \[-\frac{\tau_{2}^{a(\Delta(m+1)+n)}}{\tau_{1}^{b\Delta\,m}\big{(} \tau_{1}^{a}-\tau_{2}^{a}\big{)}}\,\mathcal{T}_{m+n}^{b\Delta}-\frac{\tau_{2} ^{b\Delta(n+1)}}{\tau_{1}^{an(\Delta-1)}\big{(}\tau_{1}^{b}-\tau_{2}^{b} \big{)}}\,\mathcal{T}_{m+n}^{a\Delta}, \tag{78}\]
_and the commutation relation_
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}\tau_{1}^{a+b}-\tau_{2}^{a+b}\big{)}}{\big{(}\tau_ {1}^{a}-\tau_{2}^{a}\big{)}\big{(}\tau_{1}^{b}-\tau_{2}^{b}\big{)}}\bigg{(} \frac{\tau_{1}^{an(1-\Delta)}}{\tau_{1}^{b\Delta\,m}}-\frac{\tau_{1}^{bm(1- \Delta)}}{\tau_{1}^{a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{\tau_{2}^{b\Delta(n+1)}}{\tau_{1}^{a\Delta\,n}}\frac{\big{(} \tau_{1}^{an}-\tau_{2}^{bm}\big{)}}{\big{(}\tau_{1}^{b}-\tau_{2}^{b}\big{)}} \mathcal{T}_{m+n}^{a\Delta}+h_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)\] \[-\frac{\tau_{2}^{a\Delta(m+1)}}{\tau_{1}^{b\Delta\,m}}\frac{ \big{(}\tau_{1}^{bm}-\tau_{2}^{an}\big{)}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a} \big{)}}\mathcal{T}_{m+n}^{b\Delta}, \tag{79}\]
_where \(f_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)\) and \(h_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)\) are given by the relations (81) and (82)._
Proof.: From the relation (76) and after computation, we have
\[\mathcal{T}_{m}^{a\Delta}.\mathcal{T}_{n}^{b\Delta}=[\Delta(n+1)]_{\mathcal{R} (p^{b},q^{b})}[\Delta(m+1)+n]_{\mathcal{R}(p^{a},q^{a})}x^{m+n}.\]
Using the \(\mathcal{R}(p,q)\)-numbers[17]:
\[[n]_{\mathcal{R}(p,q)}=\frac{\tau_{1}^{n}-\tau_{2}^{n}}{\tau_{1}-\tau_{2}}, \quad\tau_{1}\neq\tau_{2}, \tag{80}\]
the relations (78) and (79) follow by straightforward computation, with
\[f_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n) =\tau_{2}^{(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{\mathcal{R}(p^{b},q ^{b})}\,[n(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})}, \tag{81}\] \[f_{\mathcal{R}(p,q)}^{\Delta(b,a)}(n,m) =\tau_{2}^{(a+b)\Delta(m+n+1)}\,[-\Delta\,n]_{\mathcal{R}(p^{a},q^{a})}\,[m(1-\Delta)]_{\mathcal{R}(p^{b},q^{b})}.\]
and
\[h_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n) =f_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)-f_{\mathcal{R}(p,q)}^{ \Delta(b,a)}(n,m)\] \[=\tau_{2}^{(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}(p^ {b},q^{b})}\,[n(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})}\] \[-[-\Delta\,n]_{\mathcal{R}(p^{a},q^{a})}\,[m(1-\Delta)]_{\mathcal{ R}(p^{b},q^{b})}\bigg{)}. \tag{82}\]
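As a consistency check of the product relation (78) (an illustration only), one may identify each operator with its scalar coefficient, \(\mathcal{T}^{c\Delta}_{k}\mapsto[\Delta(k+1)]_{\mathcal{R}(p^{c},q^{c})}\), dropping the common power of \(x\) as in the first line of this proof, and use the form (80) of the \(\mathcal{R}(p,q)\)-numbers. The following SymPy sketch spot-checks (78) for sample integer values of \(a,b,m,n,\Delta\) and arbitrary rational \(\tau_{1},\tau_{2}\):

```python
import sympy as sp

t1, t2 = sp.Rational(5, 3), sp.Rational(1, 2)    # sample tau_1, tau_2 (hypothetical values)

def num(x, c):                                   # [x]_{R(p^c,q^c)} in the form (80)
    return (t1**(c*x) - t2**(c*x)) / (t1**c - t2**c)

def f(mm, nn, a, b, D):                          # f^{Delta(a,b)}(m,n) of (81)
    s = D*(mm + nn + 1)
    return t2**((a + b)*s) * num(-D*mm, b) * num(nn*(1 - D), a)

for (a, b, mm, nn, D) in [(1, 2, 1, 3, 2), (2, 3, 2, 1, 3), (1, 1, 4, 2, 2)]:
    s = D*(mm + nn + 1)                          # label of T_{m+n}
    lhs = num(D*(nn + 1), b) * num(D*(mm + 1) + nn, a)
    rhs = ((t1**(a + b) - t2**(a + b)) * t1**(a*nn*(1 - D))
           / ((t1**a - t2**a) * (t1**b - t2**b) * t1**(b*D*mm)) * num(s, a + b)
           + f(mm, nn, a, b, D)
           - t2**(a*(D*(mm + 1) + nn)) / (t1**(b*D*mm) * (t1**a - t2**a)) * num(s, b)
           - t2**(b*D*(nn + 1)) / (t1**(a*nn*(D - 1)) * (t1**b - t2**b)) * num(s, a))
    assert sp.simplify(lhs - rhs) == 0
print("relation (78) verified on the sample points")
```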
**Remark 18**.: _Putting \(a=b=1,\) in the relation (79), we obtain the commutation relation:_
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathcal{T}_{n}^{\Delta}\Big{]} =\frac{\tau_{1}^{-\Delta(n+m)}\big{(}\tau_{1}^{n}-\tau_{1}^{m} \big{)}}{\big{(}\tau_{1}-\tau_{2}\big{)}}[2]_{\mathcal{R}(p,q)}\mathcal{T}_{m +n}^{2\Delta}+h_{\mathcal{R}(p,q)}^{\Delta}(m,n)\] \[-\bigg{(}\frac{\tau_{2}^{\Delta(n+1)}\big{(}\tau_{1}^{n}-\tau_{2} ^{m}\big{)}}{\tau_{1}^{\Delta\,n}\big{(}\tau_{1}-\tau_{2}\big{)}}+\frac{\tau_ {2}^{\Delta(m+1)}\big{(}\tau_{1}^{m}-\tau_{2}^{n}\big{)}}{\tau_{1}^{\Delta\, m}\big{(}\tau_{1}-\tau_{2}\big{)}}\bigg{)}\mathcal{T}_{m+n}^{\Delta},\]
_where_
\[h_{\mathcal{R}(p,q)}^{\Delta(1,1)}(m,n) =\tau_{2}^{2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}(p,q) }\,[n(1-\Delta)]_{\mathcal{R}(p,q)}\] \[-[-\Delta\,n]_{\mathcal{R}(p,q)}\,[m(1-\Delta)]_{\mathcal{R}(p,q )}\bigg{)}.\]
Note that for \(\Delta=1\), we recover the \(\mathcal{R}(p,q)\)-deformed Witt algebra given in [17].
**Corollary 19**.: _The \(q\)-conformal operators_
\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{q^{a}}\,z^{m}.\]
_obey the commutation relation_
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}q^{a+b}-1\big{)}}{\big{(}q^{a}-1\big{)}\big{(}q^{b }-1\big{)}}\bigg{(}\frac{q^{an(1-\Delta)}}{q^{b\Delta\,m}}-\frac{q^{bm(1- \Delta)}}{q^{a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{1}{q^{a\Delta\,n}}\frac{\big{(}q^{an}-1\big{)}}{\big{(}q^{ b}-1\big{)}}\mathcal{T}_{m+n}^{a\Delta}-\frac{1}{q^{b\Delta\,m}}\frac{\big{(}q^{bm }-1\big{)}}{\big{(}q^{a}-1\big{)}}\mathcal{T}_{m+n}^{b\Delta}\] \[+h_{q}^{\Delta(a,b)}(m,n),\]
_where_
\[h_{q}^{\Delta(a,b)}(m,n)=\bigg{(}[-\Delta\,m]_{q^{b}}[n(1-\Delta)]_{q^{a}}-[- \Delta\,n]_{q^{a}}[m(1-\Delta)]_{q^{b}}\bigg{)}.\]
**Definition 20**.: _The \(n\)-bracket is defined by:_
\[\Big{[}\mathcal{T}_{m_{1}}^{a_{1}\Delta},\cdots,\mathcal{T}_{m_{n}}^{a_{n} \Delta}\Big{]}:=\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathcal{T}_{m_{i_{1} }}^{a_{i_{1}}\Delta}\cdots\mathcal{T}_{m_{i_{n}}}^{a_{i_{n}}\Delta},\]
_where \(\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\) is the Levi-Civita symbol (48)._
Our interest is focussed on the special case with the same \(a\Delta.\) Then,
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{a\Delta}\Big{]} =\epsilon_{1\cdots n}^{1\cdots n}\,\mathcal{T}_{m_{1}}^{a\Delta}\cdots \mathcal{T}_{m_{n}}^{a\Delta}.\]
Putting \(a=b\) in the relation (79), the following commutation relation holds:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{a\Delta}\Big{]} = \frac{\tau_{1}^{-a\Delta(n+m)}\big{(}\tau_{1}^{an}-\tau_{1}^{am} \big{)}}{\tau_{1}^{a}-\tau_{2}^{a}}[2]_{\mathcal{R}(p^{a},q^{a})}\mathcal{T}_{ m+n}^{2a\Delta}\] \[- \frac{1}{\big{(}\tau_{1}-\tau_{2}\big{)}}\bigg{(}\frac{\tau_{2}^ {a\Delta(n+1)}}{\tau_{1}^{a\Delta\,n}}\big{(}\tau_{1}^{an}-\tau_{2}^{am}\big{)}\] \[- \frac{\tau_{2}^{a\Delta(m+1)}}{\tau_{1}^{a\Delta\,m}}\big{(}\tau _{1}^{am}-\tau_{2}^{an}\big{)}\bigg{)}\mathcal{T}_{m+n}^{a\Delta}+h_{\mathcal{ R}(p,q)}^{\Delta(a)}(m,n),\]
where
\[h_{\mathcal{R}(p,q)}^{\Delta(a)}(m,n) =\tau_{2}^{2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}(p^{ a},q^{a})}[n(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})}\] \[-[-\Delta\,n]_{\mathcal{R}(p^{a},q^{a})}[m(1-\Delta)]_{\mathcal{ R}(p^{a},q^{a})}\bigg{)}.\]
**Proposition 21**.: _The \(\mathcal{R}(p,q)\)-conformal Witt \(n\)-algebra is given by the following commutation relation:_
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{ a\Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{n-1} }\Big{(}V_{a\Delta}^{n}[n]_{\mathcal{R}(p^{a},q^{a})}\mathcal{T}_{\bar{m}}^{n \,a\Delta}\] \[-[n-1]_{\mathcal{R}(p^{a},q^{a})}\big{(}D_{a\Delta}^{n}+N_{a \Delta}^{n}\big{)}\mathcal{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+h_{\mathcal{R}(p,q)}^{\Delta}(m_{1},\ldots,m_{n}), \tag{83}\]
_where_
\[V_{a\Delta}^{n} =\tau_{1}^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}\tau_{1}^{a}- \tau_{2}^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]_{ \mathcal{R}(p^{a},q^{a})}\] \[-[m_{k}]_{\mathcal{R}(p^{a},q^{a})}\Big{)}+\prod_{1\leq j<k\leq n }\Big{(}\tau_{2}^{a\,m_{j}}-\tau_{2}^{a\,m_{k}}\Big{)}\Big{)}, \tag{84}\]
\[D_{a\Delta}^{n} =\Big{(}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{\binom{n}{2}} \prod_{1\leq j<k\leq n}\frac{\tau_{2}^{a\Delta(m_{k}+1)}}{\tau_{1}^{a\Delta\, m_{k}}}\Big{(}[m_{k}]_{\mathcal{R}(p^{a},q^{a})}-[m_{j}]_{\mathcal{R}(p^{a},q^{a})}\] \[+\tau_{2}^{am_{k}}-\tau_{1}^{am_{j}}\Big{)}, \tag{85}\]
\[N^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}\tau^{a}_{1}-\tau^{a}_{2}\big{)}^{\binom{n} {2}}\prod_{1\leq j<k\leq n}\frac{\tau^{a\Delta(m_{j}+1)}_{2}}{\tau^{a\Delta\,m_{ j}}_{1}}\Big{(}[m_{j}]_{\mathcal{R}(p^{a},q^{a})}\] \[-[m_{k}]_{\mathcal{R}(p^{a},q^{a})}+\tau^{am_{j}}_{2}-\tau^{am_{k} }_{1}\Big{)}, \tag{86}\]
_and_
\[h^{\Delta(a)}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{n})=\tau^{an \Delta(\bar{m}+1)}_{2}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_{j}]_{\mathcal{ R}(p^{a},q^{a})}\] \[\qquad\qquad\times[m_{k}(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})}-[ -\Delta\,m_{k}]_{\mathcal{R}(p^{a},q^{a})}[m_{j}(1-\Delta)]_{\mathcal{R}(p^{a },q^{a})}\bigg{)}. \tag{87}\]
Proof.: By straightforward computation.
The results concerning the \(q\)-deformed conformal Witt \(n\)-algebra are contained in the following corollary.
**Corollary 22**.: _The \(q\)-deformed conformal Witt \(n\)-algebra is given by the following commutation relation:_
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\cdots,\mathcal{T}^{a\Delta} _{m_{n}}\Big{]} =\frac{(-1)^{n+1}}{\big{(}q^{a}-1\big{)}^{n-1}}\Big{(}V^{n}_{a \Delta}[n]_{q^{a}}\mathcal{T}^{n\,a\Delta}_{\bar{m}}\] \[-[n-1]_{q^{a}}\big{(}D^{n}_{a\Delta}+N^{n}_{a\Delta}\big{)} \mathcal{T}^{a(n-1)\Delta}_{\bar{m}}\Big{)}\] \[+h^{\Delta}_{q}(m_{1},\ldots,m_{n}), \tag{88}\]
_where_
\[V^{n}_{a\Delta}=q^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}q^{a}-1 \big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]_{q^{a}}-[m_{k}]_{ q^{a}}\Big{)}\Big{)},\]
\[D^{n}_{a\Delta}=\Big{(}\big{(}q^{a}-1\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{1}{q^{a\Delta\,m_{k}}}\Big{(}[m_{k}]_{q^{a}}-[m_{j}]_{q^{a}}+1-q^ {am_{j}}\Big{)},\]
\[N^{n}_{a\Delta}=(-1)^{n+1}\Big{(}\big{(}q^{a}-1\big{)}^{\binom{n}{2}}\prod_{1 \leq j<k\leq n}\frac{1}{q^{a\Delta\,m_{j}}}\Big{(}[m_{j}]_{q^{a}}-[m_{k}]_{q^{ a}}+1-q^{am_{k}}\Big{)},\]
_and_
\[h^{\Delta(a)}_{q}(m_{1},\ldots,m_{n}) =\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_{j}]_{q^{a}}[m_{k}(1- \Delta)]_{q^{a}}\] \[-[-\Delta\,m_{k}]_{q^{a}}[m_{j}(1-\Delta)]_{q^{a}}\bigg{)}.\]
Proof.: By taking \(\tau_{1}=q\) and \(\tau_{2}=1\) in Proposition 21.
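The specialization is easy to trace: at \(\tau_{2}=1\) the product \(\prod_{1\leq j<k\leq n}(\tau_{2}^{am_{j}}-\tau_{2}^{am_{k}})\) in (84) vanishes identically, which is why it is absent from the corollary. A small SymPy sketch for \(n=3\) (the values of \(a\), \(\Delta\) and the modes are hypothetical; \([m]_{q^{a}}\) is understood in the \(\tau_{1}=q\), \(\tau_{2}=1\) convention):

```python
import sympy as sp

q = sp.symbols('q', positive=True)
a, D = 2, 3                                  # sample a and Delta (Delta != 0, 1)
modes = (1, 2, 4)
n, mbar = len(modes), sum(modes)

def num(x, T1, T2):                          # [x] built from tau_1 = T1, tau_2 = T2
    return (T1**x - T2**x) / (T1 - T2)

def V(t1, t2):                               # V^n_{a Delta} of (84)
    prod1 = sp.Mul(*[num(modes[j], t1**a, t2**a) - num(modes[k], t1**a, t2**a)
                     for j in range(n) for k in range(j + 1, n)])
    prod2 = sp.Mul(*[t2**(a*modes[j]) - t2**(a*modes[k])
                     for j in range(n) for k in range(j + 1, n)])
    return t1**(a*(n - 1)*(1 - D)*mbar) * ((t1**a - t2**a)**sp.binomial(n, 2)*prod1 + prod2)

def V_cor22(t1):                             # V^n_{a Delta} of Corollary 22 (tau_2 = 1)
    prod1 = sp.Mul(*[num(modes[j], t1**a, 1) - num(modes[k], t1**a, 1)
                     for j in range(n) for k in range(j + 1, n)])
    return t1**(a*(n - 1)*(1 - D)*mbar) * (t1**a - 1)**sp.binomial(n, 2) * prod1

print(sp.simplify(V(q, 1) - V_cor22(q)))     # prints 0
```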
**Remark 23**.: _For \(n=3\) in the relation (83), we obtain the \(\mathcal{R}(p,q)\)-conformal Witt \(3\)-algebra:_
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta}, \mathcal{T}_{m_{3}}^{a\Delta}\Big{]} =\frac{1}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{2}}\Big{(}V_{a \Delta}^{3}[3]_{\mathcal{R}(p^{a},q^{a})}\mathcal{T}_{m_{1}+m_{2}+m_{3}}^{3\,a \Delta}\] \[-[2]_{\mathcal{R}(p^{a},q^{a})}\big{(}D_{a\Delta}^{3}+N_{a\Delta}^ {3}\big{)}\mathcal{T}_{m_{1}+m_{2}+m_{3}}^{2a\Delta}\Big{)}\] \[+h_{\mathcal{R}(p,q)}^{\Delta}(m_{1},m_{2},m_{3}),\]
_where_
\[V_{a\Delta}^{3} =\tau_{1}^{2a(1-\Delta)(m_{1}+m_{2}+m_{3})}\Big{(}\big{(}\tau_{1} ^{a}-\tau_{2}^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[m_{j}]_{ \mathcal{R}(p^{a},q^{a})}\] \[-[m_{k}]_{\mathcal{R}(p^{a},q^{a})}\Big{)}+\prod_{1\leq j<k\leq 3 }\Big{(}\tau_{2}^{a\,m_{j}}-\tau_{2}^{a\,m_{k}}\Big{)}\Big{)},\] \[D_{a\Delta}^{3} =\Big{(}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{\binom{3}{2}} \prod_{1\leq j<k\leq 3}\frac{\tau_{2}^{a\Delta(m_{k}+1)}}{\tau_{1}^{a\Delta\,m_{k}} }\Big{(}[m_{k}]_{\mathcal{R}(p^{a},q^{a})}-[m_{j}]_{\mathcal{R}(p^{a},q^{a})}\] \[+\tau_{2}^{am_{k}}-\tau_{1}^{am_{j}}\Big{)},\]
\[N_{a\Delta}^{3} =\Big{(}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\frac{\tau_{2}^{a\Delta(m_{j}+1)}}{\tau_{1}^{a\Delta\,m_{j}}}\Big{(}[m_{j}]_{\mathcal{R}(p^{a},q^{a})}-[m_{k}]_{\mathcal{R}(p^{a},q^{a})}\] \[+\tau_{2}^{am_{j}}-\tau_{1}^{am_{k}}\Big{)}.\]
_and_
\[h_{\mathcal{R}(p,q)}^{\Delta(a)}(m_{1}, m_{2},m_{3})=\tau_{2}^{3a\Delta(m_{1}+m_{2}+m_{3}+1)}\prod_{1\leq j<k \leq 3}\bigg{(}[-\Delta\,m_{j}]_{\mathcal{R}(p^{a},q^{a})}\] \[\times[m_{k}(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})}-[-\Delta\,m_{ k}]_{\mathcal{R}(p^{a},q^{a})}[m_{j}(1-\Delta)]_{\mathcal{R}(p^{a},q^{a})} \bigg{)}.\]
### Conformal Witt \(n\)-algebra and quantum algebras
In this section, the conformal Witt \(n\)-algebras corresponding to quantum algebras known in the literature are deduced.
#### 2.3.1. Conformal Witt \(n\)-algebra associated to the Biedenharn-Macfarlane algebra
The conformal Witt \(n\)-algebra induced by the Biedenharn-Macfarlane algebra [3, 24] is derived as follows. For \((\Delta\neq 0,1)\), we consider the \(q\)-conformal operator defined by:
\[\mathcal{T}_{m}^{a\Delta}:=x^{(1-\Delta)(m+1)}\mathcal{D}_{q^{a}}\,x^{\Delta(m +1)}, \tag{89}\]
where \(\mathcal{D}_{q^{a}}\) is the \(q\)-derivative:

\[\mathcal{D}_{q^{a}}\phi(x)=\frac{\phi(q^{a}\,x)-\phi(q^{-a}\,x)}{(q^{a}-q^{-a})\,x}. \tag{90}\]
From the \(q\)-number
\[[n]_{q^{a}}=\frac{q^{an}-q^{-an}}{q^{a}-q^{-a}}, \tag{91}\]
the relation (89) is reduced as:
\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{q^{a}}\,x^{m}.\]
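A minimal SymPy sketch of this reduction (an illustration; the values of \(a\), \(\Delta\), \(m\) are arbitrary), checking that \(\mathcal{D}_{q^{a}}x^{k}=[k]_{q^{a}}\,x^{k-1}\) and hence \(\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{q^{a}}\,x^{m}\):

```python
import sympy as sp

q, x = sp.symbols('q x', positive=True)

def Dq(f, a=1):                              # symmetric q-derivative (90)
    return (f.subs(x, q**a * x) - f.subs(x, q**(-a) * x)) / ((q**a - q**(-a)) * x)

def q_number(k, a=1):                        # [k]_{q^a} of (91)
    return (q**(a*k) - q**(-a*k)) / (q**a - q**(-a))

a, Delta, m = 2, 3, 4                        # sample values (Delta != 0, 1)
for k in range(1, 6):
    assert sp.simplify(Dq(x**k, a) - q_number(k, a) * x**(k - 1)) == 0

T = x**((1 - Delta)*(m + 1)) * Dq(x**(Delta*(m + 1)), a)
print(sp.simplify(T - q_number(Delta*(m + 1), a) * x**m))   # prints 0
```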
Thus, the \(q\)-conformal operators (89) satisfy the product relation:
\[\mathcal{T}_{m}^{a\Delta}\cdot\mathcal{T}_{n}^{b\Delta} =\frac{\big{(}q^{a+b}-q^{-a-b}\big{)}q^{an(1-\Delta)}}{\big{(}q^{a }-q^{-a}\big{)}\big{(}q^{b}-q^{-b}\big{)}q^{b\Delta\,m}}\,\mathcal{T}_{m+n}^{(a +b)\Delta}+f_{q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{-a(\Delta(m+1)+n)}}{q^{b\Delta\,m}\big{(}q^{a}-q^{-a} \big{)}}\,\mathcal{T}_{m+n}^{b\Delta}-\frac{q^{-b\Delta(n+1)}}{q^{an(\Delta-1 )}\big{(}q^{b}-q^{-b}\big{)}}\,\mathcal{T}_{m+n}^{a\Delta}, \tag{92}\]
with
\[f_{q}^{\Delta(a,b)}(m,n)=q^{-(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{q^{b}}\,[n(1- \Delta)]_{q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}q^{a+b}-q^{-a-b}\big{)}}{\big{(}q^{a}-q^{-a}\big{)} \big{(}q^{b}-q^{-b}\big{)}}\bigg{(}\frac{q^{an(1-\Delta)}}{q^{b\Delta\,m}}- \frac{q^{bm(1-\Delta)}}{q^{a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{-b\Delta(n+1)}}{q^{a\Delta\,n}}\frac{\big{(}q^{an}-q^{ -bm}\big{)}}{\big{(}q^{b}-q^{-b}\big{)}}\mathcal{T}_{m+n}^{a\Delta}+h_{q}^{ \Delta(a,b)}(m,n)\] \[-\frac{q^{-a\Delta(m+1)}}{q^{b\Delta\,m}}\frac{\big{(}q^{bm}-q^{ -an}\big{)}}{\big{(}q^{a}-q^{-a}\big{)}}\mathcal{T}_{m+n}^{b\Delta}, \tag{93}\]
where
\[h_{q}^{\Delta(a,b)}(m,n) =q^{-(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q^{b}}\,[n(1- \Delta)]_{q^{a}}\] \[-[-\Delta\,n]_{q^{a}}\,[m(1-\Delta)]_{q^{b}}\bigg{)}.\]
Putting \(a=b=1\) in the relation (93), we obtain the commutation relation:
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathcal{T}_{n}^{\Delta}\Big{]} =\frac{q^{-\Delta(n+m)}\big{(}q^{n}-q^{m}\big{)}}{\big{(}q-q^{-1} \big{)}}[2]_{q}\,\mathcal{T}_{m+n}^{2\Delta}+h_{q}^{\Delta}(m,n)\] \[-\bigg{(}\frac{q^{-\Delta(n+1)}\big{(}q^{n}-q^{-m}\big{)}}{q^{ \Delta\,n}\big{(}q-q^{-1}\big{)}}+\frac{q^{-\Delta(m+1)}\big{(}q^{m}-q^{-n} \big{)}}{q^{\Delta\,m}\big{(}q-q^{-1}\big{)}}\bigg{)}\mathcal{T}_{m+n}^{\Delta},\]
where
\[h_{q}^{\Delta(1,1)}(m,n)=q^{-2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q}\,[n(1- \Delta)]_{q}-[-\Delta\,n]_{q}\,[m(1-\Delta)]_{q}\bigg{)}.\]
Besides, the \(n\)-bracket is defined by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a_{1}\Delta},\cdots,\mathcal{T}_{m_{n}}^{a_{n}\Delta} \Big{]}:=\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathcal{T}_{m_{i_{1}}}^{a_{i_ {1}}\Delta}\cdots\mathcal{T}_{m_{i_{n}}}^{a_{i_{n}}\Delta}.\]
Our interest is focussed on the special case with the same \(a\Delta.\) Then,
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{a\Delta} \Big{]}=\epsilon_{1\cdots n}^{1\cdots n}\,\mathcal{T}_{m_{1}}^{a\Delta}\cdots \mathcal{T}_{m_{n}}^{a\Delta}.\]
Setting \(a=b\) in the relation (93), we obtain:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{a\Delta}\Big{]} =\frac{q^{-a\Delta(n+m)}\big{(}q^{an}-q^{am}\big{)}}{q^{a}-q^{-a} }[2]_{q^{a}}\mathcal{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}q-q^{-1}\big{)}}\bigg{(}\frac{q^{-a\Delta(n+1)}} {q^{a\Delta\,n}}\big{(}q^{an}-q^{-am}\big{)}\] \[-\frac{q^{-a\Delta(m+1)}}{q^{a\Delta\,m}}\big{(}q^{am}-q^{-an} \big{)}\bigg{)}\mathcal{T}_{m+n}^{a\Delta}+h_{q}^{\Delta(a)}(m,n),\]
where
\[h_{q}^{\Delta(a)}(m,n) =q^{-2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q^{a}}[n(1-\Delta)]_{q ^{a}}\] \[-[-\Delta\,n]_{q^{a}}[m(1-\Delta)]_{q^{a}}\bigg{)}.\]
The \(q\)-conformal Witt \(n\)-algebra is presented by :
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{ a\Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}q^{a}-q^{-a}\big{)}^{n-1}}\Big{(}V_{a \Delta}^{n}[n]_{q^{a}}\mathcal{T}_{\bar{m}}^{n\,a\Delta}\] \[-[n-1]_{q^{a}}\big{(}D_{a\Delta}^{n}+N_{a\Delta}^{n}\big{)} \mathcal{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+h_{q}^{\Delta}(m_{1},\ldots,m_{n}),\]
where
\[V_{a\Delta}^{n} =q^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]_{q^{a}}-[m_{k}]_{q^{a}} \Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)} \Big{)},\]
\[D_{a\Delta}^{n} =\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{-a\Delta(m_{k}+1)}}{q^{a\Delta\,m_{k}}}\Big{(}[m_{k}]_{q^{a}} -[m_{j}]_{q^{a}}+q^{-am_{k}}-q^{am_{j}}\Big{)},\] \[N_{a\Delta}^{n} =(-1)^{n+1}\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{n}{2}}\prod _{1\leq j<k\leq n}\frac{q^{-a\Delta(m_{j}+1)}}{q^{a\Delta\,m_{j}}}\Big{(}[m_{j }]_{q^{a}}-[m_{k}]_{q^{a}}\] \[+q^{-am_{j}}-q^{am_{k}}\Big{)},\]
and
\[h_{q}^{\Delta(a)}(m_{1},\ldots,m_{n}) =q^{-an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_ {j}]_{q^{a}}[m_{k}(1-\Delta)]_{q^{a}}\] \[-[-\Delta\,m_{k}]_{q^{a}}[m_{j}(1-\Delta)]_{q^{a}}\bigg{)}.\]
Putting \(n=3\) in the relation (88), we obtain the \(q\)-conformal Witt \(3\)-algebra:
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta },\mathcal{T}_{m_{3}}^{a\Delta}\Big{]} =\frac{1}{\big{(}q^{a}-q^{-a}\big{)}^{2}}\Big{(}V_{a\Delta}^{3}[ 3]_{q^{a}}\mathcal{T}_{m_{1}+m_{2}+m_{3}}^{3\,a\Delta}\] \[-[2]_{q^{a}}\big{(}D_{a\Delta}^{3}+N_{a\Delta}^{3}\big{)} \mathcal{T}_{m_{1}+m_{2}+m_{3}}^{2a\Delta}\Big{)}\] \[+h_{q}^{\Delta}(m_{1},m_{2},m_{3}),\]
where
\[V_{a\Delta}^{3} =q^{2a(1-\Delta)(m_{1}+m_{2}+m_{3})}\Big{(}\big{(}q^{a}-q^{-a} \big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[m_{j}]_{q^{a}}-[m_{k}]_{q^{a}} \Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}} \Big{)}\Big{)},\]
\[D_{a\Delta}^{3} =\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{-a\Delta(m_{k}+1)}}{q^{a\Delta\,m_{k}}}\Big{(}[m_{k}]_{q^{a}} -[m_{j}]_{q^{a}}+q^{-am_{k}}-q^{am_{j}}\Big{)},\]
\[N_{a\Delta}^{3} =\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{-a\Delta(m_{j}+1)}}{q^{a\Delta\,m_{j}}}\Big{(}[m_{j}]_{q^{a}} -[m_{k}]_{q^{a}}+q^{-am_{j}}-q^{am_{k}}\Big{)},\]
and
\[h_{q}^{\Delta(a)}(m_{1},m_{2},m_{3}) =q^{-3a\Delta(m_{1}+m_{2}+m_{3}+1)}\prod_{1\leq j<k\leq 3} \bigg{(}[-\Delta\,m_{j}]_{q^{a}}[m_{k}(1-\Delta)]_{q^{a}}\] \[-[-\Delta\,m_{k}]_{q^{a}}[m_{j}(1-\Delta)]_{q^{a}}\bigg{)}.\]
#### 2.3.2. Conformal Witt \(n\)-algebra corresponding to the Jagannathan-Srinivasa algebra
The conformal Witt \(n\)-algebra from the Jagannathan-Srinivasa algebra [21] is deduced as follows. We consider, for \((\Delta\neq 0,1)\), the \((p,q)\)-conformal operator defined by:
\[\mathcal{T}_{m}^{a\Delta}:=x^{(1-\Delta)(m+1)}\mathcal{D}_{p^{a},q^{a}}\,x^{ \Delta(m+1)}, \tag{94}\]
where \(\mathcal{D}_{p^{a},q^{a}}\) is the \((p,q)\)-derivative:

\[\mathcal{D}_{p^{a},q^{a}}\phi(x)=\frac{\phi(p^{a}\,x)-\phi(q^{a}\,x)}{(p^{a}-q^{a})\,x}. \tag{95}\]
Then, by using the \((p,q)\)-number
\[[n]_{p^{a},q^{a}}=\frac{p^{an}-q^{an}}{p^{a}-q^{a}}, \tag{96}\]
the relation (94) takes the form:
\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{p^{a},q^{a}}\,x^{m}. \tag{97}\]
Thus, the \((p,q)\)-conformal operators (94) obey the product relation:
\[\mathcal{T}_{m}^{a\Delta}.\mathcal{T}_{n}^{b\Delta} =\frac{\big{(}p^{a+b}-q^{a+b}\big{)}p^{an(1-\Delta)}}{\big{(}p^{a }-q^{a}\big{)}\big{(}p^{b}-q^{b}\big{)}p^{b\Delta\,m}}\,\mathcal{T}_{m+n}^{(a+b )\Delta}-\frac{q^{b\Delta(n+1)}}{p^{an(\Delta-1)}\big{(}p^{b}-q^{b}\big{)}}\, \mathcal{T}_{m+n}^{a\Delta}\] \[-\frac{q^{a(\Delta(m+1)+n)}}{p^{b\Delta\,m}\big{(}p^{a}-q^{a} \big{)}}\,\mathcal{T}_{m+n}^{b\Delta}+f_{p,q}^{\Delta(a,b)}(m,n),\]
with
\[f_{p,q}^{\Delta(a,b)}(m,n)=q^{(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{p^{b},q^{b}} \,[n(1-\Delta)]_{p^{a},q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}p^{a+b}-q^{a+b}\big{)}}{\big{(}p^{a}-q^{a}\big{)} \big{(}p^{b}-q^{b}\big{)}}\bigg{(}\frac{p^{an(1-\Delta)}}{p^{b\Delta\,m}}- \frac{p^{bm(1-\Delta)}}{p^{a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{b\Delta(n+1)}}{p^{a\Delta\,n}}\frac{\big{(}p^{an}-q^{bm }\big{)}}{\big{(}p^{b}-q^{b}\big{)}}\mathcal{T}_{m+n}^{a\Delta}-\frac{q^{a \Delta(m+1)}}{p^{b\Delta\,m}}\frac{\big{(}p^{bm}-q^{an}\big{)}}{\big{(}p^{a}- q^{a}\big{)}}\mathcal{T}_{m+n}^{b\Delta}\] \[+h_{p,q}^{\Delta(a,b)}(m,n), \tag{98}\]
where
\[h_{p,q}^{\Delta(a,b)}(m,n) =q^{(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{b},q^{b}}\,[n(1- \Delta)]_{p^{a},q^{a}}\] \[-[-\Delta\,n]_{p^{a},q^{a}}\,[m(1-\Delta)]_{p^{b},q^{b}}\bigg{)}.\]
Putting \(a=b=1\), in the relation (98), we obtain the commutation relation:
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathcal{T}_{n}^{\Delta}\Big{]} =\frac{p^{-\Delta(n+m)}\big{(}p^{n}-p^{m}\big{)}}{\big{(}p-q\big{)} }[2]_{p,q}\mathcal{T}_{m+n}^{2\Delta}+h_{p,q}^{\Delta}(m,n)\] \[-\bigg{(}\frac{q^{\Delta(n+1)}\big{(}p^{n}-q^{m}\big{)}}{p^{\Delta \,n}\big{(}p-q\big{)}}+\frac{q^{\Delta(m+1)}\big{(}p^{m}-q^{n}\big{)}}{p^{ \Delta\,m}\big{(}p-q\big{)}}\bigg{)}\mathcal{T}_{m+n}^{\Delta},\]
where
\[h_{p,q}^{\Delta(1,1)}(m,n) =q^{2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p,q}\,[n(1-\Delta)]_{p,q}\] \[-[-\Delta\,n]_{p,q}\,[m(1-\Delta)]_{p,q}\bigg{)}.\]
Moreover, the \(n\)-bracket is defined by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a_{1}\Delta},\cdots,\mathcal{T}_{m_{n}}^{a_{n}\Delta} \Big{]}:=\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathcal{T}_{m_{i_{1}}}^{a_{ i_{1}}\Delta}\cdots\mathcal{T}_{m_{i_{n}}}^{a_{i_{n}}\Delta}.\]
Our interest is focussed on the special case with the same \(a\Delta\). Then,
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{a\Delta} \Big{]}=\epsilon_{1\cdots n}^{1\cdots n}\,\mathcal{T}_{m_{1}}^{a\Delta}\cdots \mathcal{T}_{m_{n}}^{a\Delta}.\]
Putting \(a=b\) in the relation (98), the following commutation relation holds:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{a\Delta}\Big{]} =\frac{p^{-a\Delta(n+m)}\big{(}p^{an}-p^{am}\big{)}}{p^{a}-q^{a}} [2]_{p^{a},q^{a}}\mathcal{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}\tau_{1}-\tau_{2}\big{)}}\bigg{(}\frac{q^{a \Delta(n+1)}}{p^{a\Delta\,n}}\big{(}p^{an}-q^{am}\big{)}\] \[-\frac{q^{a\Delta(m+1)}}{p^{a\Delta\,m}}\big{(}p^{am}-q^{an}\big{)} \bigg{)}\mathcal{T}_{m+n}^{a\Delta}+h_{p,q}^{\Delta(a)}(m,n),\]
where
\[h_{p,q}^{\Delta(a)}(m,n) =q^{2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{a},q^{a}}[n(1-\Delta )]_{p^{a},q^{a}}\] \[-[-\Delta\,n]_{p^{a},q^{a}}[m(1-\Delta)]_{p^{a},q^{a}}\bigg{)}\]
The \((p,q)\)-conformal Witt \(n\)-algebra is presented by :
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{ a\Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}p^{a}-q^{a}\big{)}^{n-1}}\Big{(}V_{a \Delta}^{n}[n]_{p^{a},q^{a}}\mathcal{T}_{\bar{m}}^{\,n\,a\Delta}\] \[-[n-1]_{p^{a},q^{a}}\big{(}D_{a\Delta}^{n}+N_{a\Delta}^{n}\big{)} \mathcal{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+h_{p,q}^{\Delta}(m_{1},\ldots,m_{n}), \tag{99}\]
where
\[V_{a\Delta}^{n} =p^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]_{p^{a},q^{a}}-[m_{k}]_{p^{a },q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{a\,m_{j}}-q^{a\,m_{k}}\Big{)} \Big{)},\]
\[D_{a\Delta}^{n}=\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}} \prod_{1\leq j<k\leq n}\frac{q^{a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Big{(}[ m_{k}]_{p^{a},q^{a}}-[m_{j}]_{p^{a},q^{a}}+q^{am_{k}}-p^{am_{j}}\Big{)},\]
\[N^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j <k\leq n}\frac{q^{a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Big{(}[m_{j}]_{p^{a},q^ {a}}-[m_{k}]_{p^{a},q^{a}}\] \[+q^{am_{j}}-p^{am_{k}}\Big{)},\]
and
\[h^{\Delta(a)}_{p,q}(m_{1},\ldots,m_{n}) =q^{an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m _{j}]_{p^{a},q^{a}}[m_{k}(1-\Delta)]_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{a},q^{a}}[m_{j}(1-\Delta)]_{p^{a},q^{a}} \bigg{)}.\]
Putting \(n=3\) in the relation (99), we obtain the \((p,q)\)-conformal Witt \(3\)-algebra:
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\mathcal{T}^{a\Delta}_{m_{2 }},\mathcal{T}^{a\Delta}_{m_{3}}\Big{]} =\frac{1}{\big{(}p^{a}-q^{a}\big{)}^{2}}\Big{(}V^{3}_{a\Delta}[ 3]_{p^{a},q^{a}}\mathcal{T}^{3\,a\Delta}_{m_{1}+m_{2}+m_{3}}\] \[-[2]_{p^{a},q^{a}}\big{(}D^{3}_{a\Delta}+N^{3}_{a\Delta}\big{)} \mathcal{T}^{2a\Delta}_{m_{1}+m_{2}+m_{3}}\Big{)}\] \[+h^{\Delta}_{p,q}(m_{1},m_{2},m_{3}),\]
where
\[V^{3}_{a\Delta} =p^{2a(1-\Delta)(m_{1}+m_{2}+m_{3})}\Big{(}\big{(}p^{a}-q^{a} \big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[m_{j}]_{p^{a},q^{a}}-[m_{k}]_{p ^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}q^{a\,m_{j}}-q^{a\,m_{k}}\Big{)} \Big{)},\]
\[D^{3}_{a\Delta} =\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Big{(}[m_{k}]_{p^{a},q ^{a}}-[m_{j}]_{p^{a},q^{a}}+q^{am_{k}}-p^{am_{j}}\Big{)},\]
\[N^{3}_{a\Delta} =\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Big{(}[m_{j}]_{p^{a},q ^{a}}-[m_{k}]_{p^{a},q^{a}}+q^{am_{j}}-p^{am_{k}}\Big{)},\]
and
\[h^{\Delta(a)}_{p,q}(m_{1},m_{2},m_{3}) =q^{3a\Delta(m_{1}+m_{2}+m_{3}+1)}\prod_{1\leq j<k\leq 3} \bigg{(}[-\Delta\,m_{j}]_{p^{a},q^{a}}[m_{k}(1-\Delta)]_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{a},q^{a}}[m_{j}(1-\Delta)]_{p^{a},q^{a}} \bigg{)}.\]
#### 2.3.3. Conformal Witt \(n\)-algebra associated to the Chakrabarti and Jagannathan algebra
The conformal Witt \(n\)-algebra from the Chakrabarti and Jagannathan algebra [9] is deduced as follows. We consider, for \((\Delta\neq 0,1)\), the \((p^{-1},q)\)-conformal operator defined by:
\[\mathcal{T}_{m}^{a\Delta}:=x^{(1-\Delta)(m+1)}\mathcal{D}_{p^{-a},q^{a}}\,x^{ \Delta(m+1)}, \tag{100}\]
where \(\mathcal{D}_{p^{-a},q^{a}}\) is the \((p^{-1},q)\)-derivative :
\[\mathcal{D}_{p^{-a},q^{a}}\phi(x)=\frac{\phi(p^{-a}\,x)-\phi(q^{a}\,x)}{(p^{-a }-q^{a})x}. \tag{101}\]
Then, by using the \((p^{-1},q)\)-number
\[[n]_{p^{-a},q^{a}}=\frac{p^{-an}-q^{an}}{p^{-a}-q^{a}}, \tag{102}\]
the relation (100) takes the form:
\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{p^{-a},q^{a}}\,x^{m}. \tag{103}\]
Thus, the \((p^{-1},q)\)-conformal operators (100) obey the product relation:
\[\mathcal{T}_{m}^{a\Delta}.\mathcal{T}_{n}^{b\Delta} =\frac{\big{(}p^{-a-b}-q^{a+b}\big{)}p^{-an(1-\Delta)}}{\big{(}p^ {-a}-q^{a}\big{)}\big{(}p^{-b}-q^{b}\big{)}p^{-b\Delta\,m}}\,\mathcal{T}_{m+n }^{(a+b)\Delta}+f_{p^{-1},q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{a(\Delta(m+1)+n)}}{p^{-b\Delta\,m}\big{(}p^{-a}-q^{a} \big{)}}\,\mathcal{T}_{m+n}^{b\Delta}-\frac{q^{b\Delta(n+1)}}{p^{-an(\Delta -1)}\big{(}p^{-b}-q^{b}\big{)}}\,\mathcal{T}_{m+n}^{a\Delta}, \tag{104}\]
with
\[f_{p^{-1},q}^{\Delta(a,b)}(m,n)=q^{(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{p^{-b}, q^{b}}\,[n(1-\Delta)]_{p^{-a},q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}p^{-a-b}-q^{a+b}\big{)}}{\big{(}p^{-a}-q^{a}\big{)} \big{(}p^{-b}-q^{b}\big{)}}\bigg{(}\frac{p^{-an(1-\Delta)}}{p^{-b\Delta\,m}} -\frac{p^{-bm(1-\Delta)}}{p^{-a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{b\Delta(n+1)}}{p^{-a\Delta\,n}}\frac{\big{(}p^{-an}-q^ {bm}\big{)}}{\big{(}p^{-b}-q^{b}\big{)}}\mathcal{T}_{m+n}^{a\Delta}+h_{p^{-1},q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{a\Delta(m+1)}}{p^{-b\Delta\,m}}\frac{\big{(}p^{-bm}-q^ {an}\big{)}}{\big{(}p^{-a}-q^{a}\big{)}}\mathcal{T}_{m+n}^{b\Delta}, \tag{105}\]
where
\[h_{p^{-1},q}^{\Delta(a,b)}(m,n) =q^{(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{-b},q^{b}}\,[n(1- \Delta)]_{p^{-a},q^{a}}\] \[-[-\Delta\,n]_{p^{-a},q^{a}}\,[m(1-\Delta)]_{p^{-b},q^{b}}\bigg{)}.\]
Putting \(a=b=1\), in the relation (105), we obtain the commutation relation:
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathcal{T}_{n}^{\Delta}\Big{]} =\frac{p^{\Delta(n+m)}\big{(}p^{-n}-p^{m}\big{)}}{\big{(}p^{-1}-q \big{)}}[2]_{p^{-1},q}\mathcal{T}_{m+n}^{2\Delta}+h_{p^{-1},q}^{\Delta}(m,n)\] \[-\bigg{(}\frac{q^{\Delta(n+1)}\big{(}p^{-n}-q^{m}\big{)}}{p^{- \Delta\,n}\big{(}p^{-1}-q\big{)}}+\frac{q^{\Delta(m+1)}\big{(}p^{-m}-q^{n}\big{)} }{p^{-\Delta\,m}\big{(}p^{-1}-q\big{)}}\bigg{)}\mathcal{T}_{m+n}^{\Delta},\]
where
\[h_{p^{-1},q}^{\Delta(1,1)}(m,n) =q^{2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{-1},q}\,[n(1-\Delta)] _{p^{-1},q}\] \[-[-\Delta\,n]_{p^{-1},q}\,[m(1-\Delta)]_{p^{-1},q}\bigg{)}.\]
Moreover, the \(n\)-bracket is defined by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a_{1}\Delta},\cdots,\mathcal{T}_{m_{n}}^{a_{n} \Delta}\Big{]}:=\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathcal{T}_{m_{i_{ 1}}}^{a_{i_{1}}\Delta}\cdots\mathcal{T}_{m_{i_{n}}}^{a_{i_{n}}\Delta}.\]
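For concreteness, the antisymmetrized sum implied by the Levi-Civita symbol above can be expanded mechanically; the following sketch does so with placeholder non-commutative symbols \(T_{1},T_{2},T_{3}\) standing for the operators (the labels \(m_{i}\) and \(a\Delta\) are suppressed), as an illustration of the combinatorics only.

```python
# Sketch (placeholder non-commutative symbols only) of the totally antisymmetrized
# n-bracket [T_{m_1},...,T_{m_n}] = eps^{i_1...i_n}_{1...n} T_{m_{i_1}} ... T_{m_{i_n}},
# expanded here for n = 2 and n = 3.
from itertools import permutations
import sympy as sp

def parity(perm):
    # sign of the permutation, i.e. the Levi-Civita symbol eps^{perm}_{1...n}
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def n_bracket(ops):
    result = sp.S.Zero
    for perm in permutations(range(len(ops))):
        term = sp.S.One
        for i in perm:
            term *= ops[i]  # operator ordering is preserved (non-commutative symbols)
        result += parity(perm) * term
    return sp.expand(result)

T1, T2, T3 = sp.symbols('T1 T2 T3', commutative=False)
print(n_bracket([T1, T2]))      # T1*T2 - T2*T1, the ordinary commutator
print(n_bracket([T1, T2, T3]))  # the six signed orderings of the 3-bracket
```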
Our interest is focused on the special case where all the operators carry the same label \(a\Delta\). Then,
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{a\Delta}\Big{]}=\epsilon_{1\cdots n}^{1\cdots n}\,\mathcal{T}_{m_{1}}^{a\Delta}\cdots\mathcal{T}_{m_{n}}^{a\Delta}.\]
Putting \(a=b\) in the relation (105), the following commutation relation holds:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{a\Delta}\Big{]} =\frac{p^{a\Delta(n+m)}\big{(}p^{-an}-p^{-am}\big{)}}{p^{-a}-q^{ a}}[2]_{p^{-a},q^{a}}\mathcal{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}p^{-1}-q\big{)}}\bigg{(}\frac{q^{a\Delta(n+1)}}{ p^{-a\Delta\,n}}\big{(}p^{-an}-q^{am}\big{)}\] \[-\frac{q^{a\Delta(m+1)}}{p^{-a\Delta\,m}}\big{(}p^{-am}-q^{an} \big{)}\bigg{)}\mathcal{T}_{m+n}^{a\Delta}+h_{p^{-1},q}^{\Delta(a)}(m,n),\]
where
\[h^{\Delta(a)}_{p^{-1},q}(m,n) =q^{2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{-a},q^{a}}[n(1-\Delta)]_{p^{-a},q^{a}}\] \[-[-\Delta\,n]_{p^{-a},q^{a}}[m(1-\Delta)]_{p^{-a},q^{a}}\bigg{)}.\]
The \((p^{-1},q)\)-conformal Witt \(n\)-algebra is presented by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{ a\Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}p^{-a}-q^{a}\big{)}^{n-1}}\Big{(}V_{a \Delta}^{n}[n]_{p^{-a},q^{a}}\mathcal{T}_{\bar{m}}^{\,n\,a\Delta}\] \[-[n-1]_{p^{-a},q^{a}}\big{(}D_{a\Delta}^{n}+N_{a\Delta}^{n}\big{)} \mathcal{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+h_{p^{-1},q}^{\Delta}(m_{1},\ldots,m_{n}), \tag{106}\]
where
\[V^{n}_{a\Delta} =p^{-a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]_{p^{-a},q^{a}}-[m_{k}]_{p^{-a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{a\,m_{j}}-q^{a\,m_{k}}\Big{)} \Big{)},\]
\[D^{n}_{a\Delta} =\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{a\Delta(m_{k}+1)}}{p^{-a\Delta\,m_{k}}}\Big{(}[m_{k}]_{p^{-a},q^{a}}-[m_{j}]_{p^{-a},q^{a}}\] \[+q^{am_{k}}-p^{-am_{j}}\Big{)},\]
\[N^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{n}{2}}\prod _{1\leq j<k\leq n}\frac{q^{a\Delta(m_{j}+1)}}{p^{-a\Delta\,m_{j}}}\Big{(}[m_{ j}]_{p^{-a},q^{a}}-[m_{k}]_{p^{-a},q^{a}}\] \[+q^{am_{j}}-p^{-am_{k}}\Big{)},\]
and
\[h^{\Delta(a)}_{p^{-1},q}(m_{1},\ldots,m_{n}) =q^{an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\, m_{j}]_{p^{-a},q^{a}}[m_{k}(1-\Delta)]_{p^{-a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{-a},q^{a}}[m_{j}(1-\Delta)]_{p^{-a},q^{a}} \bigg{)}.\]
Putting \(n=3\) in the relation (106), we obtain the \((p^{-1},q)\)-conformal Witt \(3\)-algebra:
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\mathcal{T}^{a\Delta}_{m_{2} },\mathcal{T}^{a\Delta}_{m_{3}}\Big{]} =\frac{1}{\big{(}p^{-a}-q^{a}\big{)}^{2}}\Big{(}V^{3}_{a\Delta}[ 3]_{p^{-a},q^{a}}\mathcal{T}^{3\,a\Delta}_{m_{1}+m_{2}+m_{3}}\] \[-[2]_{p^{-a},q^{a}}\big{(}D^{3}_{a\Delta}+N^{3}_{a\Delta}\big{)} \mathcal{T}^{2a\Delta}_{m_{1}+m_{2}+m_{3}}\Big{)}\] \[+h^{\Delta}_{p^{-1},q}(m_{1},m_{2},m_{3}),\]
where
\[V^{3}_{a\Delta} =p^{-2a(1-\Delta)(m_{1}+m_{2}+m_{3})}\Big{(}\big{(}p^{-a}-q^{a} \big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[m_{j}]_{p^{-a},q^{a}}-[m_{k}]_{p^{- a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}q^{a\,m_{j}}-q^{a\,m_{k}}\Big{)} \Big{)},\]
\[D^{3}_{a\Delta} =\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{a\Delta(m_{k}+1)}}{p^{-a\Delta\,m_{k}}}\Big{(}[m_{k}]_{p^{-a},q^{a}}-[m_{j}]_{p^{-a},q^{a}}+q^{am_{k}}-p^{-am_{j}}\Big{)},\]
\[N_{a\Delta}^{3}=\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{a\Delta(m_{j}+1)}}{p^{-a\Delta\,m_{j}}}\Big{(}[m_{j}]_{p^{-a},q^{a }}-[m_{k}]_{p^{-a},q^{a}}+q^{am_{j}}-p^{-am_{k}}\Big{)},\]
and
\[h_{p^{-1},q}^{\Delta(a)}(m_{1},m_{2},m_{3}) =q^{3a\Delta(m_{1}+m_{2}+m_{3}+1)}\prod_{1\leq j<k\leq 3}\bigg{(}[- \Delta\,m_{j}]_{p^{-a},q^{a}}[m_{k}(1-\Delta)]_{p^{-a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{-a},q^{a}}[m_{j}(1-\Delta)]_{p^{-a},q^{a}} \bigg{)}.\]
#### 2.3.4. Conformal Witt \(n\)-algebra induced by the Hounkonnou-Ngompe generalization of \(q\)-Quesne algebra
The conformal Witt \(n\)-algebra associated to the Hounkonnou-Ngompe generalization of \(q\)-Quesne algebra [18] is deduced as follows: We consider for \((\Delta\neq 0,1)\), the generalized \(q\)-Quesne conformal operator defined by:
\[\mathcal{T}_{m}^{a\Delta}:=x^{(1-\Delta)(m+1)}\mathcal{D}_{p^{a},q^{a}}^{Q}\, x^{\Delta(m+1)}, \tag{107}\]
where \(\mathcal{D}_{p^{a},q^{a}}^{Q}\) is the generalized \(q\)-Quesne derivative:
\[\mathcal{D}_{p^{a},q^{a}}^{Q}\phi(x)=\frac{\phi(p^{a}\,x)-\phi(q^{-a}\,x)}{(q^ {a}-p^{-a})x}. \tag{108}\]
Then, by using the generalized \(q\)-Quesne number
\[[n]_{p^{a},q^{a}}^{Q}=\frac{p^{an}-q^{-an}}{-p^{-a}+q^{a}}, \tag{109}\]
the relation (107) takes the form:
\[\mathcal{T}_{m}^{a\Delta}=[\Delta(m+1)]_{p^{a},q^{a}}^{Q}\,x^{m}. \tag{110}\]
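As a quick illustration (again not part of the derivation), one can verify symbolically that the generalized \(q\)-Quesne derivative (108) acting in (107) reproduces (110); \(a\) and \(m\) below are arbitrary sample values.

```python
# Sketch of a consistency check of (107)-(110) for the generalized q-Quesne case.
import sympy as sp

x, p, q, Delta = sp.symbols('x p q Delta', positive=True)
a, m = 1, 3  # hypothetical sample values for the check

def quesne_number(N):
    # the generalized q-Quesne number (109)
    return (p**(a*N) - q**(-a*N)) / (q**a - p**(-a))

def D_quesne(f):
    # the generalized q-Quesne derivative (108)
    num = f.subs(x, p**a*x) - f.subs(x, q**(-a)*x)
    return sp.expand_power_base(num / ((q**a - p**(-a)) * x), force=True)

lhs = sp.simplify(x**((1 - Delta)*(m + 1)) * D_quesne(x**(Delta*(m + 1))))
rhs = quesne_number(Delta*(m + 1)) * x**m
print(sp.simplify(lhs - rhs))  # expected output: 0
```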
Thus, the generalized \(q\)-Quesne conformal operators (107) obey the product relation:
\[\mathcal{T}_{m}^{a\Delta}\cdot\mathcal{T}_{n}^{b\Delta} =\frac{\big{(}p^{a+b}-q^{-a-b}\big{)}p^{an(1-\Delta)}}{\big{(}-p^ {-a}+q^{a}\big{)}\big{(}-p^{-b}+q^{b}\big{)}p^{b\Delta\,m}}\,\mathcal{T}_{m+n} ^{(a+b)\Delta}+f_{p,q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{-a(\Delta(m+1)+n)}}{p^{b\Delta\,m}\big{(}-p^{-a}+q^{a} \big{)}}\,\mathcal{T}_{m+n}^{b\Delta}-\frac{q^{-b\Delta(n+1)}}{p^{an(\Delta-1 )}\big{(}-p^{-b}+q^{b}\big{)}}\,\mathcal{T}_{m+n}^{a\Delta},\]
with
\[f_{p,q}^{\Delta(a,b)}(m,n)=q^{-(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{p^{b},q^{b} }^{Q}\,[n(1-\Delta)]_{p^{a},q^{a}}^{Q},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}p^{a+b}-q^{-a-b}\big{)}}{\big{(}-p^{-a}+q^{a}\big{)} \big{(}-p^{-b}+q^{b}\big{)}}\bigg{(}\frac{p^{an(1-\Delta)}}{p^{b\Delta\,m}}- \frac{p^{bm(1-\Delta)}}{p^{a\Delta\,n}}\bigg{)}\mathcal{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{-b\Delta(n+1)}}{p^{a\Delta\,n}}\frac{\big{(}p^{an}-q^{-bm }\big{)}}{\big{(}-p^{-b}+q^{b}\big{)}}\mathcal{T}_{m+n}^{a\Delta}+h_{p,q}^{ \Delta(a,b)}(m,n)\] \[-\frac{q^{-a\Delta(m+1)}}{p^{b\Delta\,m}}\frac{\big{(}p^{bm}-q^{- an}\big{)}}{\big{(}-p^{-a}+q^{a}\big{)}}\mathcal{T}_{m+n}^{b\Delta}, \tag{111}\]
where
\[h_{p,q}^{\Delta(a,b)}(m,n) =q^{-(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{b},q^{b}}^{Q}\, [n(1-\Delta)]_{p^{a},q^{a}}^{Q}\] \[-[-\Delta\,n]_{p^{a},q^{a}}^{Q}\,[m(1-\Delta)]_{p^{b},q^{b}}^{Q} \bigg{)}.\]
Putting \(a=b=1\), in the relation (111), we obtain the commutation relation:
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathcal{T}_{n}^{\Delta}\Big{]} =\frac{p^{-\Delta(n+m)}\big{(}p^{n}-p^{m}\big{)}}{\big{(}-p^{-1}+ q\big{)}}[2]_{p,q}^{Q}\mathcal{T}_{m+n}^{2\Delta}+h_{p,q}^{\Delta}(m,n)\] \[\quad-\bigg{(}\frac{q^{-\Delta(n+1)}\big{(}p^{n}-q^{-m}\big{)}}{p ^{\Delta\,n}\big{(}-p^{-1}+q\big{)}}+\frac{q^{-\Delta(m+1)}\big{(}p^{m}-q^{-n }\big{)}}{p^{\Delta\,m}\big{(}-p^{-1}+q\big{)}}\bigg{)}\mathcal{T}_{m+n}^{ \Delta},\]
where
\[h_{p,q}^{\Delta(1,1)}(m,n)=q^{-2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p,q}^{Q} \,[n(1-\Delta)]_{p,q}^{Q}\,-[-\Delta\,n]_{p,q}^{Q}\,[m(1-\Delta)]_{p,q}^{Q} \bigg{)}.\]
Besides, the \(n\)-bracket is defined by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a_{1}\Delta},\cdots,\mathcal{T}_{m_{n}}^{a_{n} \Delta}\Big{]}:=\epsilon_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathcal{T}_{m_{i_{1 }}}^{a_{i_{1}}\Delta}\cdots\mathcal{T}_{m_{i_{n}}}^{a_{i_{n}}\Delta}.\]
Our interest is focused on the special case where all the operators carry the same label \(a\Delta\). Then,
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathcal{T}_{m_{n}}^{a\Delta} \Big{]}=\epsilon_{1\cdots n}^{1\cdots n}\,\mathcal{T}_{m_{1}}^{a\Delta}\cdots \mathcal{T}_{m_{n}}^{a\Delta}.\]
Putting \(a=b\) in the relation (111), the following commutation relation holds:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathcal{T}_{n}^{a\Delta}\Big{]} =\frac{p^{-a\Delta(n+m)}\big{(}p^{an}-p^{am}\big{)}}{-p^{-a}+q^{a} }[2]_{p^{a},q^{a}}^{Q}\mathcal{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}q-p^{-1}\big{)}}\bigg{(}\frac{q^{-a\Delta(n+1)}} {p^{a\Delta\,n}}\big{(}p^{an}-q^{-am}\big{)}\] \[-\frac{q^{-a\Delta(m+1)}}{p^{a\Delta\,m}}\big{(}p^{am}-q^{-an} \big{)}\bigg{)}\mathcal{T}_{m+n}^{a\Delta}+h_{p,q}^{\Delta(a)}(m,n),\]
where
\[h^{\Delta(a)}_{p,q}(m,n) =q^{-2a\Delta(m+n+1)}\biggl{(}[-\Delta\,m]^{Q}_{p^{a},q^{a}}[n(1- \Delta)]^{Q}_{p^{a},q^{a}}\] \[-[-\Delta\,n]^{Q}_{p^{a},q^{a}}[m(1-\Delta)]^{Q}_{p^{a},q^{a}} \biggr{)}\]
The generalized \(q\)-Quesne conformal Witt \(n\)-algebra is given by:
\[\Bigl{[}\mathcal{T}^{a\Delta}_{m_{1}},\cdots,\mathcal{T}^{a\Delta} _{m_{n}}\Bigr{]} =\frac{(-1)^{n+1}}{\big{(}-p^{-a}+q^{a}\big{)}^{n-1}}\Bigl{(}V^{n }_{a\Delta}[n]^{Q}_{p^{a},q^{a}}\mathcal{T}^{n\,a\Delta}_{\bar{m}}\] \[-[n-1]^{Q}_{p^{a},q^{a}}\bigl{(}D^{n}_{a\Delta}+N^{n}_{a\Delta} \bigr{)}\mathcal{T}^{a(n-1)\Delta}_{\bar{m}}\Bigr{)}\] \[+h^{\Delta}_{p,q}(m_{1},\dots,m_{n}), \tag{112}\]
where
\[V^{n}_{a\Delta} =p^{a(n-1)(1-\Delta)\bar{m}}\Bigl{(}\big{(}-p^{-a}+q^{a}\big{)}^ {\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}]^{Q}_{p^{a},q^{a}}-[m_{k}]^ {Q}_{p^{a},q^{a}}\Bigr{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)} \Bigr{)},\]
\[D^{n}_{a\Delta} = \Bigl{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j< k\leq n}\frac{q^{-a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Bigl{(}[m_{k}]^{Q}_{p^{a},q ^{a}}-[m_{j}]^{Q}_{p^{a},q^{a}}\] \[+ q^{-am_{k}}-p^{am_{j}}\Bigr{)},\]
\[N^{n}_{a\Delta} =(-1)^{n+1}\biggl{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{n}{2}} \prod_{1\leq j<k\leq n}\frac{q^{-a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Bigl{(} [m_{j}]^{Q}_{p^{a},q^{a}}-[m_{k}]^{Q}_{p^{a},q^{a}}\] \[+q^{-am_{j}}-p^{am_{k}}\Bigr{)},\]
and
\[h^{\Delta(a)}_{p,q}(m_{1},\dots,m_{n}) =q^{-an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\, m_{j}]^{Q}_{p^{a},q^{a}}[m_{k}(1-\Delta)]^{Q}_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]^{Q}_{p^{a},q^{a}}[m_{j}(1-\Delta)]^{Q}_{p^{a},q ^{a}}\bigg{)}.\]
Putting \(n=3\) in the relation (112), we obtain the generalized \(q\)-Quesne conformal Witt 3-algebra:
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\mathcal{T}^{a\Delta}_{m_{2}}, \mathcal{T}^{a\Delta}_{m_{3}}\Big{]} =\frac{1}{\big{(}-p^{-a}+q^{a}\big{)}^{2}}\Big{(}V^{3}_{a\Delta}[3 ]^{Q}_{p^{a},q^{a}}\mathcal{T}^{3\,a\Delta}_{m_{1}+m_{2}+m_{3}}\] \[-[2]^{Q}_{p^{a},q^{a}}\big{(}D^{3}_{a\Delta}+N^{3}_{a\Delta}) \mathcal{T}^{2a\Delta}_{m_{1}+m_{2}+m_{3}}\Big{)}\] \[+h^{\Delta}_{p,q}(m_{1},m_{2},m_{3}),\]
where
\[V^{3}_{a\Delta} =p^{2a(1-\Delta)(m_{1}+m_{2}+m_{3})}\Big{(}\big{(}-p^{-a}+q^{a} \big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[m_{j}]^{Q}_{p^{a},q^{a}}-[m_{ k}]^{Q}_{p^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)} \Big{)},\]
\[D^{3}_{a\Delta} = \Big{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k \leq 3}\frac{q^{-a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Big{(}[m_{k}]^{Q}_{p^ {a},q^{a}}-[m_{j}]^{Q}_{p^{a},q^{a}}\] \[+ q^{-am_{k}}-p^{am_{j}}\Big{)},\]
\[N^{3}_{a\Delta} = \Big{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\frac{q^{-a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Big{(}[m_{j}]^{Q}_{p^{a},q^{a}}-[m_{k}]^{Q}_{p^{a},q^{a}}\] \[+ q^{-am_{j}}-p^{am_{k}}\Big{)},\]
and
\[h^{\Delta(a)}_{p,q}(m_{1},m_{2},m_{3}) =q^{-3a\Delta(m_{1}+m_{2}+m_{3}+1)}\prod_{1\leq j<k\leq 3} \bigg{(}[-\Delta\,m_{j}]^{Q}_{p^{a},q^{a}}[m_{k}(1-\Delta)]^{Q}_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]^{Q}_{p^{a},q^{a}}[m_{j}(1-\Delta)]^{Q}_{p^{a},q ^{a}}\bigg{)}.\]
### A toy model for \(\mathcal{R}(p,q)\)-conformal Virasoro constraints
In this section, we study a toy model for the \(\mathcal{R}(p,q)\)-Virasoro constraints with conformal dimension \(\Delta.\) We consider the generating function with infinitely many parameters introduced as follows [28]:
\[Z^{toy}(t)=\int\ x^{\gamma}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{ s}\right)\,dx,\]
which encodes many different integrals. We assume that the following property holds for the \(\mathcal{R}(p,q)\)-derivative
\[\int_{\mathbb{R}}x^{(m+1)(1-\Delta)}\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}f(x )d\,x=0,\]
where
\[h(p^{a},q^{a})=\frac{p^{a}-q^{a}}{p^{aP}-q^{aQ}}\,\mathcal{R}\big{(}p^{aP},q^{aQ}\big{)}.\]
For \(f(x)=x^{\Delta(m+1)+\gamma}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s} \right),\) we have
\[\int_{-\infty}^{+\infty}x^{(m+1)(1-\Delta)}\mathcal{D}_{\mathcal{R}(p^{a},q^{ a})}\left(x^{\Delta(m+1)+\gamma}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s} \right)\right)d\,x=0.\]
We consider the following relation:
\[\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)=\sum_{n=0}^{\infty}B _{n}(t_{1},\cdots,t_{n})\frac{x^{n}}{n!},\]
where \(B_{n}\) are the Bell polynomials. Then, from the \(\mathcal{R}(p,q)\)-Leibniz rule, we get:
\[\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\bigg{(}x^{\Delta(m+1)+ \gamma}\exp\bigg{(} \sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\bigg{)}\bigg{)}= \mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\big{(}x^{\Delta(m+1)+\gamma}\big{)}\] \[\times\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}(x\tau_{1})^{ as}\right)+\frac{h(p^{a},q^{a})}{(\tau_{1}^{a}-\tau_{2}^{a})}\] \[\times\tau_{2}^{a(\Delta(m+1)+\gamma)}\sum_{k=1}^{\infty}\frac{B _{k}(t_{1}^{a},\cdots,t_{k}^{a})}{k!}\] \[\times x^{\Delta(k+m)+\gamma}\exp\bigg{(}\sum_{s=0}^{\infty} \frac{t_{s}}{s!}x^{s}\bigg{)},\]
where \(t_{k}^{a}=(\tau_{1}^{a\,k}-\tau_{2}^{a\,k})t_{k}.\) After computation, we have:
\[x^{(m+1)(1-\Delta)}\mathcal{D}_{\mathcal{R}(p^{a},q^{a})} \left(x^{\Delta(m+1)+\gamma}\,\exp\left(\sum_{s=0}^{\infty}\frac{ t_{s}}{s!}x^{s}\right)\right)\] \[=\frac{x^{m+\gamma}\left[\Delta(m+1)+\gamma\right]_{\mathcal{R}( p^{a},q^{a})}}{\tau_{1}^{-a\,m}}\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s }\right)\] \[+\frac{h(p^{a},q^{a})\tau_{2}^{a(\Delta(m+1)+\gamma)}}{(\tau_{1} ^{a}-\tau_{2}^{a})}\sum_{k=1}^{\infty}\frac{B_{k}(t_{1}^{a},\cdots,t_{k}^{a})} {k!}\] \[\times x^{m+\gamma+k}\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!} x^{s}\right),\]
Then, from the constraints on the partition function,
\[\mathcal{T}_{m}^{a\Delta}\,Z^{(toy)}(t)=0,\quad m\geq 0,\]
we obtain:
**Lemma 24**.: _The \(\mathcal{R}(p,q)\)-conformal differential operator is determined by:_
\[\mathcal{T}_{m}^{a\Delta} =[\Delta(m+1)+\gamma]_{\mathcal{R}(p^{a},q^{a})}\,m!\,\tau_{1}^{a\,m }\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(\Delta(m+1)+\gamma)}}{\tau_{1}^{ a}-\tau_{2}^{a}}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\ldots,t_{k}^{ a})\frac{\partial}{\partial t_{k+m}}. \tag{113}\]
Note that taking \(\Delta=1\), we recover the result given in [25].
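The derivation above relies on the Bell-polynomial expansion of the exponential of the times. The following sketch checks this expansion symbolically; it is an illustration only, the sum is taken from \(s=1\) (the standard generating function of the complete Bell polynomials, the \(s=0\) term contributing only an overall constant factor), and the truncation order is an arbitrary choice.

```python
# Sketch: symbolic check of exp(sum_{s>=1} t_s x^s / s!) = sum_n B_n(t_1,...,t_n) x^n / n!
# up to a finite truncation order N (chosen arbitrarily for the check).
import sympy as sp

x = sp.symbols('x')
N = 5
t = sp.symbols('t1:%d' % (N + 1))  # t1, ..., tN

gen = sp.exp(sum(t[s - 1] * x**s / sp.factorial(s) for s in range(1, N + 1)))
poly = sp.expand(sp.series(gen, x, 0, N + 1).removeO())

for n in range(1, N + 1):
    coeff = sp.expand(poly.coeff(x, n) * sp.factorial(n))
    # complete Bell polynomial B_n(t_1,...,t_n) as a sum of incomplete Bell polynomials
    Bn = sum(sp.bell(n, k, t[:n - k + 1]) for k in range(1, n + 1))
    print(n, sp.simplify(coeff - sp.expand(Bn)))  # expected: 0 at every order
```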
Furthermore, there exists another way to determine the \(\mathcal{R}(p,q)\)-conformal differential operator. The result is given in the following lemma.
**Lemma 25**.: _The \(\mathcal{R}(p,q)\)-conformal differential operator is given by:_
\[\mathcal{T}_{m}^{a\Delta} =[x\partial_{x}+\Delta(m+1)-m]_{\mathcal{R}(p^{a},q^{a})}\,m!\, \tau_{1}^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(\Delta(m+1)+\gamma)}}{\tau_{1}^ {a}-\tau_{2}^{a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t _{m+1+\Delta(k-1)}}. \tag{114}\]
Proof.: By using the definition of the \(\mathcal{R}(p,q)\)-derivative
\[\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}=\frac{1}{x}[x\partial_{x}]_{\mathcal{R }(p^{a},q^{a})},\]
the conformal operators (76) may be rewritten in the form:
\[\mathcal{T}_{m}^{a\Delta}=[x\partial_{x}+\Delta(m+1)-m]_{\mathcal{R}(p^{a},q^ {a})}\,x^{m}. \tag{115}\]
Then, using step by step the same technique as in the construction of the operator (113), the result follows.
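As a small illustration of this rewriting (specialized, for the purpose of the check only, to the \((p,q)\)-case with \([N]_{p,q}=(p^{N}-q^{N})/(p-q)\) and \(\mathcal{D}_{p,q}f(x)=(f(px)-f(qx))/((p-q)x)\)), one can verify on monomials that the two forms of the conformal operator agree; the values of \(m\) and \(k\) are arbitrary sample choices.

```python
# Sketch: on a monomial x^k, the defining form x^{(1-Delta)(m+1)} D_{p,q} x^{Delta(m+1)} (.)
# and the rewritten form [x d/dx + Delta(m+1) - m]_{p,q} x^m (.) coincide.
import sympy as sp

x, p, q, Delta = sp.symbols('x p q Delta', positive=True)
m, k = 2, 3  # hypothetical sample values of the mode index and the test power

def pq_number(N):
    return (p**N - q**N) / (p - q)

def D_pq(f):
    num = f.subs(x, p*x) - f.subs(x, q*x)
    return sp.expand_power_base(num / ((p - q) * x), force=True)

# defining form acting on x^k
lhs = sp.simplify(x**((1 - Delta)*(m + 1)) * D_pq(x**(Delta*(m + 1) + k)))
# rewritten form: multiply by x^m, then the eigenvalue of [x d/dx + Delta(m+1) - m]_{p,q}
# on x^{m+k} is [(m+k) + Delta(m+1) - m]_{p,q}
rhs = pq_number(m + k + Delta*(m + 1) - m) * x**(m + k)
print(sp.simplify(lhs - rhs))  # expected output: 0
```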
**Corollary 26**.: _The \(q\)-conformal differential operator is given by:_
\[\mathcal{T}_{m}^{a\Delta} = [x\partial_{x}+\Delta(m+1)-m]_{q^{a}}\,m!\,q^{a\,m}\,\frac{ \partial}{\partial t_{m}}\] \[+ \frac{h(q^{a})}{q^{a}-1}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1) )!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+1+ \Delta(k-1)}}.\]
Proof.: The result is obtained by taking \(\tau_{1}=q\) and \(\tau_{2}=1\).
**Remark 27**.: _For \(\Delta=0\) and \(\Delta=1,\) we obtain, respectively, the following \(\mathcal{R}(p,q)\)-differential operators:_
\[\mathcal{T}_{m} =[x\partial_{x}-m]_{\mathcal{R}(p^{a},q^{a})}\,m!\,\tau_{1}^{a\,m }\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a\,\gamma}}{\tau_{1}^{a}-\tau_{2}^ {a}}\sum_{k=1}^{\infty}\frac{(m+1)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac {\partial}{\partial t_{m+1}} \tag{116}\]
_and_
\[\mathcal{T}^{a}_{m} =[x\partial_{x}+1]_{\mathcal{R}(p^{a},q^{a})}\,m!\,\tau_{1}^{a\,m} \,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(m+1+\gamma)}}{\tau_{1}^{a}-\tau_ {2}^{a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a}) \frac{\partial}{\partial t_{m+k}}. \tag{117}\]
**Remark 28**.: _The conformal differential operators induced by quantum algebras in the literature are deduced as follows (a classical-limit check of the corresponding deformed numbers is sketched after this list):_
1. _The_ \(q\)_-conformal differential operators are given by:_ \[\mathcal{T}^{a\Delta}_{m} = [\Delta(m+1)+\gamma]_{q^{a}}\,m!\,q^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+ \frac{q^{-a(\Delta(m+1)+\gamma)}}{q^{a}-q^{-a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+k}} \tag{118}\] _and_ \[\mathcal{T}^{a\Delta}_{m} =[x\partial_{x}+\Delta(m+1)-m]_{q^{a}}\,m!\,\tau_{1}^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(q^{a})\frac{q^{-a(\Delta(m+1)+\gamma)}}{q^{a}-q^{-a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+1+\Delta(k-1)}}. \tag{119}\]
2. _The_ \((p,q)\)_-conformal differential operators are determined as follows:_ \[\mathcal{T}^{a\Delta}_{m} = [\Delta(m+1)+\gamma]_{p^{a},q^{a}}\,m!\,p^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+ \frac{q^{a(\Delta(m+1)+\gamma)}}{p^{a}-q^{a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+k}} \tag{120}\] _and_ \[\mathcal{T}^{a\Delta}_{m} =[x\partial_{x}+\Delta(m+1)-m]_{p^{a},q^{a}}\,m!\,p^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{q^{a(\Delta(m+1)+\gamma)}}{p^{a}-q^{a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+1+\Delta(k-1)}}. \tag{121}\]
3. _The_ \((p^{-1},q)\)_-conformal differential operators are presented by:_ \[\mathcal{T}^{a\Delta}_{m} = [\Delta(m+1)+\gamma]_{p^{-a},q^{a}}\,m!\,p^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+ \frac{q^{a(\Delta(m+1)+\gamma)}}{p^{-a}-q^{a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+k}} \tag{122}\] _and_ \[\mathcal{T}^{a\Delta}_{m} =[x\partial_{x}+\Delta(m+1)-m]_{p^{-a},q^{a}}\,m!\,p^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{-a},q^{a})\frac{q^{a(\Delta(m+1)+\gamma)}}{p^{-a}-q^{a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+1+\Delta(k-1)}}. \tag{123}\]
4. _The generalized_ \(q\)_-Quesne conformal differential operators are derived as:_ \[\mathcal{T}^{a\Delta}_{m} = [\Delta(m+1)+\gamma]^{Q}_{p^{a},q^{a}}\,m!\,p^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+ \frac{q^{-a(\Delta(m+1)+\gamma)}}{p^{a}-q^{-a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+k}} \tag{124}\] _and_ \[\mathcal{T}^{a\Delta}_{m} =[x\partial_{x}+\Delta(m+1)-m]^{Q}_{p^{a},q^{a}}\,m!\,p^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{q^{-a(\Delta(m+1)+\gamma)}}{p^{a}-q^{-a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+1+\Delta(k-1)}}. \tag{125}\]
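As announced above, here is a small classical-limit check (illustrative only) that the four deformed numbers entering these operators all reduce to the ordinary integer \(N\) when the deformation parameters tend to \(1\); the value of \(N\) is an arbitrary sample choice.

```python
# Sketch: classical limits of the deformed numbers used in Remark 28 (N is a sample value).
import sympy as sp

p, q = sp.symbols('p q', positive=True)
N = 7  # arbitrary sample value

q_sym  = (q**N - q**(-N)) / (q - q**(-1))   # symmetric q-number
pq_num = (p**N - q**N) / (p - q)            # (p,q)-number
pinvq  = (p**(-N) - q**N) / (p**(-1) - q)   # (p^{-1},q)-number
quesne = (p**N - q**(-N)) / (q - p**(-1))   # generalized q-Quesne number

for name, expr in [('q', q_sym), ('(p,q)', pq_num),
                   ('(p^-1,q)', pinvq), ('Quesne', quesne)]:
    val = sp.limit(expr.subs(p, 1), q, 1)
    print(name, val)  # each case should print 7
```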
## 3. \({\mathcal{R}}(p,q)\)-super Virasoro \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\)
This section is devoted to the characterization of the super Virasoro algebra with conformal dimension \((\Delta\neq 0,1)\). Related particular cases are deduced. Using step by step the procedure presented to construct the \({\mathcal{R}}(p,q)\)-conformal Virasoro \(2n\)-algebra (49), we deduce the \({\mathcal{R}}(p,q)\)-conformal super Virasoro \(2n\)-algebra. It is generated by the bosonic operators \({\mathcal{L}}^{\Delta}_{m}\) of parity \(0\) and the fermionic operators \({\mathcal{G}}^{\Delta}_{m}\) of parity \(1\), obeying the following commutation relations:
\[\left[{\mathcal{L}}^{\Delta}_{m_{1}},\ldots,{\mathcal{L}}^{\Delta}_{m_{2n}} \right]_{{\mathcal{R}}(p,q)}=f^{\Delta}_{{\mathcal{R}}(p,q)}(m_{1},\ldots,m_{2n }){\mathcal{L}}^{\Delta}_{m}+\tilde{C}^{\Delta}_{{\mathcal{R}}(p,q)}(m_{1}, \ldots,m_{2n}), \tag{126}\]
\[\left[{\mathcal{L}}^{\Delta}_{m_{1}},\cdots,{\mathcal{G}}^{\Delta}_{m_{2n}}\right]_{{\mathcal{R}}(p,q)}=g^{\Delta}_{{\mathcal{R}}(p,q)}(m_{1},\cdots,m_{2n})+{\mathcal{C}}S^{\Delta}_{{\mathcal{R}}(p,q)}(m_{1},\cdots,m_{2n}), \tag{127}\]
where \(f^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\cdots,m_{2n})\) and \(C^{\Delta}_{\mathcal{R}(p,q)}(m_{1},\cdots,m_{2n})\) are given by the relations (59), (60),
\[g^{\Delta}_{\mathcal{R}(p,q)}(m_{1},m_{2},\cdots m_{2n}) =\frac{\big{(}q-p\big{)}^{\binom{2n-1}{2}}}{\big{(}\tau_{1}\tau_{ 2}\big{)}^{-(n-1)\tilde{m}+1}}\bigg{(}\frac{[-2\tilde{m}-1]_{\mathcal{R}(p,q)} }{2[\tilde{m}-1]_{\mathcal{R}(p,q)}}\bigg{)}\] \[\times\prod_{1\leq i<j\leq 2n-1}\big{(}[m_{i}]_{\mathcal{R}(p,q)}-[m _{j}]_{\mathcal{R}(p,q)}\big{)}\] \[\times\prod_{i=1}^{2n-1}\big{(}[m_{i}]_{\mathcal{R}(p,q)}-[m_{2n} +1]_{\mathcal{R}(p,q)}\big{)}\mathcal{G}^{\Delta}_{\tilde{m}},\]
\[\mathcal{C}S^{\Delta}_{\mathcal{R}(p,q)}(m_{1},m_{2},\cdots m_{2n}) =\sum_{k=1}^{2n-1}\frac{(-1)^{k+1}c(p,q)(\tau_{1}\tau_{2})^{-m_{k} }}{6\times 2^{n-1}(n-1)!}\frac{[m_{k}]_{\mathcal{R}(p,q)}}{[2m_{k}]_{\mathcal{R} (p,q)}}\] \[\times[m_{k}+1]_{\mathcal{R}(p,q)}[m_{k}]_{\mathcal{R}(p,q)}[m_{k }-1]_{\mathcal{R}(p,q)}\delta_{m_{k}+m_{2n}+1,0}\] \[\times\epsilon^{i_{1}\cdots i_{2n-2}}_{j_{1}\cdots j_{2n-2}}\prod _{s=1}^{n-1}\frac{(\tau_{1}\tau_{2})^{-i_{2s-1}}[i_{2s-1}]_{\mathcal{R}(p,q)} }{[2\,i_{2s-1}]_{\mathcal{R}(p,q)}}\] \[\times[i_{2s-1}+1]_{\mathcal{R}(p,q)}\,[i_{2s-1}]_{\mathcal{R}(p, q)}\,[i_{2s-1}-1]_{\mathcal{R}(p,q)}\delta_{i_{2s-1}+i_{2s},0},\]
with \(\{j_{1},\cdots,j_{2n-2}\}=\{1,\cdots,\hat{k},\cdots,2n-1\}\); all other anti-commutators vanish.
### Another \(\mathcal{R}(p,q)\)-super Witt \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\)
In this section, we construct another super Witt \(n\)-algebra with conformal dimension \((\Delta\neq 0,1)\) induced by the \(\mathcal{R}(p,q)\)-deformed quantum algebra [13].
**Definition 29**.: _For \((\Delta\neq 0,1),\) the \(\mathcal{R}(p,q)\)-super conformal operators are defined by:_
\[\mathcal{T}^{a\Delta}_{m} :=x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{128}\] \[\mathbb{T}^{a\Delta}_{m} :=\theta\,x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{129}\]
_where \(\bar{\Delta}\) is given in the relation (5)._
The conformal super operators (128) and (129) can be rewritten as:
\[\mathcal{T}^{a\Delta}_{m} =[\Delta(m+1)]_{\mathcal{R}(p^{a},q^{a})}\,x^{m} \tag{130}\] \[\mathbb{T}^{a\Delta}_{m} =\theta\,[\Delta(m+1)]_{\mathcal{R}(p^{a},q^{a})}\,x^{m}. \tag{131}\]
**Proposition 30**.: _The conformal super operators (128) and (129) satisfy the following product relation:_
\[\mathcal{T}_{m}^{a\Delta}\!\cdot\!\mathbb{T}_{n}^{b\Delta} =\frac{\big{(}\tau_{1}^{a+b}-\tau_{2}^{a+b}\big{)}\tau_{1}^{a(n(1- \Delta)+1)}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}\big{(}\tau_{1}^{b}-\tau_{2 }^{b}\big{)}\tau_{1}^{b\Delta\,m}}\,\mathbb{T}_{m+n}^{(a+b)\Delta}+\mathbb{F}_ {\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)\] \[-\frac{\tau_{2}^{b\Delta(n+1)}}{\tau_{1}^{a(n(\Delta-1)-1)}\big{(} \tau_{1}^{b}-\tau_{2}^{b}\big{)}}\,\mathbb{T}_{m+n}^{a\Delta}-\frac{\tau_{2}^{ a(\Delta(m+1)+n+1)}}{\tau_{1}^{b\Delta\,m}\big{(}\tau_{1}^{a}-\tau_{2}^{a} \big{)}}\,\mathbb{T}_{m+n}^{b\Delta}, \tag{132}\]
_with_
\[\mathbb{F}_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n)=\tau_{2}^{(a+b)\Delta(m+n+1) }\,[n(1-\Delta)+1]_{\mathcal{R}(p^{a},q^{a})}\,[-\Delta\,m]_{\mathcal{R}(p^{ b},q^{b})}\]
_and the commutation relation_
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}\tau_{1}^{a+b}-\tau_{2}^{a+b}\big{)}}{\big{(}\tau_ {1}^{a}-\tau_{2}^{a}\big{)}\big{(}\tau_{1}^{b}-\tau_{2}^{b}\big{)}}\bigg{(} \frac{\tau_{1}^{a(n(1-\Delta)+1)}}{\tau_{1}^{b\Delta\,m}}-\frac{\tau_{1}^{b(m (1-\Delta)+1)}}{\tau_{1}^{a\Delta\,n}}\bigg{)}\mathbb{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{\tau_{2}^{b\Delta(n+1)}}{\tau_{1}^{a\Delta\,n}}\frac{ \big{(}\tau_{1}^{a(n+1)}-\tau_{2}^{b(m+1)}\big{)}}{\big{(}\tau_{1}^{b}-\tau_{ 2}^{b}\big{)}}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{\mathcal{R}(p,q)}^{\Delta (a,b)}(m,n)\] \[-\frac{\tau_{2}^{a\Delta(m+1)}}{\tau_{1}^{b\Delta\,m}}\frac{ \big{(}\tau_{1}^{b(m+1)}-\tau_{2}^{a(n+1)}\big{)}}{\big{(}\tau_{1}^{a}-\tau_{ 2}^{a}\big{)}}\mathbb{T}_{m+n}^{b\Delta}, \tag{133}\]
_where_
\[\mathbb{H}_{\mathcal{R}(p,q)}^{\Delta(a,b)}(m,n) =\tau_{2}^{(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}( p^{b},q^{b})}\,[n(1-\Delta)+1]_{\mathcal{R}(p^{a},q^{a})}\] \[-[-\Delta\,n]_{\mathcal{R}(p^{a},q^{a})}\,[m(1-\Delta)+1]_{ \mathcal{R}(p^{b},q^{b})}\bigg{)}\]
Proof.: By using the relations (130) and (131), and after computation, we get
\[\mathcal{T}_{m}^{a\Delta}\!\cdot\!\mathbb{T}_{n}^{b\Delta}=[\Delta(n+1)]_{\mathcal{R}(p^{b},q^{b})}[\Delta(m+1)+n+1]_{\mathcal{R}(p^{a},q^{a})}\,x^{m+n}.\]
From the relation (80), the result follows.
**Remark 31**.: _Setting \(a=b=1,\) in the relation (133), we obtain the following commutation relation:_
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathbb{T}_{n}^{\Delta}\Big{]} =\frac{\tau_{1}^{-\Delta(n+m)}\big{(}\tau_{1}^{n+1}-\tau_{1}^{m+1 }\big{)}}{\big{(}\tau_{1}-\tau_{2}\big{)}}[2]_{\mathcal{R}(p,q)}\mathbb{T}_{m +n}^{2\Delta}+\mathbb{H}_{\mathcal{R}(p,q)}^{\Delta(1,1)}(m,n)\] \[-\bigg{(}\frac{\tau_{2}^{\Delta(n+1)}\big{(}\tau_{1}^{n+1}-\tau_{ 2}^{m+1}\big{)}}{\tau_{1}^{\Delta\,n}\big{(}\tau_{1}-\tau_{2}\big{)}}+\frac{ \tau_{2}^{\Delta(m+1)}\big{(}\tau_{1}^{m+1}-\tau_{2}^{n+1}\big{)}}{\tau_{1}^{ \Delta\,m}\big{(}\tau_{1}-\tau_{2}\big{)}}\bigg{)}\mathbb{T}_{m+n}^{\Delta},\]
_where_
\[\mathbb{H}_{\mathcal{R}(p,q)}^{\Delta(1,1)}(m,n) =\tau_{2}^{2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}(p,q)} \,[n(1-\Delta)+1]_{\mathcal{R}(p,q)}\] \[-[-\Delta\,n]_{\mathcal{R}(p,q)}\,[m(1-\Delta)+1]_{\mathcal{R}(p,q )}\bigg{)}.\]
**Corollary 32**.: _The \(q\)-conformal super operators:_
\[\mathcal{T}_{m}^{a\Delta} =[\Delta(m+1)]_{q^{a}}\,z^{m}\] \[\mathbb{T}_{m}^{a\Delta} =\theta\,[\Delta(m+1)]_{q^{a}}\,z^{m}\]
_satisfy the commutation relation_
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}q^{a+b}-1\big{)}}{\big{(}q^{a}-1\big{)}\big{(}q^{b} -1\big{)}}\bigg{(}\frac{q^{a(n(1-\Delta)+1)}}{q^{b\Delta\,m}}-\frac{q^{b(m(1- \Delta)+1)}}{q^{a\Delta\,n}}\bigg{)}\mathbb{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{1}{q^{a\Delta\,n}}\frac{\big{(}q^{a(n+1)}-1\big{)}}{ \big{(}q^{b}-1\big{)}}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{q}^{\Delta(a,b)} (m,n)\] \[-\frac{1}{q^{b\Delta\,m}}\frac{\big{(}q^{b(m+1)}-1\big{)}}{\big{(} q^{a}-1\big{)}}\mathbb{T}_{m+n}^{b\Delta},\]
_where_
\[\mathbb{H}_{q}^{\Delta(a,b)}(m,n)=\bigg{(}[-\Delta\,m]_{q^{b}}\,[n(1-\Delta)+ 1]_{q^{a}}-[-\Delta\,n]_{q^{a}}\,[m(1-\Delta)+1]_{q^{b}}\bigg{)}.\]
From the super multibracket of order \(n\), we have:
**Definition 33**.: _The \(\mathcal{R}(p,q)\)-deformed super \(n\)-bracket is defined as follows:_
\[\big{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta },\cdots,\mathbb{T}_{m_{n}}^{a\Delta}\big{]} :=\sum_{j=0}^{n-1}(-1)^{n-1+j}\epsilon_{12\cdots n-1}^{i_{1}\dots i _{n-1}}\mathcal{T}_{m_{i_{1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{j}}}^{a\Delta}\] \[\times\mathbb{T}_{m_{n}}^{a\Delta}\mathcal{T}_{m_{i_{j+1}}}^{a \Delta}\cdots\mathcal{T}_{m_{i_{n-1}}}^{a\Delta}.\]
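The structure of this super \(n\)-bracket, a fermionic operator inserted in every position of every ordering of the bosonic ones with the indicated signs, can likewise be expanded mechanically; the sketch below does so for \(n=2\) and \(n=3\) with placeholder non-commutative symbols (it models only the index combinatorics, not the Grassmann nature of \(\mathbb{T}\)).

```python
# Sketch: expansion of the R(p,q)-deformed super n-bracket of Definition 33,
# with non-commutative placeholders T1, T2 (bosonic) and S (the single fermionic entry).
from itertools import permutations
import sympy as sp

def parity(perm):
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def super_bracket(bosonic, fermionic):
    n = len(bosonic) + 1
    result = sp.S.Zero
    for perm in permutations(range(n - 1)):   # orderings (i_1,...,i_{n-1}) of the bosonic slots
        for j in range(n):                    # position at which the fermionic operator is inserted
            ordered = [bosonic[i] for i in perm]
            ops = ordered[:j] + [fermionic] + ordered[j:]
            term = sp.S.One
            for op in ops:
                term *= op
            result += (-1)**(n - 1 + j) * parity(perm) * term
    return sp.expand(result)

T1, T2, S = sp.symbols('T1 T2 S', commutative=False)
print(super_bracket([T1], S))      # reproduces T1*S - S*T1 for n = 2
print(super_bracket([T1, T2], S))  # the signed sum defining the super 3-bracket
```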
By using the relation (133) with \(a=b\), we get:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{a\Delta}\Big{]} =\frac{\tau_{1}^{-a\Delta(n+m)}\big{(}\tau_{1}^{a(n+1)}-\tau_{1}^ {a(m+1)}\big{)}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}}[2]_{\mathcal{R}(p^{ a},q^{a})}\mathbb{T}_{m+n}^{2\Delta}\] \[+\mathbb{H}_{\mathcal{R}(p,q)}^{\Delta(a)}(m,n)-\bigg{(}\frac{ \tau_{2}^{a\Delta(n+1)}\big{(}\tau_{1}^{a(n+1)}-\tau_{2}^{a(m+1)}\big{)}}{\tau _{1}^{a\Delta\,n}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}}\] \[+\frac{\tau_{2}^{a\Delta(m+1)}\big{(}\tau_{1}^{a(m+1)}-\tau_{2}^ {a(n+1)}\big{)}}{\tau_{1}^{a\Delta\,m}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}} \bigg{)}\mathbb{T}_{m+n}^{a\Delta},\]
where
\[\mathbb{H}^{\Delta(a)}_{\mathcal{R}(p,q)}(m,n) =\tau_{2}^{2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{\mathcal{R}(p^{a}, q^{a})}\,[n(1-\Delta)+1]_{\mathcal{R}(p^{a},q^{a})}\] \[-[-\Delta\,n]_{\mathcal{R}(p^{a},q^{a})}\,[m(1-\Delta)+1]_{ \mathcal{R}(p^{a},q^{a})}\bigg{)}.\]
**Proposition 34**.: _The \(\mathcal{R}(p,q)\)-conformal super Witt \(n\)-algebra is generated by the relation (83) and the following commutation relation:_
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\cdots,\mathbb{T}^{a\Delta}_ {m_{n}}\Big{]} =\frac{(-1)^{n+1}}{\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{n-1} }\Big{(}\mathbb{V}^{n}_{a\Delta}[n]_{\mathcal{R}(p^{a},q^{a})}\mathbb{T}^{an \Delta}_{\bar{m}}\] \[-[n-1]_{\mathcal{R}(p^{a},q^{a})}\big{(}\mathbb{D}^{n}_{a\Delta} +\mathbb{N}^{n}_{a\Delta}\big{)}\mathbb{T}^{a(n-1)\Delta}_{\bar{m}}\Big{)}\] \[+\mathbb{H}^{\Delta(a)}_{\mathcal{R}(p,q)}\big{(}m_{1},\cdots,m_ {n}\big{)},\]
_where_
\[\mathbb{V}^{n}_{a\Delta} =\tau_{1}^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}\tau_{1}^{a}- \tau_{2}^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}+1]_{ \mathcal{R}(p^{a},q^{a})}\] \[-[m_{k}+1]_{\mathcal{R}(p^{a},q^{a})}\Big{)}+\prod_{1\leq j<k\leq n }\Big{(}\tau_{2}^{a(m_{j}+1)}-\tau_{2}^{a(m_{k}+1)}\Big{)}\Big{)},\]
\[\mathbb{D}^{n}_{a\Delta} =\Big{(}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{\binom{n}{2}} \prod_{1\leq j<k\leq n}\frac{\tau_{2}^{a\Delta(m_{k}+1)}}{\tau_{1}^{a\Delta\, m_{k}}}\Big{(}[m_{k}+1]_{\mathcal{R}(p^{a},q^{a})}\] \[-[m_{j}+1]_{\mathcal{R}(p^{a},q^{a})}+\tau_{2}^{a(m_{k}+1)}-\tau_ {1}^{a(m_{j}+1)}\Big{)},\]
\[\mathbb{N}^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}\tau_{1}^{a}-\tau_{2}^{a}\big{)}^{\binom {n}{2}}\prod_{1\leq j<k\leq n}\frac{\tau_{2}^{a\Delta(m_{j}+1)}}{\tau_{1}^{a \Delta\,m_{j}}}\Big{(}[m_{j}+1]_{\mathcal{R}(p^{a},q^{a})}\] \[-[m_{k}+1]_{\mathcal{R}(p^{a},q^{a})}+\tau_{2}^{a(m_{j}+1)}-\tau_ {1}^{a(m_{k}+1)}\Big{)},\]
_and_
\[\mathbb{H}^{\Delta(a)}_{\mathcal{R}(p,q)}(m_{1},\ldots,m_{n}) =\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_{j}]_{\mathcal{R}(p^ {a},q^{a})}[m_{k}(1-\Delta)+1]_{\mathcal{R}(p^{a},q^{a})}\] \[-[-\Delta\,m_{k}]_{\mathcal{R}(p^{a},q^{a})}[m_{j}(1-\Delta)+1]_{ \mathcal{R}(p^{a},q^{a})}\bigg{)}\tau_{2}^{an\Delta(\bar{m}+1)}.\]
### Conformal super Witt \(n\)-algebra and quantum algebras
In this section, the conformal super Witt \(n\)-algebras corresponding to the quantum algebras existing in the literature are derived.
#### 3.2.1. Conformal super Witt \(n\)-algebra associated to the Biedenharn-Macfarlane algebra
The conformal super Witt \(n\)-algebra induced by the Biedenharn-Macfarlane algebra [3, 24] is derived as follows: for \((\Delta\neq 0,1)\), the \(q\)-conformal super operators are defined by:
\[\mathcal{T}_{m}^{a\Delta} := x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)} \tag{134}\] \[\mathbb{T}_{m}^{a\Delta} := \theta x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{135}\]
where \(\mathcal{D}_{q^{a}}\) is the \(q\)-derivative (90). From the \(q\)-number (91), the relations (134) and (135) reduce to:
\[\mathcal{T}_{m}^{a\Delta} = [\Delta(m+1)]_{q^{a}}\,x^{m}\] \[\mathbb{T}_{m}^{a\Delta} = \theta[\Delta(m+1)]_{q^{a}}\,x^{m}.\]
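The explicit forms of (90) and (91) are not repeated here; assuming the symmetric Biedenharn-Macfarlane convention \([N]_{q}=(q^{N}-q^{-N})/(q-q^{-1})\) and \(\mathcal{D}_{q}f(x)=(f(qx)-f(q^{-1}x))/((q-q^{-1})x)\), which is the convention the denominators in the formulas below indicate, the bosonic part of this reduction can be checked as in the following sketch (illustration only).

```python
# Sketch: check that x^{(1-Delta)(m+1)} D_q x^{Delta(m+1)} = [Delta(m+1)]_q x^m
# under the assumed symmetric q-convention (a and m are arbitrary sample values).
import sympy as sp

x, q, Delta = sp.symbols('x q Delta', positive=True)
a, m = 1, 2  # hypothetical sample values

def q_number(N):
    return (q**(a*N) - q**(-a*N)) / (q**a - q**(-a))

def D_q(f):
    num = f.subs(x, q**a*x) - f.subs(x, q**(-a)*x)
    return sp.expand_power_base(num / ((q**a - q**(-a)) * x), force=True)

lhs = sp.simplify(x**((1 - Delta)*(m + 1)) * D_q(x**(Delta*(m + 1))))
rhs = q_number(Delta*(m + 1)) * x**m
print(sp.simplify(lhs - rhs))  # expected output: 0
```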
Thus, the \(q\)-conformal super operators (134) and (135) satisfy the product relation:
\[\mathcal{T}_{m}^{a\Delta}.\mathbb{T}_{n}^{b\Delta} =\frac{\big{(}q^{a+b}-q^{-a-b}\big{)}q^{a(n(1-\Delta)+1)}}{\big{(} q^{a}-q^{-a}\big{)}\big{(}q^{b}-q^{-b}\big{)}q^{b\Delta\,m}}\,\mathbb{T}_{m+n}^{( a+b)\Delta}-\frac{q^{-b\Delta(n+1)}}{q^{a(n(\Delta-1)-1)}\big{(}q^{b}-q^{-b} \big{)}}\,\mathbb{T}_{m+n}^{a\Delta}\] \[-\frac{q^{-a(\Delta(m+1)+n+1)}}{q^{b\Delta\,m}\big{(}q^{a}-q^{-a} \big{)}}\,\mathbb{T}_{m+n}^{b\Delta}+\mathbb{F}_{q}^{\Delta(a,b)}(m,n),\]
with
\[\mathbb{F}_{q}^{\Delta(a,b)}(m,n)=q^{-(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{q^{b }}\,[n(1-\Delta)+1]_{q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}q^{a+b}-q^{-a-b}\big{)}}{\big{(}q^{a}-q^{-a}\big{)} \big{(}q^{b}-q^{-b}\big{)}}\bigg{(}\frac{q^{a(n(1-\Delta)+1)}}{q^{b\Delta\,m}} -\frac{q^{bm(1-\Delta)}}{q^{a\Delta\,n}}\bigg{)}\mathbb{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{-b\Delta(n+1)}}{q^{a\Delta\,n}}\,\frac{\big{(}q^{a(n+1) }-q^{-b(m+1)}\big{)}}{\big{(}q^{b}-q^{-b}\big{)}}\mathbb{T}_{m+n}^{a\Delta}+ \mathbb{H}_{q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{-a\Delta(m+1)}}{q^{b\Delta\,m}}\frac{\big{(}q^{b(m+1)}- q^{-a(n+1)}\big{)}}{\big{(}q^{a}-q^{-a}\big{)}}\mathbb{T}_{m+n}^{b\Delta}, \tag{136}\]
where
\[\mathbb{H}_{q}^{\Delta(a,b)}(m,n) =q^{-(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q^{b}}\,[n(1-\Delta )+1]_{q^{a}}\] \[-[-\Delta\,n]_{q^{a}}\,[m(1-\Delta)+1]_{q^{b}}\bigg{)}.\]
Putting \(a=b=1\) in the relation (136), we obtain the commutation relation:
\[\Big{[}\mathcal{T}_{m}^{\Delta},\mathbb{T}_{n}^{\Delta}\Big{]} =\frac{q^{-\Delta(n+m)}\big{(}q^{n+1}-q^{m+1}\big{)}}{\big{(}q-q^{- 1}\big{)}}[2]_{q}\mathbb{T}_{m+n}^{2\Delta}+\mathbb{H}_{q}^{\Delta}(m,n)\] \[-\bigg{(}\frac{q^{-\Delta(n+1)}\big{(}q^{n+1}-q^{-m-1}\big{)}}{q^{ \Delta\,n}\big{(}q-q^{-1}\big{)}}+\frac{q^{-\Delta(m+1)}\big{(}q^{m+1}-q^{-n-1 }\big{)}}{q^{\Delta\,m}\big{(}q-q^{-1}\big{)}}\bigg{)}\mathbb{T}_{m+n}^{\Delta},\]
where
\[\mathbb{H}_{q}^{\Delta(1,1)}(m,n) =q^{-2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q}[n(1-\Delta)+1]_{q}\] \[-[-\Delta\,n]_{q}[m(1-\Delta)+1]_{q}\bigg{)}.\]
Besides, the super \(n\)-bracket is defined by:
\[\big{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a\Delta}\big{]} :=\sum_{j=0}^{n-1}(-1)^{n-1+j}\epsilon_{12\dots n-1}^{i_{1}\dots i_{n-1}}\mathcal{T}_{m_{i_{1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{j}}}^{a\Delta}\] \[\times\mathbb{T}_{m_{n}}^{a\Delta}\mathcal{T}_{m_{i_{j+1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{n-1}}}^{a\Delta}.\]
Setting \(a=b\) in the relation (136), we obtain:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{a\Delta}\Big{]} =\frac{q^{-a\Delta(n+m)}\big{(}q^{a(n+1)}-q^{a(m+1)}\big{)}}{q^{ a}-q^{-a}}[2]_{q^{a}}\mathbb{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}q-q^{-1}\big{)}}\bigg{(}\frac{q^{-a\Delta(n+1)}} {q^{a\Delta\,n}}\big{(}q^{a(n+1)}-q^{-a(m+1)}\big{)}\] \[-\frac{q^{-a\Delta(m+1)}}{q^{a\Delta\,m}}\big{(}q^{a(m+1)}-q^{-a( n+1)}\big{)}\bigg{)}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{q}^{\Delta(a)}(m,n),\]
where
\[\mathbb{H}_{q}^{\Delta(a)}(m,n) =q^{-2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{q^{a}}[n(1-\Delta)+1]_ {q^{a}}\] \[-[-\Delta\,n]_{q^{a}}[m(1-\Delta)+1]_{q^{a}}\bigg{)}.\]
The \(q\)-conformal super Witt \(n\)-algebra is determined by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a \Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}q^{a}-q^{-a}\big{)}^{n-1}}\Big{(} \mathbb{V}_{a\Delta}^{n}[n]_{q^{a}}\mathbb{T}_{\tilde{m}}^{n\,a\Delta}\] \[-[n-1]_{q^{a}}\big{(}\mathbb{D}_{a\Delta}^{n}+\mathbb{N}_{a\Delta }^{n}\big{)}\mathbb{T}_{\tilde{m}}^{a(n-1)\Delta}\Big{)}\] \[+\mathbb{H}_{q}^{\Delta}(m_{1},\dots,m_{n}),\]
where
\[\mathbb{V}^{n}_{a\Delta} =q^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}+1]_{q^{a}}-[m_{k}+1]_{q^{a}} \Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{-a(m_{j}+1)}-q^{-a(m_{k}+1)} \Big{)}\Big{)},\]
\[\mathbb{D}^{n}_{a\Delta} =\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{-a\Delta(m_{k}+1)}}{q^{a\Delta\,m_{k}}}\Big{(}[m_{k}+1]_{q^{a} }-[m_{j}+1]_{q^{a}}\] \[+q^{-a(m_{k}+1)}-q^{a(m_{j}+1)}\Big{)},\]
\[\mathbb{N}^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}q^{a}-q^{-a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\frac{q^{-a\Delta(m_{j}+1)}}{q^{a\Delta\,m_{j}}}\Big{(}[m_{j}+1]_{q^{a}}-[m_{k}+1]_{q^{a}}\] \[+q^{-a(m_{j}+1)}-q^{a(m_{k}+1)}\Big{)},\]
and
\[\mathbb{H}^{\Delta(a)}_{q}(m_{1},\ldots,m_{n}) =q^{-an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta \,m_{j}]_{q^{a}}[m_{k}(1-\Delta)+1]_{q^{a}}\] \[-[-\Delta\,m_{k}]_{q^{a}}[m_{j}(1-\Delta)+1]_{q^{a}}\bigg{)}.\]
#### 3.2.2. Conformal super Witt \(n\)-algebra corresponding to the Jagannathan-Srinivasa algebra
The conformal super Witt \(n\)-algebra from the Jagannathan-Srinivasa algebra [21] is derived as follows: we consider, for \((\Delta\neq 0,1)\), the \((p,q)\)-conformal super operators defined by:
\[\mathcal{T}^{a\Delta}_{m} := x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)} \tag{137}\] \[\mathbb{T}^{a\Delta}_{m} := \theta\,x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{138}\]
where \(\mathcal{D}_{p^{a},q^{a}}\) is the \((p,q)\)-derivative (95). By using the \((p,q)\)-number (96), the relations (137) and (138) take the form:
\[\mathcal{T}^{a\Delta}_{m} = [\Delta(m+1)]_{p^{a},q^{a}}\,x^{m}\] \[\mathbb{T}^{a\Delta}_{m} = \theta[\Delta(m+1)]_{p^{a},q^{a}}\,x^{m}.\]
Then, the \((p,q)\)-conformal super operators (137) and (138) satisfy the product relation:
\[\mathcal{T}^{a\Delta}_{m}\cdot\mathbb{T}^{b\Delta}_{n} =\frac{\big{(}p^{a+b}-q^{a+b}\big{)}p^{a(n(1-\Delta)+1)}}{\big{(} p^{a}-q^{a}\big{)}\big{(}p^{b}-q^{b}\big{)}p^{b\Delta\,m}}\,\mathbb{T}^{(a+b) \Delta}_{m+n}-\frac{q^{b\Delta(n+1)}}{p^{a(n(\Delta-1)-1)}\big{(}p^{b}-q^{b} \big{)}}\,\mathbb{T}^{a\Delta}_{m+n}\] \[-\frac{q^{a(\Delta(m+1)+n+1)}}{p^{b\Delta\,m}\big{(}p^{a}-q^{a} \big{)}}\,\mathbb{T}^{b\Delta}_{m+n}+\mathbb{F}^{\Delta(a,b)}_{p,q}(m,n),\]
with
\[\mathbb{F}_{p,q}^{\Delta(a,b)}(m,n)=q^{(a+b)\Delta(m+n+1)}\,[-\Delta\,m]_{p^{b},q^ {b}}\,[n(1-\Delta)+1]_{p^{a},q^{a}},\]
and the commutation relation
\[\left[\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{b\Delta}\right] =\frac{\left(p^{a+b}-q^{a+b}\right)}{\left(p^{a}-q^{a}\right)\left(p^{b}-q^{b}\right)}\bigg{(}\frac{p^{a(n(1-\Delta)+1)}}{p^{b\Delta\,m}}-\frac{p^{b(m(1-\Delta)+1)}}{p^{a\Delta\,n}}\bigg{)}\mathbb{T}_{m+n}^{(a+b)\Delta}\] \[-\frac{q^{b\Delta(n+1)}}{p^{a\Delta\,n}}\frac{\left(p^{a(n+1)}-q^{b(m+1)}\right)}{\left(p^{b}-q^{b}\right)}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{p,q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{a\Delta(m+1)}}{p^{b\Delta\,m}}\frac{\left(p^{b(m+1)}-q^{a(n+1)}\right)}{\left(p^{a}-q^{a}\right)}\mathbb{T}_{m+n}^{b\Delta}, \tag{139}\]
with
\[\mathbb{H}_{p,q}^{\Delta(a,b)}(m,n) = q^{(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{b},q^{b}}\,[n(1- \Delta)+1]_{p^{a},q^{a}}\] \[- [-\Delta\,n]_{p^{a},q^{a}}\,[m(1-\Delta)+1]_{p^{b},q^{b}}\bigg{)}.\]
Putting \(a=b=1\), in the relation (139), we obtain the commutation relation:
\[\left[\mathcal{T}_{m}^{\Delta},\mathbb{T}_{n}^{\Delta}\right] =\frac{p^{-\Delta(n+m)}\big{(}p^{n+1}-p^{m+1}\big{)}}{\big{(}p-q \big{)}}[2]_{p,q}\mathbb{T}_{m+n}^{2\Delta}+\mathbb{H}_{p,q}^{\Delta}(m,n)\] \[-\bigg{(}\frac{q^{\Delta(n+1)}\big{(}p^{n+1}-q^{m+1}\big{)}}{p^{ \Delta\,n}\big{(}p-q\big{)}}+\frac{q^{\Delta(m+1)}\big{(}p^{m+1}-q^{n+1}\big{)} }{p^{\Delta\,m}\big{(}p-q\big{)}}\bigg{)}\mathbb{T}_{m+n}^{\Delta},\]
where
\[\mathbb{H}_{p,q}^{\Delta(1,1)}(m,n) =q^{2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p,q}\,[n(1-\Delta)+1]_{p,q}\] \[-[-\Delta\,n]_{p,q}\,[m(1-\Delta)+1]_{p,q}\bigg{)}.\]
Furthermore, the super \(n\)-bracket is defined by:
\[\left[\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a\Delta}\right] :=\sum_{j=0}^{n-1}(-1)^{n-1+j}\epsilon_{12\cdots n-1}^{i_{1}\cdots i _{n-1}}\mathcal{T}_{m_{i_{1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{j}}}^{a\Delta}\] \[\times\mathbb{T}_{m_{n}}^{a\Delta}\mathcal{T}_{m_{i_{j+1}}}^{a \Delta}\cdots\mathcal{T}_{m_{i_{n-1}}}^{a\Delta}.\]
Putting \(a=b\) in the relation (139), we obtain:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{a\Delta}\Big{]} =\frac{p^{-a\Delta(n+m)}\big{(}p^{a(n+1)}-p^{a(m+1)}\big{)}}{p^{a}- q^{a}}[2]_{p^{a},q^{a}}\mathbb{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}p-q\big{)}}\bigg{(}\frac{q^{a\Delta(n+1)}}{p^{a \Delta\,n}}\big{(}p^{a(n+1)}-q^{a(m+1)}\big{)}\] \[-\frac{q^{a\Delta(m+1)}}{p^{a\Delta\,m}}\big{(}p^{a(m+1)}-q^{a(n+ 1)}\big{)}\bigg{)}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{p,q}^{\Delta(a)}(m,n),\]
where
\[\mathbb{H}_{p,q}^{\Delta(a)}(m,n) =q^{2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{a},q^{a}}[n(1-\Delta) +1]_{p^{a},q^{a}}\] \[-[-\Delta\,n]_{p^{a},q^{a}}[m(1-\Delta)+1]_{p^{a},q^{a}}\bigg{)}\]
The \((p,q)\)-conformal super Witt \(n\)-algebra is given by :
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a \Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}p^{a}-q^{a}\big{)}^{n-1}}\Big{(}\mathbb{ V}_{a\Delta}^{n}[n]_{p^{a},q^{a}}\mathbb{T}_{\bar{m}}^{n\,a\Delta}\] \[-[n-1]_{p^{a},q^{a}}\big{(}\mathbb{D}_{a\Delta}^{n}+\mathbb{N}_{ a\Delta}^{n}\big{)}\mathbb{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+\mathbb{H}_{p,q}^{\Delta}(m_{1},\ldots,m_{n}),\]
where
\[\mathbb{V}_{a\Delta}^{n} =p^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}+1]_{p^{a},q^{a}}-[m_{k}+1]_{ p^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{a(m_{j}+1)}-q^{a(m_{k}+1)} \Big{)}\Big{)},\]
\[\mathbb{D}_{a\Delta}^{n} = \Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Big{(}[m_{k}+1]_{p^{a},q^{a}}-[m_{j}+1]_{p^{a},q^{a}}\] \[+ q^{a(m_{k}+1)}-p^{a(m_{j}+1)}\Big{)},\]
\[\mathbb{N}_{a\Delta}^{n} = (-1)^{n+1}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1 \leq j<k\leq n}\frac{q^{a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Big{(}[m_{j}+1] _{p^{a},q^{a}}-[m_{k}+1]_{p^{a},q^{a}}\] \[+ q^{a(m_{j}+1)}-p^{a(m_{k}+1)}\Big{)},\]
and
\[\mathbb{H}_{p,q}^{\Delta(a)}(m_{1},\ldots,m_{n}) =q^{an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_ {j}]_{p^{a},q^{a}}[m_{k}(1-\Delta)+1]_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{a},q^{a}}[m_{j}(1-\Delta)+1]_{p^{a},q^{a}} \bigg{)}.\]
#### 3.2.3. Conformal super Witt \(n\)-algebra associated to the Chakrabarty and Jagannathan algebra
The conformal super Witt \(n\)-algebra from the Chakrabarty and Jagannathan algebra [9] is derived as follows: we consider, for \((\Delta\neq 0,1)\), the \((p^{-1},q)\)-conformal super operators defined by:
\[\mathcal{T}_{m}^{a\Delta} :=x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)} \tag{140}\] \[\mathbb{T}_{m}^{a\Delta} :=\theta\,x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{141}\]
where \(\mathcal{D}_{p^{-a},q^{a}}\) is the \((p^{-1},q)\)-derivative (101). By using the \((p^{-1},q)\)-number (102), the relations (140) and (141) take the form:
\[\mathcal{T}_{m}^{a\Delta} =[\Delta(m+1)]_{p^{-a},q^{a}}\,x^{m}\] \[\mathbb{T}_{m}^{a\Delta} =\theta[\Delta(m+1)]_{p^{-a},q^{a}}\,x^{m}.\]
Then, the \((p^{-1},q)\)-conformal super operators (140) and (141) satisfy the product relation:
\[\mathcal{T}_{m}^{a\Delta}.\mathbb{T}_{n}^{b\Delta} =\frac{\big{(}p^{-a-b}-q^{a+b}\big{)}p^{-a(n(1-\Delta)+1)}}{\big{(} p^{-a}-q^{a}\big{)}\big{(}p^{-b}-q^{b}\big{)}p^{-b\Delta\,m}}\,\mathbb{T}_{m+n}^{( a+b)\Delta}+\mathbb{F}_{p^{-1},q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{a(\Delta(m+1)+n+1)}}{p^{-b\Delta\,m}\big{(}p^{-a}-q^{a} \big{)}}\,\mathbb{T}_{m+n}^{b\Delta}-\frac{q^{b\Delta(n+1)}}{p^{-a(n(\Delta- 1)-1)}\big{(}p^{-b}-q^{b}\big{)}}\,\mathbb{T}_{m+n}^{a\Delta},\]
with
\[\mathbb{F}_{p^{-1},q}^{\Delta(a,b)}(m,n)=q^{(a+b)\Delta(m+n+1)}\,[-\Delta\,m ]_{p^{-b},q^{b}}\,[n(1-\Delta)+1]_{p^{-a},q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{b\Delta}\Big{]} =\frac{\big{(}p^{-a-b}-q^{a+b}\big{)}}{\big{(}p^{-a}-q^{a}\big{)} \big{(}p^{-b}-q^{b}\big{)}}\bigg{(}\frac{p^{-a(n(1-\Delta)+1)}}{p^{-b\Delta\,m }}-\frac{p^{-b(m(1-\Delta)+1)}}{p^{-a\Delta\,n}}\bigg{)}\mathbb{T}_{m+n}^{(a+ b)\Delta}\] \[-\frac{q^{b\Delta(n+1)}}{p^{-a\Delta\,n}}\frac{\big{(}p^{-a(n+1) }-q^{b(m+1)}\big{)}}{\big{(}p^{-b}-q^{b}\big{)}}\mathbb{T}_{m+n}^{a\Delta}+ \mathbb{H}_{p^{-1},q}^{\Delta(a,b)}(m,n)\] \[-\frac{q^{a\Delta(m+1)}}{p^{-b\Delta\,m}}\frac{\big{(}p^{-b(m+1) }-q^{a(n+1)}\big{)}}{\big{(}p^{-a}-q^{a}\big{)}}\mathbb{T}_{m+n}^{b\Delta}, \tag{142}\]
with
\[\mathbb{H}_{p^{-1},q}^{\Delta(a,b)}(m,n) =q^{(a+b)\Delta(m+n+1)}\biggl{(}[-\Delta\,m]_{p^{-b},q^{b}}\,[n(1- \Delta)+1]_{p^{-a},q^{a}}\] \[-[-\Delta\,n]_{p^{-a},q^{a}}\,[m(1-\Delta)+1]_{p^{-b},q^{b}}\biggr{)}.\]
Putting \(a=b=1\), in the relation (142), we obtain the commutation relation:
\[\Bigl{[}\mathcal{T}_{m}^{\Delta},\mathbb{T}_{n}^{\Delta}\Bigr{]} =\frac{p^{\Delta(n+m)}\bigl{(}p^{-n-1}-p^{-m-1}\bigr{)}}{\bigl{(} p^{-1}-q\bigr{)}}[2]_{p^{-1},q}\mathbb{T}_{m+n}^{2\Delta}+\mathbb{H}_{p^{-1},q}^{ \Delta}(m,n)\] \[-\biggl{(}\frac{q^{\Delta(n+1)}\bigl{(}p^{-n-1}-q^{m+1}\bigr{)}} {p^{-\Delta\,n}\bigl{(}p^{-1}-q\bigr{)}}+\frac{q^{\Delta(m+1)}\bigl{(}p^{-m-1} -q^{n+1}\bigr{)}}{p^{-\Delta\,m}\bigl{(}p^{-1}-q\bigr{)}}\biggr{)}\mathbb{T}_{ m+n}^{\Delta},\]
where
\[\mathbb{H}_{p^{-1},q}^{\Delta(1,1)}(m,n) =q^{2\Delta(m+n+1)}\biggl{(}[-\Delta\,m]_{p^{-1},q}\,[n(1-\Delta) +1]_{p^{-1},q}\] \[-[-\Delta\,n]_{p^{-1},q}\,[m(1-\Delta)+1]_{p^{-1},q}\biggr{)}.\]
Furthermore, the super \(n\)-bracket is defined by:
\[\bigl{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a \Delta},\cdots,\mathbb{T}_{m_{n}}^{a\Delta}\bigr{]} :=\sum_{j=0}^{n-1}(-1)^{n-1+j}\epsilon_{12\cdots n-1}^{i_{1}\dots i _{n-1}}\mathcal{T}_{m_{i_{1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{j}}}^{a\Delta}\] \[\times\mathbb{T}_{m_{n}}^{a\Delta}\mathcal{T}_{m_{i_{j+1}}}^{a \Delta}\cdots\mathcal{T}_{m_{i_{n-1}}}^{a\Delta}.\]
Putting \(a=b\) in the relation (142), we have:
\[\Bigl{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{a\Delta}\Bigr{]} =\frac{p^{a\Delta(n+m)}\bigl{(}p^{-a(n+1)}-p^{-a(m+1)}\bigr{)}}{p^ {-a}-q^{a}}[2]_{p^{-a},q^{a}}\mathbb{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\bigl{(}p^{-1}-q\bigr{)}}\biggl{(}\frac{q^{a\Delta(n+1)} }{p^{-a\Delta\,n}}\bigl{(}p^{-a(n+1)}-q^{a(m+1)}\bigr{)}\] \[-\frac{q^{a\Delta(m+1)}}{p^{-a\Delta\,m}}\bigl{(}p^{-a(m+1)}-q^{ a(n+1)}\bigr{)}\biggr{)}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{p^{-1},q}^{ \Delta(a)}(m,n),\]
where
\[\mathbb{H}_{p^{-1},q}^{\Delta(a)}(m,n) =q^{2a\Delta(m+n+1)}\biggl{(}[-\Delta\,m]_{p^{-a},q^{a}}[n(1- \Delta)+1]_{p^{-a},q^{a}}\] \[-[-\Delta\,n]_{p^{-a},q^{a}}[m(1-\Delta)+1]_{p^{-a},q^{a}}\biggr{)}\]
The \((p^{-1},q)\)-conformal super Witt \(n\)-algebra is given by:
\[\Big{[}\mathcal{T}^{a\Delta}_{m_{1}},\cdots,\mathbb{T}^{a\Delta}_{m_{ n}}\Big{]} =\frac{(-1)^{n+1}}{\big{(}p^{-a}-q^{a}\big{)}^{n-1}}\Big{(}\mathbb{ V}^{n}_{a\Delta}[n]_{p^{-a},q^{a}}\mathbb{T}^{n\,a\Delta}_{\bar{m}}\] \[-[n-1]_{p^{-a},q^{a}}\big{(}\mathbb{D}^{n}_{a\Delta}+\mathbb{N}^{n }_{a\Delta}\big{)}\mathbb{T}^{a(n-1)\Delta}_{\bar{m}}\Big{)}\] \[+\mathbb{H}^{\Delta}_{p^{-1},q}(m_{1},\dots,m_{n}),\]
where
\[\mathbb{V}^{n}_{a\Delta} =p^{-a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}+1]_{p^{-a},q^{a}}\] \[-[m_{k}+1]_{p^{-a},q^{a}}\Big{)}+\prod_{1\leq j<k\leq n}\Big{(}q^ {a(m_{j}+1)}-q^{a(m_{k}+1)}\Big{)}\Big{)},\]
\[\mathbb{D}^{n}_{a\Delta} =\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{a\Delta(m_{k}+1)}}{p^{-a\Delta\,m_{k}}}\Big{(}[m_{k}+1]_{p^{- a},q^{a}}-[m_{j}+1]_{p^{-a},q^{a}}\] \[+q^{a(m_{k}+1)}-p^{-a(m_{j}+1)}\Big{)},\]
\[\mathbb{N}^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}p^{-a}-q^{a}\big{)}^{\binom{n}{2}}\prod _{1\leq j<k\leq n}\frac{q^{a\Delta(m_{j}+1)}}{p^{-a\Delta\,m_{j}}}\Big{(}[m_{ j}+1]_{p^{-a},q^{a}}\] \[-[m_{k}+1]_{p^{-a},q^{a}}+q^{a(m_{j}+1)}-p^{-a(m_{k}+1)}\Big{)},\]
and
\[\mathbb{H}^{\Delta(a)}_{p^{-1},q}(m_{1},\dots,m_{n}) =\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\,m_{j}]_{p^{-a},q^{a}}[m _{k}(1-\Delta)+1]_{p^{-a},q^{a}}\] \[-[-\Delta\,m_{k}]_{p^{-a},q^{a}}[m_{j}(1-\Delta)+1]_{p^{-a},q^{a} }\bigg{)}q^{an\Delta(\bar{m}+1)}.\]
#### 3.2.4. Conformal super Witt \(n\)-algebra induced by the Hounkonnou-Ngompe generalization of \(q\)-Quesne algebra
The conformal super Witt \(n\)-algebra from the Hounkonnou-Ngompe generalization of \(q\)-Quesne algebra [19] is derived as follows: we consider, for \((\Delta\neq 0,1)\), the generalized \(q\)-Quesne conformal super operators defined by:
\[\mathcal{T}^{a\Delta}_{m} :=x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)} \tag{143}\] \[\mathbb{T}^{a\Delta}_{m} :=\theta\,x^{(1-\Delta)(m+1)}\bar{\Delta}\,x^{\Delta(m+1)}, \tag{144}\]
where \(\mathcal{D}^{Q}_{p^{a},q^{a}}\) is the generalized \(q\)-Quesne derivative (108). From the generalized \(q\)-Quesne number (109), the relations (143) and (144) take the form:
\[\mathcal{T}^{a\Delta}_{m} =[\Delta(m+1)]^{Q}_{p^{a},q^{a}}\,x^{m}\] \[\mathbb{T}^{a\Delta}_{m} =\theta[\Delta(m+1)]^{Q}_{p^{a},q^{a}}\,x^{m}.\]
Then, the generalized \(q\)-Quesne conformal super operators (143) and (144) satisfy the product relation:
\[\mathcal{T}^{a\Delta}_{m}.\mathbb{T}^{b\Delta}_{n} =\frac{\big{(}p^{a+b}-q^{-a-b}\big{)}p^{a(n(1-\Delta)+1)}}{\big{(} -p^{-a}+q^{a}\big{)}\big{(}-p^{-b}+q^{b}\big{)}p^{b\Delta\,m}}\,\mathbb{T}^{(a +b)\Delta}_{m+n}\] \[-\frac{q^{-b\Delta(n+1)}}{p^{a(n(\Delta-1)-1)}\big{(}-p^{-b}+q^{b }\big{)}}\,\mathbb{T}^{a\Delta}_{m+n}\] \[-\frac{q^{-a(\Delta(m+1)+n+1)}}{p^{b\Delta\,m}\big{(}-p^{-a}+q^{ a}\big{)}}\,\mathbb{T}^{b\Delta}_{m+n}+\mathbb{F}^{\Delta(a,b)}_{p,q}(m,n),\]
with
\[\mathbb{F}^{\Delta(a,b)}_{p,q}(m,n)=q^{-(a+b)\Delta(m+n+1)}\,[-\Delta\,m]^{Q} _{p^{b},q^{b}}\,[n(1-\Delta)+1]^{Q}_{p^{a},q^{a}},\]
and the commutation relation
\[\Big{[}\mathcal{T}^{a\Delta}_{m},\mathbb{T}^{b\Delta}_{n}\Big{]} =\frac{\big{(}p^{a+b}-q^{-a-b}\big{)}}{\big{(}-p^{-a}+q^{a}\big{)} \big{(}-p^{-b}+q^{b}\big{)}}\bigg{(}\frac{p^{a(n(1-\Delta)+1)}}{p^{b\Delta\,m} }-\frac{p^{b(m(1-\Delta)+1)}}{p^{a\Delta\,n}}\bigg{)}\mathbb{T}^{(a+b)\Delta}_ {m+n}\] \[-\frac{q^{-b\Delta(n+1)}}{p^{a\Delta\,n}}\frac{\big{(}p^{a(n+1)}- q^{-b(m+1)}\big{)}}{\big{(}-p^{-b}+q^{b}\big{)}}\mathbb{T}^{a\Delta}_{m+n}+ \mathbb{H}^{\Delta(a,b)}_{p,q}(m,n)\] \[-\frac{q^{-a\Delta(m+1)}}{p^{b\Delta\,m}}\frac{\big{(}p^{b(m+1)}- q^{-a(n+1)}\big{)}}{\big{(}-p^{-a}+q^{a}\big{)}}\mathbb{T}^{b\Delta}_{m+n}, \tag{145}\]
with
\[\mathbb{H}^{\Delta(a,b)}_{p,q}(m,n) =q^{-(a+b)\Delta(m+n+1)}\bigg{(}[-\Delta\,m]^{Q}_{p^{b},q^{b}}\,[ n(1-\Delta)+1]^{Q}_{p^{a},q^{a}}\] \[-[-\Delta\,n]^{Q}_{p^{a},q^{a}}\,[m(1-\Delta)+1]^{Q}_{p^{b},q^{b}} \bigg{)}.\]
Putting \(a=b=1\), in the relation (145), we obtain the commutation relation:
\[\Big{[}\mathcal{T}^{\Delta}_{m},\mathbb{T}^{\Delta}_{n}\Big{]} =\frac{p^{-\Delta(n+m)}\big{(}p^{n+1}-p^{m+1}\big{)}}{\big{(}-p^ {-1}+q\big{)}}[2]^{Q}_{p,q}\mathbb{T}^{2\Delta}_{m+n}+\mathbb{H}^{\Delta}_{p,q }(m,n)\] \[-\bigg{(}\frac{q^{-\Delta(n+1)}\big{(}p^{n+1}-q^{-m-1}\big{)}}{p^ {\Delta\,n}\big{(}-p^{-1}+q\big{)}}+\frac{q^{-\Delta(m+1)}\big{(}p^{m+1}-q^{-n -1}\big{)}}{p^{\Delta\,m}\big{(}-p^{-1}+q\big{)}}\bigg{)}\mathbb{T}^{\Delta}_{m +n},\]
where
\[\mathbb{H}_{p,q}^{\Delta(1,1)}(m,n) =q^{-2\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p,q}^{Q}\,[n(1-\Delta)+1]_ {p,q}^{Q}\] \[-[-\Delta\,n]_{p,q}^{Q}\,[m(1-\Delta)+1]_{p,q}^{Q}\bigg{)}.\]
Furthermore, the super \(n\)-bracket is defined by:
\[\big{[}\mathcal{T}_{m_{1}}^{a\Delta},\mathcal{T}_{m_{2}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a\Delta}\big{]} :=\sum_{j=0}^{n-1}(-1)^{n-1+j}\epsilon_{12\cdots n-1}^{i_{1}\ldots i_{n-1}}\mathcal{T}_{m_{i_{1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{j}}}^{a\Delta}\] \[\times\mathbb{T}_{m_{n}}^{a\Delta}\mathcal{T}_{m_{i_{j+1}}}^{a\Delta}\cdots\mathcal{T}_{m_{i_{n-1}}}^{a\Delta}.\]
Putting \(a=b\) in the relation (145), we obtain:
\[\Big{[}\mathcal{T}_{m}^{a\Delta},\mathbb{T}_{n}^{a\Delta}\Big{]} =\frac{p^{-a\Delta(n+m)}\big{(}p^{a(n+1)}-p^{-a(m+1)}\big{)}}{-p^ {-a}+q^{a}}[2]_{p^{a},q^{a}}^{Q}\mathbb{T}_{m+n}^{2a\Delta}\] \[-\frac{1}{\big{(}-p^{-1}+q\big{)}}\bigg{(}\frac{q^{-a\Delta(n+1)} }{p^{a\Delta\,n}}\big{(}p^{a(n+1)}-q^{-a(m+1)}\big{)}\] \[-\frac{q^{-a\Delta(m+1)}}{p^{a\Delta\,m}}\big{(}p^{a(m+1)}-q^{-a( n+1)}\big{)}\bigg{)}\mathbb{T}_{m+n}^{a\Delta}+\mathbb{H}_{p,q}^{\Delta(a)}(m,n),\]
where
\[\mathbb{H}_{p,q}^{\Delta(a)}(m,n) =q^{-2a\Delta(m+n+1)}\bigg{(}[-\Delta\,m]_{p^{a},q^{a}}^{Q}[n(1- \Delta)+1]_{p^{a},q^{a}}^{Q}\] \[-[-\Delta\,n]_{p^{a},q^{a}}^{Q}[m(1-\Delta)+1]_{p^{a},q^{a}}^{Q} \bigg{)}\]
The generalized \(q\)-Quesne conformal super Witt \(n\)-algebra is given by:
\[\Big{[}\mathcal{T}_{m_{1}}^{a\Delta},\cdots,\mathbb{T}_{m_{n}}^{a \Delta}\Big{]} =\frac{(-1)^{n+1}}{\big{(}-p^{-a}+q^{a}\big{)}^{n-1}}\Big{(} \mathbb{V}_{a\Delta}^{n}[n]_{p^{a},q^{a}}^{Q}\mathbb{T}_{\bar{m}}^{n\,a\Delta}\] \[-[n-1]_{p^{a},q^{a}}^{Q}\big{(}\mathbb{D}_{a\Delta}^{n}+\mathbb{N }_{a\Delta}^{n}\big{)}\mathbb{T}_{\bar{m}}^{a(n-1)\Delta}\Big{)}\] \[+\mathbb{H}_{p,q}^{\Delta}(m_{1},\ldots,m_{n}),\]
where
\[\mathbb{V}_{a\Delta}^{n} =p^{a(n-1)(1-\Delta)\bar{m}}\Big{(}\big{(}-p^{-a}+q^{a}\big{)}^{ \binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[m_{j}+1]_{p^{a},q^{a}}^{Q}\] \[-[m_{k}+1]_{p^{a},q^{a}}^{Q}\Big{)}+\prod_{1\leq j<k\leq n}\Big{(} q^{-a(m_{j}+1)}-q^{-a(m_{k}+1)}\Big{)}\Big{)},\]
\[\mathbb{D}^{n}_{a\Delta} =\Big{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k \leq n}\frac{q^{-a\Delta(m_{k}+1)}}{p^{a\Delta\,m_{k}}}\Big{(}[m_{k}+1]^{Q}_{p^{ a},q^{a}}-[m_{j}+1]^{Q}_{p^{a},q^{a}}\] \[+q^{-a(m_{k}+1)}-p^{a(m_{j}+1)}\Big{)},\]
\[\mathbb{N}^{n}_{a\Delta} =(-1)^{n+1}\Big{(}\big{(}-p^{-a}+q^{a}\big{)}^{\binom{n}{2}}\prod _{1\leq j<k\leq n}\frac{q^{-a\Delta(m_{j}+1)}}{p^{a\Delta\,m_{j}}}\Big{(}[m_{j }+1]^{Q}_{p^{a},q^{a}}\] \[-[m_{k}+1]^{Q}_{p^{a},q^{a}}+q^{-a(m_{j}+1)}-p^{a(m_{k}+1)}\Big{)},\]
and
\[\mathbb{H}^{\Delta(a)}_{p,q}(m_{1},\dots,m_{n}) =q^{-an\Delta(\bar{m}+1)}\prod_{1\leq j<k\leq n}\bigg{(}[-\Delta\, m_{j}]^{Q}_{p^{a},q^{a}}[m_{k}(1-\Delta)+1]^{Q}_{p^{a},q^{a}}\] \[-[-\Delta\,m_{k}]^{Q}_{p^{a},q^{a}}[m_{j}(1-\Delta)+1]^{Q}_{p^{a },q^{a}}\bigg{)}.\]
### A toy model for \(\mathcal{R}(p,q)\)-super Virasoro constraints with conformal dimension \((\Delta\neq 0,1)\)
We study a toy model for the \(\mathcal{R}(p,q)\)-super Virasoro constraints with conformal dimension \((\Delta\neq 0,1)\). We use step by step the same procedure investigated in Section 2.4. Then, from the constraints on the partition function,
\[\mathcal{T}^{a\Delta}_{m}\,Z^{(toy)}(t) =0,\quad m\geq 0, \tag{146}\] \[\mathbb{T}^{a\Delta}_{m}\,Z^{(toy)}(t) =0,\quad m\geq 0, \tag{147}\]
we obtain the following lemma:
**Lemma 35**.: _The \(\mathcal{R}(p,q)\)-super conformal differential operators are given by the relation (113) and_
\[\mathbb{T}^{a\Delta}_{m} =\theta\bigg{(}[\Delta(m+1)+\gamma]_{\mathcal{R}(p^{a},q^{a})}\, m!\,\tau_{1}^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(\Delta(m+1)+\gamma)}}{\tau_{1}^ {a}-\tau_{2}^{a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t _{m+k}}\bigg{)}.\]
Considering the relations (7), (128) and (129), the \(\mathcal{R}(p,q)\)-super Witt algebra is generated by the operators (115) and
\[\mathbb{T}^{a\Delta}_{m}=\theta\bigg{(}[x\partial_{x}+\Delta(m+1)-m]_{ \mathcal{R}(p^{a},q^{a})}\bigg{)}\,x^{m}. \tag{148}\]
Then, the following lemma holds.
**Lemma 36**.: _The \(\mathcal{R}(p,q)\)-super conformal differential operators are given by the relation (114) and_
\[\mathbb{T}_{m}^{a\Delta} =\theta\bigg{(}[x\partial_{x}+\Delta(m+1)-m]_{\mathcal{R}(p^{a},q^ {a})}\,m!\,\tau_{1}^{a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(\Delta(m+1)+\gamma)}}{\tau_{1}^{ a}-\tau_{2}^{a}}\sum_{k=1}^{\infty}\frac{(m+1+\Delta(k-1))!}{k!}\] \[\times B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t _{m+1+\Delta(k-1)}}\bigg{)}.\]
**Remark 37**.: _For some particular cases of super conformal dimension \(\Delta,\) we get:_
1. _For_ \(\Delta=0,\) _we obtain the operators (_116_) and_ \[\mathbb{T}_{m} =\theta\bigg{(}[x\partial_{x}-m]_{\mathcal{R}(p^{a},q^{a})}\,m! \,\tau_{1}^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(\gamma)}}{\tau_{1}^{a}-\tau_{2}^ {a}}\sum_{k=1}^{\infty}\frac{(m+1)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac {\partial}{\partial t_{m+1}}\bigg{)}.\]
2. \(\Delta=1,\) _we obtain, the_ \(\mathcal{R}(p,q)\)_-super conformal differential operators (_117_) and_ \[\mathbb{T}_{m}^{a} =\theta\bigg{(}[x\partial_{x}+1]_{\mathcal{R}(p^{a},q^{a})}\,m!\, \tau_{1}^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+h(p^{a},q^{a})\frac{\tau_{2}^{a(m+1+\gamma)}}{\tau_{1}^{a}-\tau_ {2}^{a}}\sum_{k=1}^{\infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a}) \frac{\partial}{\partial t_{m+k}}\bigg{)}.\]
**Remark 38**.: _The super conformal differential operators corresponding to the quantum algebras known in the literature are derived as:_
1. _The_ \(q\)_-super conformal differential operator is provided by the relation (_118_) and_ \[\mathbb{T}_{m}^{a\Delta} =\theta\bigg{(}[\Delta(m+1)+\gamma]_{q^{a}}\,m!\,q^{-a\,m}\, \frac{\partial}{\partial t_{m}}\] \[+\frac{q^{-a(\Delta(m+1)+\gamma)}}{q^{a}-q^{-a}}\sum_{k=1}^{ \infty}\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{ \partial t_{m+k}}\bigg{)}.\]
2. _The_ \((p,q)\)_-super conformal differential operator is given by the relation (_120_) and_ \[\mathbb{T}_{m}^{a\Delta} =\theta\bigg{(}[\Delta(m+1)+\gamma]_{p^{a},q^{a}}\,m!\,p^{-a\,m} \,\frac{\partial}{\partial t_{m}}\] \[+\frac{q^{a(\Delta(m+1)+\gamma)}}{p^{a}-q^{a}}\sum_{k=1}^{\infty} \frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_ {m+k}}\bigg{)}.\]
3. _The_ \((p^{-1},q)\)_-super conformal differential operator is given by the relation (_122_) and_ \[\mathbb{T}_{m}^{a\Delta} =\theta\bigg{(}[\Delta(m+1)+\gamma]_{p^{-a},q^{a}}\,m!\,p^{a\,m}\, \frac{\partial}{\partial t_{m}}\] \[+\frac{q^{a(\Delta(m+1)+\gamma)}}{p^{-a}-q^{a}}\sum_{k=1}^{\infty} \frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{m+k}}\bigg{)}.\]
4. _The generalized_ \(q\)_-Quesne super conformal differential operator is determined by the relation (_124_) and_ \[\mathbb{T}_{m}^{a\Delta} =\theta\bigg{(}[\Delta(m+1)+\gamma]_{p^{a},q^{a}}^{Q}\,m!\,p^{-a\, m}\,\frac{\partial}{\partial t_{m}}\] \[+\frac{q^{-a(\Delta(m+1)+\gamma)}}{p^{a}-q^{-a}}\sum_{k=1}^{\infty }\frac{(m+k)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t _{m+k}}\bigg{)}.\]
## 4. Generalized matrix model with conformal dimension (\(\Delta\neq 0,1\))
In this section, we characterize the matrix model with conformal dimension (\(\Delta\neq 0,1\)) from the \(\mathcal{R}(p,q)\)-deformed quantum algebra. Moreover, we derive particular cases induced by quantum algebras known in the literature.
We consider now the following relation:
\[\left\{\begin{array}{l}F(z)=z,\\ \\ G(P,Q)=\frac{q^{Q}-p^{P}}{p^{Q}\mathcal{R}(p^{P},q^{Q})},\quad\text{if}\quad l >0,\end{array}\right. \tag{149}\]
where \(l\) is given in the relation (1).
**Definition 39**.: _The \(\mathcal{R}(p,q)\)-Pochhammer symbol is given by:_
\[\big{(}z;\mathcal{R}(p,q)\big{)}_{0}=1,\quad\big{(}z;\mathcal{R}(p,q)\big{)} _{n}:=\prod_{k=0}^{n-1}\bigg{(}1-F\big{(}\frac{q^{k}}{p^{k}}\,z\big{)}G(P,Q) \bigg{)}\,,\,n\in\mathbb{N}, \tag{150}\]
_and the \(\mathcal{R}(p,q)\)-theta function \(\theta(z;\mathcal{R}(p,q))\) as follows:_
\[\theta(z;\mathcal{R}(p,q))=\prod_{k=0}^{\infty}\bigg{(}1-F\big{(}\frac{q^{k}} {p^{k}}z\big{)}G(P,Q)\bigg{)}\prod_{k=0}^{\infty}\bigg{(}1-F\big{(}\frac{q^{k +1}}{p^{k+1}}z^{-1}\big{)}G(P,Q)\bigg{)}, \tag{151}\]
Note that the \(\mathcal{R}(p,q)\)-theta function (151) satisfies the following properties:
\[\theta\big{(}qz;\mathcal{R}(p,q)\big{)}=\theta\big{(}(pz)^{-1};\mathcal{R}(p, q)\big{)}, \tag{152}\]
and
\[\theta\big{(}q^{-1}z;\mathcal{R}(p,q)\big{)}=\frac{p}{q}z^{2}\ \theta\big{(}(pz)^{-1};\mathcal{R}(p,q)\big{)}. \tag{153}\]
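One way to see why (152) holds is to write out both sides using (151): the substitutions \(z\to qz\) and \(z\to(pz)^{-1}\) produce exactly the same collection of factors, merely exchanged between the two infinite products,

\[\theta\big(qz;\mathcal{R}(p,q)\big)=\prod_{k=0}^{\infty}\bigg(1-F\big(\tfrac{q^{k+1}}{p^{k}}\,z\big)G(P,Q)\bigg)\prod_{k=0}^{\infty}\bigg(1-F\big(\tfrac{q^{k}}{p^{k+1}}\,z^{-1}\big)G(P,Q)\bigg),\]

while \(\theta\big((pz)^{-1};\mathcal{R}(p,q)\big)\) is built from the factors \(1-F\big(\tfrac{q^{k}}{p^{k+1}}\,z^{-1}\big)G(P,Q)\) and \(1-F\big(\tfrac{q^{k+1}}{p^{k}}\,z\big)G(P,Q)\), \(k\geq 0\), so the two expressions coincide factor by factor.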
**Definition 40**.: _The generating function for the \(\mathcal{R}(p,q)\)-elliptic hermitian matrix model is defined as follows:_
\[Z_{N}^{\rm ell}(\{t\})=\oint\prod_{i=1}^{N}\frac{dz_{i}}{z_{i}}\prod_{i<j}\theta \bigg{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p,q)\bigg{)}\theta\bigg{(}\frac{z_{j}}{ z_{i}};\mathcal{R}(p,q)\bigg{)}e^{\sum\limits_{k=0}^{\infty}\frac{t_{k}}{k!}\sum \limits_{i=1}^{N}z_{i}^{k}}, \tag{154}\]
_where \(\{t\}=\{t_{k}|k\in\mathbb{N}\}\). The integration is over a contour around the origin._
Note that, taking \(\mathcal{R}(x,1)=\frac{x-1}{x}\), we obtain the well known \(q\)-Pochhammer symbol:
\[\big{(}z;q\big{)}_{0}=1,\quad\big{(}z;q\big{)}_{n}:=\prod_{k=0}^{n-1}\big{(}1- q^{k}\,z\big{)}\,,\;n\in\mathbb{N},\]
the \(q\)-theta function
\[\theta(z,q)=\prod_{k=0}^{\infty}\big{(}1-q^{k}\,z\big{)}\prod_{k=0}^{\infty} \big{(}1-q^{k+1}\,z^{-1}\big{)}\,.\]
and the generating function for the elliptic hermitian matrix model [28]:
\[Z_{N}^{\rm ell}(\{t\})=\oint\prod_{i=1}^{N}\frac{dz_{i}}{z_{i}}\prod_{i<j} \theta(\frac{z_{i}}{z_{j}};q)\theta(\frac{z_{j}}{z_{i}};q)e^{\sum\limits_{k=0} ^{\infty}\frac{t_{k}}{k!}\sum\limits_{i=1}^{N}z_{i}^{k}}.\]
**Definition 41**.: _The \(\mathcal{R}(p,q)\)-differential operator with conformal dimension \((\Delta\neq 0,1)\) is defined by:_
\[T_{n}^{\Delta}:=-\sum_{l=1}^{N}z_{l}^{(1-\Delta)(n+1)}D_{\mathcal{R}(p,q)}\,z_ {l}^{\Delta(n+1)}, \tag{155}\]
_which acts on the functions of \(N\) variables and \(D_{\mathcal{R}(p,q)}\) is \(\mathcal{R}(p,q)\)-derivative (3) with respect to the \(z_{l}\)-variable._
Note that, the conformal operators (155) satisfy the commutation relation (10).
**Proposition 42**.: _The \(\mathcal{R}(p,q)\)-conformal operator (155) can be given by:_
\[T_{n}^{\Delta} =\frac{K(p,q)}{p-q}\bigg{[}(\frac{q}{p})^{\Delta(n+1)-N}\sum_{l=0 }^{\infty}\frac{(l+n-2N)!}{l!}B_{l}(\tilde{t}_{1},\ldots,\tilde{t}_{l})\] \[\times\mathcal{D}_{N}\frac{\partial}{\partial t_{l+n-2N}}-p^{ \Delta(n+1)}n!\frac{\partial}{\partial t_{n}}\bigg{]}. \tag{156}\]
_where \(\mathcal{D}_{N}\) is the differential operator defined in (164)._
Proof.: Inserting these operators under the contour integral (154) naturally yields zero. We now have to evaluate how these differential operators act on the
integrand. From the relations (152) and (153) concerning the properties of the generalized \(\theta\) function and by setting
\[g(z)=\prod_{i<j}\theta\big{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p,q) \big{)}\theta\big{(}\frac{z_{j}}{z_{i}};\mathcal{R}(p,q)\big{)} \tag{157}\]
and
\[f(z_{l})=z_{l}^{\Delta(n+1)},\]
we have:
\[A :=\sum_{l=1}^{N}D_{\mathcal{R}(p,q)}\bigg{(}f(z_{l})\prod_{i<j} \theta\big{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p,q)\big{)}\theta\big{(}\frac{z_{ j}}{z_{i}};\mathcal{R}(p,q)\big{)}\bigg{)}\] \[=\sum_{l=1}^{N}\frac{K(P,Q)}{(p-q)z_{l}}\bigg{(}\frac{f(qz_{l})}{ f(pz_{l})}\prod_{j\neq l}\frac{p}{q}\frac{z_{j}^{2}}{z_{l}^{2}}-1\bigg{)}f(pz_{l})\] \[\times\prod_{i<j}\theta\big{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p, q)\big{)}\theta\big{(}\frac{z_{j}}{z_{i}};\mathcal{R}(p,q)\big{)}\] \[=\sum_{l=1}^{N}\frac{K(p,q)}{p-q}\bigg{(}\big{(}\frac{q}{p}\big{)} ^{\Delta(n+1)}\prod_{j\neq l}\frac{p}{q}\frac{z_{j}^{2}}{z_{l}^{2}}-1\bigg{)}p ^{\Delta(n+1)}z_{l}^{\Delta(n+1)-1}\] \[\times\prod_{i<j}\theta\big{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p, q)\big{)}\theta\big{(}\frac{z_{j}}{z_{i}};\mathcal{R}(p,q)\big{)}. \tag{158}\]
Thus,
\[T_{n}^{\Delta}g(z) =\sum_{l=1}^{N}\frac{K(p,q)}{p-q}\bigg{(}\big{(}\frac{q}{p}\big{)} ^{\Delta(n+1)}\prod_{j\neq l}\frac{p}{q}\frac{z_{j}^{2}}{z_{l}^{2}}-1\bigg{)}p ^{\Delta(n+1)}z_{l}^{n}\] \[\times\prod_{i<j}\theta\big{(}\frac{z_{i}}{z_{j}};\mathcal{R}(p,q )\big{)}\theta\big{(}\frac{z_{j}}{z_{i}};\mathcal{R}(p,q)\big{)}. \tag{159}\]
The complete Bell polynomials \(B_{n}\) satisfy the following relation:
\[B_{l}(\tilde{t}_{1},\ldots,\tilde{t}_{l})=\sum_{\nu=0}^{l}\tau_{1}^{l-\nu}\tau_{2}^{\nu}\binom{l}{\nu}B_{\nu}(t_{1},\ldots,t_{\nu})B_{l-\nu}(-t_{1},\ldots,-t_{l-\nu}), \tag{160}\]
where \(\tilde{t}_{k}=(\tau_{1}^{k}-\tau_{2}^{k})t_{k}\), and
\[\exp\left(\sum_{k=1}^{\infty}\frac{t_{k}}{k!}\tau^{k}z_{i}^{k}\right)=\sum_{k=0}^{\infty}\frac{1}{k!}B_{k}\left(\tilde{t}_{1},\ldots,\tilde{t}_{k}\right)z_{i}^{k}\exp\left(\sum_{l=1}^{\infty}\frac{t_{l}}{l!}z_{i}^{l}\right). \tag{161}\]
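For concreteness, the complete Bell polynomials used throughout are the standard ones generated by

\[\exp\Big(\sum_{k=1}^{\infty}\frac{t_{k}}{k!}\,x^{k}\Big)=\sum_{n=0}^{\infty}\frac{B_{n}(t_{1},\ldots,t_{n})}{n!}\,x^{n},\]

whose first instances are \(B_{0}=1\), \(B_{1}(t_{1})=t_{1}\), \(B_{2}(t_{1},t_{2})=t_{1}^{2}+t_{2}\) and \(B_{3}(t_{1},t_{2},t_{3})=t_{1}^{3}+3t_{1}t_{2}+t_{3}\).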
From the formulas (158) and (161), the \(\mathcal{R}(p,q)\)-conformal operator (155) can be rewritten in the simpler form:
\[T_{n}^{\Delta}g(z) =\frac{K(p,q)}{p-q}\bigg{[}\prod_{j=1}^{N}z_{j}^{2}\sum_{l=1}^{N} \sum_{k,\nu=0}^{\infty}\big{(}\frac{q}{p}\big{)}^{\Delta(n+1)-N}q^{k}\frac{1}{k! \nu!}B_{k}(t_{1},\dots,t_{k})\] \[\times B_{\nu}(-t_{1},\dots,-t_{\nu})z_{l}^{k+\nu+n-2N}-p^{\Delta(n +1)}\,\sum_{l=1}^{N}z_{l}^{n}\bigg{]}. \tag{162}\]
Note that the expectation value of these terms vanishes. Let us now generate these terms by using the \(t\)-derivatives of the integrand of relation (154). To do so, we rewrite \(\prod\limits_{i=1}^{N}z_{i}\) in terms of the sums \(\sum\limits_{i=1}^{N}z_{i}^{k}\) by using Newton's identities,
\[\prod\limits_{i=1}^{N}z_{i}=\frac{1}{N!}\left|\begin{array}{ccccccc}\nu_{1} &1&0&\dots&&\\ \nu_{2}&\nu_{1}&2&0&\dots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ \nu_{N-1}&\nu_{N-2}&\cdots&\cdots&\nu_{1}&N-1\\ \nu_{N}&\nu_{N-1}&\cdots&\cdots&\nu_{2}&\nu_{1}\end{array}\right|, \tag{163}\]
where \(\nu_{k}\equiv\sum\limits_{i=1}^{N}z_{i}^{k}\), the terms \(\sum\limits_{i=1}^{N}z_{i}^{k}\) may be generated by taking the derivatives with respect to \(t\). We consider the following differential operator
\[\mathcal{D}_{N}=\frac{1}{N!}\left|\begin{array}{ccccccc}2!\frac{\partial}{ \partial t_{2}}&1&0&\dots&&\\ 4!\frac{\partial}{\partial t_{4}}&2!\frac{\partial}{\partial t_{2}}&2&0& \dots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ (2N-2)!\frac{\partial}{\partial t_{2N-2}}&(2N-4)!\frac{\partial}{\partial t_ {2N-4}}&\cdots&\cdots&2!\frac{\partial}{\partial t_{2}}&N-1\\ (2N)!\frac{\partial}{\partial t_{2N}}&(2N-2)!\frac{\partial}{\partial t_{2N-2 }}&\cdots&\cdots&4!\frac{\partial}{\partial t_{4}}&2!\frac{\partial}{\partial t _{2}}\end{array}\right|, \tag{164}\]
with
\[\prod\limits_{j=1}^{N}z_{j}^{2}e^{\sum\limits_{k=0}^{\infty}\frac{t_{k}}{k!} \sum\limits_{i=1}^{N}z_{i}^{k}}=\mathcal{D}_{N}\left(e^{\sum\limits_{k=0}^{ \infty}\frac{t_{k}}{k!}\sum\limits_{i=1}^{N}z_{i}^{k}}\right). \tag{165}\]
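As an explicit check of (163)–(165), take \(N=2\). Then

\[\prod_{i=1}^{2}z_{i}=\frac{1}{2!}\left|\begin{array}{cc}\nu_{1}&1\\ \nu_{2}&\nu_{1}\end{array}\right|=\frac{\nu_{1}^{2}-\nu_{2}}{2},\qquad\mathcal{D}_{2}=\frac{1}{2!}\left|\begin{array}{cc}2!\frac{\partial}{\partial t_{2}}&1\\ 4!\frac{\partial}{\partial t_{4}}&2!\frac{\partial}{\partial t_{2}}\end{array}\right|=2\frac{\partial^{2}}{\partial t_{2}^{2}}-12\frac{\partial}{\partial t_{4}}.\]

Since the exponent in (154) is \(\sum_{k}\frac{t_{k}}{k!}\nu_{k}\), each \(\frac{\partial}{\partial t_{2}}\) brings down a factor \(\frac{\nu_{2}}{2}\) and \(\frac{\partial}{\partial t_{4}}\) a factor \(\frac{\nu_{4}}{24}\), so that

\[\mathcal{D}_{2}\,e^{\sum_{k}\frac{t_{k}}{k!}\nu_{k}}=\Big(\frac{\nu_{2}^{2}}{2}-\frac{\nu_{4}}{2}\Big)e^{\sum_{k}\frac{t_{k}}{k!}\nu_{k}}=z_{1}^{2}z_{2}^{2}\,e^{\sum_{k}\frac{t_{k}}{k!}\nu_{k}},\]

in agreement with (165).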
By using all together we derive \(\mathcal{R}(p,q)\)-conformal operator as follows:
\[T_{n}^{\Delta} =\frac{K(P,Q)}{p-q}\bigg{[}\sum\limits_{k,\nu=0}^{\infty}\big{(} \frac{q}{p}\big{)}^{\Delta(n+1)-N}q^{k}\frac{(k+\nu+n-2N)!}{k!\nu!}B_{k}(t_{1}, \dots,t_{k})\] \[\times B_{\nu}(-t_{1},\dots,-t_{\nu})\mathcal{D}_{N}\frac{ \partial}{\partial t_{k+\nu+n-2N}}-p^{\Delta(n+1)}\,n!\frac{\partial}{ \partial t_{n}}\bigg{]}. \tag{166}\]
Note that the above operator (166) annihilates the generating function \(Z_{N}^{\rm ell}(\{t\})\). From the properties of the complete Bell polynomials, relation (156) follows, which completes the proof.
Taking \(\Delta=1\) and \(\mathcal{R}(x,y)=(p-q)^{-1}(x-y)\), we recover the \(q\)-differential operator given in [28].
**Remark 43**.: _Particular cases of Pochhammer symbol, theta function, and the generating function for the elliptic hermitian matrix model related to quantum algebras known in the literature are deduced as:_
(a) _the \(q\)-Pochhammer symbol:_
\[\left(z;q\right)_{0}=1,\quad\left(z;q\right)_{n}:=\prod_{k=0}^{n-1}\left(1-q^{- 2k}\,z\right),\;n\in\mathbb{N},\]
_the \(q\)- theta function_
\[\theta(z,q)=\prod_{k=0}^{\infty}\left(1-q^{-2k}\,z\right)\prod_{k=0}^{\infty} \left(1-q^{-2(k+1)}\,z^{-1}\right).\]
_and the generating function for the elliptic hermitian matrix model:_
\[Z_{N}^{\rm ell}(\{t\})=\oint\prod_{i=1}^{N}\frac{dz_{i}}{z_{i}}\prod_{i<j}\theta(\frac{z_{i}}{z_{j}};q)\theta(\frac{z_{j}}{z_{i}};q)e^{\sum\limits_{k=0}^{\infty}\frac{t_{k}}{k!}\sum\limits_{i=1}^{N}z_{i}^{k}}.\]
_Moreover, the \(q\)-conformal operator_
\[T_{n}^{\Delta}:=-\sum_{l=1}^{N}z_{l}^{(1-\Delta)(n+1)}D_{q}\,z_{l}^{\Delta(n+ 1)},\]
_can be given by:_
\[T_{n}^{\Delta} =\frac{K(q)}{q-q^{-1}}\bigg{[}q^{-2\Delta(n+1)-N}\sum_{l=0}^{ \infty}\frac{(l+n-2N)!}{l!}B_{l}(\tilde{t}_{1},\ldots,\tilde{t}_{l})\] \[\times\mathcal{D}_{N}\frac{\partial}{\partial t_{l+n-2N}}-q^{ \Delta(n+1)}n!\frac{\partial}{\partial t_{n}}\bigg{]}.\]
* _The_ \((p,q)\)_-Pochhammer symbol:_ \[\left(z;p,q\right)_{0}=1,\quad\left(z;p,q\right)_{n}:=\prod_{k=0}^{n-1}\left( p^{k}-q^{k}\,z\right),\;n\in\mathbb{N},\] _the_ \((p,q)\)_- theta function_ \[\theta(z;p,q)=\prod_{k=0}^{\infty}\left(p^{k}-q^{k}\,z\right)\prod_{k=0}^{ \infty}\left(p^{k+1}-q^{k+1}\,z^{-1}\right).\] _and the generating function for the elliptic hermitian matrix model:_ \[Z_{N}^{\rm ell}(\{t\})=\oint\prod_{i=1}^{N}\frac{dz_{i}}{z_{i}}\prod_{i<j} \theta(\frac{z_{i}}{z_{j}};p,q)\theta(\frac{z_{j}}{z_{i}};p,q)e^{\sum\limits_{ k=0}^{\infty}\frac{t_{k}}{k!}\sum\limits_{i=1}^{N}z_{i}^{k}}.\]
_Moreover, the \((p,q)\)-conformal operator_
\[T_{n}^{\Delta}:=-\sum_{l=1}^{N}z_{l}^{(1-\Delta)(n+1)}D_{p,q}\,z_{l}^{\Delta(n+1 )},\]
_can be given by:_
\[T_{n}^{\Delta} =\frac{K(p,q)}{p-q}\bigg{[}(\frac{q}{p})^{\Delta(n+1)-N}\sum_{l=0}^ {\infty}\frac{(l+n-2N)!}{l!}B_{l}(\tilde{t}_{1},\ldots,\tilde{t}_{l})\] \[\times\mathcal{D}_{N}\frac{\partial}{\partial t_{l+n-2N}}-p^{ \Delta(n+1)}n!\frac{\partial}{\partial t_{n}}\bigg{]}.\]
## 5. Concluding remarks
The \(\mathcal{R}(p,q)\)-super Virasoro algebra, the \(\mathcal{R}(p,q)\)-conformal Virasoro \(n\)-algebra, and the \(\mathcal{R}(p,q)\)-conformal super Virasoro \(n\)-algebra (\(n\) even) have been constructed. A toy model for the \(\mathcal{R}(p,q)\)-conformal Virasoro constraints and the \(\mathcal{R}(p,q)\)-conformal super Virasoro constraints, as well as the \(\mathcal{R}(p,q)\)-elliptic hermitian matrix model with an arbitrary conformal dimension \(\Delta\), have been discussed. Also, particular cases induced by quantum algebras existing in the literature have been derived. Note that the relation between the \(\mathcal{R}(p,q)\)-super Virasoro algebra and nonlinear integrable systems (super KdV equations) can be investigated in future work.
## Acknowledgments
This research was partly supported by the SNF Grant No. IZSEZ0_206010.
|
2308.14098 | Asymmetry of AMOC Hysteresis in a State-of-the-Art Global Climate Model | We study hysteresis properties of the Atlantic Meridional Overturning
Circulation (AMOC) under a slowly-varying North Atlantic (20$^{\circ}$N --
50$^{\circ}$N) freshwater flux forcing in state-of-the-art Global Climate Model
(GCM), the Community Earth System Model. Results are presented of a full
hysteresis simulation ($4,400$ model years) and show that there is a hysteresis
width of about $0.4$ Sv. This demonstrates that an AMOC collapse and recovery
do not only occur in conceptual and idealised climate models, but also in a
state-of-the-art GCM. The AMOC recovery is about a factor six faster than the
AMOC collapse and this asymmetry is due to the major effect of the North
Atlantic sea-ice distribution on the AMOC recovery. The results have
implications for projections of possible future AMOC behaviour and for
explaining relatively rapid climate transitions in the geological past. | René M. van Westen, Henk A. Dijkstra | 2023-08-27T12:59:16Z | http://arxiv.org/abs/2308.14098v1 | # Asymmetry of AMOC Hysteresis in a State-of-the-Art Global Climate Model
###### Abstract
We study hysteresis properties of the Atlantic Meridional Overturning Circulation (AMOC) under a slowly-varying North Atlantic (\(20^{\circ}\)N - \(50^{\circ}\)N) freshwater flux forcing in state-of-the-art Global Climate Model (GCM), the Community Earth System Model. Results are presented of a full hysteresis simulation (\(4,400\) model years) and show that there is a hysteresis width of about \(0.4\) Sv. This demonstrates that an AMOC collapse and recovery do not only occur in conceptual and idealised climate models, but also in a state-of-the-art GCM. The AMOC recovery is about a factor six faster than the AMOC collapse and this asymmetry is due to the major effect of the North Atlantic sea-ice distribution on the AMOC recovery. The results have implications for projections of possible future AMOC behaviour and for explaining relatively rapid climate transitions in the geological past.
## Plain Language Summary
The Atlantic Meridional Overturning Circulation (AMOC) is considered to be a tipping element in the climate system. We here simulate AMOC tipping events by slowly varying the North Atlantic freshwater forcing in a state-of-the-art Global Climate Model. The AMOC collapses when this forcing is sufficiently large and, when reversing this forcing, the recovery occurs at much smaller values of the forcing than the collapse, giving rise to hysteresis behaviour. The AMOC changes are much faster during its recovery than during its collapse and this asymmetry is caused by the effect of the North Atlantic sea-ice distribution on the AMOC recovery. The results demonstrate that AMOC tipping does not only occur in simplified climate models, but in a large hierarchy of climate models.
## 1 Introduction
The future development of the Atlantic Meridional Overturning Circulation (AMOC) under climate change is a hot topic of research (Ditlevsen and Ditlevsen, 2023). The AMOC, as part of the global ocean circulation is crucial in maintaining the meridional heat transport in Earth's climate (Srokosz and Bryden, 2015). Models of the Climate Model Inter-comparison Project phase 6 (CMIP6) project that the AMOC strength will gradually decrease by the end of this century, with relatively little differences between the various Shared Socioeconomic Pathway scenarios (Weijer et al., 2020). However, the AMOC has also been recognised as a climate tipping element (Armstrong McKay et al., 2022), with a possible abrupt change in AMOC strength having severe global impacts (Orihuela-Pinto et al., 2022). Strong fluctuations in AMOC strength have occurred in the geological past, for example during the so-called Dansgaard-Oeschger events, leading to local temperature variations of more than \(10^{\circ}\)C on Greenland (Lynch-Stieglitz, 2016).
First ideas on AMOC tipping were proposed already in the 1960s (Stommel, 1961) using a highly conceptual model of the AMOC in which the ocean circulation was only represented by the flow between two boxes. In this conceptual view, the present-day AMOC state is sensitive to changes to its North Atlantic surface freshwater flux due to the so-called salt-advection feedback. Freshwater anomalies in the North Atlantic will decrease the strength of the AMOC, thereby decreasing the northward salt transport and hence amplifying the original freshwater anomaly (Marotzke, 2000). In conceptual models, AMOC tipping is induced by increasing the freshwater flux forcing and transitions occur between multiple equilibrium AMOC states which exist for the same freshwater forcing conditions (Cessi, 1994).
In more detailed climate models, there is evidence of multiple AMOC equilibrium states due to the appearance of hysteresis behaviour. When the freshwater forcing in the North Atlantic is increased very slowly, such that the AMOC remains in near-equilibrium,
the AMOC is found (Rahmstorf et al., 2005) to collapse in many Earth system Models of Intermediate Complexity (EMICs). When reversing the freshwater forcing, the AMOC recovery occurs at much smaller freshwater forcing strengths than when it collapsed, giving rise to hysteresis behaviour. Such hysteresis behaviour has been found in relatively coarse climate models, such as the FAMOUS model (Hawkins et al., 2011) and in an early version of the Community Climate System Model (Hu et al., 2012).
However, in state-of-the-art Global Climate Models (GCMs) the computation of such hysteresis behaviour is very costly and so far only transient experiments with large freshwater perturbations have been performed to force AMOC collapses (Weijer et al., 2019; Jackson et al., 2022). Substantially weaker (or collapsed) AMOC states have been found in these transient simulations (Stouffer et al., 2006; Mecking et al., 2016; Jackson & Wood, 2018b), but it has been difficult to identify a freshwater forcing regime where hysteresis behaviour occurs (Jackson & Wood, 2018a). Hence, one might conclude that AMOC tipping does not occur in state-of-the-art GCMs, because these models include many more (in particular, negative) feedbacks than idealized climate models such as EMICs and FAMOUS (Gent, 2018).
Here we show results from a full hysteresis numerical simulation with a state-of-the-art GCM, the Community Earth System Model (CESM), using a configuration that is shortly described in Section 2. The results in Section 3 show a large hysteresis width with a strong asymmetry in the collapse and recovery of the AMOC. The analysis of these results, in particular the asymmetry of the hysteresis, show the important role for sea ice in the AMOC recovery. Implications of the results are discussed in Section 4.
## 2 Methods
We use the CESM version 1.0.5 (the f19_g16 configuration) with horizontal resolutions of 1\({}^{\circ}\) for the ocean/sea ice and 2\({}^{\circ}\) for the atmosphere/land components. The simulation was branched off from the end (model year 2,800) of the pre-industrial CESM control simulation described in Baatsen et al. (2020). There it is shown that the upper 1,000 m of the ocean is well equilibrated after 2,800 years of model integration. We first linearly increased the surface freshwater forcing between latitudes 20\({}^{\circ}\)N and 50\({}^{\circ}\)N (inset in Figure 1a) with a rate 3\(\times\)10\({}^{-4}\) Sv yr\({}^{-1}\) up to model year 2,200, reaching a freshwater flux forcing of \(F_{H}=0.66\) Sv. This freshwater flux anomaly is compensated over the rest of the domain to conserve salinity. Starting from the end of that simulation (model year 2,200) we linearly decreased the freshwater flux forcing back to zero with a rate of \(-\)3\(\times\)10\({}^{-4}\) Sv yr\({}^{-1}\). The rate of change is comparable to that in Hu et al. (2012), who used \(2\times 10^{-4}\) Sv yr\({}^{-1}\), also over the same area in the North Atlantic.
In the results below, we use the following diagnostics. The AMOC strength is defined as the total meridional volume transport at 26\({}^{\circ}\)N over the upper 1,000 m:
\[\mathrm{AMOC}(y=26^{\circ}\mathrm{N})=\int_{-1000}^{0}\int_{x_{W}}^{x_{E}}v~{ }\mathrm{d}x\mathrm{d}z \tag{1}\]
The freshwater transport by the overturning (AMOC) component (\(F_{\rm ov}\)) and azonal (gyre) component (\(F_{\rm az}\)) as a function of the meridional coordinate \(y\) are determined as:
\[F_{\mathrm{ov}}(y) = -\frac{1}{S_{0}}\int_{-H}^{0}\left[\int_{x_{W}}^{x_{E}}v^{*}\mathrm{d}x\right]\left[\left<S\right>-S_{0}\right]\mathrm{d}z \tag{2}\] \[F_{\mathrm{az}}(y) = -\frac{1}{S_{0}}\int_{-H}^{0}\int_{x_{W}}^{x_{E}}v^{\prime}S^{\prime}\mathrm{d}x\mathrm{d}z \tag{3}\]
where \(S_{0}~{}=~{}35\) g kg\({}^{-1}\) is a reference salinity. Here, \(v^{*}\) indicates the baroclinic velocity and is defined as \(v^{*}=v-\hat{v}\), where \(v\) is the meridional velocity and \(\hat{v}\) the barotropic meridional velocity (i.e., the section spatially-averaged meridional velocity). The quan
tity \(\langle S\rangle\) indicates the zonally-averaged salinity and primed quantities (\(v^{\prime}\) and \(S^{\prime}\)) are deviations from their respective zonal averages (Juling et al., 2021). We will use \(F_{\rm ovS}=F_{\rm ov}(y=34^{\circ}\mathrm{S})\) and \(F_{\rm ovN}=F_{\rm ov}(y=60^{\circ}\mathrm{N})\) (with similar expressions for \(F_{\rm azS}\) and \(F_{\rm azN}\)), respectively.
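In practice these diagnostics are evaluated from gridded model output, with the integrals in Eqs. (1)–(2) replaced by area-weighted sums over the cells of the latitude section. The sketch below illustrates this discretisation; it is written in Haskell purely for illustration, and the data layout, variable names and unit conventions are our own simplifications rather than CESM/POP output conventions.

```
-- One depth level of a latitude section: layer thickness dz [m] and, for each
-- zonal grid cell, its width dx [m], meridional velocity v [m/s] and salinity
-- s [g/kg].  (Illustrative layout; not the native model grid format.)
data Level = Level { dz :: Double, cells :: [(Double, Double, Double)] }

sverdrup, s0 :: Double
sverdrup = 1.0e6   -- 1 Sv = 10^6 m^3/s
s0       = 35.0    -- reference salinity [g/kg]

-- Eq. (1): integrate v over the levels supplied (e.g. only the upper 1000 m);
-- result in Sv.
amocStrength :: [Level] -> Double
amocStrength levels =
  sum [ dz l * dx * v | l <- levels, (dx, v, _) <- cells l ] / sverdrup

-- Eq. (2): F_ov = -(1/S0) * Int dz [Int v* dx] (<S> - S0), with v* = v - vbar,
-- vbar the section-averaged (barotropic) velocity, and <S> the zonal mean.
fov :: [Level] -> Double
fov levels =
  negate (sum [ dz l * vStarInt l * (sMean l - s0) | l <- levels ]) / s0 / sverdrup
  where
    width l    = sum [ dx | (dx, _, _) <- cells l ]
    area       = sum [ dz l * width l | l <- levels ]
    vbar       = sum [ dz l * dx * v | l <- levels, (dx, v, _) <- cells l ] / area
    vStarInt l = sum [ dx * (v - vbar) | (dx, v, _) <- cells l ]
    sMean l    = sum [ dx * s | (dx, _, s) <- cells l ] / width l
```

Both results are expressed in Sv; the azonal component of Eq. (3) follows analogously from the products of the zonal anomalies \(v^{\prime}S^{\prime}\).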
## 3 Results
### AMOC Hysteresis
The hysteresis behaviour for the AMOC strength is shown in Figure 1a, where the black curve is the forward simulation (increasing freshwater flux forcing) and the red curve is the reversed simulation. We display time on the horizontal axis as we follow a quasi-equilibrium approach and the magnitude of the freshwater flux over time is indicated by the cyan-coloured dashed lines. The initial AMOC strength is about 17 Sv and the AMOC pattern is shown in Figure 2a. In the forward simulation, the AMOC collapses near model year 1,750 from about 10 Sv (Figure 2b) to about 2 Sv (model year 1850) and it is near zero Sv at model year 2,200. The freshwater transport carried by AMOC at 34\({}^{\circ}\)S, \(F_{\rm ovS}\), decreases in the forward experiment, goes through zero at model year 1,000 and after the AMOC collapse it becomes positive (Figure 1b). The details of the AMOC collapse, its climate impact and its early warning signals were presented elsewhere (van Westen et al., 2023).
In the reversed simulation, years 2,200 to 4,400, the AMOC stays near zero Sv to about model year 3,100 (Figure 2c). A weak and shallow (\(<1000\) m) northward overturning cell develops 400 years later (Figure 2d) and the AMOC strength (at 1,000 m and 26\({}^{\circ}\)N) is about 1.5 Sv. This overturning cell strengthens over time (Figure 2e) and there is a clear northward overturning cell present prior to (full) AMOC recovery, the AMOC strength (at 1,000 m and 26\({}^{\circ}\)N) is about 4 Sv. This gradual AMOC increase of about 5 Sv over a 1,000-year period is followed by AMOC recovery between model years 4,090 and 4,170 (Figure 1a) where its strength increases from about 5 Sv to a maximum of 33 Sv and thereafter slowly decreases again. Hence, there is a broad interval in the freshwater flux forcing (\(F_{H}\) varies from about 0.1 to 0.5 Sv) which determines the hysteresis width (about 0.4 Sv). In the reversed simulation, \(F_{\rm ovS}\) gradually decreases with decreasing freshwater flux forcing from a positive value at model year 2,200 to about -0.12 Sv prior to AMOC recovery. Then \(F_{\rm ovS}\) steeply decreases to minimum of -0.32 Sv in model year 4,170 and afterwards strongly increases.
It is interesting to compare the responses in the different freshwater transport components under the varying freshwater flux forcing (Figure 1d) with those expected from the bifurcation analyses from idealised ocean-climate model studies (Dijkstra, 2007; Huisman et al., 2010). Based on these idealised model results, one expects that the AMOC has one steady state for \(F_{\rm ovS}^{+}>0\), the + superscript indicating its value on the forward simulation. The multiple equilibrium regime would be entered by model year 1,000 (where \(F_{\rm ovS}\) changes sign) and hence AMOC recovery would be expected near \(F_{H}\,\sim\,0.3\) Sv, which is around model year 3,400 in the reversed simulation. Note that some delay in recovery can be expected due to the quasi-equilibrium approach (Hawkins et al., 2011), but the AMOC recovery in the CESM, is 700 years later than expected. Hence, at first sight, this is substantially different from what idealised model studies (Dijkstra, 2007; Huisman et al., 2010) would suggest.
However, the Hovmoller diagram for AMOC at 500 m depth (Figure 2f) shows the development of the northward overturning cell (from 30\({}^{\circ}\)S to 40\({}^{\circ}\)N) from model year 3,500 and onwards; the Hovmoller diagrams between 500 m and 1,000 m depths show similar results (not shown). The timing for the development of the northward overturning cell aligns well with the sign change in \(F_{\rm ovS}^{+}\). Prior to the full AMOC recovery, the AMOC strength remains fairly low (\(<\,5\) Sv) and the overturning cell has a maximum merid
Figure 1: (a): The AMOC strength at 1,000 m and 26\({}^{\circ}\)N. The cyan-coloured lines in panels a and b indicate the magnitude of the freshwater forcing \(F_{H}\). Inset: Fresh water is added to the ocean surface between 20\({}^{\circ}\)N – 50\({}^{\circ}\)N in the Atlantic Ocean (+\(F_{H}\)) and is compensated over the remaining ocean surface (\(-F_{H}\)). The black sections indicate the 26\({}^{\circ}\)N and 34\({}^{\circ}\)S latitudes over which the AMOC strength and \(F_{\rm ovS}\) are determined, respectively. (b): The freshwater transport by the AMOC at 34\({}^{\circ}\)S, \(F_{\rm ovS}\). (c): The AMOC strength versus \(F_{\rm ovS}\) with time parameterised; the time series are displayed as 25-year averages (to reduce their variability). The markers indicate the 50-year average over a particular period, where the same marker shape indicates the same freshwater forcing (\(F_{H}\)) between two periods. The cyan-coloured curve indicates the present-day (1994 – 2020) CMIP6 regression and 1 standard deviation (van Westen and Dijkstra, 2023b). (d): The freshwater transport at 34\({}^{\circ}\)S (\(F_{\rm ovS}\), black curve), 60\({}^{\circ}\)N (\(F_{\rm ovN}\), blue curve), the freshwater convergence (\(\Delta F_{\rm ov}~{}=~{}F_{\rm ovS}~{}-~{}F_{\rm ovN}\), red curve) and the azonal (gyre) component at 34\({}^{\circ}\)S (\(F_{\rm azS}\), cyan curve). The time series are displayed as 25-year averages (to reduce the variability of the time series). The yellow shading in all panels indicates observed ranges (Garzoli et al., 2013; Mecking et al., 2017; Smeed et al., 2018; Worthington et al., 2021) for \(F_{\rm ovS}\) and AMOC strength.
ional extent to 40\({}^{\circ}\)N (Figure 2f). When the AMOC fully recovers (from model year 4,090), the northward overturning cell extends to higher latitudes (60\({}^{\circ}\)N). This suggests that between the model years \(3,400\) and \(4,090\), there are processes in the CESM that prevent a larger meridional extent of the overturning cell and delay a full AMOC recovery; details are discussed in the next subsection.
The \(F_{\rm ovS}\) minimum at model year 4,170 can be understood from the different adjustment time scales for the salinity and velocity fields at 34\({}^{\circ}\)S (Figure S1). The velocity fields adjust faster to the changing forcing than the salinity fields and velocity changes then control the magnitude of \(F_{\rm ovS}\). Prior to AMOC recovery, \(F_{\rm ovS}\) is negative and hence
Figure 2: (a – e): The AMOC streamfunction (\(\Psi\)) for various 50-year periods. The contours indicate the isolines for different AMOC values as indicated in the legend. (f): Hovmöller diagram of the AMOC at 500 m depth for the reverse simulation where the dashed line indicates where \(F_{\rm ovS}\) switches sign in the forward simulation.
the AMOC transports salt into the Atlantic basin. Surface velocities increase rapidly, as the AMOC increases during its recovery, and hence more salt is transported into the Atlantic basin, reducing the value of \(F_{\rm ovS}\). Eventually, the salinity fields adjust to the recovered ocean circulation state and \(F_{\rm ovS}\) becomes positive again. The \(F_{\rm ovS}\) value at the end of the simulation is slightly higher than its initial value, because the 34\({}^{\circ}\)S salinity profiles are still different than the initial profiles. It is expected that AMOC and \(F_{\rm ovS}\) eventually recover to their initial state (Figure 1c) when extending the simulation beyond model year 4,400 under zero anomalous freshwater flux forcing (\(F_{H}=0\) Sv), but we did not perform a simulation to check this.
During its recovery, the AMOC strongly overshoots and its maximum (32.7 Sv in model year 4,168) is about twice as strong compared to its initial value. This overshoot is expected because meridional temperature and salinity gradients are (strongly) different when comparing those to the initial state (Figure S2) and favour a (temporarily) stronger AMOC state. The transient response is also larger during AMOC recovery than during AMOC collapse (Figure 3a) and when comparing two 100-year periods (model years 1,750 - 1,850 versus 3,090 - 3,190) the (absolute) linear trends differ by a factor of 3.6 for AMOC strength. Over this 100-year period, however, the AMOC strength shows a non-linear change (in particular for the AMOC recovery) and when determining the trends over a 20-year sliding window we find maximum (absolute) changes of 0.12 Sv yr\({}^{-1}\) (AMOC collapse) and 0.76 Sv yr\({}^{-1}\), which is about a factor 6 difference.
The patterns in 2-meter atmospheric surface temperature trends are very similar between collapse and recovery (Figure 3b) but of course differ by the sign of the trend. The magnitudes of the trends are, however, larger during AMOC recovery (in particular for the Northern Hemisphere) and the globally-averaged trend is a factor 1.5 larger during AMOC recovery than during collapse. Note that, during AMOC collapse and recovery, the changes in freshwater forcing \(F_{H}\) are very small (\(<0.03\) Sv), meaning that the transient responses are mainly driven by internal feedbacks.
Figure 3: (a): The globally-averaged 2-meter surface temperature and AMOC strength (at 1,000 m and 26\({}^{\circ}\)N, similar as Figure 1a) for the AMOC collapse (lower x-axis) and AMOC recovery (upper x-axis). The temperature differences are w.r.t. model year 1,600 (grey curve) and model year 3,940 (brown curve). The yellow shading indicates the 100-year period centred around the AMOC transient responses and is used for determining the 2-meter temperature trends in panel b. (b): The 2-meter surface temperature trend (model years 4,090 – 4,190) during AMOC recovery. The circled (squared) markers indicate where the absolute magnitude of the trend are more than 50% larger (smaller) during AMOC recovery than during AMOC collapse (model years 1,750 – 1,850), and are only displayed when the local trend over both periods are significant (\(p<0.05\), two-sided t-test (Santer et al., 2000)).
### AMOC Recovery: Role of sea ice
To explain the delay in AMOC recovery, we consider the sea-ice distribution in the North Atlantic, which has undergone a vast expansion after the AMOC collapse (van Westen et al., 2023). From a recent CMIP6 model study it was shown that sea ice suppresses air-sea fluxes and a relatively large sea-ice cover results in a general weaker upper ocean mixing and AMOC strength (Lin et al., 2023). In the CESM simulation here, this effect of sea ice on air-sea fluxes and upper ocean mixing also occurs over the Irminger basin (region of deep convection) during and after the AMOC collapse (Figure 4c). The sea-ice cover in the CESM is actually quite large compared to many CMIP6 models (Lin et al., 2023) and explains the weak AMOC state (\(<5\) Sv). The sea-ice edge of the collapsed state extends down to 45\({}^{\circ}\)N in the Atlantic Ocean (blue curve in Figure 4a) and this restricts the meridional extent of the northward overturning cell to 40\({}^{\circ}\)N (Figure 2d).
During the gradual spin-up of the northward overturning cell from model year 3,500 and onwards, more heat is transported northward leading to less favourable sea-ice formation conditions over time. During the winter of model years 4,090/4,091, a few years
Figure 4: (a): The sea-ice fraction for March model year 4,091, the curves display the sea-ice edge (i.e., the 15% sea-ice fraction isoline) for model years 3,901 – 3950 (dark blue curve) and 4,201 – 4250 (cyan curve). (b): The mixed layer depth for March model year 4,091. The black outlined region is the Irminger basin, used in panels c and d. (c & d): The spatially-averaged time series of sea-ice fraction, mixed layer depth (MXL) and surface heat flux (SHF) over the Irminger basin for March. All values of SHF are positive indicating heat loss from the ocean to the atmosphere (note the flipped vertical range on the right axis). The time series are displayed as 5-year averages (to reduce the variability of the time series). The inset in panel d shows the March sea-ice fraction (over the Irminger basin) and the yearly-averaged AMOC strength (at 1,000 m and 26\({}^{\circ}\)N). The dashed line indicates model year 4,091 when the (Irminger basin) polynya forms.
before the full AMOC recovery, the seasonally-varying Arctic sea ice advances southward (Figure S3). The enhanced meridional heat transport prevents the Irminger basin from (fully) freezing over and a small polynya, i.e., a sea-ice free region (with sea-ice fractions lower than 15%) within the sea-ice pack, forms. The sea-ice free ocean surface is strongly cooled by the atmosphere resulting in buoyancy loss and deep convection. Deep convection mixes relatively warm water from greater depth to the surface and this prevents the polynya from freezing over. The polynya starts to grow in the following months and in March model year 4,091 the polynya has a surface area of \(6.46\times 10^{5}\) km\({}^{2}\) (Figure 4a) and the mixed layer depth extends down to 1,600 m (Figure 4b).
The next winter (model years 4,091/4,092) the ocean surface near the Irminger basin remains relatively warm (from the deep convection of the polynya). This prevents again a complete sea-ice coverage over the Irminger basin (inset in Figure 4d) and the sea-ice free region induces again deep convection. These deep convection events during the winter months start to spin-up the AMOC and, from model year 4,094, the AMOC strength strongly increases (inset in Figure 4d). The stronger AMOC results in more meridional heat transport leading to less sea-ice cover and more deep convection (Figure 4d) which further increases the AMOC strength. Eventually the sea-ice cover retreats further northward (cyan curve in Figure 4a) and hardly influences deep convection anymore over the Irminger basin (and Labrador basin).
In summary, the strong effect of North Atlantic sea-ice on the AMOC is responsible for the delay in the AMOC recovery. The AMOC restarts around model year 3,400 when \(F_{\rm ovS}^{+}\) switches sign (Figure 2f), but remains weak due to the extensive sea-ice cover. Only when passing a critical value of the AMOC strength, which depends on the sea-ice distribution, the recovery is completed. The Irminger basin polynya during the winter of model years 4,090/4,091 marks the point in time when AMOC recovery is inevitable. This explains why such sea-ice induced AMOC recovery delay is not found in models where there is no sea-ice component (Rahmstorf, 1996; Dijkstra, 2007; Huisman et al., 2010). It also implies that in CMIP6 models, which have a strong sea-ice model dependency (Lin et al., 2023), it is expected that the timing of AMOC recovery is quite model dependent.
## 4 Discussion
The Atlantic Meridional Overturning Circulation (AMOC) is considered to be a tipping element in the climate system (Armstrong McKay et al., 2022). We have shown here a first full AMOC hysteresis simulation in the Community Earth System Model, a state-of-the-art Global Climate Model. This result indicates that AMOC tipping does not only occur in conceptual (Stommel, 1961) and idealised climate models (Rahmstorf et al., 2005), but actually in the full hierarchy of climate models. The shape of the hysteresis is also relatively smooth and without sharp strong local transitions as occur in ocean-only models (Rahmstorf, 1996; Lohmann et al., 2023).
The hysteresis width, measured as the difference in freshwater forcing between collapse and recovery, found in the CESM (about 0.4 Sv) is quite large compared to that found in many EMICs (Rahmstorf et al., 2005), also much larger than that in CCSM (Hu et al., 2012), but similar to that in FAMOUS (Hawkins et al., 2011). We have shown here that the Northern Hemisphere sea-ice distribution has a substantial effect on the recovery of the AMOC. The fact that sea ice inhibits air-sea fluxes, and hence convection, appears important (Lin et al., 2023) for the recovery of the AMOC in the CESM and hence for the hysteresis width. The sea ice is the main reason for the strong asymmetry in the hysteresis, where the recovery is a factor six faster than the collapse. This strong asymmetry is likely relevant for explaining proxy records of the Dansgaard-Oeschger events (Rahmstorf, 2002; Henry et al., 2016; Lynch-Stieglitz, 2016). These events clearly show that warming is much faster than the cooling, consistent with that AMOC recovery occurs on (much) shorter time scales than its collapse.
What is intriguing is that the AMOC collapse (van Westen et al., 2023) and recovery can be connected to theory about the role of the AMOC-induced freshwater transport at the southern boundary of the Atlantic, \(F_{\rm ovS}\), on AMOC stability. In idealised global ocean models, the recovery tipping point is connected to a sign change in \(F_{\rm ovS}^{+}\) on the northward overturning AMOC state. At this point, a change from the AMOC exporting salt to exporting freshwater occurs (Dijkstra, 2007). In the CESM simulation here, the AMOC indeed reactivates when there is this sign change in \(F_{\rm ovS}\), but the AMOC strength remains low due to the effect of the sea ice. These results indicate that the AMOC hysteresis can be captured in a low-dimensional dynamical context, giving more weight to the results of conceptual and idealized AMOC models.
The CESM version used here has substantial biases in freshwater forcing in its pre-industrial reference state (van Westen et al., 2023), which is the reason that the initial value of \(F_{\rm ovS}\) is positive. It has been shown using the global ocean model as used in Dijkstra (2007) that this shifts the AMOC tipping points associated with collapse and recovery to higher values of the freshwater forcing (Dijkstra & van Westen, 2023). Assuming that the CESM will show a similar qualitative behaviour, this implies that collapse and recovery, when biases are corrected, will likely occur for much smaller values of the freshwater forcing than used in the simulation here. Combined with the change from pre-industrial to present-day forcing, and including climate change, it remains to be investigated whether the AMOC can tip before the year 2100 (Ditlevsen & Ditlevsen, 2023) and assess the probability of such a transition. The results here have demonstrated that this is an important line of research as such tipping behaviour occurs in models which are considered to adequately represent the development of Earth's large-scale climate.
## Open Research Section
All model output and code to generate the results are available (van Westen & Dijkstra, 2023a) from [https://doi.org/10.5281/zenodo.8262424](https://doi.org/10.5281/zenodo.8262424).
## Acknowledgments

We thank Michael Kliphuis (IMAU, UU) for performing the CESM simulation. The model simulation and the analysis of all the model output were conducted on the Dutch National Supercomputer (Snellius) within NWO-SURF project 17239. R.M.v.W. and H.A.D. are funded by the European Research Council through the ERC-AdG project TAOC (project 101055096).
|
2307.13172 | HasTEE: Programming Trusted Execution Environments with Haskell | Trusted Execution Environments (TEEs) are hardware-enforced memory isolation
units, emerging as a pivotal security solution for security-critical
applications. TEEs, like Intel SGX and ARM TrustZone, allow the isolation of
confidential code and data within an untrusted host environment, such as the
cloud and IoT. Despite strong security guarantees, TEE adoption has been
hindered by an awkward programming model. This model requires manual
application partitioning and the use of error-prone, memory-unsafe, and
potentially information-leaking low-level C/C++ libraries.
We address the above with \textit{HasTEE}, a domain-specific language (DSL)
embedded in Haskell for programming TEE applications. HasTEE includes a port of
the GHC runtime for the Intel-SGX TEE. HasTEE uses Haskell's type system to
automatically partition an application and to enforce \textit{Information Flow
Control} on confidential data. The DSL, being embedded in Haskell, allows for
the usage of higher-order functions, monads, and a restricted set of I/O
operations to write any standard Haskell application. Contrary to previous
work, HasTEE is lightweight, simple, and is provided as a \emph{simple security
library}; thus avoiding any GHC modifications. We show the applicability of
HasTEE by implementing case studies on federated learning, an encrypted
password wallet, and a differentially-private data clean room. | Abhiroop Sarkar, Robert Krook, Alejandro Russo, Koen Claessen | 2023-07-24T23:37:50Z | http://arxiv.org/abs/2307.13172v1 | # HasTEE: Programming Trusted Execution Environments with Haskell
###### Abstract.
Trusted Execution Environments (TEEs) are hardware enforced memory isolation units, emerging as a pivotal security solution for security-critical applications. TEEs, like Intel SGX and ARM TrustZone, allow the isolation of confidential code and data within an untrusted host environment, such as the cloud and IoT. Despite strong security guarantees, TEE adoption has been hindered by an awkward programming model. This model requires manual application partitioning and the use of error-prone, memory-unsafe, and potentially information-leaking low-level C/C++ libraries.
We address the above with _HasTEE_, a domain-specific language (DSL) embedded in Haskell for programming TEE applications. HasTEE includes a port of the GHC runtime for the Intel-SGX TEE. HasTEE uses Haskell's type system to automatically partition an application and to enforce _Information Flow Control_ on confidential data. The DSL, being embedded in Haskell, allows for the usage of higher-order functions, monads, and a restricted set of I/O operations to write any standard Haskell application. Contrary to previous work, HasTEE is lightweight, simple, and is provided as a _simple security library_; thus avoiding any GHC modifications. We show the applicability of HasTEE by implementing case studies on federated learning, an encrypted password wallet, and a differentially-private data clean room.
Trusted Execution Environment, Haskell, Intel SGX, Enclave
in applications running inside TEEs (Munoz et al., 2023). Secondly, _current TEE programming models are insufficient to enforce security policies_. Applications should be written in a way such that they do not accidentally reveal confidential information. Furthermore, inputs and outputs to an enclave must be correctly encrypted, signed, decrypted, and verified to protect against malicious hosts. Thirdly, _little support is given to migrate legacy applications inside enclaves_. Applications inside enclaves often rely on their own Operating System (OS) since they cannot trust the one in the host machine. Library OS-based approaches exist to provide this functionality. However, for legacy applications written in high-level languages relying on non-trivial runtimes, the porting of the runtime becomes a challenging task.
Efforts have been made to address these challenges. The work by Ghosn et al. (2019) introduces GoTEE, a modification of the Go programming language with support for _secure routines_ that are executed inside enclaves. In GoTEE, the authors heavily modify the Go compiler and extend the language to support new TEE-specific abstractions that help to automatically partition an application. GoTEE does not provide any control over how sensitive information moves within the application, which could enable accidental data leaks. In a similar spirit, Oak et al. (2021) introduce \(J_{E}\), a subset of Java with support for enclaves. \(J_{E}\) focuses on providing information-flow control (IFC) to ensure that the code does not leak sensitive data by accident or by coercion of a malicious host. \(J_{E}\) uses a sophisticated compilation pipeline to first partition the application and then uses another compiler to check that sensitive information is not leaked. Virtualization-based solutions, such as AMD SEV (AMD, 2018), attempt to alleviate the effort required to port legacy applications. However, the trade-off is that the TCB becomes larger and the granularity to identify sensitive data becomes much coarser.
Our contribution through this paper is _HasTEE_, a domain-specific language (DSL) embedded in Haskell for programming TEE applications. HasTEE integrates TEE-specific abstractions and semantics while hiding low-level hardware intricacies, making it hardware-neutral. Additionally, HasTEE offers IFC to prevent accidental leakage of sensitive data. Owing to its embedding in Haskell, developers can use familiar abstractions such as higher-order functions, monads, and a limited set of I/O operations to write applications in a conventional manner. This design choice enables seamless integration with all of the existing Haskell features. Compared to the previous work, HasTEE is lightweight, simple, and is provided as a _simple security library_; thus avoiding any GHC (Jones et al., 1993) compiler modifications.
#### HasTEE by Example
Listing 1 presents a sample password checker application written using HasTEE.
```
pwdChr :: Enclave String -> String -> Enclave Bool
pwdChr pwd guess = fmap (== guess) pwd

passwordChecker :: App Done
passwordChecker = do
  passwd <- inEnclaveConstant "secret"
  func   <- inEnclave $ pwdChr passwd
  runClient $ do -- Client code
    liftIO $ putStrLn "Enter your password"
    userInput <- liftIO getLine
    res <- gateway (func <@> userInput)
    liftIO $ putStrLn ("Login returned " ++ show res)
```
Listing 1: A password checker written in HasTEE
The distinction between the trusted and untrusted parts of the application is done via the type system that encodes the former as the Enclave type (line 1) and the latter as the Client type (type inferred in line 8).
The function pwdChr takes a sensitive string located in the enclave (Enclave String), a public string from the client host (String) and produces a sensitive Boolean in the enclave (Enclave Bool). Line 6 holds the secret string that we want to protect (inEnclaveConstant). Line 7 uses the inEnclave call to obtain a reference to the function pwdChr located in the enclave. The function gateway (line 11) is responsible for transmitting the collected arguments to the enclave function, and bringing the result back to the client. The gateway function acts as an interface between the enclave and non-enclave environment. The untrusted _host client_ is in charge of driving the application, while the _enclave_ is assigned the role of a computational and/or storage resource that services the client's requests. HasTEE connects an application (passwordChecker) to Haskell's main method using the runApp::App a -> IO a function that executes the application. From an IFC perspective, lines 6 and 7 correspond to labelling, i.e., establishing, which inputs are sensitive for the program--an activity that is part of the TCB. In general, HasTEE code starts by labelling the sensitive input with the inEnclave primitive. Subsequently, the client code is compelled to manipulate secrets in a _secure_ manner. In this setting, secure means that no sensitive information in the enclave gets leaked except that it has been obtained via the primitive gateway. The HasTEE API is explained in Section 4.2, and the semantics are discussed in Section 4.3.
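The only wiring needed outside Listing 1 is to hand the App computation to runApp from main. A minimal sketch, using the runApp signature quoted above and simply discarding the Done result:

```
main :: IO ()
main = do
  _ <- runApp passwordChecker   -- runApp :: App a -> IO a
  return ()
```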
### Contributions
_A type-safe, secure, high-level programming model._ The HasTEE library enables developers to program a TEE environment, such as Intel SGX, using Haskell - a type-safe, memory-managed language whose expressive type system can be leveraged to enforce various security constraints. Additionally, HasTEE allows programming in a familiar client-server style programming model (Section 4.2 and 5.2), an improvement over the low-level Intel SGX APIs.
_Automatic Partitioning._ A key part of programming TEEs, partitioning the trusted and untrusted parts of the program is done automatically using the type system (details in Section 3 and 4.3). Crucially, our approach does not require any modification of the GHC compiler and can be adapted to other programming languages, as long as their runtime can run on the desired TEE infrastructure.
_Information Flow Control._ Drawing inspiration from restricted IO monad families in Haskell, we designed an Enclave monad that prevents accidental leaks of secret data by TEE programmers (Section 5.3). Hence, our Enclave monad enables writing applications with a relatively low level of trust placed on the enclave programmer.
_Portability of Haskell's runtime._ We modify the GHC runtime, without modifying the compiler, to run on SGX enclaves. This enables us to host the complete Haskell language, including extensions, supported by GHC 8.8 (Section 5.1).
_Demonstration of expressiveness._ We illustrate the practicality of the HasTEE through three case studies across different domains: (1) a Federated Learning example (Section 6.1), (2) an encrypted password wallet (Section 6.2) and (3) a _differentially-private_ data clean room (Section 6.3). The examples also demonstrate the simplicity of TEE development enabled by HasTEE.
## 2. Background
### Intel Software Guard Extensions (SGX)
Intel Software Guard Extensions (SGX) [Intel 2015] is a set of security-related instructions supported since Intel's sixth-generation Skylake processor, which can enhance the security of applications by providing a _secure enclave_ for processing sensitive data. The enclave is a disjoint portion of memory separate from the DRAM, where sensitive data and code reside, beyond the influence of an untrusted operating system and other low-level software.
Intel offers an SGX SDK for programming enclaves. The SDK requires dividing the application into trusted and untrusted parts, where sensitive data resides in the trusted project. It provides specialized function calls called _ecall_ for enclave access and an _ocall_ API for communication with the untrusted client. The boundary between the client and enclave is defined using an _Enclave Description Language (EDL)_. The SDK utilizes a tool called _edger8r_ to parse EDL files and generate two _bridge_ files. These files ensure secure data transfer between projects through _copying_ instead of sharing via pointers, preventing potential manipulation of the enclave's state. Fig 1 shows the SDK's programming model.
Application developers working with enclaves aim to minimize the Trusted Computing Base (TCB) by keeping the operating system and system software outside the enclave. The SGX SDK offers a restricted C standard library implementation (tlibc) for essential system software. Programming SGX enclaves involves understanding the complex control flow between trusted and untrusted components. Enforcing SGX's programming model on typical software projects can be challenging, and the limited tlibc library restricts running applications beyond those written in vanilla C/C++.
## 3. Key Idea: A Typed DSL for Enclaves
### The Programming Model and Partitioning
HasTEE supports the automatic partitioning of programs by utilizing a combination of the type system to identify the enclave and a conditional compilation tactic to provide different semantics to each component. The compilation tactic was first used in Haste.App [Ekblad and Claessen, 2014], to partition a single program into a Client and Server type. Fig 2 shows the partitioning procedure at a high level.
Importantly, this approach does not require any compiler extensions or elaborate dependency analysis passes to distinguish between the underlying types. The codebase involved in other complex partitioning approaches [Ghosn et al., 2019; Oak et al., 2021] becomes part of the Trusted Computing Base (TCB), creating a larger TCB. In contrast, our approach does not add any partitioning code to the TCB. Fig 3 shows the partitioned software stack in the HasTEE approach.
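To make the conditional-compilation tactic concrete, the sketch below shows one way the two compilations of the same module can differ. This is our own illustration of the idea rather than the actual HasTEE source: in the enclave binary the Enclave type wraps a real computation, while in the client binary it merely records which registered entry point it denotes (here a hypothetical CallID), which gateway later turns into a call across the enclave boundary.

```
{-# LANGUAGE CPP #-}

-- Hypothetical identifier for an entry point registered via inEnclave.
type CallID = Int

#ifdef ENCLAVE
-- Enclave binary: Enclave computations execute for real; Client code is inert.
newtype Enclave a = Enclave (IO a)
newtype Client  a = ClientDummy ()
#else
-- Client binary: an Enclave value only remembers which entry point it names;
-- Client is ordinary client-side I/O.
newtype Enclave a = EnclaveDummy CallID
newtype Client  a = Client (IO a)
#endif
```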
Figure 1. Intel SGX SDK Programming Model
Figure 3. The untrusted (left) and trusted (right) software stacks in the HasTEE approach
Figure 2. The HasTEE partitioning scheme
Post-partitioning, the client-server-style programming model is used for programming the enclave. In this model, the client takes on the primary role of driving the program and utilizes the enclave as a computational and/or storage resource. The source program, written in Haskell, benefits from type safety, while HasTEE internally handles the message transfer between the client and enclave memory at runtime.
### Information Flow Control on Enclaves
Being a Haskell library enables HasTEE to tap into the library-based Information Flow Control techniques in Haskell (Buiras et al., 2015; Russo, 2015; Russo et al., 2008). The IFC literature distinguishes between sensitive and non-sensitive computations via monads indexed with security levels (Russo et al., 2008), e.g., Sec H and Sec L, where security levels H and L are assigned to sensitive and public information, respectively. Public information can flow into sensitive entities but not in the other way around. We have a similar security-level hierarchy between the Enclave and Client monads, respectively. Accordingly, we design the Enclave monad such that it restricts the possible variants of I/O operations. Internally, the Enclave monad constrains the scope of side-effecting operations to protect the confidentiality of data within the enclave (details in Section 5.3). Furthermore, HasTEE demands to explicitly mark where information is being sent back to the client (gateway), thus clearly indicating where to audit and control information leakages. Due to the security-critical nature of the Enclave monad, we include a trust operator, which is similar to the endorse function found in IFC literature.
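The practical effect of this hierarchy can be seen in a small example of our own (assuming the HasTEE API of Fig 4 is in scope): the Client monad can lift I/O, as in Listing 1, whereas the Enclave monad exposes no such lifting, so the only sanctioned way for sensitive data to reach the client is an explicit gateway call.

```
-- Illustration only: the explicit, auditable release point is gateway.
observe :: Secure (Enclave Bool) -> Client ()
observe check = do
  r <- gateway check
  liftIO (putStrLn ("result: " ++ show r))

-- By contrast, the following is rejected by the type checker, because Enclave
-- provides no general liftIO:
--
--   leak :: Enclave String -> Enclave ()
--   leak secret = secret >>= \s -> liftIO (putStrLn s)
```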
### Trusted GHC Runtime
One of the key challenges in allowing Haskell programs to run on TEE platforms is to provide support for the GHC Haskell Runtime (Marlow et al., 2009) itself. A Haskell program relies on the runtime for essential tasks such as memory allocation, concurrency, I/O management, etc. The GHC runtime heavily depends on well-known C standard libraries, such as glibc on Linux (GNUDevs, 1991) and msvcrt on Windows (Microsoft, 1994). In contrast, the Intel SGX SDK provides a much more restricted libc known as tlibc.
This results in the fact that several libc calls used by the GHC runtime such as mmap, madvise, epoll, select and 100+ other functions become unavailable. Even the core threading library used by the GHC runtime, pthread, has a much more restricted API on the SGX SDK. To solve this conundrum, we have patched portions of the GHC runtime and used functionalities from a library OS, Gramine (C. Tsai, Porter, et al., 2017), to enable the execution of GHC-compiled programs on the enclave.
### TEE Independence
Finally, HasTEE provides an abstraction over low-level system APIs offered by TEEs. As a result, the principles applied in programming Intel SGX should translate to the programming of other popular TEEs, such as the ARM TrustZone.
## 4. Design of HasTEE
### Threat Model
We begin by discussing the threat model of the HasTEE DSL. HasTEE has the very same threat model as that of Intel SGX. In this model, only the software running inside the enclave memory is trusted. All other application and system software, such as the operating system, hypervisors, driver firmware, etc., are considered compromised by an attacker. A very similar threat model is shared by a number of other works based on Intel SGX (Arnautov et al., 2016; Baumann et al., 2015; Ghosn et al., 2019; Lind et al., 2017).
In this work, we enhance the application-level security firstly by using a memory-safe language, Haskell, and secondly by using the Enclave monad to introduce information flow control. Our implementation strategy of loading the GHC runtime on the enclave allows us to handle Iago attacks (Checkoway and Shacham, 2013) (see Section 5.1). We trust the underlying implementation of the SGX hardware and software stack (such as tlibc) as provided by Intel. Known limitations of Intel SGX such as denial-of-service attacks and side-channel attacks (Schaik et al., 2022) are beyond the scope of this paper.
An ideally secure development process should include auditing the code running on the enclave either through static analyses or manual code reviews or both. The conciseness of Haskell codebases should generally facilitate the auditing process. However, the mechanisms for fail-proof audits are beyond the scope of this paper as well.
### HasTEE API
We show the core API of HasTEE in Fig 4. The functions presented operate over three principal Haskell data types: (1) Enclave, (2) Client, and (3) App. All three types are instances of the Monad typeclass, which allows for the use of do notation when programming with them. One of the key differences in functionality provided by the Client and Enclave monads is that Client allows for arbitrary I/O, whereas Enclave only provides restricted I/O. More on the latter in Section 5.3. The App monad sets up the infrastructure for communication between the Client and Enclave monad. We show a simple secure counter written using most of the API in Listing 2.
Listing 2 internally gets partitioned into the trusted and untrusted components via conditional compilation. In line 3, liftNewRef is used to create a secure reference initialised to the value 0. Following that, the computation to increment this value inside the enclave is given in lines 4 - 7. Applying inEnclave on the enclave computation (line 4) yields the type App (Secure (Enclave Int)). The Secure type is HasTEE's internal representation of a closure. Line 8 uses the critical gateway function to actually execute the enclave computation within the enclave memory and get the result back in the client memory. This resulting value, v, is displayed to the user.
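Listing 2 itself is not reproduced here, so the sketch below reconstructs roughly what the secure counter described above could look like; it follows the line-by-line description, but the reference helpers (readRef, writeRef) and the runClient entry point are hypothetical names, not confirmed parts of the HasTEE API.

```
-- A sketch of the secure counter (helper names are assumptions).
counter :: App Done
counter = do
  ref       <- liftNewRef (0 :: Int)      -- line 3: secure reference held by the enclave
  increment <- inEnclave $ do             -- lines 4-7: App (Secure (Enclave Int))
                 v <- readRef ref         -- 'readRef' / 'writeRef' are assumed names
                 writeRef ref (v + 1)
                 return (v + 1)
  runClient $ do                          -- 'runClient' is an assumed entry point
    v <- gateway increment                -- line 8: execute in the enclave, copy result out
    printClient ("Counter value: " ++ show v)
```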
The only function from Fig. 4 not used in Listing 2 is the <@> operator, used to collect arguments that are sent to the enclave. For example, an enclave function, f, that accepts two arguments, arg1 and arg2, would be executed as gateway (f <@> arg1 <@> arg2). Parameters to secure functions are copied to the enclave before the function is invoked, and results are copied from the enclave to the client before the client resumes execution. To do this copying, gateway and <@> have a Binary constraint on the types involved. This specifies that the values of the types involved have to be serialisable. Listing 1 in Section 1 shows a concrete usage of the operator. We have larger case studies in Section 6.
### Operational Semantics of HasTEE
We provide big-step operational semantics of the HasTEE DSL. Note that we illustrate the semantics using an interpreter written in Haskell that shows the transition of the client as well as the enclave memory as each operator gets interpreted. We show our _expression language_ and the abstract machine values to which we evaluate below:
```
type Name = String

data Exp   = Lit Int | Var Name | Fun [Name] Exp
           | App Exp | Let Name Exp | Plus Exp
           | InEnclave Exp | Gateway Exp | EnclaveApp Exp   -- HasTEE

data Value = IntVal Int | Closure [Name] Exp Env
           | SecureClosure Name [Value] | ArgList [Value] | Dummy
           | ErrState                                        -- Error conditions
```
The Exp language above is a slightly modified version of lambda calculus with the restriction of allowing only fully applied function application. This restriction is done to reflect the nature of the HasTEE API, which through the type system, only permits fully saturated function applications for functions residing in the enclave. The lambda calculus language is then extended with the core HasTEE operators.
In the Value type, the Closure constructor, owing to saturated function application, captures a list of variable names and the environment. Notable in the Value type is the SecureClosure constructor that represents a closure residing in the enclave memory. This constructor does not capture the body of the closure as the body could hold any hidden state that lies protected within the enclave memory. The SecureClosure value is used by the Gateway function to invoke functions residing in the enclave.
The ArgList constructor supports the <@> operator that collects enclave function arguments. Lastly, the Dummy value is used as a placeholder for operators lacking semantics depending on the client or the enclave memory. For instance, the Gateway function has no meaning inside the Enclave monad; it is only usable from the Client monad. The Dummy value crucially enables the conditional compilation trick in HasTEE by acting as a placeholder for meaningless functions in the respective client and enclave memory.
Our evaluators will show transition relations operating on two distinct memories that map variable names to values - the enclave memory and the client memory.
```
type ClientEnv  = [(Name, Value)]
type EnclaveEnv = [(Name, Value)]
```
Accordingly, we define two evaluators - evalEnclave (Fig. 5) and evalClient (Fig. 6). The complete evaluator runs in two passes. In the first pass, it runs a program and loads up the necessary elements in the enclave memory; then, in the second pass, the loaded enclave memory is additionally passed to the client's evaluator.
Two helper functions, genEncVar and evalList, are not shown for concision. They generate unique variable names and fold over a list of expressions, respectively. Appendix A contains the complete, typechecked semantics as runnable Haskell code.
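For completeness, plausible shapes of the two elided helpers are sketched below; these are our guesses based on the description above (and on the \(EncVar_{0}\) names used later), not the appendix code.

```
-- Fresh enclave-side variable names of the form "EncVar_0", "EncVar_1", ...
genEncVar :: Int -> Name
genEncVar n = "EncVar_" ++ show n

-- Left-to-right evaluation of a list of expressions, threading the environment.
evalList :: (env -> Exp -> (Value, env)) -> env -> [Exp] -> ([Value], env)
evalList eval env0 = foldl step ([], env0)
  where
    step (vs, env) e = let (v, env') = eval env e in (vs ++ [v], env')
```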
We use Listing 3 to illustrate how the enclave, as well as the client memory, evolves as a program gets evaluated. Our
Figure 4. The core HasTEE API
semantic evaluator operates in two passes. In the first pass, the evalEnclave evaluator from Fig. 5 is run. Fig. 7(a) shows the state of the enclave environment after the evaluator has completed evaluating Listing 3. Notably, the variable y maps to a value with no semantic meaning, as the evaluator is already running in the secure memory.
In the second pass, the environment from Fig. 7(a) is additionally passed as a state variable to the evaluator evalClient from Fig. 6. Note the different value mapped to the variable y in Fig. 7(b). EnclaveApp is evaluated on lines 25-34 in Fig 6. It generates the value SecureClosure "\(EncVar_{0}\)" [Lit 2].
Notable is the evaluation of the gateway call on line 4 of Listing 3. The semantics for this evaluation are in lines 12-24 of Fig 6. The evaluator, upon finding a reference \(EncVar_{0}\) with no semantics in the client memory (Fig. 7(b)), looks up \(EncVar_{0}\) in the enclave environment (Fig. 7(a)) and finds a \(\mathtt{Closure}\) with a body. Crucially, **it evaluates the Closure by invoking the evalEnclave function on line 21 of Fig. 6 using the enclave environment.** This part models how the SGX hardware switches to the enclave memory when executing the secure function f rather than the client memory. An important point is generating an identical fresh variable name, \(EncVar_{0}\), that the client uses to identify and call the functions in the enclave memory.
Figure 5. Operational Semantics of the Enclave
Figure 6. Operational Semantics of the Client
Figure 7. (a) gets loaded during the first evaluator pass, and the Client Environment remains empty. In the second pass, (b) gets loaded while having access to the memory (a), as can be seen in Fig 6.
### Practical security analysis
In what follows, we perform a security analysis of HasTEE. We start by making explicit that the only communication from the enclave back to the host client is via the primitive gateway. In this regard, we have the following claim capturing a (progress-insensitive (Askarov, Hunt, et al., 2008)) non-interference property. Intuitively, this property states that (side-effectful) programs do not leak information except via their termination behavior.
**Proposition 4.1** (Non-interference).: _Given a HasTEE program \(\mathtt{p}\) :: Enclave a -> App Done, where \(\mathtt{p}\) does not use primitive gateway, and two enclave computations \(\mathtt{e1}\) :: Enclave a and \(\mathtt{e2}\) :: Enclave a, then \(\mathtt{p}\)\(\mathtt{e1}\) and \(\mathtt{p}\)\(\mathtt{e2}\) perform the same side-effects in the host client._
This proposition states that in \(\mathtt{p}\) the public effects on the host client cannot depend on the content of the argument of type Enclave a. The veracity of this proposition can be proven from the semantics of gateway, which is the only primitive calling evalEnclave from evalClient Fig. 6. If non-interference does not hold in the context of developing HasTEE, it could indicate the presence of vulnerabilities in the system. For example, it could suggest that data is being leaked into the host environment due to an error in the partitioning process of the HasTEE compiler. Alternatively, it might imply that certain side effects within the enclave are unintentionally revealing data back to the host, contrary to our expectations. Non-interference serves as an important initial security condition in the development of HasTEE as it helps identify and address numerous vulnerabilities that may arise during the process.
When it comes to reasoning about programs with the primitive gateway, we need to reason about IFC _declassification_ primitives (or intended ways to release sensitive information) (Sabelfeld and Sands, 2005) and how to avoid exploiting them to reveal more information than intended. Gollamudi and Chong (2016) utilize _delimited release_ as the security policy. This security policy extends information-flow control beyond non-interference. It allows for explicit points of controlled information release, called _escape hatches_, where sensitive information can be sent to public channels. This policy stipulates that information may only be released through escape hatches and no additional information is leaked. The function gateway is our escape hatch. If we apply delimited release to HasTEE, then host clients can always learn what the function gateway \(\mathtt{e}\) returns, given that expression \(\mathtt{e}\) evaluates to the same value in the initial states \(\mathtt{s1}\), \(\mathtt{s2}\) :: Enclave \(\mathtt{a}\) given to a program \(\mathtt{p}\)--a condition to avoid misusing escape hatches to reveal more information than intended. Our case studies (Section 6) satisfy delimited release.
Automatically enforcing delimited release or robust declassification (Myers, Sabelfeld, et al., 2004) imposes severe restrictions in either the information being declassified or how declassification primitives are used. Hence, we leave enforcing such security policies as future work. Instead, our DSL explicitly requires marking the points where information is sent back to the client (i.e., gateway), making it clear where to audit and control information leakages.
## 5. Implementation of HasTEE
### Trusted GHC Runtime
One of the crucial challenges in implementing the HasTEE library is enabling Haskell programs to run within an Intel SGX enclave. All Haskell programs compiled via the Glasgow Haskell Compiler (GHC), rely on the GHC runtime (Marlow et al., 2009) for crucial operations such as memory allocation and management, concurrency, I/O management, etc. As such, it is essential to port the GHC runtime in order to run Haskell programs on the enclave.
The GHC runtime is complex software that is heavily optimized for specific platforms, such as Linux and Windows, to maximize its performance. For instance, on Linux, the runtime relies on a wide variety of specialised low-level routines from a C standard library, such as glibc (GNUDevs, 1991) or musl (Felker, 2005), to provide essential facilities like memory allocation, concurrency, and more. The challenge lies in porting the runtime due to the limited and constrained implementation of the C standard library in the SGX SDK, called tlibc (Intel, 2018). Specifically, tlibc does not support some of the essential APIs required by the GHC runtime, including mmap, madvise, select, poll, a number of pthread APIs, operations related to timers, file reading, writing, and access control, and 100+ other functions.
Given the magnitude of engineering effort required to port the GHC runtime, we fall back on a library OS called Gramine (C. Tsai, Porter, et al., 2017). Gramine internally intercepts all libc system calls within an application binary and maps them to a Platform Abstraction Layer (PAL) that utilizes a smaller ABI. In Gramine's case, this amounts to only 40 system calls that are executed through dynamic loading and runtime linking of a larger libc library, such as glibc or musl. Importantly, to protect the confidentiality and integrity of the enclave environment, Gramine uses a concept known as _shielded execution_, pioneered by the Haven system (Baumann et al., 2015), where a library is only loaded if its hash values are checked against a measurement taken at the time of initialisation. Shielded execution further protects applications running in Gramine against _Iago attacks_ (Checkoway and Shacham, 2013).
However, there are additional difficulties in loading the GHC runtime on the SGX enclave via Gramine. Owing to Gramine's diminished system ABI, it has a dummy or incomplete implementation for several important system calls that the runtime requires. For instance, the absence of the select, pselect, and poll functions, which are used in the GHC IO manager, required us to modify the GHC I/O manager to
manually manage the polling behavior through experimental heuristics. Similarly, the critical mmap operation in GHC uses specific flags (MAP_ANONYMOUS) that require modification. In addition, other calls, such as madvise, getrusage, and timer-based system calls, also require patching. We hope to quantify the performance impact of these modifications in the future.
After the GHC runtime is loaded onto an enclave, communication between the untrusted and trusted parts of the application effectively occurs between two disjoint address spaces. Communication between them can happen over any binary interface, emulating a remote procedure call. Our early prototype stage implementation uses an inter-process communication (IPC) call to copy the serialised data (Fig 8). A production implementation should communicate via the C ABI using Haskell's Foreign Function Interface (FFI), as this would be significantly faster than an IPC.
The Gramine approach requires 57,000 additional lines of code in the Trusted Computing Base (TCB) (C. Tsai, Porter, et al., 2017). However, this is still an improvement over traditional operating systems, like Linux, with a TCB size of 27.8 million lines of code (Larabel, 2020).
### HasTEE Library
The API of the HasTEE library was already shown (Figure 4) and discussed in Section 4.2. The principal data types, Enclave and Client, have been implemented as wrappers around the IO monad, as shown below:
```
newtype Enclave a = Enclave (IO a)   -- data constructor not exported
type Client = IO
```
A key distinction is that the Enclave data type does not instantiate the MonadIO typeclass, as a result of which arbitrary IO actions cannot be lifted inside the Enclave monad. This is to ensure that the enclave does not perform leaky IO operations such as writing to the terminal. Such effectful operations may leak information and cannot be rolled back. However, the Enclave monad does instantiate a RestrictedIO typeclass that will be discussed in the following section. The conditional-compilation-based partitioning technique is achieved by having dummy implementations of certain data types in one of the modules, while the concrete implementation of those types is defined in the second module. We give an example of this using two different data types from the API.
```
-- Enclave.hs
data Secure a = SecureDummy
```
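The client-side half of this listing did not survive in the text above. A minimal reconstruction of the dual-module trick could look as follows; the client-side representation of Secure (a call identifier plus serialised arguments) is our assumption rather than HasTEE's actual definition.

```
-- Enclave.hs (compiled into the trusted project)
data Secure a     = SecureDummy        -- closure bodies never leave the enclave
newtype Enclave a = Enclave (IO a)
type Client       = IO

-- Client.hs (compiled into the untrusted project); shown as comments since the
-- two modules export the same names:
-- data Secure a  = Secure CallID [ByteString]   -- remote reference + serialised args (assumed)
-- data Enclave a = EnclaveDummy                 -- enclave computations are meaningless here
-- type Client    = IO
```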
If the programmer wishes to send any user-defined data type to the untrusted client, they need to provide an instance of the Binary typeclass. Writing this typeclass instance for some confidential data type, such as a private key, equips the confidential data with the capacity to leave the enclave boundary, which should be done in a highly controlled manner.
Besides the gateway function, the Enclave monad occasionally needs to interact with general I/O facilities like file reading/writing or random number generation. For such operations, the Enclave monad would normally need a MonadIO instance to perform I/O. However, as discussed in the previous section, we do not provide the lenient MonadIO instance to the Enclave monad but instead use a RestrictedIO typeclass to limit the types of I/O operations that an Enclave monad can do.
RestrictedIO, shown in Listing 4, is a collection of typeclasses that constrains the variants of I/O operations possible inside an Enclave monad. For instance, if a programmer, through the usage of a malicious library, mistakenly attempts to leak confidential data through a network call, the typeclass would not allow this.
```
type RestrictedIO m = (EntropyIO m, UnsafeFileIO m)   -- other typeclasses not shown

class EntropyIO (m :: Type -> Type) where
  type Entropy m
  genEntropyPool :: m (Entropy m)

class UnsafeFileIO (m :: Type -> Type) where
  untrustedReadFile :: FilePath -> m (Untrusted String)
```
Listing 4. The Restricted IO typeclass
This approach is invasive in that it restricts how a library (malicious or otherwise) that interacts with a HasTEE program conducts I/O operations. For instance, we had to modify the HsPaillier library [L.-T. Tsai and Sarkar, 2016] that used the genEntropy function for random number generation. Initially, the library could use the Haskell IO monad freely, but to interact with a package written in HasTEE, it had to be modified to use the more restricted type class constraint (EntropyIO) for its _effectful_ operations. This limits potential malicious behaviour within the library. Notably, our changes involve only five lines of code that instantiate the type class and generalize the type signature of effectful operations.
Another aspect of IFC captured in our system is the notion of _endorsement_[Hedin and Sabelfeld, 2012], which is the dual of declassification. Endorsement is concerned with the integrity, i.e., trustworthiness, of information. In HasTEE, we utilize endorsement to ensure that the integrity of secrets is not compromised by data being introduced into the enclave.
HasTEE allows file reading operations inside the Enclave monad, which can potentially corrupt the enclave's data integrity. To control this, HasTEE provides two forms of file reading operation - (1) untrusted file read and (2) trusted encrypted file reads. For (1), data read from untrusted files require manual endorsement via the trust :: Untrusted a -> a operator (where Untrusted a is a wrapper over the data read). This provides an additional check before untrusted data interacts with the trusted domain.
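As an illustration of case (1), the sketch below shows how data read from an untrusted file might be endorsed inside the Enclave monad before use; the file path and the surrounding function are hypothetical, and only untrustedReadFile and trust are taken from the API described above.

```
-- Count the records in an externally supplied (untrusted) file.
countRecords :: Enclave Int
countRecords = do
  raw <- untrustedReadFile "/data/records.csv"   -- :: Enclave (Untrusted String)
  let contents = trust raw                       -- explicit, auditable endorsement point
  return (length (lines contents))
```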
For point (2), HasTEE relies on an Intel SGX feature known as _sealing_. Every Intel SGX chip is embedded with a unique 128 bit key known as the Root Seal Key (RSK). The SGX enclave can use this RSK to encrypt trusted data that it wishes to persist on untrusted media. This process is known as sealing; HasTEE provides a simple interface to seal as well as unseal the trusted data being persisted, as shown below:
```
data SecurePath = SecurePath String
```

## 6. Case Studies

### Federated Learning

Fig 9 shows the setup for this case study: a set of data owners contribute training data to a model held by an ML model owner; the data owners protect their data, the model owner protects the model, and training on encrypted weights is performed using homomorphic encryption, with the aggregation running inside an SGX enclave on a cloud server.
The above setup only requires the cloud server supporting Intel SGX technology so that even mobile devices can participate in training as a worker role. We can very conveniently model this entire setup as three clients and a server with an enclave in HasTEE. For illustration purposes, we will use GHC's threads to represent the three clients instead of three separate data owner machines.
Listing 5 models the server's state. Note that the weights are kept in plaintext form. The enclave state holds both its public and private keys. However, only the public key should be allowed to move to the client. We enforce this by not providing an instance of the Binary typeclass for the private key. If untrusted modules try to attack such enforcement by adding new instances to Binary, or even providing overlapping ones to override the behaviour of overloaded methods, then Safe Haskell (Terei et al., 2012) will instruct GHC not to compile the code. Haskell is unique in terms of having an extension like Safe Haskell. Safe Haskell enforces sandboxing for trusted code by banning extensions that introduce _loopholes_ and compromise type-safety or module abstraction (often for the sake of performance). As discussed in Section 5.3, the lack of a Binary instance for the privateKey will prevent the enclave programmer from accidentally leaking the security-critical private key.
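Since Listing 5 is not reproduced here, the following is a rough sketch of the shape such an enclave-held server state could take; the field and type names are our assumptions based on the description above.

```
-- Hypothetical shape of the enclave-side training state.
data ServerState = ServerState
  { weights    :: Vector Double   -- model weights, kept in plaintext inside the enclave
  , publicKey  :: PubKey          -- has a Binary instance, so it may be sent to clients
  , privateKey :: PrvKey          -- deliberately has no Binary instance: it cannot leave the enclave
  }
```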
Listing 6 shows the API exposed to the client machine. Instead of the complex SGX_ECALL machinery, our API is expressed in idiomatic Haskell. Calling any function f from the record api with an argument arg in this API is expressed simply as gateway ((f api) <@> arg).
```
type Accuracy = Double
type Loss     = Double

data API = API
  { aggregateModel :: Secure (Epoch -> Vector CipherText -> Enclave (Maybe (Vector CipherText)))
  , validateModel  :: Secure (Enclave (Accuracy, Loss))
  , getPublicKey   :: Secure (Enclave PubKey)
  , reEncrypt      :: Secure (CipherText -> Enclave CipherText)
  }
```
Listing 7 shows the main ML model training loop. A few functions have been elided for brevity, but the key portions of the client-server interaction in HasTEE should be visible. The Config type holds the state containing encrypted weights sent from the cloud server, the learning rate, the current epoch number and the public key. After each epoch it updates the weights to the new aggregated value (Line 12). The value x' is the data set that the data owners are protecting and y is the result of the learning algorithm. The adjustModelWithLearningRate function (body elided, line 6) takes the computed gradient (line 5) and tries to converge on the desired result.
On line 7 the server is asked to aggregate models spread across different clients, with the server returning the encrypted updated weights wt'. We use a wrapper over gateway, called retryOnEnclave (body elided), to allow the server to move in lock step with all the clients. Then in line 8, the server is queried again to collect the accuracy and loss for the ongoing epoch, which gets displayed in line 9. Finally, the loop continues in line 10.
```
 1: handleSingleEpoch :: API -> CurrentEpochNum -> MaxEpochNum
        -> Matrix Double -> Vector Int -> Config -> Client Config
 2: handleSingleEpoch api n m x' y cfg'
 3:   | n == m    = return cfg'
 4:   | otherwise = do
 5:       grad   <- computeGradient api cfg' x' y
 6:       cfgNew <- adjustModelWithLearningRate api
 7:                   (cfg' { iterN = n }) grad
 8:       wt'    <- retryOnEnclave $ (aggregateModel api) <@> n
 9:                                <@> (weights cfgNew)
10:       (acc, loss) <- gateway (validateModel api)
11:       printClient $ "Iteration no: " <> show n
12:         <> " Accuracy: " <> show acc <> " Loss: " <> show loss
13:       handleSingleEpoch api (n + 1) m x' y
14:         (cfgNew { weights = wt' })
```
Listing 7. The key model training loop
Listing 7 above features a complex control flow with at least two interactions visible in the loop itself. Internally, computeGradient and adjustModelWithLearningRate both communicate with the enclave, calling the reEncrypt function to remove noise from the homomorphic encryption operation. HasTEE can represent a fairly complex, asynchronous control flow as simple Haskell function calls.
Figure 9. A Federated Learning setup where the data owners are protecting their data and the ML model owner is protecting their model. The training with encrypted weights can be done using homomorphic encryption.
In terms of Information Flow Control, there are two important aspects in this case study. Firstly, the RestrictedIO typeclass constrains potentially malicious libraries from misbehaving. For example, consider the library HsPaillier [L.-T. Tsai and Sarkar2016], which implements the Paillier Cryptosystem [Paillier1999] for partial homomorphic encryption. All effectful operations from this library, such as genKey :: Int -> IO (PubKey, PrvKey), need to be rewritten for them to be usable within the Enclave monad. The following snippet shows our typeclass instantiation and a sample type signature change needed inside the library.
```
instance (IO ~ m) => EntropyIO m where
  type Entropy m = EntropyPool
  genEntropyPool = createEntropyPool

-- genKey :: Int -> IO (PubKey, PrvKey)                         -- original type
genKey :: (Monad m, EntropyIO m) => Int -> m (PubKey, PrvKey)   -- new type
```
The second aspect of IFC arises when the client machine queries the server for accuracy and loss by asking it to validate the model. On the server side, the enclave has to read a file with test data. This test data resides outside of the enclave and is potentially an attack vector. In order to not inadvertently trust such an exposed source, the enclave uses the untrustedReadFile function from the RestrictedIO typeclass (Listing 4). The file is read as an Untrusted String and requires explicit programmer _endorsement_ via the trust operator for the compiler to typecheck the program.
Overall the case study constitutes only 500 lines of code. It naturally fits into the client-server programming model, and the usage of Haskell provides type safety and enables IFC-based security.
### Encrypted Password Wallet
For this case study, we use HasTEE to implement a secure password wallet that stores authentication tokens in encrypted form on the disk. An authentication token can be retrieved from the wallet if the right master password is supplied. The definition of a password wallet used by the case study follows in Listing 8.
```
-- A single entry of authentication tokens
data Item = Item
  { title    :: String
  , username :: String
  , password :: Password
  } deriving (Show, Read)

-- The secure wallet
data Wallet = Wallet
  { items          :: [Item]
  , size           :: Int
  , masterPassword :: Password
  } deriving (Show, Read)
```
Listing 8: The definition of a password wallet as a regular Haskell data type.
The Show and Read instances are used to convert a wallet to and from a string. This allows us to write the wallet to disk, and by writing to a secure file path we ensure that the stored wallet is encrypted, as described in section 5.3. By omitting a Binary instance we ensure that the wallet is not inadvertently leaked to the client directly. The code in Listing 9 implements the functions that store and load the wallet. We emphasize that the code does not need to explicitly reason about encryption and decryption, except for defining the secure file path.
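Listing 9 is likewise not reproduced here; a sketch of what the store and load functions could look like is given below, reusing the readSecure and writeSecure names mentioned later in Section 7, but with signatures that are our assumptions.

```
-- Sketch: persist the wallet via the sealed-file interface (signatures assumed).
walletPath :: SecurePath
walletPath = SecurePath "wallet.dat"

storeWallet :: Wallet -> Enclave ()
storeWallet w = writeSecure walletPath (show w)   -- sealed (encrypted) transparently

loadWallet :: Enclave Wallet
loadWallet = read <$> readSecure walletPath
```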
Our password wallet has the following features - (1) adding an authentication token, (2) retrieving a password, (3) deleting a token and (4) changing the master password. It is designed as a command-line utility where the commands are handled by an untrusted client and the passwords are protected by the enclave. The complete implementation is roughly 200 lines of Haskell code.
The hardware-enforced security provided by our secure wallet makes it a natural fit for designing wallets that are protected by biometrics. A similar approach is used on modern iPhones, where passwords are stored in a secure enclave [Apple2021] to ensure confidentiality, and the user's biometric data is used as the master password. In our case, the usage of a high-level language like Haskell enables expressing this relatively complex application concisely.
### Data Clean Room with Differential Privacy
A _Data Clean Room_ (DCR) [AWS 2022] is a technology that provides aggregated and anonymised user information to protect user privacy while providing advertisers and analytic firms with non-personally identifiable information to target a specific demographic with advertising campaigns and analytics-based services.
DCRs compute and release aggregated results based on the user data. To prevent attackers from compromising individual user information from aggregate data (via statistical techniques), DCRs employ _differential privacy_[Dwork 2006]. Differential privacy adds calibrated noise to the aggregate data making it computationally hard for attackers to compromise individual data. The noise calibration can be adjusted for increased privacy (more noise) or increased accuracy (less noise).
Our third case study implements a _differentially-private_ DCR within an SGX enclave using HasTEE. The DCR operates on a record, User, containing fields such as name, occupation, salary, gender, age, etc. The User record is encrypted before being provisioned to the DCR, after which we use the _Laplace Mechanism_ (Dwork and Roth, 2014) when performing counting queries to add noise to the result. The mechanism introduces noise by sampling a Laplace distribution. The code implementing the Laplace mechanism can be found in Appendix B.
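For readers who do not consult Appendix B, the following is a minimal, self-contained sketch of the Laplace mechanism (not the paper's actual code): noise with scale sensitivity/epsilon is drawn via the inverse CDF of the Laplace distribution and added to the true query result.

```
import System.Random (randomRIO)

-- Sample Laplace(0, scale) noise by inverse-CDF sampling of a uniform variate.
laplaceNoise :: Double -> IO Double
laplaceNoise scale = do
  u <- randomRIO (-0.5, 0.5)
  return $ negate (scale * signum u * log (1 - 2 * abs u))

-- Add calibrated noise to a counting-query result.
noisyCount :: Double -> Double -> Double -> IO Double
noisyCount sensitivity epsilon trueCount = do
  noise <- laplaceNoise (sensitivity / epsilon)
  return (trueCount + noise)
```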
The DCR does not provide a Binary instance for the User record to ensure that it is not transferred to the enclave via plain serialisation. Instead, we expose functions that encrypt and decrypt users.
The Laplace Mechanism for adding noise requires a source of randomness. Here, we use Haskell's System.Random package, which internally reads from /dev/urandom. For production environments, a more cryptographically secure source of randomness is required. We extend the RestrictedIO (Section 5.3) interface to allow this operation as long as the programmer _endorses_ the file read.
Consider a sample query to test how many individuals in a data set have a salary within a specific range.
```
salaryWithin :: Integer -> Integer -> User -> Bool
salaryWithin l h u = l <= salary u && salary u <= h
```
The HasTEE code for the DCR executing this query is shown in Listing 10. Lines 3 to 8 specify the API of the data clean room. The DCR's API supports (1) initialisation, (2) fetching of the public key, (3) provisioning user data to the enclave, and (4) executing the salary query. Line 8 is used to generate some arbitrary users (for testing), after which the client code takes over. The client initializes the DCR and fetches its public key. After this, the users are encrypted and sent to the DCR. On line 15 the salary query is executed in the DCR, and then the result is printed.
Generating arbitrary users to test the setup is done purely for illustration purposes. In a more faithful implementation, the client would relay the public key to data owners that would then send already encrypted user records to the client, which provisions them to the DCR. Owing to HasTEE's client-server programming model and the use of a high-level language like Haskell, the implementation becomes very compact with roughly 200 LOC.
## 7. Evaluations
### Discussion
In contrast to development on the Intel C/C++ SGX SDK, HasTEE's high-level programming model entirely abstracts away the complexity of dealing with the low-level EDL files in the SGX SDK. The remote procedure calls that happen between the untrusted client and trusted enclave are typechecked in Haskell, unlike the SGX SDK. The benefits of a high level of abstraction can also be seen in the password wallet example, where the functions readSecure and writeSecure (Listing 9) relieve developers from the burden of key management. Furthermore, HasTEE guards programs against accidental data leaks and can enforce stronger compile-time guarantees than the Intel C/C++ SGX SDK. For instance, in all three case studies, the lack of the Binary type-class constraint would, by construction, prevent accidental leakage of the secret data from the enclave. All three case studies restrict the I/O operations possible in the Enclave monad by the type-class RestrictedIO. Notably, in the federated learning example, we adapted the homomorphic encryption library to limit the effects possible in the IO monad.
### Performance Evaluations
Our evaluations were conducted on an Azure Standard DC1s v2 (1 vcpu, 4 GiB memory) SGX machine. We use the password wallet case study as the canonical example to present performance evaluations across different parameters. We chose this example as it covers all the major aspects of the HasTEE API, such as protecting the confidentiality of data across the memory as well as the disk.
**Memory Overhead.** We show the memory consumption of our modified GHC runtime, sampled across 100 runs, where a sample was collected every second.
```
Memory     RSS          Virtual Size    Disk Swap
At rest    19,132 KB    287,920 KB      0 KB
Peak       20,796 KB    290,032 KB      0 KB
```
Although the memory usage of HasTEE will certainly vary across applications, these numbers provide a general estimate of the trusted GHC runtime's space usage. The Resident Set Size (RSS) indicates that the application fits within 20 MB at peak usage. RSS is an overestimate of memory usage
as it includes the memory occupied by shared libraries as well. As a result, we can be certain that our application fits within the Enclave Page Cache limit (Section 2) of 93 MB.
**Latency.** We measure the latency and throughput for an instance of password retrieval, which includes (i) an enclave crossing to call the trusted runtime, (ii) standard GHC execution time, (iii) encrypted file load, (iv) file decryption, (v) file read, and (vi) a second enclave crossing to return the result.
Our measurements show that using the Linux send/recv call for enclave crossing results in a _60 milliseconds overall latency_. As our current socket-based communication is a proof-of-concept, it incurs a substantial overhead compared to native SGX enclave crossings. As a baseline, we measured the latency of an encrypted password retrieval in unmodified GHC (file encrypted with gpg (GNU, 1999)). The baseline number comes out to be 0.6 milliseconds showing an overall 100x slowdown. Note that an average SGX ECALL operation incurs at least a 10x slowdown via the native SDK (Ghosn et al., 2019). We believe switching to native ECALLs has the potential to improve our latencies.
**Throughput.** In terms of throughput, HasTEE is able to handle on average 11 requests for password retrieval per second. Again, this number has the potential for further improvement by switching to native SGX ECALLs.
We currently present coarse-grained measurements of the various metrics but envision future work, where more fine-grained parameters, such as the correlation between the GC pauses across the two runtimes can be presented. Section 7.3 provides a qualitative comparison of HasTEE against GoTEE and \(J_{E}\).
### Comparing HasTEE to GoTEE and \(J_{e}\)
Table 1 presents a comparison between HasTEE and its two closest counterparts - GoTEE (Ghosn et al., 2019) and \(J_{E}\)(Oak et al., 2021). While both GoTEE and \(J_{E}\) had to modify the respective compilers, HasTEE required no modifications to the compiler. The specific runtime used by \(J_{E}\) is not mentioned in the paper (Oak et al., 2021); however, it suggests that no modification of the runtime was required, as it was run on a large virtualized host - SGX-LKL (Priebe et al., 2019). In contrast, the runtimes for HasTEE and GoTEE required modification. GoTEE required significant modifications to the Golang runtime system to enable communication between the trusted and untrusted memory.
Both GoTEE and \(J_{E}\) use sophisticated static analysis passes and program transformations to partition a program into its two components. In contrast, HasTEE's conditional compilation-based approach is much simpler, which is beneficial when it comes to security. Having less and simpler code makes it easier to verify for correctness. Notably, the purity of Haskell enables the user to inspect the type of a function and infer that it is naturally confined whenever a function is side-effect free. Inferring the confinement of a pure function is much more challenging in imperative languages like Java and Go.
## 8. Related Work
**Managed programming languages.** While there are imperative and object-oriented languages with TEE support (e.g., Go-based (Ghosn et al., 2019) and Java-based (Oak et al., 2021; C. Tsai, Son, et al., 2020)), HasTEE is (to the best of our knowledge) the first functional language running on a TEE environment. The Rust-SGX (Wang et al., 2019) project provides foreign-function interface (FFI) bindings to the C/C++ Intel SGX SDK. Different from HasTEE, Rust-SGX does not aim to introduce any programming model or IFC to protect against leakage of sensitive data. Instead, Rust-SGX's main goal is application-level memory safety when programming with the low-level SGX SDK. HasTEE provides memory safety by the virtue of running Haskell, a memory-safe language, on the enclaves. TrustJS (Goltzsche et al., 2017) takes a similar FFI-based approach as Rust-SGX for programming enclaves with JavaScript. An important project in this space is the WebAssembly (WASM) initiative (Rossberg, 2019). There have been WASM projects, both academic, such as Twine (Menetrey et al., 2021), as well as commercial, such as Enarx (Red Hat, 2019), aimed at allowing WASM runtimes to operate within SGX enclaves. Our initial approach was to use the experimental Haskell WASM backend (Tweag.io, 2022) to run Haskell on SGX enclaves. However, the aforementioned runtimes are not supported by GHC and lack several key features required for loading Haskell onto an enclave.
**Automatic partitioning.** HasTEE has seamless program partitioning and a familiar client-server-based programming model for enclaves. HasTEE's lightweight partitioning approach is inspired by the Haste.App library (Ekblad and Claessen, 2014)--a library to write web applications in Haskell and deploy parts of them as JavaScript in the web browser. The most well-known automatic partitioning tool for C programs on an SGX enclave is Glamdring (Lind et al., 2017). The general idea of partitioning a single program has been studied as multitier programming (Weisenburger et al., 2021). Among the existing approaches to multitier programming, HasTEE provides a lightweight alternative that does not require any compiler modification or elaborate dataflow analysis to partition the program.
**Application development.** There have been attempts to virtualize entire platforms within the enclave memory to reduce the burden of dealing with the two-project programming model of Intel SGX. Haven (Baumann et al., 2015) virtualizes the entire Windows operating system as well as an entire SQL server application running on top of it. SCONE (Arnautov et al., 2016) virtualizes a Docker container instance within an SGX enclave. The libraryOS Gramine (C. Tsai, Porter, et al., 2017), which is used in this work, is an example of lightweight virtualization.
AMD's TEE system, AMD SEV (AMD, 2018), is natively a virtualization-based approach. While it eases development, virtualization can result in drastically increasing the size of
the TCB. We chose to apply a libraryOS approach for HasTEE in order to have a TCB of 57k lines of code (Gramine). As a future work, we can move away from Gramine and make the GHC runtime a standalone library inside the SGX enclave.
**Information Flow Control.** HasTEE draws inspiration from the work on static IFC security libraries (e.g., (Buiras et al., 2015; Russo, 2015; Russo et al., 2008)). Such approaches rely on the purity of Haskell to detect and stop malicious behaviour. HasTEE can support IFC in a dynamic manner (Stefan et al., 2011) by adapting the interpretation of the Enclave type to be a runtime monitor rather than just a wrapper for IO, where gateway performs security checks when sending/receiving information--an interesting direction for future work.
The work on IMP\({}_{E}\)(Visser and Smaragdakis, 2016) studies IFC non-interference for passive and active attackers on the host client. Gollamudi, Chong, and Arden (2019) present a calculus for reasoning about IFC for applications distributed across several enclaves. \(J_{E}\)(Oak et al., 2021) studies how compromised host clients can abuse gateway (declassification) primitives. Their security property and enforcement is based on the notion of robust declassification (Myers, Sabelfeld, et al., 2004; Waye et al., 2015). Intuitively, this policy ensures that low-integrity data cannot influence the declassification of secret data. HasTEE enforces a simpler IFC policy for passive attackers--along the lines of Visser and Smaragdakis (2016)--and defer automatic analyses of the use of gateway for future work. Another interesting line of work is Moat (Sinha et al., 2015), which formally verifies enclave programs running on Intel SGX such that data confidentiality is respected. It uses IFC to enforce the policies and automated theorem proving to verify the policy enforcement mechanism.
## 9. Conclusion & Future Work
This paper presents HasTEE, a domain-specific language to write TEE programs while ensuring confidentiality of data by construction. Unlike previous work, HasTEE provides its partitioning of source code and IFC as a library! For HasTEE to work, we ported GHC's runtime to run within SGX enclaves by using the libraryOS Gramine. We demonstrate through three diverse case studies how HasTEE's IFC mechanism can help prevent accidental data leakage while producing concise code. We hope HasTEE opens future research avenues at the intersection of TEEs and functional languages.
There are several directions for future work. The IFC scheme we consider operates on two security levels - sensitive (Enclave) and public (Client) data. A natural extension is to enable multiple security levels (Myers and Liskov, 2000; Stefan et al., 2011) to represent the concerns of different principals contributing data to enclaves. TEEs also provide a verifiable launch of the execution environment for the sensitive code and data, enabling a remote entity to ensure that it was set up correctly. _Remote attestation_ (Knauth et al., 2018) allows an SGX enclave to prove its identity to a challenger using the private key embedded in the enclave. HasTEE does not capture attestation at the programming language level since it is a property of the system's component layout. Nevertheless, remote attestation can facilitate secure communication between multiple enclaves, e.g., in a distributed-enclave setting, so it would be interesting to incorporate language-level support for remote attestation. Finally, the GHC runtime is extensively optimized for performance. Obtaining a more compact and portable runtime, e.g., by using a restricted set of libc operations, could result in a considerably smaller TCB. A more portable runtime would facilitate HasTEE experiments on other TEE infrastructures such as ARM TrustZone and RISC-V PMP (RISC-V, 2017).
## Acknowledgements
This work was funded by the Swedish Foundation for Strategic Research (SSF) under the project Octopi (Ref. RIT17-0023) and VR. Special thanks to Mary Sheeran, Marco Vassena, Bo Joel Svensson, and Claudio Agustin Mista for their initial reviews, and Facundo Dominguez and Michael Sperber for guiding the published draft.
\begin{table}
\begin{tabular}{|p{113.8pt}||p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Framework & HasTEE & GoTEE & \(J_{E}\) \\ \hline \hline IFC support & Standard declassification & None & Robust declassification \\ \hline Partitioning scheme & Type-based & Process-based & Annotation-based \\ \hline Modified compiler & No & Yes & Yes \\ \hline Modified runtime system & Yes & Yes & No \\ \hline \multirow{3}{*}{Trusted Components} & GHC compiler, GHC runtime, Gramine & GoTEE compiler, GoTEE runtime & Java parser and partitioner, Jif compiler, JVM, SGX-LKL(Priebe et al., 2019) \\ \cline{2-4} & \multirow{2}{*}{Client-server} & Synchronous Message-Passing & Using the object-framework provided by Java \\ \cline{1-1} \cline{3-4} & & & \\ \hline \end{tabular}
\end{table}
Table 1. Comparison of HasTEE, GoTee, and \(J_{E}\). We specify the core components involved in the Trusted Computing Base in all three frameworks. |
2305.05169 | A compact single-shot soft X-ray photon spectrometer for free electron
laser diagnostics | The photon spectrum from free-electron laser (FEL) light sources offers
valuable information in time-resolved experiments and machine optimization in
the spectral and temporal domains. We have developed a compact single-shot
photon spectrometer to diagnose soft X-ray spectra. The spectrometer consists
of an array of off-axis Fresnel zone plates (FZP) that act as
transmission-imaging gratings, a Ce-YAG scintillator, and a microscope
objective to image the scintillation target onto a two-dimensional imaging
detector. This spectrometer operates in an energy range which covers absorption
edges associated with several atomic constituents carbon, nitrogen, oxygen, and
neon. The spectrometer's performance is demonstrated at a repetition rate of
120 Hz, but our detection scheme can be easily extended to 200 kHz spectral
collection by employing a fast complementary metal oxide semiconductor (CMOS)
line-scan camera to detect the light from the scintillator. This compact photon
spectrometer provides an opportunity for monitoring the spectrum downstream of
an endstation in a limited space environment with subelectronvolt energy
resolution. | Kirk A. Larsen, Kurtis Borne, Razib Obaid, Andrei Kamalov, Yusong Liu, Xinxin Cheng, Justin James, Taran Driver, Kenan Li, Yanwei Liu, Anne Sakdinawat, Christian David, Thomas J. A. Wolf, James Cryan, Peter Walter, Ming-Fu Lin | 2023-05-09T04:36:26Z | http://arxiv.org/abs/2305.05169v1 | # A compact single-shot soft X-ray photon spectrometer for free-electron laser diagnostics
###### Abstract
The photon spectrum from free-electron laser (FEL) light sources offers valuable information in time-resolved experiments and machine optimization in the spectral and temporal domains. We have developed a compact single-shot photon spectrometer to diagnose soft X-ray spectra. The spectrometer consists of an array of off-axis Fresnel zone plates (FZP) that act as transmission-imaging gratings, a Ce:YAG scintillator, and a microscope objective to image the scintillation target onto a two-dimensional imaging detector. This spectrometer operates in an energy range which covers absorption edges associated with several atomic constituents: carbon, nitrogen, oxygen, and neon. The spectrometer's performance is demonstrated at a repetition rate of 120 Hz, but our detection scheme can be easily extended to 200 kHz spectral collection by employing a fast complementary metal oxide semiconductor (CMOS) line-scan camera to detect the light from the scintillator. This compact photon spectrometer provides an opportunity for monitoring the spectrum downstream of an endstation in a limited space environment with sub-electronvolt energy resolution.
## 1 Introduction
A photon spectrometer is a crucial asset to any X-ray beamline. Monitoring the broadband photon spectrum from stochastic free-electron laser (FEL) light sources provides essential feedback to optimize the performance of the FEL, such as for sub-femtosecond pulse duration for studying attosecond electron dynamics and nonlinear optics [1, 2, 3]. Collecting the spectra downstream of the interaction point (IP) provides additional information in time-resolved experiments making use of the recently developed spectral-correlation spectroscopy [4, 5] and optical density (OD) measurements. Our photon spectrometer is particularly convenient for transient absorption spectroscopy that compares the transmitted FEL spectra with and without optical excitations (i.e., \(\Delta\)OD measurement), circumventing time-consuming fine scans in photon energy. This is similar to the transient absorption measurement using tabletop high-harmonic generated (HHG) X-ray light sources. In contrast to HHG sources, FEL light sources have much higher pulse energies that allow single-shot spectrum collection, which is essential for the correlation measurements with electrons and ions on a shot-by-shot basis. Accurate measurements of the X-ray photon spectrum is needed to help with selective and unambiguous excitation of the resonant features in the target or to provide reference features to assign to the time-dependent signals in the pump-probe experiments. However, the photon energy calculated from machine parameters (e.g., the electron beam kinetic energy and the undulator gaps of
FEL) frequently deviates from the true value by a few electronvolts (eV) to a few tens of eV, so a fast and reliable energy calibration of the soft X-ray spectrum and its central photon energy are basic experimental requirements.
Here, we show that by using an off-axis Fresnel zone plate (FZP), a Ce:YAG scintillator (Princeton Scientific Corp.) and a microscope (Navitar Inc.), we are able to measure single-shot photon spectra at X-ray energies in the neighborhood of the carbon, nitrogen, oxygen, and neon K-edge absorption features at sub-eV energy resolution. A similar approach has been demonstrated at the SwissFEL facility to measure the spectrum at hard X-ray photon energies [6]. The alignment of the X-ray laser through the off-axis FZP is simple [7], and the diffracted spectrum along the optical axis is measured at the maximum rate of our currently available camera. This was sufficient for a single-shot characterization at the 120 Hz repetition rate at which we performed our measurements. The characteristic line-shaped spectrum, with a width of a few pixels perpendicular to the spectral axis, suggests that a fast line-scan complementary metal oxide semiconductor (CMOS) camera can be used for the future ~100s kHz repetition rate operation of LCLS-II. Owing to its compact size, this spectrometer can be integrated into the limited space of existing beamlines where photon spectrum measurements are required.
## 2 Design of the soft X-ray photon spectrometer
The off-axis FZP gratings were made of gold-patterned thin film on a 100 nm thick silicon nitride membrane (Si\({}_{3}\)N\({}_{4}\)) for the carbon, oxygen, and neon absorption K-edges. For the nitrogen edge, we replaced the Si\({}_{3}\)N\({}_{4}\) membrane with a 100 nm aluminum membrane because of the strong absorption of Si\({}_{3}\)N\({}_{4}\) at this photon energy. As a plating base, we evaporated 5 nm Cr and 10 nm Au on the membranes. The FZP gratings were then patterned using e-beam lithography on PMMA resist, followed by gold electroplating to ~100 nm. Table 1 shows the design parameters for the FZP gold patterning on top of the substrates. Here, the pitch distance represents the spacing of the outermost zone. The smaller pitch for neon increases the dispersion, whereas a larger pitch is used for the carbon absorption edge to reduce the dispersion, so that in all cases the Ce:YAG detector plane is kept at a similar distance from the X-ray main axis (12 to 17 mm from the zero-order main beam). Each FZP grating operates with reasonable energy resolution over a certain bandwidth. The energy resolutions are estimated and measured at several X-ray energy ranges, and the results are shown in Section 3.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Elements & **Energy** & **Pitch** & **Radius** & **substrate** & **Pixel-to-eV** & **Full Energy** & **Resolution** \\ & (eV) & (nm) & (mm) & & **factor** & (eV) & **limit E\({}_{\text{exp}}\) (eV)** \\ \hline
**carbon** & 290 & 150 & 12.574 & Si\({}_{3}\)N\({}_{4}\) & 69 & 14.8 & 0.087 \\ \hline
**nitrogen** & 410 & 100 & 13.341 & Aluminum & 49 & 20.9 & 0.082 \\ \hline
**oxygen** & 530 & 100 & 10.319 & Si\({}_{3}\)N\({}_{4}\) & 29 & 35.3 & 0.106 \\ \hline
**neon** & 870 & 70 & 8.928 & Si\({}_{3}\)N\({}_{4}\) & 17 & 60.2 & 0.122 \\ \hline \end{tabular}
\end{table}
Table 1: Pitch and radius for the FZP at the outermost edge with the designed focal distance of 441 mm. All the grating architecture on top of the substrate are made of gold material, while the substrates vary with the absorption edges. The pixel-to-eV factor is used to convert CCD pixel number to electronvolt (e.g., 69 pixels per eV near 290 eV). The full energy range is calculated using the full 1024 pixels of the OPAL-CCD sensor. The resolution limit of FZP is calculated from the limited size of the FZP aperture of 0.5x0.5 mm\({}^{2}\)[7].
Figure 1 shows the schematic of the photon spectrometer design as implemented in the TMO hutch at the LCLS [8, 9]. The schematic includes the upstream existing Kirkpatrick-Baez (KB) mirrors. The nominal focal length of the horizontal (vertical)-focusing KB mirror is 2.4 m (1.6 m). The source point of the FZP is located at the IP, where a gas needle effusive source, a pulsed molecular beam, a gas flow cell, or a solid-state thin film can be used as sample targets. The focal length of the KB mirrors can be adjusted within a range of \(\pm\)100 mm from the nominal focus. The minimum focal spot size at the IP is \(\sim\)1.3x1.1 \(\upmu\)m\({}^{2}\) measured by our wavefront sensor targets using the fractional Talbot effect from a grating [9, 10]. As the comparable size of the horizontal and vertical foci implies, we can use the width along the non-dispersive direction to calculate the resolution along the spectral axis. The designed distance from the IP to the FZP is approximately 1.801 m (g\({}_{0}\)). This distance is greater than the designed distance between the FZP and the Ce:YAG detector (\(\sim\)0.584 m, b\({}_{0}\)). This provides a magnification (M) of the focal spot at the IP, down by a factor of \(\sim\)0.32 at the Ce:YAG detector plane (i.e., focal spot of \(\sim\)0.42x0.35 \(\upmu\)m\({}^{2}\)). The main advantage of this demagnification is to reduce the stretching length of spectra on the detector due to the chromatic aberration of the FZP while scanning a large energy window, for instance, \(\sim\)50 eV at 870 eV photon energy (see Equation 1 and the blue line on the detector in Figure 1). This is particularly convenient for a large energy range scan without moving the Ce:YAG detector inside the vacuum chamber. An energy calibration curve (i.e., the pixel-to-energy mapping and absolute energy offset) can be obtained readily from scanning the FEL photon energy over a broad range while the detector position and the associated microscope imaging system are fixed.
Figure 1: Schematic of the photon spectrometer and upstream Kirkpatrick–Baez (KB) mirrors. The X-ray laser beam propagates toward the left. The interaction point (IP) is the position of the nominal focal point of the KB mirrors and this point is adjustable with a range of \(\pm\) 100 mm. The designed distance from the IP to the FZP is 1.801 m (i.e., g\({}_{0}\)) and the distance from FZP to the center of the Ce:YAG is approximately 0.584 m (i.e., b\({}_{0}\)). A thin foil is used for the X-ray energy
calibration. A gas source highlights the IP location, which is the source point of the spectrometer. The microscope contains a high-zoom lens and a 2D CCD camera that runs at the full rate of the LCLS. The distance from the front of the microscope lens to the front surface of the Ce:YAG is ~32 mm. Note that the microscope lens and camera are located outside the chamber, while the rest of the components are in vacuum. The zero-order main beam transmitted through the FZP is projected onto the downstream 2D detector (imager) for alignment purposes. A spectral region of interest is also shown to highlight the spectral axis and the non-dispersive direction of a single-shot spectrum. The width of the spectrum along the non-dispersive direction can be used to estimate the energy resolution. A layout describing the X-ray main beam, the optical axis and the stretching length of the spectrum at two different detector angles is shown at the bottom left.
The chromatic aberration \(\Delta\)b of the FZP as a function of the X-ray energy E and bandwidth \(\Delta\)E is given by Equation 1 [7].
\[\Delta b=f(1+M)^{2}\frac{\Delta E}{2E}\quad\text{(Eq.~{}1)}\]
\[\frac{1}{f}=\frac{1}{b_{0}}+\frac{1}{g_{0}}\qquad\quad\text{(Eq.~{}2)}\]
, where f is the focal length of the FZP and M is the magnification of the spectrometer. The magnification is defined as b\({}_{0}\)/g\({}_{0}\), corresponding to 0.32 in the current design. Using Equation 2, this gives a focal length of 441 mm for an FZP at the designed imaging distances and photon energies shown in Table 1. The demagnification (M<1) of the focal spot implies that the detector system should be able to resolve structures at the few-micron scale to enable the highest energy resolution on the detector surface. The Ce:YAG scintillators have been demonstrated to be high-quality X-ray detectors, showing < 1 \(\upmu\)m parallax and fluorescence spreading [11]. Although the quantum efficiency of Ce:YAG is not high for soft X-ray measurements, the spatial resolution is the most important consideration when designing the energy resolution of such a device. From our estimates, considering the transmission efficiency of the FZP, a few \(\upmu\)J of pulse energy provide a sufficient number of photons on the scintillator for a single-shot spectral measurement. In our studies here, we have demonstrated that ~5 \(\upmu\)J is sufficient to provide a single-shot spectrum at 530 eV; moreover, several of our experiments using the FZP spectrometer in parallel with photoelectron/ion measurements have been recorded with up to ~200 \(\upmu\)J, without any sign of degradation in resolution over time or membrane rupture due to thermal heating (i.e., an average power of ~0.02 W).
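For concreteness, the short sketch below evaluates Equations 1 and 2 with the design distances quoted above; the ~1% bandwidth used for \(\Delta\)E is only an assumed example value.

```python
# Sketch of Eqs. 1-2 using the design distances quoted in the text.
# The 1% bandwidth assumed for delta_E is only an example value.

g0 = 1.801   # source (IP) to FZP distance [m]
b0 = 0.584   # FZP to Ce:YAG detector distance [m]

f = 1.0 / (1.0 / b0 + 1.0 / g0)   # Eq. 2: thin-lens relation, ~0.441 m
M = b0 / g0                        # design magnification, ~0.32

E = 530.0            # photon energy [eV]
delta_E = 0.01 * E   # assumed ~1% FEL bandwidth [eV]

# Eq. 1: longitudinal chromatic focal shift of the FZP
delta_b = f * (1 + M) ** 2 * delta_E / (2 * E)

print(f"f = {f*1e3:.0f} mm, M = {M:.2f}, delta_b = {delta_b*1e3:.2f} mm")
```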
The overall magnification of the microscope lens is set to 1.64x in the current experiments. The pixel size of the CCD is 5.5 \(\upmu\)m, therefore each pixel on the CCD sensor represents 3.3 \(\upmu\)m on the Ce:YAG (i.e., 5.5/1.64 = 3.3 \(\upmu\)m), and the field-of-view (FOV) on the Ce:YAG is ~3.4 mm (with a 5.6 mm CCD sensor size, 5.6/1.64 = 3.4 mm). The low magnification of the microscope lens is chosen to maintain the large FOV so that a wide X-ray energy range can be recorded without moving the Ce:YAG detector plane. At such low magnification, the microscope spatial resolution is limited to ~4.9 \(\upmu\)m, which corresponds to ~1.5 pixels on the CCD sensor. Note that at the full magnification of the camera lens (i.e., full zoom, 9x) a ~1.8 \(\upmu\)m spatial resolution can be achieved, which will be critical for a future upgrade of the photon spectrometer. In that configuration, a larger camera sensor with a spectral axis of ~20 mm is necessary to maintain the FOV over a wide X-ray energy range without moving the detector plane.
We can estimate the signal level of the X-ray photons on our spectrometer system. Since the thickness of the Ce:YAG scintillator (~0.5 mm) is much larger than the attenuation length (hundreds of nanometers) of the soft X-rays from 280 eV to 2 keV, all the X-rays impinging on the scintillator are absorbed [12]. The fluorescence spectrum of the scintillator from the X-ray absorption is centered at ~550 nm with a bandwidth of ~100 nm [13]. Each absorbed X-ray photon emits 8 to 16 \(\sim\)550 nm photons from the Ce:YAG scintillator per keV of X-ray energy [12]. To estimate a lower bound for the signal level, we assume four fluorescence photons per absorbed 530 eV photon. The X-ray beam is approximately 3 mm (FWHM), which is substantially larger than the open aperture of the FZP window (\(\sim\)0.5x0.5 mm\({}^{2}\)). Cropping of the beam by the FZP reduces the effective number of photons to \(\sim\)2.4 % of the incident beam, assuming the window is centered on the Gaussian X-ray profile. In addition, absorption in the Si\({}_{3}\)N\({}_{4}\) substrate attenuates the X-rays to 66%. The grating efficiency is assumed to be \(\sim\)10% [7, 14]. Therefore, the overall fluorescence yield from the diffracted X-rays illuminating the scintillator is \(\sim\)6.3E-3 per incident X-ray photon at 530 eV. The collection efficiency of our microscope is 0.1%, using a numerical aperture of 0.066, and our current camera sensor detection efficiency at 550 nm is \(\sim\)40%. Thus, the overall detection efficiency per incoming X-ray photon is \(\sim\)2.5E-6. Assuming the entire dispersive spectrum is spread over 300 pixels (i.e., a spectral axis of 150 pixels by 2 pixels in the non-dispersive direction, see Figure 1), the signal level of each pixel is \(\sim\)8.5E-9. A pulse energy of \(\sim\)1 \(\mu\)J at a photon energy of 530 eV corresponds to 1.2E10 photons per pulse. This suggests that on average we have \(\sim\)100 electrons on each camera pixel (assuming 1 electron per detected 550 nm photon). This is a factor of \(\sim\)8 above the readout noise of our current camera. Thus, at least a few \(\mu\)J of X-ray pulse energy is sufficient for single-shot spectral measurements using this combination of Ce:YAG and microscope CCD camera. From this estimate, a camera with low readout noise is an important factor for increasing the spectrometer detection sensitivity. Our current OPAL camera (Adimec OPAL-1000) has a noise level of 14 electrons per pixel per readout at the full repetition rate of 120 Hz (i.e., the current LCLS-I full rate).
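The signal-level arithmetic above can be reproduced with the short sketch below; all factors are the ones quoted in the text, with the four fluorescence photons per absorbed 530 eV photon being the stated lower-bound assumption.

```python
# Sketch of the signal-level estimate at 530 eV, following the factors quoted in the text.

eV = 1.602e-19               # J per eV

pulse_energy_J = 1e-6        # ~1 uJ pulse energy
E_photon_eV = 530.0
n_photons = pulse_energy_J / (E_photon_eV * eV)         # ~1.2e10 photons/pulse

crop      = 0.024   # fraction of the beam accepted by the 0.5x0.5 mm^2 FZP window
substrate = 0.66    # Si3N4 substrate transmission
grating   = 0.10    # assumed FZP diffraction efficiency
fluor     = 4       # lower bound: ~4 visible photons per absorbed 530 eV photon

fluor_yield = crop * substrate * grating * fluor        # ~6.3e-3 per incident photon

collection = 1e-3   # microscope collection efficiency (NA = 0.066)
qe_camera  = 0.40   # camera quantum efficiency at 550 nm

det_eff   = fluor_yield * collection * qe_camera        # ~2.5e-6
per_pixel = det_eff / 300                               # spectrum spread over ~300 pixels

electrons_per_pixel = n_photons * per_pixel             # ~100 electrons/pixel
print(f"{n_photons:.2e} photons/pulse, {electrons_per_pixel:.0f} e-/pixel")
```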
## 3 Single-shot spectra, energy resolution and discussion
We used our spectrometer to measure single-shot spectra near the carbon, nitrogen, oxygen, and neon K-edges, as illustrated in Figure 2. Averages of the single-shot measurements are shown above the single-shot data. The X-ray beam size that illuminates the FZP array is photon-energy dependent, ranging from \(\sim\)1 to 5 mm (from 860 eV toward 280 eV). Therefore, the total pulse energy was varied from \(\sim\)20 \(\mu\)J to \(\sim\)100 \(\mu\)J in order to keep the same fluence (i.e., a few mJ/cm\({}^{2}\)) on the FZP grating, to avoid damage and to maintain similar signal levels. These pulse energies provide sufficient signal-to-noise levels to record single-shot spectra at 120 Hz. For each FZP, we can resolve the SASE sub-structure of the individual X-ray pulses, in addition to the shot-to-shot jitter in the X-ray central photon energy (\(\hbar\omega_{c}\)). The average spectrum suggests a \(\sim\)1 % FWHM bandwidth from the FEL (i.e., 0.01\(\hbar\omega_{c}\)). Note that the 2D spectra (a1, b1, c1 and d1) are not plotted with a 1-to-1 aspect ratio. The width along the vertical axis (non-dispersive direction) is an image of the source point (i.e., the IP), and this value can be used to estimate the energy resolution for each FZP.
The spectrometer is calibrated with thin-film samples placed upstream of the FZP grating (between the IP and the FZP, as displayed in Figure 1). The absorption spectra of these target films are obtained by comparing the averaged transmission spectra with and without the samples in the beam path. The absorbance (i.e., OD) of the sample is calculated using
\[OD=\log_{10}\left(\frac{I_{0}}{I}\right)\qquad\text{(Eq.~{}3)}\]
, where I and I\({}_{0}\) are the transmission spectra with and without the sample, respectively. The absorption was measured for four different targets: a 0.5 \(\upmu\)m carbon film, a 0.9 \(\upmu\)m mylar film, a thin CuO film and a 0.15 \(\upmu\)m nickel film. The carbon, mylar and nickel thin films were all purchased from Lebow Company. The CuO film was made by oxidation of a ~70 nm Cu film (deposited on 100 nm Si\({}_{3}\)N\({}_{4}\)) in an air furnace at ~600 \({}^{\circ}\)C for 2 hours. We also measured the absorption through a nitrogen gas column (~0.07 torr in a ~14 m long gas attenuator) upstream of the TMO endstation. The optical density measured using the FZP (red) is plotted in comparison with measurements from the literature (blue) in Figure 3(b)-3(f). The transmission spectra with (green) and without (grey) the carbon film are shown in Figure 3(a) to
Figure 2: Single-shot and average spectra of the FZPs at the designed photon energies near the carbon, nitrogen, oxygen, and neon absorption K-edges. The pulse energy on the FZP varies from \(\sim\)20 \(\upmu\)J to \(\sim\)100 \(\upmu\)J. The higher X-ray photon energies require lower pulse energy because of the smaller spatial spot on the FZP grating. (a) Carbon K-edge spectra, (b) nitrogen K-edge spectra, (c) oxygen K-edge spectra and (d) neon K-edge spectra. The small dip in the averaged spectrum in (c) may originate from the oxidized layer of the mirror coating along the beamline or from absorption in the Ce:YAG detector, which contains oxygen. Note that the single-shot images shown in (a1, b1, c1 and d1) are not plotted with a 1-to-1 aspect ratio. The FWHM widths of the spectra along the non-dispersive (vertical) direction are only a few pixels (~2 to 5 pixels). The asterisks exemplify SASE sub-structures in the single-shot spectra in a3, b3, c3 and d3.
demonstrate the OD calculation. The ~1% bandwidth of the FEL average spectrum can cover the entire set of near-edge resonance features of gaseous and solid systems, which is an advantage for energy calibration during the experiments. Note that the resonant absorption features may not be located at the position of best resolution of the designed FZP, due to the tilted angle of the detector plane relative to the optical axis. However, the resolved SASE sub-structure features suggest that sub-eV energy resolution is achieved across these broad photon energy ranges. In addition, the absorption widths in semiconductors or metals do not represent the actual resolution, since the width of the absorption peaks is a convolution of the core-hole lifetime broadening and the broad transitions from the core-level electron to the conduction bands. The sharp peaks seen in the nitrogen gas, however, provide a direct resolution measurement of ~0.21 eV.
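The optical-density evaluation of Equation 3 reduces to a few lines; in the sketch below the arrays are synthetic placeholders standing in for the averaged transmission spectra, not measured data.

```python
import numpy as np

def optical_density(I0, I):
    """Eq. 3: OD = log10(I0 / I), computed pixel-by-pixel on averaged spectra."""
    return np.log10(np.asarray(I0, dtype=float) / np.asarray(I, dtype=float))

# Placeholder arrays standing in for averaged 1024-pixel transmission spectra.
I0 = np.full(1024, 1000.0)      # without sample
I  = I0 * 0.5                   # with sample (placeholder attenuation)
OD = optical_density(I0, I)     # ~0.30 everywhere in this toy example
```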
Two measurements were performed to quantify the energy resolution of the spectrometer. The first measurement scanned the source point of the X-rays while monitoring the change in the vertical width of the single-shot spectrum. The second measurement was a scan of the photon energy, performed to find the optimal energy resolution at the current tilted detector plane (~9 degrees relative to the optical axis). Both measurements used the widths along the non-dispersive direction of the spectrum. This width is an image of the source size, s, of the KB focus at the IP (see Figure 1), and this axis provides information on the maximum resolution achievable with an FZP-based spectrometer. Equations 4-10 below are used to calculate the final energy resolution of an FZP for the current geometry of the photon spectrometer design. In Equation 4, \(\Delta x_{0}\) represents the nominal X-ray focal size on the detector with the source (g\({}_{0}\)) and image (b\({}_{0}\)) distances at the designed values.
\[\Delta x_{0}=s\left(\frac{b_{0}}{g_{0}}\right)\] (Eq. 4)
The image size on the Ce:YAG detector, \(\Delta x_{1}\), is found by combining the nominal image size with two broadening contributions, as shown in Equation 5. The first term is the nominal image size due to the demagnification of the source point. The second term is the width
Figure 3: (a-b) Photoabsorption spectrum measurement at the carbon K-edge using a carbon film (0.5 μm), (c) at the nitrogen K-edge using nitrogen gas, (d-e) at the oxygen K-edge using CuO and mylar (0.9 μm) thin films, and (f) in the absorption region near the neon K-edge using a nickel thin film (0.15 μm), respectively. The transmission spectra with and without the sample are shown in (a) for the carbon K-edge absorption as an example of the optical density calculation using Eq. 3. The absorption spectra shown in (b-f) are measured by the FZP and are displayed in red, while reference spectra are shown in blue [15, 16, 17, 18]. The peaks observed in the N\({}_{2}\) absorption spectrum originate from N 1s \(\rightarrow\) Rydberg state excitations [15].
when the KB focus is at a distance upstream or downstream of the IP (\(\mathrm{g}\neq\mathrm{g}_{0}\) and \(\mathrm{b}\neq\mathrm{b}_{0}\)). \(\Delta\bar{b}\) is defined as (\(\mathrm{b}\)-\(\mathrm{b}_{0}\)), which describes the focal-distance offset of the FZP due to the change of the KB focus away from the designed value \(\mathrm{g}_{0}\). The third term is the spot-size broadening due to chromatic aberration [7]. It gives the magnitude of the spot broadening due to the offset between the photon energy and the designed X-ray energy of a specific FZP (i.e., \(\Delta b\neq 0\) when \(\Delta E\neq 0\)), where \(\Delta b\) is the chromatic aberration described in Equation 1. \(\mathrm{w}_{r}\) is the open aperture of the FZP (see Figure 1).
\[\Delta x_{1}=\sqrt{(\Delta x_{0})^{2}+(\Delta\bar{b}\,\frac{\mathrm{w}_{r}}{b} )^{2}+(\Delta b\,\frac{\mathrm{w}_{r}}{b})^{2}}\] (Eq. 5)
The \(\Delta x\) in Equation 6 represents the final focal-spot width on the camera sensor, taking into account the current microscope resolution, \(\Delta x_{2}\) (\(\sim\)4.9 \(\mu\)m), and the width \(\Delta x_{1}\) described in Equation 5.
\[\Delta x=\sqrt{\Delta x_{1}^{2}+\Delta x_{2}^{2}}\;;\Delta x_{2}=4.9\;\mu m \qquad\text{(Eq. 6)}\]
The spatial broadening described up to Equation 6 completes the estimate of the expected width along the non-dispersive direction, which can then be compared with the data.
The spatial widths along the non-dispersive direction from Equation 6 and from experimental measurements are both converted to units of energy by considering two factors: (1) the spreading of the diffracted light along the detector plane due to the detector's tilt and (2) the pixel-to-eV factor. The stretch width (i.e., footprint) on the tilted detector is calculated using Equation 7,
\[\Delta x^{\prime}\approx\Delta x/\sin(\theta+\alpha)\] (Eq. 7)
where \(\theta\) is the diffraction angle between the zero-order X-ray beam and the optical axis, and \(\alpha\) is the tilt angle of the detector relative to the optical axis (see Figure 1). The angles \(\theta\) and \(\alpha\) are approximately 0.33 and 9 degrees, respectively, in the current design for 530 eV.
The pixel-to-eV factor (pte), shown in Equation 8, can be obtained from the energy scan while recording the central pixel of the average spectrum (see Table 1).
\[\Delta E^{\prime}=\Delta x^{\prime}/pte\] (Eq. 8)
Due to the limited size of the FZP aperture, the energy resolution is restricted to \(\sim\)0.1 eV (i.e., \(\mathrm{E}_{fzp}\)) as calculated from Equation 9 [7]. This value is combined in quadrature with \(\Delta E^{\prime}\) to give the final resolution, as shown in Equation 10. The calculated energy resolution (as a function of KB focal distance, photon energy and FZP choice) is compared to experiments in Figures 4 and 5.
\[E_{fzp}\approx\frac{pitch}{w_{r}}E\] (Eq. 9)
\[\Delta E=\sqrt{\Delta E^{\prime 2}+E_{fzp}^{2}}\] (Eq. 10)
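To make the resolution chain of Equations 4-10 concrete, the sketch below evaluates it for the oxygen FZP. The design distances, aperture, pitch, pixel size, angles and pixel-to-eV factor follow the text and Table 1, whereas the source size s, the KB focal offset and the energy offset are assumed example inputs, so the printed number is only indicative.

```python
import math

# Worked illustration of Eqs. 4-10 for the oxygen FZP (530 eV design).
g0, b0 = 1.801, 0.584                    # design source and image distances [m]
f = 1.0 / (1.0 / b0 + 1.0 / g0)          # Eq. 2, ~0.441 m
M = b0 / g0

E, pitch, w_r = 530.0, 100e-9, 0.5e-3    # eV, outermost-zone pitch [m], aperture [m]
pte = 29.0                               # pixels per eV (Table 1, oxygen)
pix = 3.3e-6                             # m on the Ce:YAG per CCD pixel
theta, alpha = math.radians(0.33), math.radians(9.0)

s      = 1.2e-6   # assumed source (IP focus) size [m]
db_bar = 0.0      # assumed KB focal offset (b - b0) [m]
dE     = 3.0      # assumed offset from the design energy [eV]

b  = b0 + db_bar
db = f * (1 + M) ** 2 * dE / (2 * E)                    # Eq. 1, chromatic focal shift

dx0 = s * b0 / g0                                       # Eq. 4
dx1 = math.sqrt(dx0**2 + (db_bar * w_r / b)**2 + (db * w_r / b)**2)   # Eq. 5
dx  = math.sqrt(dx1**2 + (4.9e-6)**2)                   # Eq. 6, add microscope resolution
dxp = dx / math.sin(theta + alpha)                      # Eq. 7, footprint on tilted detector
dEp = (dxp / pix) / pte                                 # Eq. 8, footprint in pixels -> eV
E_fzp = (pitch / w_r) * E                               # Eq. 9, aperture-limited term
dE_final = math.sqrt(dEp**2 + E_fzp**2)                 # Eq. 10

print(f"Estimated resolution: {dE_final:.2f} eV")
```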
In the first measurement, we adjust the KB focal length near the designed interaction/source point to locate the minimum width in the non-dispersive direction, so that the spectral resolution can be optimized for each designed FZP. The widths at several locations away from the designed focal point are recorded. Figure 4 displays the predicted results as dot-dashed and solid curves for \(\Delta\)b=0 and \(\Delta\)b\(\neq\)0 (i.e., \(\Delta\)E=0 and \(\Delta\)E\(\neq\)0), respectively. The measured values are shown as circles with error bars for three different photon energies. The measured minima in Figure 4 are higher than the dot-dashed predicted values by \(\sim\)20% to 50%. This suggests that the X-ray energies used in the measurement are a few eV different from the X-ray energies for which the FZPs were designed. This explanation is supported by the sharp resonant absorption features of nitrogen gas, for which we measured a resolution of \(\sim\)0.21 eV at 406 eV, shown as the red square in Figure 4(a). The difference between the two measurements, using the nitrogen absorption and the width of the non-dispersive direction, indeed originates from the strong variation of the focal width as a function of X-ray energy at the tilted detector (i.e., 409 eV for the foci scan and 406 eV for the absorption measurement), which is supported by the estimate using \(\sim\)409 eV shown as the red solid line. This also indicates that an energy difference of \(\sim\)1 % at the nitrogen K-edge has an impact on the energy resolution. Similarly, the discrepancy between measurements and the dot-dashed predictions in Figures 4(b-c) can also be explained by energy differences. A small energy offset in the estimation, shown as solid lines, describes the experimental results better (i.e., 3 eV and 2 eV for the oxygen and neon cases). The effect of the tilted detector that leads to resolution loss is shown in Figure 5.
Figure 4: Spectral resolution measurements as a function of KB focal distance for three photon energies at the (a) nitrogen (409 eV), (b) oxygen (526 eV) and (c) neon (872 eV) K-edges (red, blue, and green circles, respectively). The resolution is calculated using the FWHM width along the non-dispersive direction (the y-axis of the camera images shown in Figure 2), and those widths are converted to eV. The dot-dashed (solid) lines are the estimated resolution dependence using Equations 4-10 with \(\Delta\)b=0 (\(\Delta\)b\(\neq\)0). The resolution difference between the dot-dashed and solid lines originates from the energy difference (\(\Delta\)b and \(\Delta\)E \(\neq\) 0). The error bars represent a 0.5 degree uncertainty in the angle of the Ce:YAG detector. The red square in (a) represents the resolution obtained from the nitrogen gas absorption spectrum at 406 eV, and it agrees with the prediction shown by the dashed curve.
For the second experiment, we scanned the photon energy to test the resolution. Although tilting the detector allows a larger X-ray energy range to fall within the CCD sensor, the energy resolution degrades when the photon energy deviates from the designed X-ray photon energy (see Table 1). In principle, this can be mitigated by reducing the tilt angle between the detector plane and the optical axis in order to simultaneously achieve a good focal condition over a broad X-ray energy range. However, this requires the whole detector plane to lie nearly parallel to the optical axis, which inevitably leads to a long image length along the spectral axis and covers only a very small energy range given the limited CCD sensor size. In the current spectrometer design, the tilt angle is approximately 9 degrees, which is a compromise to achieve a \(\sim\)2% energy range, 0.02\(\hbar\omega_{c}\), that gives a resolving power of >1000. Figure 5 shows the resolution variation of the FZPs near the oxygen and neon K-edges at the source distance g\({}_{0}\) (\(\Delta\bar{b}\)\(\sim\)0). Similarly, we again use the width along the non-dispersive direction to estimate the resolution as a function of a broad X-ray energy range. The solid lines are the predicted values using Equations 4-10. The predictions and measurements agree well with each other. The best resolution for the current oxygen FZP is approximately 0.45 eV at 524 eV, slightly below the 526 eV photon energy used in Figure 4(b). As described above, this explains, in part, the discrepancy between the measured minimum value and the prediction in Figure 4(b). This also suggests that a \(\sim\)1% energy deviation from the designed photon energy leads to a \(\sim\)10% degradation in energy resolution.
The current limitations on energy resolution are the microscope spatial resolution, the tilt angle of the detector and the aperture size of the FZP windows. These limitations can be mitigated by incorporating a line-scan camera that is ~20 mm long along the spectral axis and ~20 \(\mu\)m along the non-dispersive direction. A longer 1D sensor enables a higher magnification and a smaller tilt angle while maintaining the same FOV, improving the energy resolution of the spectrometer. In addition, increasing the FZP window size from the current 0.5x0.5 mm\({}^{2}\) to 1.0x1.0 mm\({}^{2}\) will also improve the resolution, while also increasing the risk of film rupture due to mechanical vibration and air flow during the vacuum pump-down or venting process.
With a tilted detector in the current FZP spectrometer, the stretching length of the spectrum is significantly reduced (see the red and blue lines on the detector in Figure 1), so a larger X-ray energy range can be covered within a given CCD sensor. This is especially convenient for an energy scan with a fixed detector location, so that the experimental apparatus can remain untouched. However, it inevitably reduces the energy resolution when the incident X-ray energy is set away from the designed photon energy. By reducing the tilt angle from 9 degrees to 4.5 degrees, our FZP can cover a \(\sim\)4% X-ray bandwidth (0.04\(\hbar\omega_{c}\)) while still providing a resolving power of better than 1000. The \(\sim\)4% window should also cover all the important absorption features in static and pump-probe experiments. For instance, the valence hole states of gaseous or solid systems are generally \(\sim\)1 to 2 % below the absorption pre-edges [19, 20]. These transient features can be measured at high resolution when the right FZP is chosen. For a scan over a larger bandwidth, a nearly constant optimal resolution can be achieved by using a combination of FZPs spaced by \(\sim\)4 % in energy to cover a wider range near a given absorption edge. For example, FZPs centered at 500 eV, 520 eV and 540 eV would cover measurements near the oxygen K-edge while maintaining a resolving power of 1000 or higher.
## 4 Conclusion and outlook
We have demonstrated that an FZP can be used to characterize single-shot FEL spectra, and we have shown the energy calibration of the device using multiple sample targets. These measurements indicate sufficient energy resolution to resolve the SASE sub-features of FEL pulses and the resonant absorption lines of solid and gaseous molecular systems, despite the small size of the instrument. This spectrometer has recently been employed by LCLS users as a spectral diagnostic for FEL tuning and for experiments that exploit the correlation between the incident spectrum and measurements made at the IP [21]. We also demonstrated that, over the \(\sim\)2% energy bandwidth of the FEL average spectra (\(\sim\)10 eV bandwidth at 520 eV), the resolving power can be maintained between 1000 and 1500.
Looking towards the future, the current spectrometer design will require upgrades, since the upcoming LCLS-II light source operates at repetition rates of up to 1 MHz. The average power will increase by \(\sim\)3 orders of magnitude to several watts, which can easily damage the FZP grating due to the heat deposited in the thin film. In order to mitigate the heat load on the FZP, one option is to reduce the pulse energy by 2 to 3 orders of magnitude after the IP and to increase the sensitivity of the detection system. The detection can be direct, using X-ray detectors, or indirect, using the current approach of imaging the Ce:YAG fluorescence with the microscope. However, direct measurement using a commercial X-ray CCD or MCP has an acquisition rate that does not meet the high repetition-rate requirements. A custom-made 1-D line camera can have single-photon sensitivity and can operate at a high repetition rate; this technical development is currently ongoing at SLAC National Accelerator Laboratory. For indirect detection, integrating a fast line-scan camera and an intensifier into the microscope may provide a good solution for spectrometer operation at the LCLS-II light source.
Figure 5: Variation of the energy resolution of the FZP spectrometer at two different designed photon energies: (a) oxygen and (b) neon K-edges. The energy resolution is calculated from the width of the non-dispersive direction at each photon energy. The predictions shown as solid lines are calculated using Equations 4-10. The shaded areas represent a resolving power of 1000 for both the oxygen and neon K-edges. The error bars represent a 0.5 degree uncertainty in the Ce:YAG tilt angle relative to the optical axis (see Figure 1).

## Acknowledgement

Use of the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515.
|
2306.03048 | From Robustness to Explainability and Back Again | In contrast with ad-hoc methods for eXplainable Artificial Intelligence
(XAI), formal explainability offers important guarantees of rigor. However,
formal explainability is hindered by poor scalability for some families of
classifiers, the most significant being neural networks. As a result, there are
concerns as to whether formal explainability might serve to complement other
approaches in delivering trustworthy AI. This paper addresses the limitation of
scalability of formal explainability, and proposes novel algorithms for
computing formal explanations. The novel algorithm computes explanations by
answering instead a number of robustness queries, and such that the number of
such queries is at most linear on the number of features. Consequently, the
proposed algorithm establishes a direct relationship between the practical
complexity of formal explainability and that of robustness. More importantly,
the paper generalizes the definition of formal explanation, thereby allowing
the use of robustness tools that are based on different distance norms, and
also by reasoning in terms of some target degree of robustness. The experiments
validate the practical efficiency of the proposed approach. | Xuanxiang Huang, Joao Marques-Silva | 2023-06-05T17:21:05Z | http://arxiv.org/abs/2306.03048v2 | # From Robustness to Explainability and Back Again
###### Abstract
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal explainability offers important guarantees of rigor. However, formal explainability is hindered by poor scalability for some families of classifiers, the most significant being neural networks. As a result, there are concerns as to whether formal explainability might serve to complement other approaches in delivering trustworthy AI. This paper addresses the limitation of scalability of formal explainability, and proposes novel algorithms for computing formal explanations. The novel algorithm computes explanations by answering instead a number of robustness queries, and such that the number of such queries is at most linear on the number of features. Consequently, the proposed algorithm establishes a direct relationship between the practical complexity of formal explainability and that of robustness. More importantly, the paper generalizes the definition of formal explanation, thereby allowing the use of robustness tools that are based on different distance norms, and also by reasoning in terms of some target degree of robustness. The experiments validate the practical efficiency of the proposed approach.
## 1 Introduction
The importance of rigorous analysis of complex ML models is widely acknowledged [59; 42], and critical for the deployment of systems of artificial intelligence (AI) in safety-critical domains. Hence, rigorous analysis of complex ML models is paramount when analyzing their robustness, fairness and explainability.
Recent years witnessed a large body of work on the assessment of robustness, e.g. [48; 33; 26; 13; 18; 40; 4; 69; 55; 20; 63; 62; 68; 14; 34; 70; 56; 22; 24; 8; 61; 49; 1; 72; 35; 54; 15; 10; 71; 11; 38; 12; 36; 57; 45; 67; 47; 17; 19; 75; 65], with significant progress observed. Analysis of robustness is motivated by the existence of adversarial examples in complex ML models [64]. More recently, there have been massive efforts regarding the explainability of ML models. Unfortunately, most explainability approaches offer no guarantees of rigor [7; 51; 39; 52; 58]. Indeed, there is practical evidence that explanations computed with well-known non-formal methods can produce incorrect results [27]. A recent alternative is given by formal approaches to explainability [60; 41]. These approaches propose logic-based definitions of explanations, which are model-precise, and so ensure absolute guarantees of rigor.
Despite the observed progress, formal XAI still faces critical challenges. Among these, the most significant is arguably the inability of these methods for explaining (deep) neural networks ((D)NNs), but also a host of other ML classifiers, for which logic encodings have not yet been devised (e.g. support vector machines (SVM's) and \(k\) nearest neighbors classifiers among several others). Unsurprisingly, the inability of formal XAI methods to scale for complex DNNs has been acknowledged over the years [29; 27; 41; 73; 9]. Existing data [29] indicates that formal explanations can be computed for DNNs containing a few tens of ReLU units, being unclear how similar-sized NNs with other
non-ReLU activation units (which are more difficult to represent using logic statements) might be explained.
Another challenge of formal XAI is explanation size [41], given the well-known limitations of human decision makers to reason with a large number of concepts [43]. Recent work addressed this challenge by proposing probabilistic explanations [66; 3]. However, given the complexity of computing such probabilistic explanations, it seems unlikely that practical methods will be devised that will scale in the case of DNNs.
Moreover, the relationship between explanations and robustness remains somewhat elusive, despite some initial results [30], which relate globally defined explanations and so-called counterexamples. One possible justification is that adversarial examples are defined with respect to some target distance \(\epsilon\) of some given norm \(l_{p}\), and this is orthogonal to the definitions of formal explanations, which solely consider sets of features. As a result, the observed progress in assessing robustness has not been exploited in explainability.
In contrast with past work on formal XAI, this paper reveals a rigorous relationship between adversarial examples and formal explanations. This involves generalizing the definitions of formal explanations to account for a value of distance \(\epsilon\) for a given norm \(l_{p}\). More importantly, the generalized definition of formal explanation proposed in this paper allows exploiting a wide range of robustness tools (and in principle any such robustness tool) for the purpose of computing explanations. The immediate consequence of using robustness tools for computing explanations, is that this makes explainability directly dependent on the performance of robustness tools. And these tools have been the subject of impressive performance gains in recent years. The paper also proves a duality result between explanations that take into account the value \(\epsilon\) of distance, thereby enabling the use of other formal explainability algorithms, e.g. enumeration of explanations. One additional result is that the paper shows that the number of explanations is directly related with the distance that is considered when assessing robustness.
The paper is organized as follows. Section 2 introduces the notation and definitions used in the rest of the paper. Section 3 formalizes distance-restricted explanations, and demonstrates the known results about formal explanations can be ported to this more general setting. Section 4 details algorithms for computing explanations using tools for finding adversarial examples as oracles. Section 5 shows that we can also use explainability information to refine the information about adversarial examples. Section 6 summarizes experimental results that demonstrate the performance gains that the generalized definition of formal explanations enable. Section 7 concludes the paper.
## 2 Preliminaries
Minimal hitting sets. Let \(\mathcal{S}\) be a set and \(\mathbb{B}\subseteq 2^{\mathcal{S}}\) be a set of subsets of \(\mathcal{S}\). A hitting set (HS) \(\mathcal{H}\subseteq\mathcal{S}\) of \(\mathbb{B}\) is such that \(\forall(\mathcal{P}\in\mathbb{B}).\mathcal{P}\cap\mathcal{H}\neq\emptyset\). A minimal hitting set (MHS) \(\mathcal{Q}\subseteq\mathcal{S}\) is a hitting set of \(\mathbb{B}\) such that no proper subset of \(\mathcal{Q}\) is a hitting set of \(\mathbb{B}\), i.e. \(\forall(\mathcal{R}\subsetneq\mathcal{Q})\exists(\mathcal{P}\in\mathbb{B}). \mathcal{R}\cap\mathcal{P}=\emptyset\). A minimal hitting set is said to be subset-minimal or irreducible.
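As a concrete illustration of these definitions, the brute-force checks below verify the hitting-set and minimality conditions for a small set family; the function names and the toy family \(\mathbb{B}\) are ours, chosen only for illustration.

```python
def is_hitting_set(H, B):
    """H hits every set in B (H intersects each member of B)."""
    return all(H & P for P in B)

def is_minimal_hitting_set(H, B):
    """H is a hitting set of B and removing any element breaks the property."""
    return is_hitting_set(H, B) and all(
        not is_hitting_set(H - {e}, B) for e in H
    )

# Example: for B = {{1,2},{2,3}}, {2} is an MHS; {1,2} hits B but is not minimal.
B = [frozenset({1, 2}), frozenset({2, 3})]
assert is_minimal_hitting_set({2}, B)
assert is_hitting_set({1, 2}, B) and not is_minimal_hitting_set({1, 2}, B)
```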
Norm \(l_{p}\). The distance between two vectors \(\mathbf{v}\) and \(\mathbf{u}\) is denoted by \(||\mathbf{v}-\mathbf{u}||\), and the actual definition depends on the norm being considered. Different norms \(l_{p}\) can be considered. For \(p\geq 1\), the \(p\)-norm is defined as follows [23]:
\[||\mathbf{x}||_{p} := \left(\sum_{i=1}^{m}|x_{i}|^{p}\right)^{\sfrac{1}{p}} \tag{1}\]
Let \(d_{i}=1\) if \(x_{i}\neq 0\), and let \(d_{i}=0\) otherwise. Then, for \(p=0\), we define the 0-norm, \(l_{0}\), as follows [53]:
\[||\mathbf{x}||_{0} := \sum_{i=1}^{m}d_{i} \tag{2}\]
In general, for \(p\geq 1\), \(l_{p}\) denotes the Minkowski distance. Well-known special cases include the Manhattan distance \(l_{1}\), the Euclidean distance \(l_{2}\), and the Chebyshev distance \(l_{\infty}\). \(l_{0}\) denotes the Hamming distance.
Classification problems. Classification problems are defined on a set of features \(\mathcal{F}=\{1,\ldots,m\}\) and a set of classes \(\mathcal{K}=\{c_{1},\ldots,c_{K}\}\). Each feature \(i\) has a domain \(\mathbb{D}_{i}\). Features can be ordinal or categorical. Ordinal features can be discrete or real-valued. Feature space is defined by the cartesian
product of the features' domains: \(\mathbb{F}=\mathbb{D}_{1}\times\cdots\times\mathbb{D}_{m}\). A classifier computes a total function \(\kappa:\mathbb{F}\to\mathcal{K}\). Throughout the paper, a classification problem \(\mathcal{M}\) represents a tuple \(\mathcal{M}=(\mathcal{F},\mathbb{F},\mathcal{K},\kappa)\).
An instance (or a sample) is a pair \((\mathbf{v},c)\), with \(\mathbf{v}\in\mathbb{F}\) and \(c\in\mathcal{K}\). An explanation problem \(\mathcal{E}\) is a tuple \(\mathcal{E}=(\mathcal{M},(\mathbf{v},c))\). The generic purpose of XAI is to find explanations for each given instance. Moreover, when reasoning in terms of robustness, we are also interested in the behavior of a classifier given some instance. Hence, we will also use explanation problems in the case of robustness.
**Example 1**.: _Throughout the paper, we consider the following classification problem. The features are \(\mathcal{F}=\{1,2,3\}\), all ordinal with domains \(\mathbb{D}_{1}=\mathbb{D}_{2}=\mathbb{D}_{3}=\mathbb{R}\). The set of classes is \(\mathcal{K}=\{0,1\}\). Finally, the classification function is \(\kappa:\mathbb{F}\to\mathcal{K}\), defined as follows:_
\[\kappa(x_{1},x_{2},x_{3})=\left\{\begin{array}{ll}\text{ITE}(x_{1}>0,1,0)& \quad\text{if }4x_{1}\geq(x_{2}+x_{3})\wedge|x_{1}|<2\\ 0&\quad\text{otherwise}\end{array}\right.\]
_Moreover, let the target instance be \((\mathbf{v},c)=((1,1,1),1)\)._
Robustness. Let \(\mathcal{M}=(\mathcal{F},\mathbb{F},\mathcal{K},\kappa)\) be a classification problem. Let \((\mathbf{v},c)\), with \(\mathbf{v}\in\mathbb{F}\) and \(c=\kappa(\mathbf{v})\), be a given instance. Finally, let \(\epsilon>0\) be a distance value for norm \(l_{p}\).
We say that there exists an adversarial example if the following logic statement holds true,
\[\exists(\mathbf{x}\in\mathbb{F}).\left(||\mathbf{x}-\mathbf{v}||_{p}\leq \epsilon\right)\wedge\left(\kappa(\mathbf{x})\neq\kappa(\mathbf{v})\right) \tag{3}\]
(The logic statement above holds true if there exists a point \(\mathbf{x}\) within distance \(\epsilon\) (using norm \(l_{p}\)) of \(\mathbf{v}\), and such that the prediction changes.) If (3) is false, then the classifier is said to be \(\epsilon\)-robust. If (3) is true, then any \(\mathbf{x}\in\mathbb{F}\) that satisfies the constraint1,
Footnote 1: Parameterizations are shown as predicate arguments positioned after ’;’.
\[\mathsf{AEx}(\mathbf{x};\mathcal{E},\epsilon,p) := \left(||\mathbf{x}-\mathbf{v}||_{p}\leq \epsilon\right)\wedge\left(\kappa(\mathbf{x})\neq\kappa(\mathbf{v})\right) \tag{4}\]
is referred to as an _adversarial example_.
**Example 2**.: _For the classifier from Example 1, for distance \(l_{1}\), and with \(\epsilon=1\), there exist adversarial examples by either setting \(x_{1}=0\) or \(x_{1}=2\)._
Logic-based explanations. In the context of explaining ML models, rigorous, model-based explanations have been studied since 2018 [60]. We follow recent treatments of the subject [41, 6, 16]. A PI-explanation (also referred to as an abductive explanation (AXp)) is an irreducible subset of the features that is sufficient for the prediction.
A set of features \(\mathcal{X}\subseteq\mathcal{F}\) is sufficient for the prediction if the following logic statement holds true,
\[\forall(\mathbf{x}\in\mathbb{F}).\left[\wedge_{i\in\mathcal{X}}(x_{i}=v_{i}) \right]\to\left(\kappa(\mathbf{x})=\kappa(\mathbf{v})\right) \tag{5}\]
If (5) holds, but \(\mathcal{X}\) is not necessarily irreducible (i.e. it is not subset minimal), then we say that \(\mathcal{X}\) is a weak abductive explanation (WAXp). As a result, we associate a predicate \(\mathsf{WAXp}\) with (5), such that \(\mathsf{WAXp}(\mathcal{X};\mathcal{E})\) holds true iff (5) holds true. An AXp is a weak AXp that is subset-minimal. The predicate \(\mathsf{AXp}(\mathcal{X};\mathcal{E})\) holds true iff set \(\mathcal{X}\) is also an AXp. An AXp answers a _Why?_ question, i.e. why is the prediction \(c\) (given the existing features).
A set of features \(\mathcal{Y}\subseteq\mathcal{F}\) is sufficient for changing the prediction if the following logic statement holds true,
\[\exists(\mathbf{x}\in\mathbb{F}).\left[\wedge_{i\in\mathcal{F}\setminus \mathcal{Y}}(x_{i}=v_{i})\right]\wedge\left(\kappa(\mathbf{x})\neq\kappa( \mathbf{v})\right) \tag{6}\]
If (6) holds, but \(\mathcal{Y}\) is not necessarily irreducible, then we say that \(\mathcal{Y}\) is a weak contrastive explanation (CXp). As a result, we associate a predicate \(\mathsf{WCXp}\) with (6), such that \(\mathsf{WCXp}(\mathcal{Y};\mathcal{E})\) holds true iff (6) holds true. A CXp is a weak CXp that is also subset-minimal. The predicate \(\mathsf{CXp}(\mathcal{Y};\mathcal{E})\) holds true iff set \(\mathcal{Y}\) is a CXp. It is well-known that a CXp answers a _Why Not?_ question [44, 28].
**Example 3**.: _For the classifier of Example 1, and given the target instance, the AXp is \(\{1,2,3\}\). Clearly, if we allow any feature to take any value, then we can change the prediction. Hence, the prediction does not change only if all features are fixed._
Given the above, we define,
\[\mathbb{A}(\mathcal{E}) =\{\mathcal{X}\subseteq\mathcal{F}\,|\,\mathsf{AXp}(\mathcal{X}; \mathcal{E})\} \tag{7}\] \[\mathbb{C}(\mathcal{E}) =\{\mathcal{X}\subseteq\mathcal{F}\,|\,\mathsf{CXp}(\mathcal{X}; \mathcal{E})\} \tag{8}\]
which capture, respectively, the set of all AXp's and the set of all CXp's given an explanation problem \(\mathcal{E}\).
Finally, the following result relating AXp's and CXp's is used extensively in devising explainability algorithms [28].
**Proposition 1** (MHS Duality between AXp's and CXp's).: _Given an explanation problem \(\mathcal{E}\), then,_

_1. A set \(\mathcal{X}\subseteq\mathcal{F}\) is an AXp iff \(\mathcal{X}\) is a minimal hitting set of the CXp's in \(\mathbb{C}(\mathcal{E})\)._

_2. A set \(\mathcal{X}\subseteq\mathcal{F}\) is a CXp iff \(\mathcal{X}\) is a minimal hitting set of the AXp's in \(\mathbb{A}(\mathcal{E})\)._
(Proposition 1 is a consequence of an earlier seminal result in model-based diagnosis [50].) Proposition 1 is instrumental for enumerating abductive (but also contrastive) explanations [28]. In contrast with non-formal explainability, the navigation of the space of abductive (or contrastive) explanations, i.e. their enumeration, is a hallmark of formal XAI [5, 41].
Progress in formal explainability is documented in recent overviews [41], but also in recent publications [66, 67, 5, 6, 21, 2, 9, 25, 16, 37].
## 3 Distance-Restricted Explanations
In this paper, we propose a generalized definition of (W)AXp and (W)CXp that takes into account the \(l_{p}\) distance between \(\mathbf{v}\) and the points that can be considered.
Definitions & properties. We start by defining AXp's/CXp's taking the \(l_{p}\) distance into account. Afterwards, we prove that the MHS duality between AXp's and CXp's extends to the distance-restricted definition of explanations.
**Definition 1** (Distance-restricted (W)AXp, \(\epsilon\)-(W)AXp).: _For a norm \(l_{p}\), a set of features \(\mathcal{X}\subseteq\mathcal{F}\) is a weak abductive explanation (WAXp) for an instance \((\mathbf{v},c)\), within distance \(\epsilon>0\) of \(\mathbf{v}\), if the following predicate holds true,_
\[\mathsf{WAXp}(\mathcal{X};\mathcal{E},\epsilon,p) := \forall(\mathbf{x}\in\mathbb{F}).\left(\bigwedge_{i\in\mathcal{X} }(x_{i}=v_{i})\wedge(||\mathbf{x}-\mathbf{v}||_{p}\leq\epsilon)\right) \rightarrow(\kappa(\mathbf{x})=c) \tag{9}\]
_If a (distance-restricted) weak AXp \(\mathcal{X}\) is irreducible (i.e. it is subset-minimal), then \(\mathcal{X}\) is a (distance-restricted) AXp._
**Definition 2** (Distance-restricted (W)CXp, \(\epsilon\)-(W)CXp).: _For a norm \(l_{p}\), a set of features \(\mathcal{Y}\subseteq\mathcal{F}\) is a weak contrastive explanation (WCXp) for an instance \((\mathbf{v},c)\), within distance \(\epsilon>0\) of \(\mathbf{v}\), if the following predicate holds true,_
\[\mathsf{WCXp}(\mathcal{Y};\mathcal{E},\epsilon,p)\;:=\] \[\exists(\mathbf{x}\in\mathbb{F}).\left(\bigwedge_{i\in\mathcal{F} \setminus\mathcal{Y}}(x_{i}=v_{i})\wedge(||\mathbf{x}-\mathbf{v}||_{p}\leq \epsilon)\right)\wedge(\kappa(\mathbf{x})\neq c) \tag{10}\]
_If a (distance-restricted) weak CXp \(\mathcal{Y}\) is irreducible, then \(\mathcal{Y}\) is a (distance-restricted) CXp._
**Example 4**.: _For the classifier of Example 1, let the norm used be \(l_{1}\), with distance value \(\epsilon=1\). From Example 2, we know that there exist adversarial examples, e.g. by setting \(x_{1}=0\) or \(x_{1}=2\). However, if we fix the value of \(x_{1}\) to 1, then any assignment to \(x_{2}\) and \(x_{3}\) with \(|x_{2}-1|+|x_{3}-1|\leq 1\) will not change the prediction. As a result, \(\mathcal{X}=\{1\}\) is a distance-restricted AXp when \(\epsilon=1\). Moreover, by allowing only feature 1 to change value, we are able to change the prediction, since we know there exists an adversarial example._
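The claims of Examples 2 and 4 can be checked numerically with the small sketch below; the random-sampling loop only provides supporting evidence for the AXp claim (it is not a proof), and the sample count and seed are arbitrary choices.

```python
import random

def kappa(x1, x2, x3):
    """Classifier of Example 1."""
    if 4 * x1 >= (x2 + x3) and abs(x1) < 2:
        return 1 if x1 > 0 else 0
    return 0

v, c, eps = (1.0, 1.0, 1.0), 1, 1.0   # instance and l1 distance bound

# Example 2: adversarial examples within l1 distance 1, changing only feature 1.
assert kappa(0.0, 1.0, 1.0) != c and abs(0.0 - 1.0) <= eps
assert kappa(2.0, 1.0, 1.0) != c and abs(2.0 - 1.0) <= eps

# Example 4: with feature 1 fixed to v1, sampled points in the l1 ball keep the
# prediction, supporting {1} being a 1-AXp (evidence only, not a proof).
random.seed(0)
for _ in range(100_000):
    d2 = random.uniform(-1, 1)
    d3 = random.uniform(-(1 - abs(d2)), 1 - abs(d2))   # enforce |d2| + |d3| <= 1
    assert kappa(v[0], v[1] + d2, v[2] + d3) == c
```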
Similar to the distance-unrestricted case, the definitions above serve to define the additional predicates AXp and CXp. When it is important to distinguish distance-restricted (W)AXp's/(W)CXp's, we will also use the notation \(\epsilon\)-(W)AXp's/\(\epsilon\)-(W)CXp's.
Given the above, we generalize (7) and (8), as follows:
\[\mathbb{A}(\mathcal{E},\epsilon;p) =\{\mathcal{X}\subseteq\mathcal{F}\,|\,\mathsf{AXp}(\mathcal{X}; \mathcal{E},\epsilon,p)\} \tag{11}\] \[\mathbb{C}(\mathcal{E},\epsilon;p) =\{\mathcal{X}\subseteq\mathcal{F}\,|\,\mathsf{CXp}(\mathcal{X}; \mathcal{E},\epsilon,p)\} \tag{12}\]
Furthermore, given the generalized definitions above, we have the following results regarding minimal hitting set duality between (W)AXp's and (W)CXp's.
**Proposition 2**.: _Consider an explanation problem \(\mathcal{E}=(\mathcal{M},(\mathbf{v},c))\) and some \(\epsilon>0\) for norm \(l_{p}\). Let \(\mathbf{x}\in\mathbb{F}\) and let \(\mathcal{D}=\{i\in\mathcal{F}\,|\,x_{i}\neq v_{i}\}\). Then, \(\mathsf{AEx}(\mathbf{x})\) holds iff \(\mathsf{WCXp}(\mathcal{D};\mathcal{E},\epsilon,p)\) holds. Alternatively, we could also state that \(\mathsf{AEx}(\mathbf{x})\) holds iff \(\mathcal{D}\) is a hitting set of \(\mathbb{A}(\mathcal{E},\epsilon;p)\)._
Proof.: Clearly, \(\mathsf{WCXp}(\mathcal{D})\) holds iff \(\mathcal{D}\) is a hitting set of \(\mathbb{A}(\mathcal{E},\epsilon;p)\). Hence, it suffices to prove the first claim.

* If \(\mathsf{AEx}(\mathbf{x})\) holds, then the features in \(\mathcal{D}\) are sufficient to change the prediction, and so, by definition, \(\mathcal{D}\) must be a WCXp.

* If \(\mathsf{WCXp}(\mathcal{D};\mathcal{E},\epsilon,p)\) holds, then, by definition, the prediction can change value. Picking a point \(\mathbf{x}\) that causes the prediction to change, and which only changes the values of the features in \(\mathcal{D}\), we conclude that \(\mathsf{AEx}(\mathbf{x};\mathcal{E},\epsilon,p)\) holds.
**Proposition 3**.: _Given an explanation problem \(\mathcal{E}\), norm \(l_{p}\), and a value of distance \(\epsilon>0\) then,_
1. _A set_ \(\mathcal{X}\subseteq\mathcal{F}\) _is a (distance-restricted) AXp iff_ \(\mathcal{X}\) _is a minimal hitting set of the CXp's in_ \(\mathbb{C}(\mathcal{E},\epsilon;p)\)_._

2. _A set_ \(\mathcal{X}\subseteq\mathcal{F}\) _is a (distance-restricted) CXp iff_ \(\mathcal{X}\) _is a minimal hitting set of the AXp's in_ \(\mathbb{A}(\mathcal{E},\epsilon;p)\)_._
Proof sketch.: We prove each claim separately.
1. A set \(\mathcal{X}\subseteq\mathcal{F}\) is a (distance-restricted) AXp iff \(\mathcal{X}\) is a minimal hitting set of the CXp's in \(\mathbb{C}(\mathcal{E},\epsilon;p)\). * If \(\mathcal{X}\) is an AXp, then it must be an MHS of \(\mathbb{C}\). Claim 1: \(\mathcal{X}\) must hit all sets in \(\mathbb{C}\). If not, then the set of free features would allow changing the prediction; this is impossible since \(\mathcal{X}\) is an AXp. Claim 2: \(\mathcal{X}\) must be minimal; otherwise there would be a proper subset of \(\mathcal{X}\) that would be a HS of \(\mathbb{C}\), and so a WAXp (see above), contradicting the minimality of the AXp \(\mathcal{X}\). * If \(\mathcal{H}\) is an MHS of \(\mathbb{C}\), then it must be an AXp. We proceed as follows. From the definition of WAXp, the features in \(\mathcal{H}\) are fixed to the values dictated by \(\mathbf{v}\). Thus, since we fix at least one feature in each set of \(\mathbb{C}\), we are guaranteed not to leave free any set of features representing a CXp, and so the prediction cannot change. Hence, \(\mathcal{H}\) must be a WAXp. Since \(\mathcal{H}\) is a minimal hitting set, and so irreducible, it is an AXp.

2. A set \(\mathcal{X}\subseteq\mathcal{F}\) is a (distance-restricted) CXp iff \(\mathcal{X}\) is a minimal hitting set of the AXp's in \(\mathbb{A}(\mathcal{E},\epsilon;p)\). The proof mimics the previous case.
Proposition 3 is instrumental for the enumeration of AXp's and CXp's, as shown in earlier work in the case of distance-unrestricted AXp's/CXp's [28].
**Example 5**.: _For the running example, we have that \(\mathbb{A}(\mathcal{E},1;1)=\mathbb{C}(\mathcal{E},1;1)=\{\{1\}\}\)._
The AXp's (resp. CXp's) studied in earlier work [60, 29] represent a specific case of the distance-restricted AXp's (resp. CXp's) introduced in this section.
**Remark 1**.: _Distance unrestricted AXp's (resp. CXp's) correspond to \(m\)-distance AXp's (resp. CXp's) for norm \(l_{0}\)._
Consequences. The definitions of distance-restricted AXp's and CXp's reveal several novel uses for abductive & contrastive explanations.
For a given distance \(\epsilon\), an AXp represents an irreducible sufficient reason for the ML model not to have an adversarial example.
The number of distance-restricted AXp's is non-decreasing with the distance \(\epsilon\).
**Proposition 4**.: _Let \(0<\epsilon<\eta\), then_
\[\mathbb{C}(\mathcal{E},\epsilon;p)\subseteq\mathbb{C}(\mathcal{E},\eta;p)\]
Proof sketch.: For any \(\mathcal{X}\subseteq\mathcal{F}\), if \(\mathsf{WCXp}(\mathcal{X};\mathcal{E},\epsilon,p)\) holds, then it must be the case that \(\mathsf{WCXp}(\mathcal{X};\mathcal{E},\eta,p)\) also holds.
Clearly, with \(0<\epsilon<\eta\), \(\epsilon\)-AXp's are not necessarily \(\eta\)-AXp's. However, each \(\epsilon\)-AXp is included in some \(\eta\)-AXp [74].
The following observation will prove useful in designing efficient algorithms (on the complexity of the oracle for adversarial examples) for finding distance-restricted AXp's/CXp's.
**Proposition 5**.: (9) _and (10) are monotonic and up-closed._
Furthermore, it is apparent that \(\epsilon\)-AXp's and \(\epsilon\)-CXp's offer a rigorous definition of the concept of _local_ explanations studied in non-formal XAI [46].
## 4 From Robustness to Explainability
This section shows that any tool for finding adversarial examples can be used for computing distance-restricted abductive explanations, provided the tool respects a few properties. We detail two algorithms, both of which build on the monotonicity of entailment. The first one is a fairly standard solution used in explainability, with the main difference being the use of a tool for finding adversarial examples. The second algorithm has not been used in formal explainability, and aims to target explanation problems with a large number of features.
Required properties of the robustness tool. This section shows that distance-restricted explanations (AXp's and CXp's) can be computed using _any_ existing tool for deciding the existence of adversarial examples, provided such a tool respects the following properties:
1. Features can be fixed, i.e. the value of some feature \(i\) can be set to the value dictated by \(\mathbf{v}\), using a constraint on the inputs of the form \(x_{i}=v_{i}\).
2. The robustness tool is sound, i.e. any reported adversarial example \(\mathbf{x}\) respects (4).
3. The robustness tool is complete, i.e. if there exists an adversarial example, i.e. some \(\mathbf{x}\) such that \(\mathsf{AEx}(\mathbf{x})\) holds true, then it will be reported.
We will investigate later the impact of relaxing property 3.
In the rest of the paper, we will use \(\mathsf{FindAdvEx}(\epsilon,\mathcal{S};\mathcal{E},p)\) to represent a call to a robustness oracle that decides the existence of a distance-\(\epsilon\) adversarial example when the features in set \(\mathcal{S}\) are fixed.
Computing distance-restricted AXp's. We now detail two algorithms for computing an \(\epsilon\)-AXp. These algorithms hinge on the property that the definitions of \(\epsilon\)-AXp (and \(\epsilon\)-CXp) are monotonic.
Algorithm 1 describes a fairly standard approach for computing an AXp, adapted to use an oracle for deciding the existence of adversarial examples given norm \(l_{p}\) and value \(\epsilon\) of distance. At each step, one feature is tentatively no longer fixed. If the oracle answers that an adversarial example exists, then the feature is important and is re-added to the set \(\mathcal{S}\). As can be concluded, the algorithm queries the oracle \(m\) times. Since complex ML models can be defined on a large number of features, Algorithm 2 proposes an alternative solution, which mimics the well-known QuickXplain algorithm for finding
minimal explanations for over-constrained problems [32], and which enables reducing the number of oracle calls when the computed set is much smaller than the starting set.
```
Input: Distance \(\epsilon\); Parameters: Explanation problem \(\mathcal{E}\), Norm \(p\)
Output: One AXp \(\mathcal{S}\)

function FindXpDeq(\(\epsilon;\mathcal{E},p\))
    if not FindAdvEx(\(\epsilon,\emptyset;\mathcal{E},p\)) then
        return \(\emptyset\)
    \(\mathcal{S}\leftarrow\) QxpRecur(\(\emptyset,\mathcal{F},\textbf{false},\epsilon;\mathcal{E},p\))
    return \(\mathcal{S}\)

function QxpRecur(\(\mathcal{B},\mathcal{T},\textbf{newB},\epsilon;\mathcal{E},p\))
    if newB and not FindAdvEx(\(\epsilon,\mathcal{B};\mathcal{E},p\)) then
        return \(\emptyset\)
    if \(|\mathcal{T}|=1\) then
        return \(\mathcal{T}\)
    \(\mu\leftarrow\lfloor\nicefrac{{|\mathcal{T}|}}{{2}}\rfloor\)
    \((\mathcal{T}_{1},\mathcal{T}_{2})\leftarrow(\mathcal{T}_{1:\mu},\mathcal{T}_{\mu+1:|\mathcal{T}|})\)
    \(\mathcal{R}_{2}\leftarrow\) QxpRecur(\(\mathcal{B}\cup\mathcal{T}_{1},\mathcal{T}_{2},|\mathcal{T}_{1}|>0,\epsilon;\mathcal{E},p\))
    \(\mathcal{R}_{1}\leftarrow\) QxpRecur(\(\mathcal{B}\cup\mathcal{R}_{2},\mathcal{T}_{1},|\mathcal{R}_{2}|>0,\epsilon;\mathcal{E},p\))
    return \(\mathcal{R}_{1}\cup\mathcal{R}_{2}\)
```
**Algorithm 2** QuickXplain-based approach for finding one AXp based on adversarial example search
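As an illustration of the deletion-based procedure of Algorithm 1, a minimal Python sketch is shown below; the `find_adv_ex` interface is an assumption standing in for any robustness oracle satisfying properties 1-3 (e.g. a wrapper around a tool such as Marabou), and the sketch is not the prototype implementation used in the experiments (which was written in Perl).

```python
def find_axp(features, find_adv_ex):
    """
    Deletion-based computation of one distance-restricted AXp (Algorithm 1 style).

    `find_adv_ex(fixed)` is assumed to return True iff an adversarial example
    exists within the chosen l_p distance eps when the features in `fixed` are
    held at the values dictated by v (sound and complete oracle).
    """
    fixed = set(features)
    for i in features:              # try to free one feature at a time
        fixed.remove(i)
        if find_adv_ex(fixed):      # freeing i (re)enables an adversarial example
            fixed.add(i)            # so i must stay fixed
    return fixed                    # irreducible set: a distance-restricted AXp
```

If the oracle times out for some feature, conservatively treating the answer as "an adversarial example exists" keeps that feature fixed; this matches the relaxation of property 3 discussed next and yields a WAXp rather than an AXp.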
Relaxing property 3. The algorithms described above call an oracle for adversarial examples a number of times. What can be done when the oracle is unable to find an adversarial example, or prove its non-existence within a given timeout? As the experiments confirm, this can happen when the oracle analyzes complex problems. Property 3 can be relaxed as follows. (We consider Algorithm 1, but Algorithm 2 could be adapted in a similar way.) If the oracle for adversarial examples does not terminate within the allocated running time for a given feature \(i\), the feature is declared to yield an adversarial example, and so it is re-added to the set \(\mathcal{S}\). Thus, we are guaranteed not to drop features for which there might exist adversarial examples. Therefore, the final reported set of features, albeit not guaranteed to be an AXp, is certainly a WAXp.
Contrastive explanations & enumeration. Algorithm 3 summarizes one algorithm for finding one \(\epsilon\)-CXp. (The algorithm mimics Algorithm 1 for finding one \(\epsilon\)-AXp.) The differences observed when compared with Algorithm 1 result from the definition of \(\epsilon\)-CXp.
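A corresponding sketch of the dual, deletion-style procedure of Algorithm 3 for computing one \(\epsilon\)-CXp is shown below, under the same assumed `find_adv_ex` oracle interface.

```python
def find_cxp(features, find_adv_ex):
    """
    Deletion-based computation of one distance-restricted CXp (Algorithm 3 style).

    A CXp is an irreducible set of *free* features that still admits an adversarial
    example; this assumes an adversarial example exists when all features are free
    (otherwise no CXp exists).  `find_adv_ex(fixed)` is the same assumed oracle.
    """
    free = set(features)
    for i in features:                               # try to re-fix one feature at a time
        free.remove(i)
        if not find_adv_ex(set(features) - free):    # fixing i removes all adversarial examples
            free.add(i)                              # so i must remain free
    return free
```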
As the previous algorithms demonstrate, one can readily adapt the explainability algorithms studied in recent years. As a result, the enumeration of AXp's and CXp's will simply exploit recent (standard) solutions [41], where the AXp/CXp extractors will find instead \(\epsilon\)-AXp's and \(\epsilon\)-CXp's.
## 5 From Explainability Back to Robustness
Although the main motivation for this work is the computation of abductive/contrastive explanations using robustness oracles, thus tapping into the efficiency of robustness tools, Proposition 2 reveals that information about abductive/contrastive explanations can also be relevant when finding adversarial examples.
First, from Proposition 2, the information about explanations can refine the information about adversarial examples. For example, an adversarial example is defined as a point in feature space. However, such information can be complemented with the set of features that need to change value for the prediction to change (i.e. a CXp).
As another example, a human decision maker may select features not to include in explanations (or changed features in an adversarial example), and then assess whether CXp's can still be identified.
Duality between AXp's and CXp's is instrumental for the enumeration of both AXp's and CXp's [28]. Furthermore, the enumeration of AXp's (and CXp's) can be used for understanding which features are involved in one or more adversarial examples.
## 6 Experiments
This section assesses the improvement in the scalability of computing abductive explanations. Instead of computing plain AXp's, for which existing solutions scale up to a few tens (i.e. \(\sim 20\)) of ReLU units [29], we compute \(\epsilon\)-AXp's using the Marabou [34] robustness tool. We use the same DNNs as considered in earlier work [31; 33; 34], because of their size (much larger than what formal XAI approaches can explain), but also to show how the achieved performance correlates with the practical efficiency of the robustness tool.
Experimental setup.The evaluation includes 12 publicly available pre-trained ACAS Xu DNNs [31]2. Each of these networks has five input features and five output classes. The DNNs are fully connected, use ReLU activation functions, and comprise 6 hidden layers with a total of 300 ReLU nodes. For each DNN, we randomly select three points from the normalized feature space and compute one \(\epsilon\)-AXp for each point using Algorithm 1. (The choice of algorithm was based on the number of features of the DNNs.) The timeout for deciding one WAXp predicate was set to 4800 seconds. Additionally, a prototype of the proposed algorithm was implemented in Perl. The robustness tool we utilized for the experiments was Marabou [34], with its default norm. Marabou is an SMT-based tool that can answer queries about a network's properties by transforming these queries into constraint satisfaction problems. The experiments were conducted on a Linux-based machine equipped with a 2-Core Intel(R) Xeon(R) CPU E5-2673 v4 2.30GHz processor and 8 GB of RAM. The machine was running Ubuntu 20.04.
Footnote 2: [https://github.com/NeuralNetworkVerification/Marabou](https://github.com/NeuralNetworkVerification/Marabou)
Results.Table 1 summarizes the results. As Algorithm 1 was used, each feature is tested exactly once. Thus, in our case, we require exactly five calls to compute one \(\epsilon\)-AXp. When the value of \(\epsilon\) is sufficiently small, it is more likely that the DNNs are robust against perturbations, and as a result, there may not exist any AXp. This can be illustrated by considering the case of \(\mathit{ACASXU\_3\_1}\) when \(\epsilon=0.05\) is selected. Moreover, it should be noted that if there is no \(\epsilon\)-AXp for a certain value of \(\epsilon\), it also implies that there is no \(\epsilon^{\prime}\)-AXp for any \(\epsilon^{\prime}<\epsilon\). For example, in the case of \(\mathit{ACASXU\_1\_5}\) and the third point, there is no \(0.1\)-AXp, so there is no \(0.05\)-AXp either. The runtime of computing an \(\epsilon\)-AXp is influenced by two factors: 1) the choice of \(\epsilon\), and 2) the number of dropped features during the computation of the \(\epsilon\)-AXp. As the value of \(\epsilon\) increases, or as the number of dropped features increases, the runtime of computing one \(\epsilon\)-AXp also increases. This is because larger values of \(\epsilon\) or a larger number of dropped features imply a larger search space for finding adversarial examples, which requires more computational effort. The choice of \(\epsilon\) has the greater impact on the runtime: in most cases, computing one \(0.1\)-AXp consumes much more time than computing one \(0.05\)-AXp. However, the impact of the number of dropped features cannot be overlooked. For example, in the case of \(\mathit{ACASXU\_2\_4}\) and the third point, as well as in the case of \(\mathit{ACASXU\_3\_7}\) and the second point, the runtime of computing one \(0.05\)-AXp is significantly larger than that of computing one \(0.1\)-AXp. This highlights the influence of the number of dropped features on the computation time.
## 7 Conclusions
This paper demonstrates the existence of a tight relationship between adversarial examples and (rigorous) abductive (and contrastive) explanations. Furthermore, the paper proposes algorithms for
\begin{table}
\begin{tabular}{c c c c c c|c c c c} \hline \hline DNN & points & \(|\)AXpl & \#Calls & Time & \#TO & \(|\)AXpl & \#Calls & Time & \#TO \\ \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{\(\epsilon=0.1\)} \\ \hline \multirow{3}{*}{ACASXU\_1\_5} & \#1 & 3 & 5 & 185.9 & 0 & 2 & 5 & 113.8 & 0 \\ & \#2 & 2 & 5 & 273.8 & 0 & 1 & 5 & 33.2 & 0 \\ & \#3 & 0 & 5 & 714.2 & 0 & 0 & 5 & 4.3 & 0 \\ \hline \multirow{3}{*}{ACASXU\_2\_2} & \#1 & 4 & 5 & 10.2 & 0 & 3 & 5 & 5.9 & 0 \\ & \#2 & 1 & 5 & 7413.2 & 1 & 0 & 5 & 401.2 & 0 \\ & \#3 & 2 & 5 & 63.4 & 0 & 1 & 5 & 42.1 & 0 \\ \hline \multirow{3}{*}{ACASXU\_2\_4} & \#1 & 3 & 5 & 290.2 & 0 & 4 & 5 & 132.0 & 0 \\ & \#2 & 1 & 5 & 5610.4 & 1 & 0 & 5 & 87.9 & 0 \\ & \#3 & 4 & 5 & 33.1 & 0 & 2 & 5 & 83.6 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_1} & \#1 & 0 & 5 & 2219.3 & 0 & 0 & 5 & 14.2 & 0 \\ & \#2 & 2 & 5 & 4263.5 & 1 & 0 & 5 & 1853.1 & 0 \\ & \#3 & 1 & 5 & 581.8 & 0 & 0 & 5 & 355.9 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_2} & \#1 & 3 & 5 & 13739.3 & 2 & 1 & 5 & 6890.1 & 1 \\ & \#2 & 3 & 5 & 226.4 & 0 & 2 & 5 & 125.1 & 0 \\ & \#3 & 2 & 5 & 1740.6 & 0 & 2 & 5 & 173.6 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_3} & \#1 & 0 & 5 & 342.4 & 0 & 0 & 5 & 17.6 & 0 \\ & \#2 & 1 & 5 & 4047.8 & 1 & 0 & 5 & 21.3 & 0 \\ & \#3 & 3 & 5 & 286.2 & 0 & 2 & 5 & 32.5 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_4} & \#1 & 2 & 5 & 2276.5 & 0 & 4 & 5 & 29.2 & 0 \\ & \#2 & 1 & 5 & 5188.7 & 1 & 0 & 5 & 166.3 & 0 \\ & \#3 & 2 & 5 & 7444.5 & 1 & 0 & 5 & 48.6 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_5} & \#1 & 4 & 5 & 43.6 & 0 & 2 & 5 & 59.4 & 0 \\ & \#2 & 3 & 5 & 5039.4 & 0 & 2 & 5 & 4303.8 & 1 \\ & \#3 & 2 & 5 & 5574.9 & 1 & 2 & 5 & 2660.3 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_6} & \#1 & 1 & 5 & 6225.0 & 1 & 0 & 5 & 51.0 & 0 \\ & \#2 & 3 & 5 & 4957.2 & 1 & 2 & 5 & 1897.3 & 0 \\ & \#3 & 1 & 5 & 196.1 & 0 & 1 & 5 & 919.2 & 0 \\ \hline \multirow{3}{*}{ACASXU\_3\_7} & \#1 & 3 & 5 & 6256.2 & 0 & 4 & 5 & 26.9 & 0 \\ & \#2 & 4 & 5 & 311.3 & 0 & 1 & 5 & 6958.6 & 1 \\ & \#3 & 2 & 5 & 7756.5 & 1 & 1 & 5 & 7807.6 & 1 \\ \hline \multirow{3}{*}{ACASXU\_4\_1} & \#1 & 2 & 5 & 12413.0 & 2 & 1 & 5 & 5090.5 & 1 \\ & \#2 & 1 & 5 & 5035.1 & 1 & 0 & 5 & 2335.6 & 0 \\ & \#3 & 4 & 5 & 1237.3 & 0 & 4 & 5 & 1143.4 & 0 \\ \hline \multirow{3}{*}{ACASXU\_4\_2} & \#1 & 4 & 5 & 15.9 & 0 & 4 & 5 & 12.1 & 0 \\ & \#2 & 3 & 5 & 1507.6 & 0 & 1 & 5 & 111.3 & 0 \\ & \#3 & 2 & 5 & 5641.6 & 2 & 0 & 5 & 1639.1 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The table presents statistics for computing \(\epsilon\)-AXp with \(\epsilon=\{0.1,0.05\}\). The column \(|\)AXp\(|\) indicates the size of the AXp, while the column \(\#\)Calls represents the number of calls made to the robustness tool (Marabou). The column Time reports the runtime required to compute one \(\epsilon\)-AXp. The column \(\#\)TO indicates the number of timeouts that occurred while deciding a WAXp predicate with the robustness tool.
computing abductive (or contrastive) explanations that use an oracle for adversarial examples. These results hinge on generalized definitions of formal explanations that account for bounds on the \(l_{p}\)-norm distance when computing explanations.
The experiments illustrate the gains in scalability that are obtained by using existing tools for finding adversarial examples. Whereas the current state of the art for rigorously explaining neural networks is bounded by a few tens of ReLU units [29, 9], this paper shows that distance-restricted abductive explanations can be computed for neural networks with several hundreds of ReLU units, i.e. more than an order of magnitude improvement.
A number of lines of research can be envisioned, most related to tapping into the vast number of existing robustness tools and assessing their scalability in the context of explainability. For example, since some robustness tools offer probabilistic guarantees, it will be important to assess whether such tools can also be exploited in computing explanations. A number of explainability queries have been studied in recent years [5]. More efficient reasoning will enable answering such queries in practice.
Acknowledgments.This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program "Investing for the Future - PIA3" under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA "Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence". The work was also influenced by discussions with several colleagues, including the researchers and collaborators of ANITI's DeepLever Chair. N. Narodytska spotted a typo in the pseudo-code of the deletion-based algorithms. JMS also acknowledges the incentive provided by the ERC who, by not funding this research nor a handful of other grant applications between 2012 and 2022, has had a lasting impact in framing the research presented in this paper.
|
2301.01820 | InPars-v2: Large Language Models as Efficient Dataset Generators for
Information Retrieval | Recently, InPars introduced a method to efficiently use large language models
(LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced
to generate relevant queries for documents. These synthetic query-document
pairs can then be used to train a retriever. However, InPars and, more
recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to
generate such datasets. In this work we introduce InPars-v2, a dataset
generator that uses open-source LLMs and existing powerful rerankers to select
synthetic query-document pairs for training. A simple BM25 retrieval pipeline
followed by a monoT5 reranker finetuned on InPars-v2 data achieves new
state-of-the-art results on the BEIR benchmark. To allow researchers to further
improve our method, we open source the code, synthetic data, and finetuned
models: https://github.com/zetaalphavector/inPars/tree/master/tpu | Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, Rodrigo Nogueira | 2023-01-04T20:58:43Z | http://arxiv.org/abs/2301.01820v4 | # InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval
###### Abstract
Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: [https://github.com/zetaalphavector/inPars/tree/master/legacy/inpars-v2](https://github.com/zetaalphavector/inPars/tree/master/legacy/inpars-v2)
## 1 Introduction and Background
Data augmentation has been a reliable tool to improve the effectiveness of AI models in the face of the scarcity of high-quality in-domain training data, which is a common problem in practical applications. Previous work by Bonifacio et al. [1] and Dai et al. [2] successfully leveraged the few-shot capabilities of LLMs to generate reliable synthetic training data for information retrieval models. These training data helped their models achieve state-of-the-art (SOTA) results on the BEIR benchmark [6].
Bonifacio et al. [1] propose InPars where they generate queries from documents in the corpus using LLMs. Similarly to Bonifacio et al. [1], the recently published Promptagator [2] model also feeds prompts to LLMs in order to generate alternative queries for a given document in an unsupervised manner. It differs primarily from InPars in that it uses dataset-specific prompts, a larger LLM to generate queries, and a fully trainable retrieval pipeline with smaller models.
This work extends the method of Bonifacio et al. [1] by using a reranker as a filtering mechanism to select the best synthetically generated examples and further improving retrieval effectiveness
on BEIR. We also use an open-source query generator as opposed to the proprietary one used by Bonifacio et al. and provide the source code and data to reproduce our results on TPUs. We refer to the Bonifacio et al. [1] model as InPars-v1 and to the model presented in this paper as InPars-v2.
## 2 Methodology
In this section, we explain the experiments we performed and how they differ from InPars-v1 [1].
To generate synthetic queries, we use the open-source GPT-J [8] with 6B parameters to replace OpenAI's curie model used in InPars-v1. For each dataset in the BEIR benchmark, we sample 100k documents from its corpus and generate one synthetic query per document using GPT-J prompted with 3 examples from MS MARCO. We use greedy decoding and the "gbq" prompt template from InPars-v1. Some corpora in BEIR, such as ArguAna [7], have fewer than 100k documents. In these cases, we generate as many synthetic queries as there are documents in the corpus. It takes on average 30 hours on an A100 GPU to generate 100k queries.
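A minimal sketch of this generation step is shown below, assuming the Hugging Face transformers library and the public EleutherAI/gpt-j-6B checkpoint; the few-shot prompt is only illustrative and is not the exact "gbq" template used by InPars.

```python
# Sketch of few-shot query generation with GPT-J and greedy decoding.
# The placeholder MS MARCO examples in FEW_SHOT are illustrative only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16, device_map="auto")

FEW_SHOT = (
    "Document: <msmarco passage 1>\nRelevant query: <msmarco query 1>\n\n"
    "Document: <msmarco passage 2>\nRelevant query: <msmarco query 2>\n\n"
    "Document: <msmarco passage 3>\nRelevant query: <msmarco query 3>\n\n"
)

def generate_query(document: str, max_new_tokens: int = 64) -> str:
    prompt = FEW_SHOT + f"Document: {document}\nRelevant query:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs,
                            do_sample=False,          # greedy decoding
                            max_new_tokens=max_new_tokens,
                            pad_token_id=tokenizer.eos_token_id)
    text = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
    return text.split("\n")[0].strip()                # keep the first generated line
```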
Once the synthetic queries are generated, we apply a filtering step to select query-document pairs that are more likely to be relevant to each other. In InPars-v1, this filtering step consisted of selecting the top 10k query-document pairs with the highest log probabilities of generating a query given the 3-shot examples and the document as input. In InPars-v2, we use monoT5-3B [4] already finetuned on MS MARCO for one epoch1 to estimate a relevancy score for each of the 100k query-document pairs. Then, we keep only the top 10k pairs with the highest scores as our positive query-document pairs for training. It takes approximately 1.5 hours to score 100k query-document pairs on a TPU v3-8. It should take about twice as long on an A100.
Footnote 1: [https://huggingface.co/castorini/monot5-3b-msmarco-10k](https://huggingface.co/castorini/monot5-3b-msmarco-10k)
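The following sketch shows how such monoT5-style relevance scores can be computed and used to keep the top-scoring pairs, assuming the transformers library and the checkpoint from footnote 1 (a smaller monoT5 checkpoint can be substituted for quick experiments); the exact pre- and post-processing of the released models may differ.

```python
# Sketch of monoT5-style relevance scoring: the model reads
# "Query: ... Document: ... Relevant:" and the score is the probability
# assigned to the "true" token at the first decoder step.

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "castorini/monot5-3b-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(MODEL)
model = T5ForConditionalGeneration.from_pretrained(MODEL).eval()

TRUE_ID = tokenizer.encode("true")[0]    # first sub-token of "true"
FALSE_ID = tokenizer.encode("false")[0]  # first sub-token of "false"

@torch.no_grad()
def relevance_score(query: str, document: str) -> float:
    text = f"Query: {query} Document: {document} Relevant:"
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    start = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
    logits = model(**enc, decoder_input_ids=start).logits[0, 0]
    probs = torch.softmax(logits[[FALSE_ID, TRUE_ID]], dim=0)
    return probs[1].item()               # probability of the "true" token

def keep_top_pairs(pairs, k=10_000):
    """Keep the k highest-scoring synthetic (query, document) pairs."""
    scored = sorted(pairs, key=lambda qd: relevance_score(*qd), reverse=True)
    return scored[:k]
```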
To obtain negatives (i.e., non-relevant) query-document pairs, we randomly sample one document from the top 1000 retrieved by BM25 when issued the synthetic query. Thus, our training set consists of 10k positive query-document pairs and 10k negative query-document pairs.
The rerankers are finetuned in the same manner as in InPars-v1: monoT5-3B is finetuned on MS MARCO for one epoch and then further finetuned for one epoch on the synthetic data. We use the Adafactor optimizer [5] with a constant learning rate of 1e-3. Each batch has 64 positive and 64 negative query-document pairs randomly sampled from the training dataset. We finetune one model on each synthetic dataset from BEIR, that is, we end up with 18 different rerankers, one per dataset, which are then evaluated on the corresponding test sets. Finetuning on each synthetic dataset takes less than 10 minutes on a TPU v3-8.
Evaluation is performed using the following pipeline: first we use Pyserini's [3] flat indexes2 to retrieve a thousand documents for each query using BM25 with default parameters (k1=0.9, b=0.4), for each dataset. Then we use the finetuned monoT5-3B models to rerank these documents.
Footnote 2: As opposed to the multifield index.
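A minimal sketch of this two-stage pipeline is given below, assuming a local Pyserini Lucene index for the target BEIR corpus (the index path and the raw-document parsing are assumptions) and a query-document scorer such as the monoT5 function sketched above.

```python
# Sketch of the BM25 + rerank evaluation pipeline with Pyserini.

from typing import Callable, List
from pyserini.search.lucene import LuceneSearcher

def bm25_then_rerank(query: str,
                     index_dir: str,
                     score: Callable[[str, str], float],
                     k: int = 1000) -> List[str]:
    searcher = LuceneSearcher(index_dir)       # e.g. a locally built BEIR flat index
    searcher.set_bm25(0.9, 0.4)                # default BM25 parameters (k1, b)
    hits = searcher.search(query, k=k)         # first-stage retrieval
    reranked = []
    for hit in hits:
        raw = searcher.doc(hit.docid).raw()    # stored document (often JSON text)
        reranked.append((score(query, raw), hit.docid))
    reranked.sort(reverse=True)                # second-stage reranking
    return [docid for _, docid in reranked]
```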
## 3 Results
Table 1 presents results for BM25 (2nd column), monoT5-3B finetuned on MS MARCO (3rd column), monoT5-3B finetuned on MS MARCO and further finetuned on InPars-v1 data (4th column), and monoT5-3B finetuned on MS MARCO and then finetuned on InPars-v2 data (5th column). Compared to InPars-v1, our approach is substantially better on TREC-News, Climate-FEVER, Robust and Touche. Additionally, we compare our method with Promptagator [2] and RankT5 [10]. Taking into account the average over all BEIR datasets, these results represent a new state of the art on BEIR.
Promptagator and RankT5 thrive on datasets on which monoT5 and InPars-v2 cannot even surpass BM25, such as Touche and ArguAna. Note that these datasets focus on argument retrieval, which is slightly different from other datasets in the BEIR benchmark. As a result, they benefit from using custom prompts.3 Promptagator does this without using supervised data from MS MARCO and using smaller T5 models with 110M parameters for the retrieval and reranking steps.
Footnote 3: In preliminary experiments, we also observed an improvement of more than 10 nDCG@10 points on ArguAna by using a dataset-specific prompt to generate synthetic queries. More details and results on the full BEIR benchmark will appear in an upcoming paper.
Promptagator uses a proprietary model, FLAN [9], to generate synthetic queries. The RankT5 model is a modified version of the monoT5 reranker, but its checkpoint and code are not published. In this work, we make the code, models, and data open-source and publicly available.
## 4 Conclusion
In this work, we presented InPars-v2, an improved version of InPars [1] that uses a publicly available language model to generate queries and a better query-document pair selection process. Our results show that we achieve effectiveness on par with the state of the art on BEIR. The synthetic data and finetuned models were publicly released.
## Acknowledgments
This research was partially supported by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) (project id 2022/01640-2). We also thank Centro Nacional de Processamento de Alto Desempenho (CENAPAD-SP) and Google Cloud for computing credits.
|
2301.07289 | Impact of symmetry inheritance on conformally flat spacetime | The goal of this research paper is to investigate curvature inheritance
symmetry in conformally flat spacetime. Curvature inheritance symmetry in
conformally flat spacetime is shown to be a conformal motion. We have proven
that a conformally flat spacetime reduces to an Einstein spacetime if it admits
curvature inheritance symmetry. A few results are obtained on conformally flat
spacetimes that obey Einstein's field equation, with or without a cosmological
constant, when they admit the curvature inheritance symmetry. The
energy-momentum tensor is shown to be
covariantly constant in a 4-dimensional relativistic perfect fluid spacetime
which is also conformally flat spacetime, admits curvature inheritance, and
obeys Einstein's field equations in the presence of a cosmological constant.
Moreover, it is also obtained that such spacetimes with perfect fluid satisfy
the vacuum-like equation of state; consequently, it is dark matter. Finally,
in the third part of the article, the case compatible with all Theorems from
Theorem \ref{Th2.1} to Theorem \ref{Th2.5n} is shown. On the other hand, it has
also been emphasized that it is an example of de Sitter spacetime. It has been
demonstrated that this spacetime also has a conformal killing vector. | Musavvir Ali, Mohammad Salman, Sezgin Altay Demirbag | 2023-01-18T03:39:34Z | http://arxiv.org/abs/2301.07289v2 | # A conformally flat spacetime admitting curvature inheritance symmetry
###### Abstract.
This research investigates the curvature inheritance in conformally flat spacetimes. We have shown that curvature inheritance symmetry reduces to conformal motion in a conformally flat spacetime. We have established that every conformally flat spacetime admitting curvature inheritance symmetry is an Einstein spacetime. We have also obtained certain results for conformally flat spacetimes admitting the curvature inheritance symmetry and obeying Einstein's field equation with a cosmological term. Moreover, it is observed that such spacetimes with perfect fluid satisfy the vacuum-like equation of state; consequently, it is dark matter.
Key words and phrases:Curvature inheritance symmetry, conformal curvature tensor, conformally flat spacetime, cosmological models 2020 Mathematics Subject Classification: 53C15; 53C25; 53B20; 53C80; 83C05
## 1. **Introduction and Preliminaries**
This study examines whether (non-trivial) curvature inheritance symmetry occurs in conformally flat spacetime. There are undoubtedly several spacetimes, among them conformally flat spacetimes, that admit these symmetries in all circumstances. In a spacetime \(V_{4}\), the curvature tensor associated with the Lorentzian metric \(g\) of signature \((-,+,+,+)\) through the Levi-Civita connection is denoted by \(R^{h}_{ijk}\); the components of the Ricci tensor are \(R_{ij}=R^{h}_{ihj}\) and the Ricci scalar is given by \(R=g^{ij}R_{ij}\). The symbols \((;)\), \((,)\) and \(\pounds_{\xi}\) represent the covariant, partial, and Lie derivatives, respectively. The standard symmetrization and skew-symmetrization are indicated by round and square brackets, respectively. It is assumed that \(V_{4}\) is not flat, in the sense that the curvature tensor does not vanish over a non-empty open subset of \(V_{4}\).
The curvature tensor is a significant concept in differential geometry ([2], [4], [21], [25]) and, more specifically, in general relativity theory ([6]-[9]). As a result, the study of its symmetries, namely the inheritance of curvature, is a significant subject that has recently attracted a lot of attention. The present paper is a further attempt at symmetries admitted by conformally flat spacetime, and we hope that it will present an effective contribution to the existing literature.
In 1992, K.L. Duggal [13] introduced the concept of curvature inheritance (CI), which is a generalization of curvature collineation (CC); many other collineations were defined by Katzin in 1969 [11] and also by Duggal [26]. Most recently, Ali et al. ([16]-[18]) investigated symmetry inheritance for the conharmonic curvature tensor and curvature inheritance in Ricci flat spacetimes, and A. A. Sheikh et al. have studied CI in [20] for M-projectively flat spacetime. Also, many authors have studied curvature symmetry in the context of general relativity ([6], [23], [31]-[32]). In 2004, G.S. Hall published a book on the field of symmetries and curvature structure in general relativity [24]. Furthermore, Abdussatar and B. Dwivedi extended the work on symmetries of spacetime to the conharmonic curvature tensor [25].
**Definition 1.1**.: _[_13_]_ _A spacetime \((V_{4},g)\) admits curvature inheritance symmetry along a smooth vector \(\xi^{i}\), such that_
\[\pounds_{\xi}R^{h}_{ijk}=2\varOmega R^{h}_{ijk}, \tag{1.5}\]
_where \(\varOmega=\varOmega(x^{i})\) represents inheriting factor or inheritance function._
If \(\varOmega=0\), then \(\pounds_{\xi}R^{h}_{ijk}=0\) and \(\xi^{i}\) is said to follow a symmetry known as curvature collineation admitted by \(V_{4}\). The CI equation (1.5) can also be written as
\[R^{h}_{ijk;l}\xi^{l}-R^{l}_{ijk}\xi^{h}_{;l}+R^{h}_{ljk}\xi^{l}_{;i}+R^{h}_{ilk }\xi^{l}_{;j}+R^{h}_{ijl}\xi^{l}_{;k}=2\varOmega R^{h}_{ijk}. \tag{1.6}\]
**Definition 1.2**.: _[_13_]_ _A spacetime \((V_{4},\ g)\) admits Ricci inheritance (RI) symmetry along a smooth vector \(\xi^{i}\), such that_
\[\pounds_{\xi}R_{ij}=2\varOmega R_{ij}. \tag{1.7}\]
Contraction of equation (1.5) yields equation (1.7). Thus, in general, every curvature inheritance is a Ricci inheritance symmetry (i.e., CI \(\Rightarrow\) RI), but the converse may not hold. In particular, if \(\varOmega=0\), then RI reduces to Ricci collineation (RC); otherwise, if \(\varOmega\neq 0\), it is called proper RI. The conformal curvature tensor is expressed as follows [1]
\[C^{h}_{ijk}=R^{h}_{ijk}+\frac{1}{2}(\delta^{h}_{j}R_{ik}-\delta^{h}_{k}R_{ij} +R^{h}_{j}g_{ik}-R^{h}_{k}g_{ij})+\frac{R}{6}(g_{ij}\delta^{h}_{k}-g_{ik} \delta^{h}_{j}). \tag{1.8}\]
**Definition 1.3**.: _A Riemannian space is conformally flat [1] iff_
\[C^{h}_{ijk}=0,\ (n>3). \tag{1.9}\]
**Definition 1.4**.: **Einstein-like Spacetime**_. If the Ricci tensor of a spacetime \(V_{4}\) has the following form, it is known to be an Einstein spacetime [2],_
\[R_{ij}=\mu g_{ij}, \tag{1.10}\]
_where \(\mu=\dfrac{R}{4}\) is a scalar and R is the scalar curvature. It is referred to as Einstein-like spacetime if the Ricci tensor is simply proportional to the metric tensor._
The plan of the present paper is as follows: after the introduction and preliminaries in Section 1, we study curvature inheritance in a conformally flat spacetime. Section 2 deals with conformally flat perfect fluid spacetimes admitting curvature inheritance symmetry and examines some properties of such spacetimes. Furthermore, we establish results on Killing and conformal Killing vectors satisfying Einstein's field equations in the case of a conformally flat spacetime admitting curvature inheritance symmetry. We also prove that the pressure and density of a conformally flat spacetime admitting curvature inheritance symmetry and obeying EFEs with the cosmological term are constant. Finally, we obtain some interesting results about conformally flat spacetime admitting curvature inheritance symmetry and obeying EFEs with the cosmological term.
## 2. **Main Results**
In a conformally flat spacetime, setting \(C^{h}_{ijk}=0\) in (1.8) shows that the Riemann curvature tensor is given by
\[R^{l}_{ijk}=-\dfrac{1}{2}(\delta^{l}_{j}R_{ik}-\delta^{l}_{k}R_{ij}+R^{l}_{j}g _{ik}-R^{l}_{k}g_{ij})-\dfrac{R}{6}(g_{ij}\delta^{l}_{k}-g_{ik}\delta^{l}_{j}). \tag{2.1}\]
A necessary condition for a spacetime \((V_{4},g)\) to admit curvature inheritance symmetry is derived in ([13], equation (2.2)) and may be expressed as follows
\[P_{lijk}=R^{a}_{ijk}\hbar_{al}+R^{a}_{ljk}\hbar_{ai}=0, \tag{2.2}\]
here
\[\hbar_{ij}=\pounds_{\xi}g_{ij}\ \text{and}\ h=\hbar_{ij}g^{ij}. \tag{2.3}\]
In the special case that \(V_{4}\) is a conformally flat spacetime, we substitute \(R^{l}_{ijk}\) as given by (2.1) into (2.2) and we get
\[P_{lijk}=\hbar_{lj}R_{ik}-\hbar_{lk}R_{ij}+\hbar_{ij}R_{lk}-\hbar_{ik}R_{lj}-g_{lj}S_{ik}+g_{lk}S_{ij}-g_{ij}S_{lk}+g_{ik}S_{lj}+\frac{R}{3}(g_{lj}\hbar_{ik}-g_{lk}\hbar_{ij}+g_{ij}\hbar_{lk}-g_{ik}\hbar_{lj})=0, \tag{2.4}\]
where \(S_{ij}=\hbar_{ik}R_{j}^{k}\) and \(S=g^{ij}S_{ij}\). We multiply (2.4) by \(g^{lj}\) and sum to obtain
\[3S_{ij}+S_{ji}=Sg_{ij}+\frac{R}{3}\hbar_{ij}+h(R_{ij}-\frac{R}{3}g_{ij}). \tag{2.5}\]
Interchanging the indices i and j in (2.5) and subtracting the resulting equation from (2.5), we find \(S_{ij}=S_{ji}\). Thus, equation (2.5) reduces to
\[S_{ij}=\frac{1}{4}[Sg_{ij}+\frac{R}{3}\hbar_{ij}+h(R_{ij}-\frac{R}{3}g_{ij})]. \tag{2.6}\]
Next, we form \(P_{lijk}^{\prime}=P_{lijk}+P_{jkli}=0\); substituting \(P_{lijk}\) from (2.4) and then using \(S_{ij}\) as given by (2.6), we obtain
\[P_{lijk}^{\prime}=E_{ik}P_{lj}-P_{ik}E_{lj}=0, \tag{2.7}\]
where \(P_{ij}=\hbar_{ij}-\frac{h}{4}g_{ij}\) and \(E_{ij}=R_{ij}-\frac{R}{4}g_{ij}\). If \(E_{ij}=0\), the conformally flat space is an Einstein space. If a conformally flat spacetime is also an Einstein spacetime such that \(R=0\), then the spacetime is flat. Hence, we rule out the possibility of having a conformally flat spacetime that is an Einstein spacetime with \(R=0\). From ([13], Theorem 7), we have that every proper CI in an Einstein space with \(R\neq 0\) is a proper conformal motion with a conformal function. Since we aim to find a conformally flat spacetime which admits proper CI, we must also rule out the possibility of having a conformally flat spacetime that is an Einstein spacetime with \(R\neq 0\). Thus, we shall assume that the spacetime is conformally flat but not an Einstein spacetime.
We proceed by taking \(E_{ij}\neq 0\), so that (2.7) implies \(P_{ij}=\lambda E_{ij}\), where \(\lambda\) is an arbitrary scalar. Substitution of \(P_{ij}\) and \(E_{ij}\) in the equation \(P_{ij}=\lambda E_{ij}\) leads to
\[\hbar_{ij}=\pounds_{\xi}g_{ij}=2\phi g_{ij}+\lambda R_{ij}, \tag{2.8}\]
where \(2\phi=\frac{1}{4}(h-\lambda R)\). We observe that if \(\lambda=0\), then equation (2.8) reduces to conformal motion along a vector field \(\xi\). Since we are searching for proper CI, we exclude conformal motion.
Let \((V_{4}^{\prime},g)\) denote a conformally flat spacetime admitting curvature inheritance symmetry. Next we substitute \(S_{ij}\) and \(S\) in equation (2.6). In the resulting equation we use (2.3) and (2.8) with \(\lambda\neq 0\) to substitute \(\hbar_{ij}\) and \(h\), and find that \((V_{4}^{\prime})\) must satisfy the condition
\[R_{j}^{k}R_{ik}=\alpha g_{ij}+\beta R_{ij}, \tag{2.9}\]
where
\[\alpha=\frac{1}{4}[R^{\prime}-\frac{R^{2}}{3}],\ \beta=\frac{R}{3},\ R^{\prime}=R_ {j}^{i}R_{i}^{j}. \tag{2.10}\]
Now, taking the Lie derivative of equation (2.1) and using equations (1.5), (1.7) and (2.3) leads to
\[\pounds_{\xi}R^{l}_{ijk}=\pounds_{\xi}[-\frac{1}{2}(\delta^{l}_{j}R_{ik}-\delta^{l }_{k}R_{ij}+R^{l}_{j}g_{ik}-R^{l}_{k}g_{ij})-\frac{R}{6}(g_{ij}\delta^{l}_{k}-g _{ik}\delta^{l}_{j})], \tag{2.11}\]
\[\pounds_{\xi}R^{l}_{ijk} = -\frac{1}{2}[\delta^{l}_{j}\pounds_{\xi}R_{ik}-\delta^{l}_{k} \pounds_{\xi}R_{ij}+(\pounds_{\xi}R^{l}_{j})g_{ik}+R^{l}_{j}(\pounds_{\xi}g_{ ik})-(\pounds_{\xi}R^{l}_{k})g_{ij}-R^{l}_{k}(\pounds_{\xi}g_{ij})] \tag{2.12}\] \[- \frac{1}{6}[(\pounds_{\xi}R)(g_{ij}\delta^{l}_{k}-g_{ik}\delta^{ l}_{j})+R(\delta^{l}_{k}\pounds_{\xi}g_{ij}-\delta^{l}_{j}\pounds_{\xi}g_{ ik})],\]
\[\pounds_{\xi}R^{l}_{ijk} = -\frac{1}{2}[2\Omega(\delta^{l}_{j}R_{ik}-\delta^{l}_{k}R_{ij})+( 2\Omega R^{l}_{j}-R^{x}_{j}\hbar^{l}_{x})g_{ik}+R^{l}_{j}\hbar_{ik}-(2\Omega R ^{l}_{k}-R^{y}_{k}\hbar^{l}_{y})g_{ij} \tag{2.13}\] \[- R^{l}_{k}\hbar_{ij}]-\frac{1}{6}[(2\Omega R-R^{\prime})(g_{ij} \delta^{l}_{k}-g_{ik}\delta^{l}_{j})+R(\delta^{l}_{k}\hbar_{ij}-\delta^{l}_{j }\hbar_{ik})],\]
\[\pounds_{\xi}R^{l}_{ijk} = 2\Omega[-\frac{1}{2}(\delta^{l}_{j}R_{ik}-\delta^{l}_{k}R_{ij}+ R^{l}_{j}g_{ik}-R^{l}_{k}g_{ij})-\frac{R}{6}(g_{ij}\delta^{l}_{k}-g_{ik} \delta^{l}_{j})]+\frac{1}{2}(-R^{x}_{j}\hbar^{l}_{x}g_{ik} \tag{2.14}\] \[- R^{y}_{k}\hbar^{l}_{y}g_{ij}-R^{l}_{k}\hbar_{ij})-\frac{1}{6}[(- R^{\prime})(g_{ij}\delta^{l}_{k}-g_{ik}\delta^{l}_{j})+R(\delta^{l}_{k}\hbar_{ ij}-\delta^{l}_{j}\hbar_{ik})].\]
Using equations (2.1) and (1.5), we observe that (2.14) reduces to
\[\pounds_{\xi}R^{l}_{ijk} = \pounds_{\xi}R^{l}_{ijk}+\frac{1}{2}(-R^{x}_{j}\hbar^{l}_{x}g_{ ik}-R^{y}_{k}\hbar^{l}_{y}g_{ij}-R^{l}_{k}\hbar_{ij}) \tag{2.15}\] \[- \frac{1}{6}[(-R^{\prime})(g_{ij}\delta^{l}_{k}-g_{ik}\delta^{l}_ {j})+R(\delta^{l}_{k}\hbar_{ij}-\delta^{l}_{j}\hbar_{ik})],\]
\[3(R^{x}_{j}\hbar^{l}_{x}g_{ik}+R^{y}_{k}\hbar^{l}_{y}g_{ij}+R^{l}_{k}\hbar_{ij })+(-R^{\prime})(g_{ij}\delta^{l}_{k}-g_{ik}\delta^{l}_{j})+R(\delta^{l}_{k} \hbar_{ij}-\delta^{l}_{j}\hbar_{ik})=0, \tag{2.16}\]
\[3[2\phi(R^{l}_{j}g_{ik}+2R^{l}_{k}g_{ij})+\lambda(R^{x}_{j}R^{l}_{x}g_{ik}+R^{ y}_{k}R^{l}_{y}g_{ij}+R^{l}_{k}R_{ij})]\] \[-\lambda R_{ij}R^{ij}(\delta^{l}_{k}g_{ij}-\delta^{l}_{j}g_{ik})+ \lambda R(\delta^{l}_{k}R_{ij}-\delta^{l}_{j}R_{ik})=0, \tag{2.17}\]
Contraction on \(l\) and \(j\) in the above equation gives
\[2[\phi R+\lambda R^{2}+\alpha]g_{ik}+[4\phi+2\beta-\lambda R]R_{ik}=0, \tag{2.19}\]
\[R_{ik}=-\frac{2(\phi R+\lambda R^{2}+\alpha)}{4\phi+2\beta-\lambda R}g_{ik}, \tag{2.20}\]
where \(R^{2}=R_{y}^{x}R_{x}^{y}=R_{xy}R^{xy}\). Multiplying equation (2.20) by \(g^{ik}\), we have
\[-\frac{2(\phi R+\lambda R^{2}+\alpha)}{4\phi+2\beta-\lambda R}=\frac{R}{4}. \tag{2.21}\]
By using equations (2.20) and (2.21), we get
\[R_{ik}=\frac{R}{4}g_{ik}. \tag{2.22}\]
**Remark 2.1**.: Equation (2.22) indicates that a conformally flat spacetime reduces to an Einstein spacetime if it admits curvature inheritance symmetry.
Since \(E_{ik}=R_{ik}-\frac{R}{4}g_{ik}=0\) and \(P_{ik}=\lambda E_{ik}\) we find
\[\hbar_{ik}=\pounds_{\xi}g_{ik}=\frac{h}{4}g_{ik},\ (h=8\psi). \tag{2.23}\]
In equation (2.22), \(R\neq 0\); otherwise this spacetime is flat. We summarize the above analysis by stating,
**Theorem 2.1**.: _In a conformally flat spacetime, curvature inheritance symmetry reduces to proper conformal motion along a conformal Killing vector \(\xi\)._
Theorem 2.1 can also be proved by another method. Theorem 7 in [13] states: "Every proper CI in an Einstein space (\(R\neq 0\)) is a proper conformal motion with conformal function \(\psi\)." By using Theorem 7 in [13] and equation (2.22), we get Theorem 2.1. From (2.22), we have another result,
**Theorem 2.2**.: _A conformally flat spacetime reduces to an Einstein spacetime if it admits curvature inheritance symmetry._
**Corollary 2.1**.: _Every proper CI in a conformally flat spacetime is a conformal collineation._
For a Riemannian manifold, the divergence of the curvature tensor is given by
\[(divR)(X,Y,Z)=(\nabla_{X}S)(Y,Z)-(\nabla_{Y}S)(X,Z), \tag{2.24}\]
where \(R(X,Y,Z)\) and \(S(X,Y)\) denote the Riemannian curvature tensor and the Ricci tensor, respectively.
**Definition 2.1**.: _[_3_]_ _If \((divR)(X,Y,Z)=0\), an algebraic curvature tensor R is considered harmonic. If the curvature tensor field R of a Riemannian manifold M is harmonic, the manifold is said to be R-harmonic._
Since,
\[\nabla_{h}R^{h}_{ijk}=\nabla_{k}R_{ij}-\nabla_{j}R_{ik}. \tag{2.25}\]
From Theorem 2.2, we have \(R_{ij}=\frac{R}{4}g_{ij}\). Thus, we have
\[\nabla_{h}R^{h}_{ijk}=\nabla_{k}(\frac{R}{4}g_{ij})-\nabla_{j}(\frac{R}{4}g_{ ik})=0\ (\text{R = constant}). \tag{2.26}\]
This implies that spacetime is R-harmonic. Thus, we have
**Corollary 2.2**.: _Every conformally flat spacetime admitting CI is of harmonic curvature._
**Theorem 2.3**.: _If a conformally flat spacetime admits CI then this spacetime is of constant curvature._
Proof.: From Theorem 2.2, this spacetime is an Einstein spacetime. Remembering that this spacetime is conformally flat, the proof is clear.
**Remark 2.2**.: In relativity, spaces with constant curvature are significant. The importance of spaces of constant curvature is very familiar in cosmology. The most basic cosmological model assumes that the universe is isotropic and homogeneous. According to the cosmological principle, it is impossible to distinguish between two places in the universe, because all spatial directions are equivalent. The Robertson-Walker metrics are cosmological solutions of the Einstein field equations with a three-dimensional space-like surface of constant curvature. In contrast, the de Sitter universe model has a four-dimensional space with constant curvature [30].
We have the following result by applying Theorem 2.1
**Theorem 2.4**.: _Every CI in a conformally flat spacetime must be a curvature collineation._
Proof.: From Theorem 2.2, this spacetime is Einstein. By using Theorem 2.2 and equation (2.11) the proof is completed.
**Theorem 2.5**.: _Every CI in a conformally flat spacetime is motion._
Proof.: Theorem 9 in [13] states that every curvature collineation (CC) in an Einstein space (\(R\neq 0,n>2\)) is a motion. From Theorem 2.2 and Theorem 2.4, in a conformally flat spacetime every curvature inheritance symmetry therefore reduces to a motion.
A matter collineation is defined by the well-known symmetry of the energy-momentum tensor,
\[\pounds_{\xi}T_{ij}=0. \tag{2.27}\]
**Theorem 2.6**.: _If a conformally flat spacetime admits CI, then spacetime admits matter collineation._
Proof.: In view of (2.22), equation (1.1) yields
\[(\wedge-\frac{R}{4})g_{ij}=\kappa T_{ij}. \tag{2.28}\]
Operating the Lie derivative on equation (2.28) and using Theorem 2.2, we have
\[(\wedge-\frac{R}{4})\pounds_{\xi}g_{ij}=\kappa\pounds_{\xi}T_{ij}. \tag{2.29}\]
From Theorem 2.5, \(\xi\) is a Killing vector field, so we have
\[\pounds_{\xi}g_{ij}=0. \tag{2.30}\]
Using equations (2.29) and (2.30), we write
\[\pounds_{\xi}T_{ij}=0. \tag{2.31}\]
The proof is completed.
**Theorem 2.7**.: _If a spacetime admitting CI obeys Einstein's field equations, then dark matter exists there._
Proof.: From (1.2) we have,
\[T=(3p-\mu). \tag{2.32}\]
Multiplying equation (2.28) by \(g_{ij}\) we get
\[\kappa T=4\wedge-R. \tag{2.33}\]
By using the equation (2.32) and (2.33) we obtain
\[R=4\wedge-(3p-\mu)\kappa. \tag{2.34}\]
By using equations (1.1), (1.2) and Theorem 2.1 we find
\[(\wedge-\frac{R}{4})g_{ij}=\kappa T_{ij}. \tag{2.35}\]
Multiplying the above equation by \(u^{i}\) and \(u^{j}\) gives
\[R=4\wedge+4\kappa\mu. \tag{2.36}\]
From equation (2.34) and (2.36) we obtain
\[\mu+p=0. \tag{2.37}\]
Equation (2.37) says that the vacuum-like equation of state is satisfied by the spacetime \((V_{4},g)\).
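The elimination of \(\wedge\) and \(R\) leading to (2.37) can be checked symbolically; the short sympy script below (an illustrative check, not part of the original derivation) combines (2.32), (2.33) and (2.36) and confirms \(\mu+p=0\).

```python
# Symbolic check of the step leading to (2.37): combining kappa*T = 4*Lambda - R
# (with T = 3p - mu) and R = 4*Lambda + 4*kappa*mu forces mu + p = 0.

import sympy as sp

p, mu, R, Lam, kappa = sp.symbols('p mu R Lambda kappa')

eqs = [sp.Eq(kappa*(3*p - mu), 4*Lam - R),   # (2.32) substituted into (2.33)
       sp.Eq(R, 4*Lam + 4*kappa*mu)]         # (2.36)

sol = sp.solve(eqs, [p, R], dict=True)[0]
print(sp.simplify(sol[p] + mu))              # prints 0, i.e. mu + p = 0
```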
**Theorem 2.8**.: _If a conformally flat spacetime admits CI and obeys Einstein's field equation with the cosmological constant, then this spacetime satisfies the vacuum-like equation of state._
**Theorem 2.9**.: _If a conformally flat spacetime admits CI and obeys Einstein's field equation with the cosmological constant, it is a \(\Lambda\)CDM model._
Proof.: A simplified relationship between the isotropic pressure \(p\) and the energy density \(\mu\) of the perfect fluid is \(p=\omega\mu\), where \(\omega\) is the equation of state parameter. Experimentally, the equation of state parameter is connected to the evolution of the energy density and the universe's expansion. Using equation (2.37), we obtain \(\omega=-1\), which corresponds to the vacuum energy [5]; this is the \(\Lambda\)CDM model.
By using equation (2.34) and (2.37) we get
\[p=\frac{1}{\kappa}(\wedge-\frac{R}{4}),\ \mu=\frac{1}{\kappa}(\frac{R}{4}- \wedge). \tag{2.38}\]
Consequently, we can summarise the result as follows.
**Theorem 2.10**.: _Energy density \(\mu\) and isotropic pressure \(p\) are constants if a conformally flat spacetime admits CI and obeys Einstein's field equation with the cosmological constant._
**Theorem 2.11**.: _The energy momentum tensor is Killing if a conformally flat spacetime admits CI and obeys Einstein's field equation with the cosmological constant._
Proof.: From Theorem 2.2, this spacetime is Einstein. Thus, we have
\[R_{ij;k}+R_{jk;i}+R_{ki;j}=0, \tag{2.39}\]
i.e., the Ricci tensor is Killing. From equations (2.39), (2.28), and (2.22) it is clear that the energy-momentum tensor is Killing.
From equation (2.35) and Theorem 2.2 we have,
**Corollary 2.3**.: _The energy momentum tensor is covariantly constant in a conformally flat spacetime admitting CI that obeys Einstein's field equation with the cosmological constant._
**Theorem 2.12**.: _If a conformally flat spacetime obeying Einstein's field equation admits CI, then it has vanishing expansion scalar and shear tensor and geodesic flow, while the vorticity does not necessarily vanish._
Proof.: Sharma and Ghosh stated a result in [14] as follows: "Let (M,g) be a perfect fluid space-time such that its energy-momentum tensor is Killing. Then (i) M is expansion-free and shear-free, and its flow is geodesic, however, not necessarily vorticity-free, and (ii) its energy-density and pressure are constant on M." By using this statement and Theorem 2.11 the proof is clear.
**Theorem 2.13**.: _If a conformally flat spacetime admitting CI satisfies Einstein's field equation, then this spacetime represents inflation, and the fluid exhibits cosmological constant behaviour._
Proof.: Theorem 2.7 states that \(\mu+p=0\). This indicates that the fluid behaves as a cosmological constant [27]. This is also known as the **Phantom Barrier** according to [29].
**Theorem 2.14**.: _A conformally flat spacetime is infinitesimally spatially isotropic with respect to the unit timelike velocity vector field U if it admits CI and satisfies Einstein's field equation._
Proof.: From equations (1.8) and (2.22) we have
\[R_{hijk}=\frac{R}{12}(g_{hk}g_{ij}-g_{hj}g_{ik}). \tag{2.40}\]
A Lorentzian manifold is said to be infinitesimally spatially isotropic with respect to a time-like unit vector field, according to Karcher [33], if its curvature tensor R satisfies the relation,
\[R(X,Y)Z=K(g(Y,Z)X-g(X,Z)Y), \tag{2.41}\]
for all X, Y, Z \(\in U^{\perp}\) and \(R(X,U)U=mX\) for X\(\in U^{\perp}\), where K and m are real valued functions on the spacetime. From equations (2.41) and (2.40) we get
\[K=\frac{R}{12}\text{ and }m=-\frac{R}{12}. \tag{2.42}\]
Thus, the proof is completed.
**Theorem 2.15**.: _The sectional curvature determined by two vectors \(X,Y\in U^{\perp}\) and the sectional curvature determined by the vectors X and U are equal, if a conformally flat spacetime admits CI with perfect fluid._
Proof.: Let \(X,Y\in U^{\perp}\), let \(\kappa_{1}\) denote the sectional curvature of the spacetime determined by X and Y, and let \(\kappa_{2}\) denote the sectional curvature determined by X and U. Then
\[\kappa_{1}=\frac{g(R(X,Y)Y,X)}{g(X,X)g(Y,Y)-(g(X,Y))^{2}}=\frac{R}{12}, \tag{2.43}\]
and
\[\kappa_{2}=\frac{g(R(X,U)U,X)}{g(X,X)g(U,U)-(g(X,U))^{2}}=\frac{R}{12}. \tag{2.44}\]
The proof is completed.
## 3. **Discussion**
In this work, we have established key results of a curvature inheritance symmetry in the study of conformally flat spacetime in general relativity. The motivation is a specific instance of the general solution \(h_{ij}=2\phi g_{ij}+\lambda K_{ij}\) derived from equation (2.8) for the vector \(\xi\), where \(K_{ij}\) is a symmetric tensor of second order. The curvature identity of the CI symmetry is given by the expression \((R_{ih}R_{jkl}^{h}+R_{jh}R_{ikl}^{h}=0)\). In this article, we assert that using a CI symmetry helps to improve earlier results on the geometry and physics of conformally flat spacetime. To clarify our claim, we first show that our Theorem 2.2 is applicable to a different type of spacetime. Recall from [31] that the existence of a covariantly constant scalar curvature excludes certain spacetimes, including the conformally flat case, from the physical application of our Theorem 2.2.
The equation \(\mu+p=0\) suggests that the fluid behaves like a cosmological constant [27]. According to Mazumdar [29], this is also referred to as the phantom barrier. The same result is obtained in cosmology, where it is known as inflation, the term used to describe the rapid expansion of spacetime [28].
The fact that the collection of CKVs constitutes a finite-dimensional Lie algebra is widely known. In contrast, the Lie algebraic structure of smooth curvature symmetry vectors is not always finite-dimensional. We lose the Lie algebra structure of CIVs if the differentiability requirement is less than smooth (see the Remark on page 151 by K.L. Duggal and R. Sharma in [15]). Thus, a relationship between metric and curvature symmetries is preferred in order to identify exactly the various categories of spaces having a finite-dimensional Lie algebra. The metrics of the conformally flat generalised plane wave spacetime are of the form [31]
\[ds^{2}=-A(u)(y^{2}+z^{2})du^{2}-2dudv+dy^{2}+dz^{2}. \tag{3.1}\]
The finite-dimensional Lie algebra [32] formed by the set of all CKVs is denoted by the symbol \(\mathcal{C}\). The algebra of CKVs on \(V_{4}\) can have a maximum dimension of 15 [22], and this maximum is attained only if \(V_{4}\) is conformally flat. These spacetimes support a seven-dimensional homothetic Lie algebra. It follows that there are often eight CKVs that are all inheriting, since a spacetime of the form (3.1) admits at least a seven-dimensional inherited algebra. As a result, according to [31], the conformally flat generalised plane wave spacetimes have an inherited algebra with a maximum dimension of eight. From Theorem 2.2 we deduce that the set of all CIVs on such conformally flat spacetimes is a finite-dimensional Lie algebra.
|
2307.15503 | The Applicability of Federated Learning to Official Statistics | This work investigates the potential of Federated Learning (FL) for official
statistics and shows how well the performance of FL models can keep up with
centralized learning methods. FL is particularly interesting for official
statistics because its utilization can safeguard the privacy of data holders,
thus facilitating access to a broader range of data. By simulating three
different use cases, important insights on the applicability of the technology
are gained. The use cases are based on a medical insurance data set, a fine
dust pollution data set and a mobile radio coverage data set - all of which are
from domains close to official statistics. We provide a detailed analysis of
the results, including a comparison of centralized and FL algorithm
performances for each simulation. In all three use cases, we were able to train
models via FL which reach a performance very close to the centralized model
benchmarks. Our key observations and their implications for transferring the
simulations into practice are summarized. We arrive at the conclusion that FL
has the potential to emerge as a pivotal technology in future use cases of
official statistics. | Joshua Stock, Oliver Hauke, Julius Weißmann, Hannes Federrath | 2023-07-28T11:58:26Z | http://arxiv.org/abs/2307.15503v2 | # The Applicability of Federated Learning to Official Statistics
###### Abstract
This work investigates the potential of Federated Learning (FL) for official statistics and shows how well the performance of FL models can keep up with centralized learning methods. FL is particularly interesting for official statistics because its utilization can safeguard the privacy of data holders, thus facilitating access to a broader range of data. By simulating three different use cases, important insights on the applicability of the technology are gained. The use cases are based on a medical insurance data set, a fine dust pollution data set and a mobile radio coverage data set - all of which are from domains close to official statistics. We provide a detailed analysis of the results, including a comparison of centralized and FL algorithm performances for each simulation. In all three use cases, we were able to train models via FL which reach a performance very close to the centralized model benchmarks. Our key observations and their implications for transferring the simulations into practice are summarized. We arrive at the conclusion that FL has the potential to emerge as a pivotal technology in future use cases of official statistics.
Footnote 1: Universität Hamburg, Hamburg, Germany
Footnote 2: Federal Statistical Office (Destatis), Wiesbaden, Germany
## 1 Introduction
The aim of national statistical offices (NSOs) is to develop, produce and disseminate high-quality official statistics that can be considered a reliable portrayal of reality [15]. In order to effectively capture our rapidly changing world, NSOs are currently undergoing a process of modernization, leveraging new data sources, methodologies and technologies.
NSOs have effectively extracted information from new data sources, such as scanner data3 or Mobile Network Operator (MNO) data4. However, the potential of numerous other data sources, including privately held data5 or data from certain official entities, remains largely untapped. Legal frameworks, which are fundamental to official statistics, only adapt slowly to changing data needs and currently hinder access to valuable new data sources. Cooperation with potential data donors faces restrictions due to concerns about privacy, confidentiality, or disclosing individual business interests.
Footnote 3: Guidance on private sector data sharing [https://digital-strategy.ec.europa.eu/en/policies/private-sector-data-sharing](https://digital-strategy.ec.europa.eu/en/policies/private-sector-data-sharing), accessed on July 17, 2023
In the meantime, the methodology employed by NSOs is evolving, with machine learning (ML) gaining substantial popularity and, as a result, undergoing a process of establishment. ML has been applied in various areas of official statistics (e.g. [1, 2, 1]), and new frameworks such as [15] address the need to measure the quality of ML.
Within official statistics, ML tools have proven effective in processing new data sources, such as text and images, or enabling the automation of statistical production tasks, including classifying information or predicting not (yet) available data.
**Federated learning (FL)** is an emerging approach within ML that provides immense unexplored potential for official statistics. It addresses the challenge of extracting and exchanging valuable global information from new data sources without compromising the privacy of individual data owners. Introduced in [16], FL enables collaborative model training across distributed data sources while preserving data privacy by keeping the data localized. In scenarios where external partners are unwilling to share individual-level information due to regulatory or strategic considerations, but still aim to analyze or disseminate global insights in their field of application, NSOs can offer trustworthy solutions by utilizing FL. In return, FL empowers contributing NSOs to integrate new data sources into statistical production.
Although FL has been successfully applied to many domains, to the best of our knowledge, only one other study presented so far investigates the applicability of FL to the field of official statistics. In a proof of concept (PoC) by the United Nations (UN), FL is applied to estimate human activity based on data collected from smart and wearable devices [15, 16]. The PoC emphasizes operational aspects of coordinating multiple NSOs in FL and the benefits of additional privacy enhancing technologies.
The main contribution of this paper lies in presenting three additional applications of FL that address current data needs representative of official statistics. In addition, we emphasize measuring numerical predictive performance and ensuring reproducibility by openly sharing our code, which, in two instances, is applied to publicly available data. In the first simulation, related to **health**, individual healthcare costs are predicted using regression tools. In the second simulation, related to **sustainability**, current fine dust pollution levels are classified based on meteorological data. In the third simulation, related to **mobility**, the daily range of movement of mobile phone users is classified based on MNO data. The first two simulations focus on assessing the estimation performance achieved by FL in comparison to centralized models that have complete access to all available data. The third application presents valuable insights and lessons learned from the implementation of FL, involving the active participation of a real external partner. We draw conclusions on the applicability of FL in NSOs in section 5, which are summarized in section 6.
## 2 Background
Before presenting the simulated use cases in section 3, this section provides an overview of FL and privacy challenges with ML.
### Federated Learning
In FL, a centralized server (or aggregator, in our case a NSO) coordinates the process of training a ML model (mainly deep neural networks) by initializing a global model and forwarding it to the data owners (clients). In each training round, each client trains the model with their private data and sends the resulting model back to the central server. The central server uses a FL method to aggregate the updates of the participants into the next iteration of the global model and starts the next round by distributing the updated model to the clients. This process is repeated to improve the performance of the model.
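For illustration, the aggregation at the core of the most widely used FL method, Federated Averaging (FedAvg), can be sketched in a few lines of Python. This is a simplified, framework-agnostic sketch of a single communication round; the function and variable names are ours and do not correspond to any specific library API, and the `local_train` callback stands in for the client-side training step.

```python
import numpy as np

def federated_averaging_round(global_weights, client_datasets, local_train, local_epochs=1):
    """One FedAvg round: each client trains the current global model on its
    private data; the server averages the returned weights, weighted by the
    local data set sizes. Weights are represented as lists of NumPy arrays."""
    client_weights, client_sizes = [], []
    for data in client_datasets:
        # Client side: start from the global model, train locally, return the new weights.
        client_weights.append(local_train(global_weights, data, epochs=local_epochs))
        client_sizes.append(len(data))
    shares = np.array(client_sizes) / sum(client_sizes)
    # Server side: weighted average of the client models, computed layer by layer.
    return [
        np.sum([s * w[layer] for s, w in zip(shares, client_weights)], axis=0)
        for layer in range(len(global_weights))
    ]
```

In the simulations below, this loop is not implemented by hand but delegated to TensorFlow Federated (see the Frameworks subsection).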
NSOs primarily strive to generate global models that accurately represent the available data, which, in our setting, is distributed among multiple clients. Thus, we compare the performance of FL to models with access to the combined data of all clients. Alternatively, if upcoming applications seek to supply each client with an optimized individual model by leveraging information from the other clients, _personalized_ FL can be used. This approach is not covered in this paper but can be found in [17, 18].
### Privacy Challenges with Machine Learning
When training data for a ML model is distributed among multiple parties, the data traditionally needs to be combined on a central server prior to training an ML model. FL has become a popular alternative to this approach, as it allows to train a model in a distributed way from the start, without the need to aggregate training data first. Thus, using FL has the privacy advantage that there is no need to exchange private training data. Instead, data holders can train a global model collaboratively in a distributed fashion, without transferring any data record.
But although FL makes sharing private training data obsolete, there are other privacy challenges inherent to ML which have also been observed for FL. While ML models are always trained to fulfill a dedicated task, often more information than strictly necessary for fulfilling the task is extracted into the model weights during training [19]. This excessive, and potentially private, information in the model weights is called privacy leakage. In general, this leakage can be leveraged by any party who has full access to a model and its trained weights.
One concrete example of such a privacy attack is _training data extraction_[19], which allows extracting data records from a trained model. Another known attack is _model inversion_[18], where repeated requests to the model are used to reconstruct class representatives. _Membership
inference_[21] aims at individual training data records: the attack's target is to decide whether a specific data record was part of the training data. Building on the original proposal, other works have transferred membership inference attacks to the FL scenario [17]. Last but not least, _property inference_ attacks [18] allow to deduce statistical properties of the target model's training data. This is especially relevant in FL scenarios, where the characteristics of each client's local data set can be highly sensitive, e.g., in medical domains.
The applicability of these attacks depends on the concrete use case, the type of model and other factors. Concerning attacker models, i.e., the scenario in which an attack is executed, some FL-specific attacks rely on a malicious aggregator. Nonetheless, all attacks mentioned above also work in an environment where not the aggregator, but one of the FL clients is the attacker. Hence, even if the aggregator can be trusted, e.g., because the aggregator's role is assumed by a NSO, these attacks can still be executed by other FL clients. Analyzing the individual privacy leakage of the simulated use cases in this paper is out of scope. Still, raising awareness of these issues, e.g., by communicating potential risks to clients in an FL scenario, should not be neglected. Beyond this, strategies under the umbrella term _privacy-preserving machine learning_ (PPML) can help to mitigate these risks [19].
### Frameworks
In our simulations, we use the frameworks TensorFlow6 for neural networks and TensorFlow Federated7 for FL. We use PyCaret8 for automating benchmark experiments in the centralized settings and scikit-learn9 for data processing.
Footnote 6: TensorFlow [https://www.tensorflow.org/](https://www.tensorflow.org/), accessed on July 17, 2023
Footnote 7: TensorFlow Federated: Machine learning on decentralized data [https://www.tensorflow.org/federated](https://www.tensorflow.org/federated), accessed on July 17, 2023
Footnote 8: PyCaret [https://pycaret.org/](https://pycaret.org/), accessed on July 17, 2023
Footnote 9: scikit-learn, Machine Learning in Python [https://scikit-learn.org/](https://scikit-learn.org/), accessed on July 17, 2023
The code we have written for this work is openly available on GitHub10.
Footnote 10: Code repository for this paper: [https://www.github.com/joshua-stock/fl-official-statistics](https://www.github.com/joshua-stock/fl-official-statistics), accessed on July 17, 2023. Note that for the mobile radio coverage simulation, the code has only been executed locally on the private data set, hence it is not included in the repository.
## 3 Simulations
Most relevant for NSOs is _cross-silo_ FL, where a few reliable clients, e.g. official authorities, train a model. In contrast, _cross-device_ FL uses numerous clients, e.g. smartphones, to train a model. To analyze the potential of cross-silo FL for official statistics, we run simulations with three different data sets. For each use case, we first compute benchmarks by evaluating centralized ML models, i.e., models which are trained on the whole data set. Afterwards, we split the data set and assign the parts to (simulated) FL clients for the FL simulation. This way, we have a basis for interpreting the performance of the model resulting from the FL training simulation. The performance metrics of the trained ML models (including coefficient of determination \(\text{R}^{2}\) or accuracy) are computed on the test set of each data set.
### Medical insurance data
The demand for timely and reliable information on public health is steadily increasing. The COVID-19 pandemic has significantly accelerated this trend, raising questions about the financial feasibility of our healthcare system and the availability of medical supplies.
Thus, our first experiment focuses on modeling a regression problem related to healthcare by considering the following question: Given an individual's health status characteristics, what is the magnitude of their insurance _charges_? We aim to address two primary questions. Firstly, we explore the suitability of neural networks in comparison to other models for the regression task. Secondly, we assess the feasibility of utilizing a simulated decentralized data set in an FL setting to tackle the problem.
**Data set.** The given data set links medical insurance premium _charges_ to related individual attributes11. Considered are the six features _age_, _sex_, _bmi_ (body mass index), _children_ (count), _smoker_ (yes/no) and four _regions_. In our studies, the feature _region_ was excluded during FL training and solely utilized for partitioning the data within the FL setting. In total, the data set consists of 1338 complete records, i.e. there are no missing or undefined values. Also, the data set is highly balanced: The values in _age_ are evenly dispersed, just as the distribution of male and female records is about 50/50 (attribute _sex_) and each _region_ is represented nearly equally often. The origin of the data is unknown, however its homogeneity and integrity suggest that it has been created artificially.
Footnote 11: US health insurance dataset [https://www.kaggle.com/datasets/teertha/ushealthinsurancedataset](https://www.kaggle.com/datasets/teertha/ushealthinsurancedataset), accessed on July 17, 2023
**Data preprocessing.** We encode the binary attributes _sex_ and _smoker_ into a numeric form (0 or 1). The attributes _age_, _bmi_ and _children_ are scaled to a range from 0 to 1. In the centralized benchmarks, the attribute _region_ is one-hot-encoded.
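A minimal sketch of these preprocessing steps with pandas and scikit-learn is shown below; the file name and the column names are assumptions based on the publicly available Kaggle data set, and the exact code used for the experiments may differ.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("insurance.csv")  # file name assumed

# Binary attributes to 0/1.
df["sex"] = (df["sex"] == "male").astype(int)
df["smoker"] = (df["smoker"] == "yes").astype(int)

# Scale the numeric attributes to the range [0, 1].
num_cols = ["age", "bmi", "children"]
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])

# Centralized benchmarks: one-hot encode the region.
df_central = pd.get_dummies(df, columns=["region"])

# FL setting: 'region' is only used to split the data across four clients.
fl_clients = {region: part.drop(columns="region") for region, part in df.groupby("region")}
```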
**Setup.** We aim to investigate the suitability of neural networks for estimating insurance _charges_ and explore the extent to which this problem can be addressed using a FL approach. To achieve this, we compare different models and evaluate their performance.
A basic fully connected neural network architecture that takes five input features is utilized. The network consists of three hidden layers with 40, 40, and 20 units in each respective layer. Following each hidden layer, a Rectified Linear Unit (ReLU) activation function is applied. The final output layer comprises a single neuron. To optimize the network, the Adam optimizer with a learning rate of 0.05 is employed. In the federated setting, we utilize the same initial model but integrate FedAdam for server updates. This decision is based on previous research [2], which emphasizes the benefits of adaptive server optimization techniques for achieving improved convergence.
In the centralized approach, we allocate a training budget of 100 epochs. In contrast, the federated approach incorporates 50 rounds of communication between the clients and the server during training. Each round involves clients individually training the model for 50 epochs. To track the running training, 10% of the data is used for evaluation by each client in the FL setting and 20% is used in the centralized scenario; this evaluation data is not used when calculating the final test performance. The remaining shallow learning models undergo hyperparameter optimization using a random search approach with a budget of 100 iterations. We evaluate all models using 5-fold cross validation.
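As an illustration, the architecture described above can be sketched in Keras as follows. The mean squared error loss is an assumption, since the loss function is not stated in the text, and the exact implementation may differ.

```python
import tensorflow as tf

def build_regression_model():
    # Five input features; hidden layers of 40, 40 and 20 ReLU units;
    # a single output neuron for the predicted insurance charges.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(5,)),
        tf.keras.layers.Dense(40, activation="relu"),
        tf.keras.layers.Dense(40, activation="relu"),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),
                  loss="mse")  # loss function assumed
    return model
```

In the federated setting, the same architecture is reused, with FedAdam applied for the server-side updates.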
**Results.** We conduct a performance comparison of the models based on their 5-fold cross-validation R\({}^{2}\) scores and consider their standard deviation (see Table 1). The random forest model achieves the highest performance with an R\({}^{2}\) of 84.5 %, closely followed by XGBoost and decision tree, which score 0.2 and 0.5 percentage points lower, respectively.
| **Model** | **R\({}^{2}\) (\(\pm\) std)** | **Rel. loss (%)** |
| --- | --- | --- |
| neural network | 81.5 (4.01) | 3.5 |
| neural network (federated) | 78.4 (3.13) | 7.2 |
| random forest | 84.5 (4.73) | 0.0 |
| XGBoost | 84.3 (3.96) | 0.2 |
| decision tree | 84.1 (4.23) | 0.5 |
| k-nearest neighbors | 74.4 (5.53) | 12.0 |
| linear regression | 72.8 (6.07) | 13.8 |
Table 1: Performance comparison of different prediction models for the medical insurance use case. The performance is quantified using R\({}^{2}\) in %, along with the corresponding standard deviation (std). Additionally, the relative loss to the best centralized model (rel. loss) is reported.
The neural network model achieves an R\({}^{2}\) of 81.5 %, indicating a performance 3.5 % worse than the best model. However, it still provides a reasonable result compared to k-nearest neighbors (KNN) and linear regression, which perform significantly worse, with relative losses of 12 % and 13.8 %, respectively.
The federated neural network demonstrates an R\({}^{2}\) of 78.4 %, slightly lower than the centralized neural network but 7.2 % worse than the random forest model. Notably, the federated neural network exhibits a lower standard deviation of 3.99 compared to the centralized neural network (4.92) and also outperforms the random forest model (4.73) in this regard.
**Discussion.** Based on the research questions, we can draw clear conclusions from the findings presented in Table 1. Initially, we compared the performance of different models, including a simple neural network. Although the random forest model outperformed the others, its performance was only 3.5 % higher than that of the neural network, in contrast to models such as KNN and linear regression, which performed 12 % and 13.8 % worse than the random forest, respectively.
The observed performance decrease from 81.5 % to 78.4 % in the FL approach can be attributed to the training process and falls within a reasonable range. Considering the privacy advantages of FL, the 7.2 % accuracy loss compared to the best model is acceptable, particularly when taking into account the reduction in standard deviation from 4.92 to 3.99.
Although this example is hypothetical, it highlights the potential benefits and importance of FL in official statistics. It showcases how FL provides access to crucial data sets for ML while maintaining nearly negligible loss in accuracy compared to a centralized data set.
### Fine dust pollution
Reducing air pollution is a significant part of the Sustainable Development Goals (SDGs) established by the United Nations12. To measure progress toward achieving the SDGs, NGOs and other data producing organizations developed a set of 231 internationally comparable indicators, including _annual mean levels of fine particulate matter (e.g. PM\({}_{2.5}\) and PM\({}_{10}\))_. [Hu+18] showed that personalized FL can be used to extract timely, high-frequency information on air pollution more accurately than models using centralized data.
Footnote 12: Air quality and health [https://www.who.int/teams/environment-climate-change-and-health/air-quality-and-health/policy-progress/sustainable-development-goals-air-pollution](https://www.who.int/teams/environment-climate-change-and-health/air-quality-and-health/policy-progress/sustainable-development-goals-air-pollution), accessed on July 17, 2023
In our second use case, we provide a comparison between centralized and FL models (without personalization) and make the developed code and methods accessible. It should be noted that we utilize a slightly different data set and methodology compared to [Hu+18], which we explain at the end of this section. We model a classification task in which the current fine dust pollution is
inferred based on meteorological input data. More precisely, 48 consecutive hourly measurements are used to make a prediction for the current PM\({}_{2.5}\) pollution (the total weight of particles smaller than 2.5 µm in one m\({}^{3}\)). The output of the predictor is one of the three classes _low_, _medium_ or _high_. The thresholds for each class are chosen in a way such that the samples of the whole data set are distributed evenly among the three classes.

Figure 1: Location of meteorological stations for the fine dust pollution simulation on a map of Beijing, China. 12 of the 13 stations are included in the public data set which we have used for our simulations. The dashed lines mark _regions_ of the “Region-Learning” approach in [Hu+18]. Image source: [Hu+18].
**Data set.** The data set we use is a multi-feature air quality and weather data set [16] which is publicly available online13. It consists of hourly measurements of 12 meteorological stations in Beijing, recorded over a time span of 4 years (2013-2017). Figure 1 depicts the locations of the 12 stations in Beijing. In total, more than 420 000 data records are included in the data set. Although some attributes are missing for some data records, most records have data for all the 17 attributes. An example plot for the two attributes PM\({}_{2.5}\) and temperature is shown in Figure 2.
Footnote 13: Beijing multi-site air-quality data set [https://www.kaggle.com/datasets/sid321axn/beijing-mulisite-airquality-data-set](https://www.kaggle.com/datasets/sid321axn/beijing-mulisite-airquality-data-set), accessed on July 17, 2023
**Data preprocessing.** To complete the missing data records, we use linear interpolation. We apply one-hot encoding to the wind direction attribute. All other features are scaled to a range from 0 to 1. For the attributes PM\({}_{10}\), SO\({}_{2}\), NO\({}_{2}\), CO and O\({}_{3}\), we observe a high correlation with the target attribute and thus exclude them from training. 80% of the data are used as training data, the rest is used as test data.
**Setup.** As in the first use case, we implement a centralized learning benchmark and compare it with a FL approach. We model one FL client per meteorological station and split the data accordingly, while the benchmark model is trained with data from all 12 stations. In both settings, we use neural networks with LSTM (long short-term memory) layers and apply 5-fold cross validation. The architecture of the neural networks is similar across both settings and has been manually tuned to reach a good performance: The input layer is followed by a 10-neuron LSTM layer, a dropout layer with a dropout rate of 25%, a 5-neuron LSTM layer, another dropout layer with a dropout rate of 35% and a 3-neuron dense layer for the classification output. For the same reasons as in the first use case, we use the Adam optimizer and apply a learning rate of 0.05 on the server and 0.005 on the client. The client learning rate is decreased every 64 epochs by a factor of 10 to facilitate fine-tuning in later stages of the training. The total training budget we have allocated is 10 epochs for centralized learning and 200 epochs for FL (with a single round of local training per epoch).
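As an illustration, this architecture can be sketched in Keras as follows. The number of input features is left as a parameter since it depends on the preprocessing, the softmax output activation and cross-entropy loss are assumptions not stated explicitly in the text, and the learning rate shown corresponds to the centralized variant.

```python
import tensorflow as tf

def build_pm25_classifier(n_features):
    # 48 hourly time steps per sample; n_features depends on the preprocessing
    # (one-hot wind direction, excluded pollutant attributes).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(48, n_features)),
        tf.keras.layers.LSTM(10, return_sequences=True),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.LSTM(5),
        tf.keras.layers.Dropout(0.35),
        tf.keras.layers.Dense(3, activation="softmax"),  # classes: low, medium, high
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```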
Figure 2: Example plot for the data of one meteorological station and the two features PM\({}_{2.5}\) and temperature. The four-year time span is clearly visible by the temperature wave, due to hot summers and cold winters.
**Results.** A summary of our results for the fine dust pollution use case is provided in Table 2. Depicted are the means of our 5-fold cross validation experiments.
The centralized learning benchmark reaches a mean classification accuracy of 72.4%, with similarly high numbers for precision and recall (72.8%, respectively 72.3%). In comparison, the FL classifier reaches a performance of both an accuracy and a recall of 68.0% and a precision of 67.9%. The relative standard deviation is higher in the FL scenario for all three metrics, reaching from +2.67 percentage points (accuracy) to +2.9 percentage points (both precision and recall).
An exemplary confusion matrix for one of the five resulting models of the centralized learning is depicted in Table 3. Most misclassifications are made for the _medium_ class. The same could be observed for the other models (both in centralized and federated learning).
**Discussion.** Compared to the first use case, the training database is significantly larger. With 12 clients, there are also three times as many participants in the FL scenario as in the first use case. Still, the performance decrease is small, with an accuracy of 68.0% (FL) compared to 72.4% in the centralized training scenario.
Apart from preprocessing the data set, another time-consuming part of the engineering was tuning the hyperparameters of the FL training. Tools for automatic FL hyperparameter optimization were out of scope for this work, thus it was necessary to manually trigger different trial runs with varying hyperparameters.
**Comparison with literature.** The authors of [Hu+18] compare the results of their personalized FL strategy "Region-Learning" to a centralized learning baseline and standard FL. Although according to the authors, their personalized FL approach outperforms the other two approaches (averaged over the regions by 5 percentage points compared to standard FL), we want to stress that Region-Learning has another goal than standard FL - namely multiple specialized models, and not one global model as in standard FL and most use cases for official statistics (also see subsection 2.1).
Furthermore, Hu et al. have not provided sufficient information to retrace their experiments. Especially the number of classes for PM\({}_{2.5}\) classification and information on the features used for training the classifiers are missing, so that their results are hard to compare to ours. For example, setting the number of classes to 2 and using all features of the data set (including the other pollution attributes PM\({}_{10}\), SO\({}_{2}\) etc.) would significantly ease the estimation task. Also, we have no information on whether cross validation was applied in the work of Hu et al. Two more hints in the paper [Hu+18] suggest that they have used a slightly different data set than we have: The data set they describe includes "more than 100 000" data records from 13 meteorological stations in Beijing, while our data set contains more than 420 000 records from 12 stations.
| **Model** | **Accuracy (\(\pm\) std)** | **Precision (\(\pm\) std)** | **Recall (\(\pm\) std)** | **Rel. loss (%)** |
| --- | --- | --- | --- | --- |
| neural network | 72.4% (4.92) | 72.8% (8.66) | 72.3% (8.10) | 0.0 |
| neural network (fed.) | 68.0% (7.59) | 67.9% (10.05) | 68.0% (9.59) | 5.9-6.7 |
Table 2: Performance in the fine dust pollution simulation. The span of the relative loss refers to all three metrics.
| true class / **predicted class** | **low** | **medium** | **high** |
| --- | --- | --- | --- |
| low | 114 450 | 23 712 | 5769 |
| medium | 23 635 | 80 718 | 33 754 |
| high | 1944 | 24 629 | 111 581 |
Table 3: Exemplary confusion matrix for one of the five models in the cross-validation training of the centralized model for the fine dust pollution use case.
One consistency across both their work and ours is the accuracy drop from centralized learning to FL, with 4 percentage points in [Hu+18] and 4.4 percentage points in our work.
### Mobile radio (LTE)
Mobile Network Operator (MNO) data is a valuable source for obtaining high-frequency and spatial insights in various fields, including population structure, mobility and the socio-economic impact of policy interventions. However, a lack of legal frameworks permitting access to data of all providers, as seen in cases like Germany, constrains the quality of analysis [SBH22]. Accessing only data of selected providers introduces biases, making FL an attractive solution to enhance the representativeness by enabling the aggregation of insights from multiple major MNOs.
Thus, our third use case is based on private MNO data owned by the company umlaut SE14. Different from the first two use cases, we had no direct access to the data, just like the aggregating party in realistic FL settings. While this allows for practical insights, it also comes with constrained resources on the side of the private-sector partner. Hence, the focus of this use case is more on practical engineering issues of FL and less on optimal results.
Footnote 14: umlaut website [https://www.umlaut.com/](https://www.umlaut.com/), accessed on July 17, 2023
The data set contains mobile communication network coverage data, including latency and speed tests, each linked to the mobile LTE devices of individual users and a specific timestamp. The data records are also associated with GPS coordinates, such that a daily "radius of action" can be computed for each user. This radius describes how far a user has moved from their home base within one day. The user home bases have also been computed on the available data - a home base is defined as the place where most data records have been recorded. The ML task we model in this use case is to estimate the daily radius of action for a user, given different LTE metrics of one particular day (see below).
**Data set.** The whole data set originally contains 286 329 137 data records. The following features of the data set have been aggregated for each day and user: _radius of action_ in meters, _share of data records with Wi-Fi connection_ and the variance and mean values for each of the following LTE metrics: _RSRQ_, _RSRP_, _RSSNR_ and _RSSI_. The date has been encoded into three numeric features (_calendar week_, _day of the week_ and _month_) and the boolean feature _weekend_.
**Data preprocessing.** We set a specific time frame of six months and a geofence around the German state of North Rhine-Westphalia. All other records are excluded - leaving 2 718 416 records in the data set. Additionally, we apply a filtering strategy to clean our data: each user in the database needs to have data for at least 20 different days (within the time span of six months) and 10 records on each of these days. Otherwise, all records of this user are discarded. After the second filtering step, there are 1 508 102 data records in the data set. We scale each feature to a range from 0 to 1 and then use the data for training, validating and testing our models.
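A pandas sketch of the user filtering step is given below. The column names `user_id` and `date` are illustrative, and the rule is implemented under one plausible reading (a day only counts if it has at least 10 records, and a user is kept only if at least 20 such days remain); the actual implementation may differ.

```python
import pandas as pd

def filter_users(df, min_days=20, min_records_per_day=10):
    # Count records per user and day; a day only counts if it has enough records.
    daily = df.groupby(["user_id", "date"]).size().rename("n").reset_index()
    valid = daily[daily["n"] >= min_records_per_day]
    # Keep users that still have enough valid days.
    days_per_user = valid.groupby("user_id")["date"].nunique()
    kept_users = set(days_per_user[days_per_user >= min_days].index)
    valid_pairs = set(zip(valid["user_id"], valid["date"]))
    keep_row = [
        user in kept_users and (user, day) in valid_pairs
        for user, day in zip(df["user_id"], df["date"])
    ]
    return df[keep_row]
```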
60% of the data are used as training data, 20% are used as validation data and the remaining 20% as test data. For FL, we have divided the data set according to the mobile network operators (MNOs) of the users. Since more than 99.6% of the data records are associated with three major providers, the other 0.4% of the data records (belonging to 29 other MNOs) are eliminated from the data set.
**Setup.** We use two centralized learning benchmarks: a random forest regressor and a neural network, which have both been subject to a hyperparameter search prior to their training. The network architecture for both the centralized benchmark neural network and the FL training process is the same: The first layer consists of 28 dense-neurons and the second layer consists of 14 dense-neurons, which lead to the single-neuron output layer. All dense layers except for the output layer use the ReLU activation function. For FL, we use the SGD optimizer with a server learning rate of 3.0, a client learning rate of 0.8 and a batch size of 2.
**Results.** The benchmarks of the centralized learning regressors are R\({}^{2}\) values of 0.158 (random forest), 0.13 (neural network) and 0.13 (linear regression). For the neural network trained in the FL scenario, we achieve a slightly lower R\({}^{2}\) value of 0.114 (see Table 4).
**Discussion.** The reasons behind the weak performance of the benchmark models (R\({}^{2}\) of 0.158 and 0.13) are not clear. The hyperparameters might not be optimal, since we were not able to spend many resources on hyperparameter tuning due to time constraints of the data owner. Another reason might be that the modeled task (estimating the radius of action based on LTE connection data) is inherently hard to learn. With an R\({}^{2}\) of 0.114, we were able to reproduce this performance in the FL setting.
Since the private data set in this use case has not left company premises, there are important lessons to be learned from a practical perspective:
1. Even if the data set is not directly available during the model engineering process, it is crucial to get basic insights on the features and statistical properties before starting the training. Essential decisions, such as the type of model to be trained, can be made based on this.
2. A thorough hyperparameter optimization is needed to obtain useful results. It might take a lot of time and computational resources to find hyperparameters which are suited for the task.
3. Technical difficulties while creating the necessary APIs and setting up the chosen ML framework at the FL clients can slow down the process even more. Without access to the database, it might be hard to reproduce technical errors.
While all points mentioned above were encountered in the third simulation, there was only _one_ party who held all data. In real FL scenarios with multiple data holders, the process might get much more complicated.
## 4 Key Observations
Our simulations lead to the following key observations:
Models trained via FL can reach a performance very close to models trained with centralized ML approaches, as we have shown in all three use cases. While the performance gap itself is not surprising (since the FL model has been exposed to the complete data set only indirectly), we want to stress that without FL, many ML scenarios might not be possible due to privacy concerns, trade secrets, or similar reasons. This is especially true for health care data, i.e., the domain of our first simulation.
While the random forest regressor has demonstrated superior performance compared to the other centralized learning benchmarks in the simulations in which it was used, exploring the potential of tree-based models within a FL context [1, 11, 12] could be a promising avenue for further investigation. The improved interpretability and explainability over many other models, e.g., neural networks, is another advantage of tree-based models.
On the other hand, random forest regressors are not suitable if tasks get more complicated. Also, their architecture, i.e., many decision trees which may be individually overfitted to parts of the training data, can facilitate the extraction of sensitive information of the training data and thus pose an additional privacy risk.
Choosing the right hyperparameters is crucial for any ML model. Since automatic hyperparameter optimization (HPO) is still an open problem for FL algorithms, (manually) finding the right settings can be a time-consuming process. Developing a suitable framework for automated HPO for FL would be important future work - although for official statistics, other issues might be more pressing at the moment (see section 5).
In our third simulation (mobile radio data), we did not have access to the training and test data set, just like in a real-world scenario. This means both HPO and technical debugging needed to be performed remotely, without access to the data. Although this was already challenging, we
| **Model** | **R\({}^{2}\)** | **Rel. loss (%)** |
| --- | --- | --- |
| neural network | 0.130 | 17.7 |
| neural network (federated) | 0.114 | 27.8 |
| random forest | 0.158 | 0.0 |
Table 4: Performance in the mobile radio simulation.
believe that in scenarios with multiple data holders and possibly heterogeneous data sets, these tasks will be even harder.
All FL simulations were performed on the machine which also had access to the complete data set. In a real-world application, where each client runs on a distinct machine, other settings and other frameworks might be more practical than TensorFlow Federated.
Last but not least, we want to emphasize that FL, despite its privacy-enhancing character, may still be vulnerable to some ML privacy issues (see subsection 2.2). Hence, analyzing and communicating these risks is an important step before an application is rolled out in practice.
## 5 Implications for Official Statistics
In this work, we have demonstrated how FL can enable NSOs to address pressing data needs in fields that are relevant to policymakers and society. Official statistics are characterized by high accuracy while underlying strict standards in confidentiality and privacy. Accuracy, explainability, reproducibility, timeliness, and cost-effectiveness are essential quality dimensions for statistical algorithms [27]. In this setting, our findings indicate that FL bears significant potential to support statistical production and improve data quality.
We have shown that FL can empower NSOs to generate reliable models that accurately capture global relations. In each of our use cases, the FL-generated models exhibited nearly identical predictive performance compared to a model created by combining all available data. Each model architecture that performed well on centralized or local data could be easily adapted to a FL training process with a similar level of predictive performance only using distributed data.
If upcoming applications require to optimize an individual model for each participating party, personalized FL can be used to generate potentially improved models tailored to individual clients. This increases the interest to cooperate for each participating party, as it offers to enhance the analytic potential for each client and the server. However, it is important to note that this customization may come at the cost of global predictive performance.
FL provides the main advantage of not needing to exchange sensitive data (see subsection 2.2). Additionally, there is no need to store or process the complete data set centralized in the NSOs.
NSOs can be empowered to appraise novel data sources without the need for new legislation. In cases where legislative changes prove impractical, FL provides a crucial pathway to assess and prepare for the modernization of regulations. By showcasing the advantages and implications of accessing new data sources before legal frameworks permit such access, FL not only significantly accelerates and relieves statistics production but in some cases enables it in the first place.
To ensure successful future implementations of FL in NSOs, it is essential to focus on further advancements. Specifically, improvements in communication frequency are crucial to enable high-speed and efficient exchanges. Our observations indicate that FL generally requires a greater number of epochs (distributed across communication rounds) compared to centralized training to achieve similar performance levels. In our use cases, even with small datasets, we found that at least 50 rounds of communication were necessary. In real-world applications, this would result in high delay and cost. Therefore, the development of infrastructure for seamless sending and receiving ML models is necessary. Addressing this challenge, we discovered that the implementation of adaptive server optimization techniques reduced the training rounds and contributed to training stability. As a result, we recommend the use of adaptive optimizers to help minimize communication costs and enhance the efficiency of FL processes. By incorporating such adaptive optimization methods, NSOs can optimize the performance and effectiveness of FL while reducing the burden of communication overhead.
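To make the recommended server-side adaptive optimization concrete, the sketch below shows a FedAdam-style server update in plain NumPy, following the adaptive federated optimization literature. The weights are represented as a single flat array for simplicity, the hyperparameter values are illustrative defaults rather than the tuned values used in our simulations, and this is not the exact TensorFlow Federated implementation used above.

```python
import numpy as np

def fedadam_server_step(global_w, client_deltas, m, v,
                        server_lr=0.05, beta1=0.9, beta2=0.99, tau=1e-3):
    """One FedAdam-style server step: the averaged client update is treated as
    a pseudo-gradient and passed through Adam-like moment estimates."""
    delta = np.mean(client_deltas, axis=0)        # average of (client - global) weight deltas
    m = beta1 * m + (1.0 - beta1) * delta         # first moment estimate
    v = beta2 * v + (1.0 - beta2) * delta ** 2    # second moment estimate
    new_w = global_w + server_lr * m / (np.sqrt(v) + tau)
    return new_w, m, v
```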
Additionally, it is crucial to provide partners with the necessary tools to update models effectively. This requires coordination of the server and expertise from all participating parties. In practice, real-world applications of FL often involve the challenge of harmonizing client data without directly accessing it. Achieving an optimized model architecture uniformly across all clients also necessitates the knowledge and collaborative efforts of the clients themselves. Providing comprehensive tools and resources to partners enables them to actively contribute to the model updating process while maintaining data privacy and security.
FL is evolving rapidly and both industry and research will continue to improve the field in the coming years. The performance and efficiency of practical FL frameworks is expected to be further optimized. Similarly, we expect the development of more usable PPML algorithms including the ones based on Secure Multi-Party Computation (SMPC) and Homomorphic Encryption (HE)
allowing for provably secure collaborative ML. Although such PPML methods have been proposed and frameworks exist, their performance today is often far from acceptable for many practical applications. With more standardization and simpler, respectively more efficient, applications, FL will become even more beneficial to official statistics.
In summary, FL should indeed be recognized as an important technology that can facilitate the modernization of legal frameworks for official statistics. It enables NSOs to safely use publicly relevant information that is not expected to be accessed by future legal frameworks, ultimately enhancing the quality and relevance of official statistics. However, further development is still required to fully realize the potential of FL in this context.
## 6 Conclusion
In scenarios where external partners are unwilling to share individual-level information but still aim to analyze or disseminate global insights in their field of application, FL can help to overcome these issues. We have shown across a range of three simulated use cases that FL can reach a very similar performance to centralized learning algorithms. Hence, our results indicate that if classic (centralized) ML techniques work sufficiently well, FL can possibly produce models with a similar performance.
One of the next steps to transfer FL into the practice of official statistics could be to conduct practical pilot studies. These could further showcase both the applicability and challenges of FL beyond a simulated context. Another focus of future work in this area could be the analysis of privacy risks in FL scenarios of official statistics and potential mitigation strategies. This would be an important stepping stone in ensuring the privacy protection of involved parties, on top of the privacy enhancement by using FL. Just as in countless other domains, we expect FL to become a relevant technology for official statistics in the near future.
|
2305.14064 | A vertical gate-defined double quantum dot in a strained germanium
double quantum well | Gate-defined quantum dots in silicon-germanium heterostructures have become a
compelling platform for quantum computation and simulation. Thus far,
developments have been limited to quantum dots defined in a single plane. Here,
we propose to advance beyond planar systems by exploiting heterostructures with
multiple quantum wells. We demonstrate the operation of a gate-defined vertical
double quantum dot in a strained germanium double quantum well. In quantum
transport measurements we observe stability diagrams corresponding to a double
quantum dot system. We analyze the capacitive coupling to the nearby gates and
find two quantum dots accumulated under the central plunger gate. We extract
the position and estimated size, from which we conclude that the double quantum
dots are vertically stacked in the two quantum wells. We discuss challenges and
opportunities and outline potential applications in quantum computing and
quantum simulation. | Hanifa Tidjani, Alberto Tosato, Alexander Ivlev, Corentin Déprez, Stefan Oosterhout, Lucas Stehouwer, Amir Sammak, Giordano Scappucci, Menno Veldhorst | 2023-05-23T13:42:36Z | http://arxiv.org/abs/2305.14064v2 | # A vertical gate-defined double quantum dot in a strained germanium double quantum well
###### Abstract
Gate-defined quantum dots in silicon-germanium heterostructures have become a compelling platform for quantum computation and simulation. Thus far, developments have been limited to quantum dots defined in a single plane. Here, we propose to advance beyond planar systems by exploiting heterostructures with multiple quantum wells. We demonstrate the operation of a gate-defined vertical double quantum dot in a strained germanium double quantum well. In quantum transport measurements we observe stability diagrams corresponding to a double quantum dot system. We analyze the capacitive coupling to the nearby gates and find two quantum dots accumulated under the central plunger gate. We extract the position and estimated size, from which we conclude that the double quantum dots are vertically stacked in the two quantum wells. We discuss challenges and opportunities and outline potential applications in quantum computing and quantum simulation.
Footnote †: These authors contributed equally
## Introduction
Semiconductor heterostructures composed of silicon and germanium have become the leading material platform for building quantum dot qubits [1; 2; 3]. Developments in their fabrication and operation have enabled demonstrations of high-fidelity single and two-qubit logic, multi-qubit operation, and rudimentary quantum error correction [1; 2; 4; 5; 6; 7; 8; 9]. Efforts in scaling quantum dots have led to the operation of a crossbar array comprising 16 quantum dots [10]. Furthermore, long-range quantum links may make it possible to interconnect modules of quantum dot arrays [11; 12; 13; 14]. These developments in gate-defined quantum dots have been restricted to quantum dots defined in a single plane; however, the versatile nature of silicon-germanium heterostructures allows for further exploration. In particular, structures with multiple quantum wells can be grown, and double quantum wells of germanium [15] and silicon [16] have been realized. An open question is thus whether multi-layer heterostructures can become a relevant platform for quantum information. Here, we motivate potential applications and experimentally explore quantum dots in stacked quantum wells.
Heterostructures with parallel quantum wells may support the integration of important functionalities for spin qubit based quantum processors, as depicted in Fig. 1b. Precise control over the growth of individual layers allows the engineering of interlayer and intralayer properties. When charges residing in separate quantum wells are capacitively coupled, but have (almost) no tunnel coupling, charge sensors could be integrated into separate layers from the qubits they sense. In an intermediate regime where the tunnel coupling is on the order of one to a few tens of gigahertz, coherent spin shuttling between the wells could be realized. Consequently, one layer may serve as a quantum link for qubits defined in the other layer, for example by offering shuttling lanes that connect remote qubits [17; 14]. In this regime the second layer can also host dedicated ancilla qubits that aid in spin-to-charge conversion for initialization and readout. Tunnel-coupled quantum wells may also be used to develop novel qubit implementations such as vertical singlet-triplet qubits or flopping mode spin qubits [18]. Moreover, thickness and atomic composition may be tuned to optimize g-tensors [19] and spin-orbit interactions in each quantum well [20], to provide dedicated functionality.
Quantum dots in multiple quantum wells may also present new opportunities for analog quantum simulation. While planar two-dimensional quantum dot arrays may be used to simulate correlated physics such as the resonating valence bond [21], quantum dots in a double quantum well may simulate even more exotic systems. For example, exciton condensation may be induced by Coulomb interactions in a regime where in one layer the quantum dot occupancy is tuned almost empty (electron layer) and the occupancy of the quantum dots in the other layer is tuned to almost filled (hole layer), where individual parameters can be controlled and studied as opposed to quantum transport implementations [22; 23]. Quantum dots in multi-layer structures comprised of three or more quantum wells could also be envisioned. The confinement of quantum dots in three layers potentially supports artificial superconductivity. Attractive Coulomb interaction in quantum dot systems has been observed in planar systems [24; 25] and integration of such interactions into a three quantum well system may provide a route toward tunable and controllable superconducting condensation.
These motivations warrant the study of quantum dots defined in multilayer heterostructures for quantum information. However, there are also many challenges in the fabrication, design, and operation that need to be understood and overcome. In particular, how to address individual quantum dots and tune their interlayer and intralayer coupling needs further exploration. We take a first step and demonstrate a vertical double quantum dot in a strained germanium double quantum well heterostructure. Through quantum transport measurements, we obtain charge stability diagrams consistent with a double quantum dot. We characterise the capacitive interaction of the quantum dots to the surrounding gates and determine their location. The size of one of the quantum dots is estimated through bias-spectroscopy. Together, these findings point to the formation of a double quantum dot vertically aligned under the same plunger gate.
## I Results
An undoped and compressively-strained Ge/SiGe double quantum well heterostructure is epitaxially grown on a \(100\,\mathrm{mm}\) Si(100) substrate. A \(55\,\mathrm{nm}\) thick Si\({}_{0.2}\)Ge\({}_{0.8}\) spacer separates the bilayer system from the gate stack. The top and bottom quantum wells in the bilayer are \(10\,\mathrm{nm}\) and \(16\,\mathrm{nm}\) thick respectively, and are separated by a \(4\,\mathrm{nm}\) Si\({}_{0.2}\)Ge\({}_{0.8}\) barrier (Fig. 1a). We perform magnetotransport characterization of a Hall-bar shaped heterostructure field effect transistor to infer the energy spectrum of the hole bilayer. The conductance map in Fig. 1e reveals the emergence of two sets of Landau levels typical of such a bilayer system [15; 26]. At \(V_{G}\approx-320\,\mathrm{mV}\) the longitudinal conductivity \(\sigma_{xx}\) shows the first set of quantized Landau levels, corresponding to the subband localized in the bottom well. At \(V_{G}\approx-400\,\mathrm{mV}\) the conductance curve at zero-magnetic-field (Fig. 1e bottom panel) deviates from a linear increase and flattens out as the sub-band localized in the top well starts being populated. This originates from the electric field screening caused by the accumulation of charge carriers in the top well, while its density is still below the percolation threshold and transport is only available through the bottom well [15]. For more negative voltages, the carriers in the top well start contributing to transport and conductance increases.
We then fabricate gate defined quantum dots (see methods) to probe the properties of electrostatically confined holes in this bilayer system. A 3D schematic depicting the heterostructure and gate stack, and a scanning electron microscopy (SEM) image of the device are respectively shown in Fig. 1a,c. The central plunger gate \(P\) is negatively biased to accumulate holes beneath it, while the barrier gates _B\({}_{W}\)_ and _B\({}_{E}\)_ are used primarily to tune the tunnel barrier to the ohmic contacts (S and D). The gates _B\({}_{N}\)_ and _B\({}_{S}\)_ further shape the potential landscape without significantly affecting the tunnel barrier to the ohmics. We measure the transport through the device
Figure 1: **Gate defined vertical double quantum dot in a bilayer heterostructure.** **a** Schematic of the heterostructure and gate stack. **b** Vision of a larger bilayer device, with different use-cases depicted, such as shuttling (white), sensing (green) and vertical 2-qubit gates (yellow). **c** False coloured SEM image of a similar device to the one used in this work. Quantum dots are defined under the plunger gate \(P\) (pink) and measured in transport using the ohmic contacts source (S) and drain (D) (blue). The coupling between the quantum dot and ohmic is tuned by _B\({}_{E}\)_ and _B\({}_{W}\)_ (green). The potential landscape is further shaped by the gates _B\({}_{N}\)_ and _B\({}_{S}\)_ (green). The experiments presented in this work are performed on a section of a larger device (see Appendix A). **d** Conductance trace as a function of the plunger gate voltage through S-D at \(V_{\mathrm{SD}}=1\,\mathrm{mV}\) (blue line), and differential conductance trace at \(V_{\mathrm{AC}}=17\,\mathrm{\SIUnitSymbolMicro V}\). **e** Colour map of the conductivity \(\sigma_{xx}\) as a function of gate voltage \(V_{G}\) and the inverse magnetic field \(1/B\). Dark regions correspond to filled Landau levels with vanishing \(\sigma_{xx}\) and correspondingly quantized \(\sigma_{xy}\). Lower panel: Linecut at \(B=0\) T showing the zero field conductance trace.
with DC and standard low-frequency lock-in techniques (see Methods). Similarly to the 2D transport measurement, at high source-drain DC bias (\(V_{\mathrm{SD}}=1\,\mathrm{mV}\), Fig. 1d, blue line), the conductance trace starts to increase as \(P\) is lowered, and flattens out at \(P=-1.2\,\mathrm{V}\), before increasing again as a second transport channel opens. The differential conductance (\(dI/dV\)) at zero DC bias (pink line) reveals the emergence of Coulomb peaks, and the formation of quantum dots. Interestingly, we observe that the Coulomb oscillation amplitude significantly decreases around the plateau. These observations are consistent with the Hall bar experiments, where first the bottom quantum well is being populated, followed by the population of the upper quantum well, which initially does not contribute to transport.
To further investigate the nature of the quantum dots in this bilayer system we map the charge stability diagram as a function of the gate voltages applied on \(B_{N}\) and \(P\) (Fig. 2). A distinct honeycomb pattern emerges, indicating the presence of a double quantum dot system [27]. Unlike a typical double-quantum dot charge stability diagram [27] where each quantum dot predominantly responds to its own dedicated plunger gate, all transition lines in this diagram have a similar slope, indicating that both quantum dots have similar capacitance response to the gates. This observation is consistent with a vertically stacked quantum dot system, where both quantum dots are expected to have a similar capacitance to the surrounding gates. We further map the charge stability diagram as a function of \(B_{i}\) (\(i=W,S,E\)) and \(P\) for the remaining barrier gates (see Appendix Fig.2) and similarly, the transition lines indicate a comparable capacitive coupling of the surrounding gates to the two quantum dots labelled as \(D_{bot}\) and \(D_{top}\). The absence of transition lines strongly coupled to the gates \(B_{i}\) confirms that both quantum dots are centrally located under the plunger gate \(P\), rather than originating from spurious quantum dots positioned under one of the \(B_{i}\) gates.
To distinguish the inter-dot transitions from the reservoir transitions in the charge stability diagram of Fig. 2a, we measure the differential conductance as a function of \(\,V_{\mathrm{SD}}\) and \(P\). The bottom panel of Fig. 2b shows a set of Coulomb diamonds taken in the regime where \(B_{N}=-0.15\,\mathrm{V}\) (magenta line-cut shown in the top panel). The transition lines which fall on this line-cut correspond to \(\,V_{\mathrm{SD}}\)=0 in the lower panel, and in both panels are indicated by the four coloured arrows. The transition lines indicated by the blue and orange arrows correspond to the edges of the Coulomb diamonds, and thus to dot-reservoir transitions. In contrast, the transition line indicated by the green arrow lies in the middle of a Coulomb diamond, and a conductance peak is barely visible. This is the expected behaviour for an inter-dot transition, where transport is via co-tunneling processes and which results in a weak conductance signal only when the two quantum dots are in resonance. Using these four transition lines as a starting point, we assign to each line in the charge stability diagrams the respective transition
Figure 2: **Charge stability diagram of the vertical double quantum dot.****a** By sweeping the gates \(P\) vs \(B_{N}\), a honeycomb pattern emerges as the gates induce transport resonances. The dashed coloured lines overlain on the data correspond to the electrochemical simulation of the double-quantum dot system. **b** A zoom-in of the data presented in panel a with coloured arrows corresponding to different transitions (the scale bar remains the same as in **a**). **c** Bias spectroscopy across a line-cut of panel b top (magenta) at \(B_{N}=-0.15\mathrm{V}\), the coloured arrows correspond to the transitions highlighted in the top panel. Measurement of the orbital energy for the first Coulomb diamond is indicated by the white lines and is extracted to be approximately \(260\,\mathrm{\SIUnitSymbolMicro eV}\).
type.
From the slopes of the reservoir transitions in each charge stability diagram we extract the capacitive couplings for the two quantum dots to the barrier gates \(B_{i}\) relative to the plunger gate \(P\) defined as \(\alpha_{B_{i},D}/\alpha_{P,D}\), where \(\alpha_{B_{i},D}\) is the lever arm of the barrier gate \(B_{i}\) to quantum dot \(D\). Fig. 3a shows a box-plot of the relative coupling for the two quantum dots for each barrier gate \(B_{i}\), with the colours matching the two sets of reservoir transitions identified in the charge stability diagrams of Fig. 2. We observe that the coupling of the two quantum dots to all barrier gates is lower than their coupling to the plunger gate. This is consistent with both quantum dots being located under the plunger gate. Furthermore the relative coupling of quantum dot \(D_{bot}\) is larger than the coupling of \(D_{top}\) for all barrier gates which is evidence of the two quantum dots being vertically stacked under the plunger gate. If the two quantum dots were not vertically stacked and instead lay in the same plane, one quantum dot would exhibit a larger relative coupling to one or two adjacent barrier gates, while the other quantum dot would show a larger relative coupling to the remaining barrier gates.
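As an illustration of how such relative lever arms can be extracted, the minimal sketch below assumes the standard relation that along a dot-reservoir transition line the dot's electrochemical potential is constant, so \(\alpha_{P,D}\,dV_{P}+\alpha_{B_{i},D}\,dV_{B_{i}}=0\) and the relative lever arm equals the magnitude of the transition-line slope. The data points and gate names are hypothetical and only meant to show the fit.

```
import numpy as np

# Hypothetical (V_Bi, V_P) points sampled along one dot-reservoir transition line (in volts).
v_b = np.array([-0.20, -0.18, -0.16, -0.14, -0.12])
v_p = np.array([-1.250, -1.256, -1.262, -1.268, -1.274])

# Along a transition line the dot's electrochemical potential is constant:
# alpha_P * dV_P + alpha_Bi * dV_Bi = 0, so the slope dV_P/dV_Bi gives the relative lever arm.
slope, intercept = np.polyfit(v_b, v_p, 1)
relative_lever_arm = abs(slope)  # alpha_{Bi,D} / alpha_{P,D}
print(f"alpha_Bi/alpha_P ~ {relative_lever_arm:.2f}")
```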
We further support this interpretation by estimating the position of both quantum dots using an electrostatic finite element method (FEM) simulation in Ansys Q3D[28], in which the heterostructure, gate-layers and insulating layers are included (details can be found in Appendix E). The quantum dots are simulated individually in the different layers, to reflect their possible locations (Fig.3d). Each quantum dot is modelled as a metallic disk as thick as the quantum well it is located in. The radius and position of this simulated metallic quantum dot is varied to analyze its effect on the capacitance. The geometric capacitance between this quantum dot and the gates is determined, and assumed to be directly proportional to the lever arm \(\alpha_{G,D}\). By comparing the simulated capacitance with the extracted relative coupling \(\alpha_{B_{i},D}/\alpha_{P,D}\), the position of a single quantum dot within either Ge layer is triangulated. The positions of the quantum dots best matching the experimental data are relatively close to each other, under the plunger gate, and both positioned towards \(B_{S}\) (Fig. 3d) as indicated by the cross. The centre-centre distance between the quantum dots has an upper bound of \(30\,\mathrm{nm}\) with \(1\sigma\) standard deviation, independent of which layer the simulation is
Figure 3: **Measured and simulated relative lever arms of the surrounding gates to the quantum dots, and the triangulation of their centre points.****a** Boxplot of the relative lever arm (\(\alpha_{B_{i},D_{i}}/\alpha_{P,D_{i}}\)) of each barrier gate \(B_{i}\) to the plunger gate \(P\) for the top and bottom quantum dot (\(D_{i}\)). These couplings are extracted from the slopes of different reservoir transition in the charge stability diagrams as a function of \(B_{i}\) and \(P\) (see Appendix B). **b** Simulated absolute lever arm for the plunger gate \(P\) and barrier gate B to the top and bottom quantum dot calculated from the Schrödinger-Poisson simulation described in Appendix D.**c** Simulated relative lever arm \(\alpha_{B_{i},D_{i}}/\alpha_{P,D_{i}}\) for the first 20 orbital states, plotted against the orbital energy (\(E_{orb}\)). Data are extracted from the Schrödinger-Poisson simulation described in Appendix D. **d** Triangulation of the position of the quantum dots based on the coupling in **a** and the capacitive simulation (Appendix E). The cross indicates the centre point of each quantum dot, and the coloured area represents the \(1\sigma\) standard deviation of this value. In the ‘Bilayer’ panel the quantum dots are simulated in separate wells, with the orange quantum dot and blue quantum dot placed in the top and bottom well respectively. In the ‘Top Well’ and ‘Bottom Well’ panel the quantum dots are simulated in the same layer.
performed on, as indicated by the coloured area. The standard deviation is based on the spread of the relative couplings extracted from each charge-stability diagram. The prohibitively close proximity of the centres of these two quantum dots suggests that the two quantum dots are located in two different wells.
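A rough sketch of the triangulation idea is given below: trial dot positions are scored by how well a simulated set of relative couplings reproduces the measured ones. The gate layout, the measured values, and the inverse-distance capacitance model are all illustrative stand-ins for the actual device geometry and the Ansys Q3D output, not the simulation used in this work.

```
import numpy as np

# Measured relative lever arms alpha_{Bi,D}/alpha_{P,D} for one dot (hypothetical values).
measured = {"BN": 0.28, "BS": 0.35, "BE": 0.30, "BW": 0.27}
# Approximate gate centre positions in the device plane (nm, hypothetical layout).
gates = {"BN": (0, 120), "BS": (0, -120), "BE": (120, 0), "BW": (-120, 0)}

def simulated_relative_coupling(dot_xy, gate_pos, depth=60.0):
    # Toy stand-in for the FEM (Ansys Q3D) output: inverse-distance model of the
    # dot-to-gate capacitance, normalised by the dot-to-plunger capacitance
    # (the plunger is taken to sit directly above the origin).
    dx, dy = gate_pos[0] - dot_xy[0], gate_pos[1] - dot_xy[1]
    c_gate = 1.0 / np.sqrt(dx**2 + dy**2 + depth**2)
    c_plunger = 1.0 / np.sqrt(dot_xy[0]**2 + dot_xy[1]**2 + depth**2)
    return c_gate / c_plunger

# Grid search for the dot position that best reproduces the measured ratios.
best, best_err = None, np.inf
for x in np.linspace(-80, 80, 81):
    for y in np.linspace(-80, 80, 81):
        err = sum((simulated_relative_coupling((x, y), gates[g]) - measured[g]) ** 2
                  for g in measured)
        if err < best_err:
            best, best_err = (x, y), err
print("estimated dot centre (nm):", best)
```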
Furthermore, from the orbital energy \(E_{orb}=260\,\mathrm{\SIUnitSymbolMicro eV}\) extracted in Fig. 2b (white lines) for the quantum dot corresponding to the blue transitions, we estimate the dot size. Assuming a harmonic in-plane potential \(V_{xy}=\frac{1}{2}m^{*}\omega^{2}(x^{2}+y^{2})\), where \(\hbar\omega=E_{orb}\) and \(m^{*}=0.055m_{e}\), this gives a quantum dot diameter of about \(d=\sqrt{\hbar/(\omega m^{*})}=137\,\mathrm{nm}\), comparable to the plunger gate size of \(150\,\mathrm{nm}\). Based on the size approximation of this quantum dot and their mutual proximity, we conclude that the quantum dots cannot coexist in a single layer without coalescing. Overall, the interpretation of the measured relative capacitive couplings, along with the results from the FEM simulation and the estimates of the quantum dot size from the Coulomb diamonds, provides strong arguments for the quantum dots being vertically stacked under the plunger gate.
To gain further insight into this vertical double quantum dot system we perform a 2D Schrödinger-Poisson simulation and present the results in Fig. 3b,c. Fig. 3c shows the relative capacitive coupling of the quantum dots formed in the top quantum well (\(D_{top}\)) and bottom quantum well (\(D_{bot}\)) for different orbitals as a function of orbital energy. These relative couplings are calculated from the absolute capacitive couplings of \(D_{bot}\) and \(D_{top}\) to the barrier gate \(B\) and the plunger gate \(P\) shown in Fig. 3b. While the absolute coupling of the plunger gate to the top and bottom quantum dot (\(\alpha_{P-top}\) and \(\alpha_{P-bot}\)) remains approximately constant with increasing orbital number, the barrier gate lever arm to both quantum dots varies significantly with the orbital number. This is because an increasing orbital number corresponds to an increase of the wavefunction radius. As a result, the distance to the barrier gate gets smaller and the coupling to the barrier gate increases. Panel b shows that a relatively larger lever arm of the barrier gates is expected when the quantum dot is located in the bottom quantum well. We find this for all the simulated orbitals, suggesting that in Fig. 3a \(D_{top}\) corresponds to a quantum dot located in the upper quantum well and \(D_{bot}\) to a quantum dot located in the bottom quantum well.
To confirm the position of the quantum dots suggested by the Schrödinger-Poisson simulation, we look at the inter-dot transitions in the charge stability diagrams of Appendix Fig. 2. In all diagrams the inter-dot line strongly couples to the plunger gate, with a hole being transported from \(D_{bot}\) (blue transition lines) to \(D_{top}\) (orange transition lines) at more negative voltages. This provides evidence that \(D_{top}\) is localized in the top well, with a hole being attracted from the bottom quantum dot towards the plunger gate on top.
## II Discussion and outlook
In this work we demonstrate that a vertical double quantum dot can be formed and controlled in a double quantum well heterostructure. A single gate can be used to simultaneously populate quantum dots in two quantum wells, whilst the charge occupation can be tuned using one of the surrounding gates. This provides prospects for quantum dot arrays in multiple quantum wells. Integration of charge sensors may allow tuning to the single-hole regime. The separation of the quantum wells may be used as a coarse parameter to tune the interlayer coupling between the dots, while the observation of different lever arms corresponding to the wells suggests that gate voltages may be used for further tuning. Establishing quantum dot arrays beyond planar arrays may provide new means for quantum computation and simulation with quantum dots.
## III Acknowledgements
We acknowledge useful discussions with members of the Veldhorst, Scappucci and Vandersypen groups. We thank S. G. J. Philips and S. De Snoo for the development of the software stack used to acquire data in this experiment.
## IV Data availability
The raw data and analysis supporting the findings of this study are openly available in a Zenodo repository: [https://zenodo.org/record/7962368](https://zenodo.org/record/7962368).
## V Funding
We acknowledge support through an NWO ENW grant and an ERC Starting Grant.
## VI Competing interests
We declare there are no competing interests.
## VII Methods
The device is fabricated on a Si\({}_{x}\)Ge\({}_{1\,-\,x}\)/Ge/Si\({}_{x}\)Ge\({}_{1\,-\,x}\)/Ge/Si\({}_{x}\)Ge\({}_{1\,-\,x}\) heterostructure, where \(x=0.2\), grown by reduced pressure chemical vapour deposition. The virtual substrate upon which the heterostructure is grown consists of a silicon substrate, a \(1.6\,\mathrm{\SIUnitSymbolMicro m}\) relaxed Ge layer, and a \(1\,\mathrm{\SIUnitSymbolMicro m}\) graded Si\({}_{x}\)Ge\({}_{1\,-\,x}\) layer with a final composition of \(x=0.2\). On top of the SiGe virtual substrate, the
bilayer system comprises a 16 nm thick bottom Ge quantum well, a 4 nm thick SiGe barrier, a 10 nm thick top Ge quantum well, and a final 55 nm thick SiGe spacer. At the top of the stack a sacrificial Si cap is grown to provide a native SiO\({}_{x}\) oxide layer. We define ohmic contacts using electron beam lithography and a buffered oxide etch of the silicon cap layer. We then evaporate a 30 nm platinum layer and, using a 10 minute rapid thermal anneal at 400\({}^{\circ}\)C, contact the quantum well. The ohmic layer is isolated using a 7 nm layer of Al\({}_{2}\)O\({}_{3}\) grown by atomic layer deposition. Electrostatic gates used to define the quantum dots are defined in two layers (3/17 nm and 3/37 nm of Ti/Pd) separated by a 5 nm layer of Al\({}_{2}\)O\({}_{3}\).
Devices are screened at 4 K using standard dipstick measurements. Experiments reported in this paper are carried out in a Bluefors LD400 dilution refrigerator with a base temperature of 10 mK. The electrical properties are investigated through two terminal AC and DC measurements. There is a tunable DC voltage component \(V_{\mathrm{SD}}\) used for biasing the device, and an oscillating AC voltage is applied when taking measurements with a lock-in amplifier. The differential conductance \(dI/dV_{\mathrm{SD}}\) is measured using standard lock-in techniques with a typical frequency of 70 Hz and an amplitude of \(17\,\mathrm{\SIUnitSymbolMicro V}\).
|
2306.05439 | Contrastive Representation Disentanglement for Clustering | Clustering continues to be a significant and challenging task. Recent studies
have demonstrated impressive results by applying clustering to feature
representations acquired through self-supervised learning, particularly on
small datasets. However, when dealing with datasets containing a large number
of clusters, such as ImageNet, current methods struggle to achieve satisfactory
clustering performance. In this paper, we introduce a novel method called
Contrastive representation Disentanglement for Clustering (CDC) that leverages
contrastive learning to directly disentangle the feature representation for
clustering. In CDC, we decompose the representation into two distinct
components: one component encodes categorical information under an
equipartition constraint, and the other component captures instance-specific
factors. To train our model, we propose a contrastive loss that effectively
utilizes both components of the representation. We conduct a theoretical
analysis of the proposed loss and highlight how it assigns different weights to
negative samples during the process of disentangling the feature
representation. Further analysis of the gradients reveals that larger weights
emphasize a stronger focus on hard negative samples. As a result, the proposed
loss exhibits strong expressiveness, enabling efficient disentanglement of
categorical information. Through experimental evaluation on various benchmark
datasets, our method demonstrates either state-of-the-art or highly competitive
clustering performance. Notably, on the complete ImageNet dataset, we achieve
an accuracy of 53.4%, surpassing existing methods by a substantial margin of
+10.2%. | Fei Ding, Dan Zhang, Yin Yang, Venkat Krovi, Feng Luo | 2023-06-08T07:15:13Z | http://arxiv.org/abs/2306.05439v2 | # Contrastive Representation Disentanglement for Clustering
###### Abstract
Clustering continues to be a significant and challenging task. Recent studies have demonstrated impressive results by applying clustering to feature representations acquired through self-supervised learning, particularly on small datasets. However, when dealing with datasets containing a large number of clusters, such as ImageNet, current methods struggle to achieve satisfactory clustering performance. In this paper, we introduce a novel method called Contrastive representation Disentanglement for Clustering (CDC) that leverages contrastive learning to directly disentangle the feature representation for clustering. In CDC, we decompose the representation into two distinct components: one component encodes categorical information under an equipartition constraint, and the other component captures instance-specific factors. To train our model, we propose a contrastive loss that effectively utilizes both components of the representation. We conduct a theoretical analysis of the proposed loss and highlight how it assigns different weights to negative samples during the process of disentangling the feature representation. Further analysis of the gradients reveals that larger weights emphasize a stronger focus on hard negative samples. As a result, the proposed loss exhibits strong expressiveness, enabling efficient disentanglement of categorical information. Through experimental evaluation on various benchmark datasets, our method demonstrates either state-of-the-art or highly competitive clustering performance. Notably, on the complete ImageNet dataset, we achieve an accuracy of 53.4%, surpassing existing methods by a substantial margin of +10.2%.
Clustering, Neural nets, Representation disentanglement, Representation learning, Computer vision, Machine learning
## 1 Introduction
Image clustering has been widely used in many computer vision tasks, such as image segmentation [1] and visual feature learning [2, 3]. Since images are usually semantically rich and high-dimensional, it is difficult to achieve good clustering performance on large-scale datasets. Earlier clustering studies [1, 4, 5, 6] focus on end-to-end training solutions. For example, Information Maximizing Self-Augmented Training (IMSAT) [6] and Invariant Information Clustering (IIC) [1] develop clustering methods from a mutual information maximization perspective, and Deep Embedded Clustering (DEC) [4] and Deep Clustering Network (DCN) [5] perform clustering on initial features obtained from autoencoders. Since these methods rely on network initialization and are likely to focus on low-level non-semantic features [7], such as color and texture, they are prone to degenerate clustering solutions.
Recently developed clustering methods usually consist of two key steps: representation learning and cluster assignment. Representation learning aims to learn semantically meaningful features, _i.e._, samples from the same category are projected to similar features so that all samples are linearly separable. One popular approach is self-supervised contrastive learning [8, 9, 10, 11], which greatly improves the learned representation. To obtain categorical information without labels, an additional step such as K-means clustering [2, 4, 12] or training of a classifier [7] is required for cluster assignment. K-means clustering [2, 4, 12] is widely used for clustering on learned features. It requires the proper selection of distance measures, thus suffering from the uneven assignment of clusters and leading to degenerate solutions [12]. Semantic Clustering by Adopting Nearest Neighbors (SCAN) [7] proposes a novel objective function to train a classifier instead of using K-means. Its performance relies heavily on the feature quality, _i.e._, on whether the nearest neighbors of each sample in the feature space belong to the same category. Due to the presence of noisy nearest neighbors, there is still room for improvement in clustering performance on large-scale datasets.
In this paper, we propose Contrastive representation Disentanglement based Clustering (CDC), a novel clustering method that directly disentangles the categorical information from the feature representation via contrastive learning. Specifically, we formulate contrastive learning as a proxy task to disentangle the feature representation, which enables us to take advantage of the powerful contrastive learning frameworks. Figure 1 shows an illustration of CDC. First, each representation is decomposed into two parts: \(\mathbf{z}^{c}\) and \(\mathbf{z}^{n}\), where \(\mathbf{z}^{c}\) represents categorical information (logits) and \(\mathbf{z}^{n}\) is used for capturing instance-wise factors. Then, \(\mathbf{z}^{c}\) and \(\mathbf{z}^{n}\) are concatenated together for the training of typical contrastive learning. To avoid the collapse of \(\mathbf{z}^{c}\), we apply the equipartition constraint on it to ensure that the clusters are evenly assigned. We demonstrate that this constraint plays a crucial role in disentangling the representation into two distinct parts and ensuring that \(\mathbf{z}^{c}\) effectively encodes categorical information. By considering \(\mathbf{z}^{c}\) as part of the
representation, which can handle a large number of clusters well, we achieve efficient disentanglement for cluster assignment through contrastive learning.
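A minimal sketch of a projection head producing this decomposed representation is shown below; the backbone feature dimension, hidden width, and output sizes are illustrative assumptions rather than the exact architecture used in our experiments.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDCHead(nn.Module):
    # Sketch of a projection head whose output is split into the cluster logits z^c
    # (first K dimensions) and the instance-specific part z^n (remaining C dimensions).
    def __init__(self, in_dim=2048, hidden_dim=2048, num_clusters=100, inst_dim=256):
        super().__init__()
        self.num_clusters = num_clusters
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_clusters + inst_dim),
        )

    def forward(self, features):
        out = self.mlp(features)
        zc = F.normalize(out[:, :self.num_clusters], dim=1)  # categorical part
        zn = F.normalize(out[:, self.num_clusters:], dim=1)  # instance-wise part
        return zc, zn

# usage: backbone features -> (z^c, z^n), then concatenate for the contrastive loss
head = CDCHead()
zc, zn = head(torch.randn(8, 2048))
z = torch.cat([zc, zn], dim=1)
```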
Our method can be interpreted as instance-wise contrastive learning with self-adjusting weights, which learns to set different weights to distinguish different negative samples. Typical contrastive learning methods aim to force positive pairs to achieve high similarity (dot product) and negative pairs to achieve low similarity on \(\mathbf{z}^{n}\). Our method adjusts the order of magnitude corresponding to each dimension of \(\mathbf{z}^{c}\) to distinguish between intra-class and inter-class samples. For example, positive pairs yield high similarity scores because they are from the same instance, and negative pairs from different categories yield low similarity scores due to different semantics. Note that negative pairs from the same category yield moderate similarity scores during the instance-wise setting. We demonstrate that the similarity of \(\mathbf{z}^{c}\) provides a mechanism to adjust different weights for negative samples and improves the uniformity property of negative samples in the representation space, therefore beneficial for representation learning. Our contributions can be summarized as follows:
* We propose CDC, a contrastive representation disentanglement-based clustering method that separates categorical information from the feature representation. It considers contrastive learning as a proxy task to efficiently learn disentangled representation.
* We apply the equipartition constraint on part of the representation to enforce representation disentanglement. The proposed objective function plays a key role in disentangling the representation into two parts: categorical and instance-wise information.
* We provide a theoretical analysis to demonstrate that CDC adjusts different weights for negative samples through learning cluster assignments. With a gradient analysis, we show that the larger weights tend to concentrate more on hard negative samples.
* The clustering experiments show that CDC outperforms existing methods in multiple benchmarks, and in particular, achieves significant improvements on ImageNet. CDC also contributes to better representation learning results.
## 2 Method
Our goal is to disentangle the feature representation for clustering via contrastive learning. There are several representation learning methods [2, 3, 12, 13] that jointly learn clustering and feature representation. For example, SwAV [3] proposes an online code assignment to learn feature representations by comparing prototypes corresponding to multiple views. These methods either require representation learning and clustering to be performed alternately or require learning additional prototypes, which may not be efficient enough for cluster assignment without labels. It is still necessary to propose a novel objective function to directly obtain cluster assignments. Our method learns cluster assignment via self-adjusting contrastive representation disentanglement and improves clustering performance on large-scale datasets. The proposed disentanglement loss can be combined with many contrastive learning methods for cluster assignment. Our method demonstrates that contrastive learning can not only achieve remarkable performance in representation learning, but also high efficiency for representation disentanglement.
### _Weighted Instance-wise Contrastive Learning_
Given a training set \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\}\), contrastive learning aims to map \(\mathbf{X}\) to \(\mathbf{Z}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{N}\}\) with \(\mathbf{z}_{i}=h(f(\mathbf{x}_{i}))\), such that \(\mathbf{z}_{i}\) can represent \(\mathbf{x}_{i}\) in representation space. \(f(\cdot)\) denotes a feature encoder backbone and the projection head \(h(\cdot)\) is usually a multi-layer perceptron. The objective function of contrastive learning, such as InfoNCE [10, 11, 14], can be formulated as:
\[\mathcal{L}_{\mathrm{InfoNCE}}\left(\mathbf{x}_{i}\right) \tag{1}\]
\[=-\log\frac{\exp\left(s_{i,i}/\tau\right)}{\sum_{k}\exp\left(s_{i,k}/\tau\right)+\exp\left(s_{i,i}/\tau\right)} \tag{2}\]
\[=-\log\frac{\exp(s_{i,i}^{n}/\tau)}{\sum_{k}\left(\exp((s_{i,k}^{c}-s_{i,i}^{c})/\tau)\cdot\exp(s_{i,k}^{n}/\tau)\right)+\exp(s_{i,i}^{n}/\tau)}, \tag{3}\]
where \(\tau\) is a temperature hyper-parameter, the positive similarity \(s_{i,i}\) is calculated by two augmented versions of the same image, and the negative similarity \(s_{i,j}(j\neq i)\) compares different images. In our settings, \(\mathbf{z}_{i}\) consists of \(\mathbf{z}_{i}^{c}\) and \(\mathbf{z}_{i}^{n}\). The similarity \(s_{i,j}\) can be written as: \(s_{i,j}=\mathbf{z}_{i}\cdot\mathbf{z}_{j}=\mathbf{z}_{i}^{c}\cdot\mathbf{z}_{j}^{c}+\mathbf{z}_{i}^{n}\cdot\mathbf{z}_{j}^{n}\). Let \(s_{i,j}^{c}=\mathbf{z}_{i}^{c}\cdot\mathbf{z}_{j}^{c}\) and \(s_{i,j}^{n}=\mathbf{z}_{i}^{n}\cdot\mathbf{z}_{j}^{n}\); then \(s_{i,j}=s_{i,j}^{c}+s_{i,j}^{n}\) and \(\exp(s_{i,j}/\tau)=\exp(s_{i,j}^{c}/\tau)\cdot\exp(s_{i,j}^{n}/\tau)\), so we can re-write the standard contrastive loss (equation 2) as equation 3.
Note that \(s_{i,i}^{n}\) can be considered as instance-wise positive similarity, \(s_{i,k}^{n}\) can be considered as instance-wise negative similarity, and \(\exp((s_{i,k}^{c}-s_{i,i}^{c})/\tau)\) is a learnable coefficient for each negative sample. Thus, we obtain a more expressive contrastive loss. Considering that \(\tau\) is a hyperparameter, the value of this coefficient is mainly determined by \(s_{i,k}^{c}\) and \(s_{i,i}^{c}\), which are further calculated by \(\mathbf{z}_{i}^{c}\) and \(\mathbf{z}_{k}^{c}\). This coefficient learns to set different weights for each negative sample, _i.e._, larger weight on hard negative (intra-class) samples and smaller weight on inter-class negative samples. In this way, the part of representation \(\mathbf{z}^{c}\) plays a role in adjusting different penalties on different negative samples. In contrast, the typical contrastive loss sets all negative samples to the same coefficient with a value of 1.
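The sketch below illustrates this view: the loss is computed as a standard InfoNCE on the concatenated representation, and the implied per-negative coefficients \(\exp((s_{i,k}^{c}-s_{i,i}^{c})/\tau)\) are exposed for inspection. Function and variable names are illustrative, and only in-batch negatives are used for simplicity.

```
import torch
import torch.nn.functional as F

def decomposed_infonce(zc_q, zn_q, zc_k, zn_k, tau=0.15):
    # InfoNCE on the concatenated representation z = [z^c; z^n] (equations 1-2),
    # written so that the per-negative coefficients of equation 3,
    # lambda^c_{i,k} = exp((s^c_{i,k} - s^c_{i,i}) / tau), can be inspected.
    # Inputs are assumed L2-normalised per part, shape (N, dim).
    s_c = zc_q @ zc_k.t()   # categorical similarities s^c_{i,j}
    s_n = zn_q @ zn_k.t()   # instance similarities  s^n_{i,j}
    logits = (s_c + s_n) / tau
    labels = torch.arange(zc_q.size(0), device=zc_q.device)
    loss = F.cross_entropy(logits, labels)  # positives on the diagonal
    lam = torch.exp((s_c - s_c.diag().unsqueeze(1)) / tau)  # implied negative weights
    return loss, lam

# toy usage: N=8 samples, K=10 cluster logits, C=128 instance dimensions
zc_q, zc_k = [F.normalize(torch.randn(8, 10), dim=1) for _ in range(2)]
zn_q, zn_k = [F.normalize(torch.randn(8, 128), dim=1) for _ in range(2)]
loss, lam = decomposed_infonce(zc_q, zn_q, zc_k, zn_k)
```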
Since \(s_{i,j}^{c}\) and \(s_{i,j}^{n}\) are symmetric in equation 2, the standard contrastive loss can also be written as:
\[\mathcal{L}_{\mathrm{InfoNCE}}\left(\mathbf{x}_{i}\right)=-\log\frac{\exp\left(s_{i,i}/\tau\right)}{\sum_{k}\exp\left(s_{i,k}/\tau\right)+\exp\left(s_{i,i}/\tau\right)} \tag{4}\]
\[=-\log\frac{\exp(s_{i,i}^{c}/\tau)}{\sum_{k}\left(\exp((s_{i,k}^{n}-s_{i,i}^{n})/\tau)\cdot\exp(s_{i,k}^{c}/\tau)\right)+\exp(s_{i,i}^{c}/\tau)} \tag{5}\]
Then we observe that the coefficient \(\exp((s_{i,k}^{n}-s_{i,i}^{n})/\tau)\) can be considered as a constant because \(\mathbf{z}^{n}\) satisfies the properties of alignment and uniformity, as analyzed in Section 4.3. Thus, equation 5 can be considered as a standard contrastive loss on \(\mathbf{z}^{c}\), and \(\mathbf{z}^{c}\) can retain the categorical information well (shown in Figure 4(a)), which is beneficial for learning cluster assignment via contrastive learning.
To make the above coefficient work as expected, we need to add some constraints to \(\mathbf{z}^{c}\). First, the constraint should ensure that the value range of coefficients is bounded. This requirement can be satisfied by performing normalization on \(\mathbf{z}^{c}\) and \(\mathbf{z}^{n}\) separately. More importantly, the constraint should satisfy that \(\mathbf{z}^{c}\) does not collapse to the same assignment, otherwise our method will degrade to typical contrastive learning. Here, we introduce the equipartition constraint on \(\mathbf{z}^{c}\) which encourages it to encode the semantic structural information, while also avoiding its collapse problem. \(\mathbf{z}^{c}\) is expected to represent the probability over clusters \(\mathcal{C}=\{1,\dots,K\}\) after softmax function.
### _Equipartition constraint for disentanglement_
Given the logits \(\mathbf{z}^{c}\), we can obtain the categorical probabilities via the softmax function: \(\mathbf{p}\left(y\mid\mathbf{x}_{i}\right)=\mathrm{softmax}\left(\mathbf{z}_{ i}^{c}/t\right)\), where \(t\) is the temperature to rescale the logits score. To avoid degeneracy, we add the equipartition constraint on cluster assignment to enforce that the clusters are evenly assigned in a batch. The pseudo-assignment \(\mathbf{q}\left(y\mid\mathbf{x}_{i}\right)\in\{0,1\}\) is used to describe the even assignment of \(\mathbf{z}^{c}\). We denote \(B\) logits in a batch by \(\mathbf{Z}^{c}=[\mathbf{z}_{1}^{c}/t,\dots,\mathbf{z}_{B}^{c}/t]\), and the pseudo-assignment by \(\mathbf{Q}=[\mathbf{q}_{1},\dots,\mathbf{q}_{B}]\).
The equipartition constraint has been used in previous self-supervised learning studies [3, 13] for representation learning. Asano _et al._[13] propose to solve the matrix \(\mathbf{Q}\) by restricting the transportation polytope on the entire training dataset. SwAV [3] improves the above solution to calculate the online prototypes for contrastive learning, and achieves excellent representation learning results. Unlike SwAV, which uses the similarity of features and prototypes as input to obtain pseudo-assignment, whose assignment can be interpreted as the probability of assigning each feature to a prototype, we consider the representation \(\mathbf{z}^{c}\) as logits to directly obtain assignments without any prototypes. Here, we propose to adopt a similar solution to optimize \(\mathbf{Q}\) directly from the logits matrix \(\mathbf{Z}^{c}\),
\[\max_{\mathbf{Q}\in\mathcal{Q}}\mathrm{Tr}\left(\mathbf{Q}^{\top}\mathbf{Z}^{ c}\right)+\varepsilon H(\mathbf{Q}), \tag{6}\]
where \(\mathcal{Q}=\left\{\mathbf{Q}\in\mathbb{R}_{+}^{K\times B}\mid\mathbf{Q}\mathbf{1 }_{B}=\frac{1}{K}\mathbf{1}_{K},\mathbf{Q}^{\top}\mathbf{1}_{K}=\frac{1}{B} \mathbf{1}_{B}\right\}\) and H denotes the entropy regularization. \(\mathbf{1}_{B}\) and \(\mathbf{1}_{K}\) denote the vector of ones in dimension B and K. These constraints ensure that all samples in each batch are evenly divided into K clusters. We also set \(\varepsilon\) to be small to avoid a trivial solution [3].
The solution of equation 6 can be written as: \(\mathbf{Q}^{*}=\mathrm{Diag}(\mathbf{u})\exp\left(\frac{\mathbf{Z}^{c}}{ \varepsilon}\right)\mathrm{Diag}(\mathbf{v})\), where \(\mathbf{u}\) and \(\mathbf{v}\) are two scaling vectors such that \(\mathbf{Q}\) is a probability matrix [13, 15]. The vectors \(\mathbf{u}\) and \(\mathbf{v}\) can be computed using Sinkhorn-Knopp algorithm [15] through several iterations. In practice, by using GPU, 3 iterations are fast enough and can ensure satisfactory results [3]. Once we obtain the solution \(\mathbf{Q}^{*}\), we directly apply its soft assignment to constrain \(\mathbf{z}^{c}\) by minimizing the following cross-entropy loss for disentanglement:
\[\mathcal{L}_{\mathrm{CE}}\left(\mathbf{z}^{c},\mathbf{q}\right)=-\sum_{k} \mathbf{q}^{(k)}\log\mathbf{p}^{(k)}. \tag{7}\]
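A sketch of this step is given below, written in the spirit of the SwAV-style Sinkhorn-Knopp iteration that our implementation follows; the batch size, number of clusters, and temperature in the usage example are placeholders.

```
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn_knopp(zc, eps=0.05, n_iters=3):
    # Approximate solution of equation 6, Q* = Diag(u) exp(Z^c/eps) Diag(v), with
    # rows (clusters) carrying mass 1/K and columns (samples) carrying mass 1/B.
    # zc has shape (B, K); the returned soft assignments have shape (B, K).
    Q = torch.exp(zc / eps).t()          # (K, B)
    Q /= Q.sum()                         # total mass 1
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)  # normalise rows
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)  # normalise columns
        Q /= B
    return (Q * B).t()                   # (B, K), each row sums to 1

# toy usage: pseudo-assignments q supervise the softmax probabilities p of z^c (eq. 7)
zc = F.normalize(torch.randn(256, 100), dim=1)   # B=256 normalised logits, K=100 clusters
q = sinkhorn_knopp(zc)
p = torch.softmax(zc / 0.2, dim=1)
ce_loss = -(q * torch.log(p)).sum(dim=1).mean()
```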
### _Gradients Analysis_
Here, we perform a gradient analysis to understand the properties of the proposed contrastive loss. Because the equipartition constraint is not related to the negative similarity \(s_{i,j}^{n}(j\neq i)\), for convenience, our analysis focuses on the negative gradients. Considering that the magnitude of the positive gradient \(\frac{\partial\mathcal{L}(\mathbf{x}_{i})}{\partial s_{i,i}^{n}}\) is equal to the sum of all negative gradients, we can also indirectly understand the property of the positive gradient through the negative gradients. The
gradient with respect to the negative similarity \(s^{n}_{i,j}(j\neq i)\) is formulated as:
\[\frac{\partial\mathcal{L}\left(\mathbf{x}_{i}\right)}{\partial s^{n}_{i,j}}= \frac{\lambda^{c}_{i,j}}{\tau}\cdot\frac{\exp(s^{n}_{i,j}/\tau)}{\sum_{k\neq i }(\lambda^{c}_{i,k}\cdot\exp(s^{n}_{i,k}/\tau))+\exp(s^{n}_{i,i}/\tau)}, \tag{8}\]
where \(\lambda^{c}_{i,j}=\exp((s^{c}_{i,j}-s^{c}_{i,i})/\tau)\). Without loss of generality, the hyperparameter \(\tau\) can be treated as a constant.
From equation 8, we observe that \(\lambda^{c}_{i,j}\) is proportional to negative gradients. A larger \(\lambda^{c}_{i,j}\) leads to the corresponding sample to receive more attention during the optimization. Since \(\lambda^{c}_{i,j}\) depends mainly on \(\mathbf{z}^{c}_{i}\) and \(\mathbf{z}^{c}_{j}\), we need to analyze them separately according to whether samples belong to the same category or not. Due to the equipartition constraint, \(\mathbf{z}^{c}\) is encouraged to encode the categorical information. Thus, the similarity \(s^{c}_{i,j}(j\neq i)\) of the same category is greater than the similarity of different categories. The intra-class \(\lambda^{c}_{i,j}\) is also greater than the inter-class \(\lambda^{c}_{i,j}\). In other words, the gradient tends to concentrate more on samples of the same category, which are often considered as hard negative samples. In this way, the categorical information of \(\mathbf{z}^{c}\) can contribute to the optimization of \(\mathbf{z}^{n}\) so that all samples tend to be uniformly distributed.
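The relation in equation 8 can be checked numerically; the toy example below compares the analytic negative gradients with automatic differentiation for a single anchor, using arbitrary similarity values.

```
import torch

tau = 0.15
# toy similarities; index 0 is the positive pair (i,i), the rest are negatives (i,k)
s_c = torch.tensor([0.9, 0.5, -0.1, 0.2])                       # s^c_{i,j}
s_n = torch.tensor([0.8, 0.1, -0.2, 0.05], requires_grad=True)  # s^n_{i,j}

lam = torch.exp((s_c - s_c[0]) / tau)       # lambda^c_{i,j} = exp((s^c_{i,j} - s^c_{i,i}) / tau)
den = (lam[1:] * torch.exp(s_n[1:] / tau)).sum() + torch.exp(s_n[0] / tau)
loss = -torch.log(torch.exp(s_n[0] / tau) / den)   # equation 3 for a single anchor
loss.backward()

with torch.no_grad():
    grad_eq8 = (lam[1:] / tau) * torch.exp(s_n[1:] / tau) / den  # equation 8
print(s_n.grad[1:])  # autograd gradients w.r.t. the negative similarities
print(grad_eq8)      # matches; larger lambda (intra-class negatives) -> larger gradient
```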
### _CDC objective_
Our overall objective, namely CDC, is defined as
\[\mathcal{L}=\mathcal{L}_{\text{InfoNCE}}+\alpha\mathcal{L}_{\text{CE}}. \tag{9}\]
In addition to the loss weight \(\alpha\), there are two temperature hyperparameters: \(\tau\) and \(t\). We observe that the choice of temperature values has a crucial impact on the clustering performance. In general, the relationship of temperatures satisfies: \(0<t\leq\tau\leq 1\). We refer to Section 4.2 for the concrete analysis.
Because samples with highly confident predictions (close to 1) can be used to obtain pseudo-labels, our method can optionally include the confidence-based cross-entropy loss [7], which can be gradually added to the overall objective, or be used for fine-tuning the pre-trained model. SCAN applies this loss to correct errors introduced by noisy nearest neighbors, while we aim to encourage the model to produce a smooth feature space, thus helping assign proper clusters to boundary samples. We only consider well-classified samples, _i.e._, \(p_{\text{max}}>\) threshold (0.99), and perform strong data augmentation on them. This encourages different augmented samples to output consistent cluster predictions through the cross-entropy loss, also known as self-labeling [7].
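A sketch of this optional self-labeling step is shown below; the exact augmentation pipeline, the rescaling of the logits by \(t\), and the function names are assumptions for illustration.

```
import torch
import torch.nn.functional as F

def self_labeling_loss(logits_weak, logits_strong, t=0.1, threshold=0.99):
    # Confidence-based cross-entropy: weakly augmented views whose maximum class
    # probability exceeds `threshold` provide pseudo-labels for strongly augmented
    # views of the same images.
    with torch.no_grad():
        probs = torch.softmax(logits_weak / t, dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > threshold           # keep only well-classified samples
    if mask.sum() == 0:
        return logits_strong.sum() * 0.0  # no confident samples in this batch
    return F.cross_entropy(logits_strong[mask] / t, pseudo[mask])
```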
Algorithm 1 provides PyTorch-like pseudo-code to describe how we compute the objective (equation 9).
```
# model: includes base_encoder, momentum_encoder and MLP heads
# T and t: temperatures for contrastive loss and cross-entropy loss
# alpha: weight of the loss term
# K: dimension of zc (number of clusters), C: dimension of zn
for x in loader:  # load a minibatch x with N samples
    x_q = aug(x)  # randomly augmented version
    x_k = aug(x)  # another randomly augmented version
    q, k = model.forward(x_q, x_k)  # compute features: N x (K+C), no gradient to k
    zc_q = normalize(q[:, :K], dim=1)  # normalize zc: N x K
    zn_q = normalize(q[:, K:], dim=1)  # normalize zn: N x C
    q = cat([zc_q, zn_q], dim=1)
    zc_k = normalize(k[:, :K], dim=1)
    zn_k = normalize(k[:, K:], dim=1)
    k = cat([zc_k, zn_k], dim=1)
    with no_grad():  # compute assignments with sinkhorn-knopp
        q_q = sinkhorn_knopp(zc_q)
        q_k = sinkhorn_knopp(zc_k)
    p_q = Softmax(zc_q / t)  # convert logits to probabilities
    p_k = Softmax(zc_k / t)
    # compute the equipartition constraint (cross-entropy loss)
    crossentropy_loss = -0.5 * mean(q_q * log(p_k) + q_k * log(p_q))
    loss = contrastive_loss(q, k, T) + alpha * crossentropy_loss
    loss.backward()  # SGD update: network and MLP heads
    update(model.params)
```
**Softmax: softmax function; cat: concatenation.**
**Algorithm 1** PyTorch-like Pseudo-code for CDC.
## 3 Experiments
In this section, we evaluate CDC on multiple benchmarks, including training models from scratch and using self-supervised pretrained models. We follow the settings in MoCo [10] and choose the same backbone network as the baseline methods, to ensure that our performance gains are from the proposed objective function. We first compare our results to the state-of-the-art clustering methods, where we find that our method is overall the best or highly competitive in many benchmarks. Then, we quantify the representation learned by the proposed contrastive loss, and the results show that it can also improve the representation quality.
### _Experimental setup_
**Datasets.** We perform the experimental evaluation on CIFAR10 [16], CIFAR100-20 [16], STL10 [17] and ImageNet [18]. Some prior works [1, 4, 19] use the full dataset for both training and evaluation. Here, we follow the experimental settings in SCAN [7], which trains and evaluates the model using train and val split respectively. This helps us to understand the generalizability of our method on unseen samples. All datasets are processed using the same augmentations in MoCo v2 [20]. We report the results with the mean and standard deviation from 5 different runs.
**Implementation details.** We apply the standard ResNet [21] backbones (ResNet-18 and ResNet-50) each with a MLP projection head. The dimensionality of \(\mathbf{z}^{c}\) is determined by the number of clusters, and the dimensionality of \(\mathbf{z}^{n}\) is set to 256 on the ImageNet and 128 on the other datasets.
On the smaller datasets (CIFAR10, CIFAR100-20 and STL10), our implementation is based on the Lightly library [22]. The parameters are trained through the SGD optimizer with a learning rate of 6e-2, a momentum of 0.9, and a weight decay of 5e-4. The loss weight is set to \(\alpha=5.0\), and the two temperature values are set to \(\tau=0.15\) and \(t=0.10\). We adopt the same data augmentation as SimCLR [11] but disable the blur, like MoCo v2 [20]. We train the models
from scratch for 1200 epochs using batches of size 512. For the linear classification in the ablation studies, we train the linear classifier via the SGD optimizer with a learning rate of 30 and the cosine scheduler for 100 epochs.
For ImageNet subsets (100 classes and 200 classes), we adopt the implementation from MoCo v2 and follow the same data augmentation settings in MoCo v2. To speed up training, we directly initialize the backbone with the released pretrained weights (800 epochs pretrained) like SCAN, and only train the MLP head. The weights are updated through SGD optimizer with a learning rate of 0.03, a momentum of 0.9, and a weight decay of 1e-4. The three hyperparameters are set to \(\alpha=1.0\), \(\tau=0.40\) and \(t=0.20\). We train the network weights for 400 epochs using a batch size of 256. Although there are advanced data augmentation and training strategies, we adopt the same settings in MoCo v2 for a fair comparison.
**Equipartition constraint.** Most of the Sinkhorn-Knopp implementation is taken directly from the SwAV work [3]. The regularization parameter is set to \(\epsilon=0.05\) and the number of Sinkhorn iterations is set to 3 for all experiments.
### _Comparison with state-of-the-art methods_
We first evaluate CDC's clustering performance on three different benchmarks. We report the results of clustering accuracy (ACC), normalized mutual information (NMI) and adjusted rand index (ARI) in Table I. CDC outperforms other clustering methods in two of the benchmarks and is on par with state-of-the-art performance in the third. Our method further reduces the gap between clustering and supervised learning on CIFAR10. On CIFAR100-20, the remaining large gap with supervised learning is due to the ambiguity caused by the superclasses (mapping 100 classes to 20 classes). On the STL10 dataset, we train the model on the train+unlabeled split from scratch due to the small size of the train split. However, the exact number of clusters in the train+unlabeled split is unknown. We choose the number of clusters equal to 10 for the evaluation on the test dataset. Nevertheless, our method still achieves competitive clustering results, which demonstrates that it is also applicable to datasets with an unknown number of clusters. Note that the results of other state-of-the-art methods are based on the pretrained representations from contrastive learning, while our clustering results are obtained by training the model from scratch in an end-to-end manner. This also confirms that \(\mathbf{z}^{c}\) efficiently encodes the categorical information. In Section 3.4, we further verify that the presence of \(\mathbf{z}^{c}\) enables us to learn better feature representations.
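For reference, the three metrics can be computed as in the sketch below, assuming the standard protocol in which ACC uses a Hungarian matching between predicted clusters and ground-truth labels; this is not the exact evaluation code used for the reported numbers.

```
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    # ACC with Hungarian matching between predicted clusters and ground-truth labels.
    n = max(y_pred.max(), y_true.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(cost.max() - cost)  # maximise matched counts
    return cost[row, col].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])      # clusters permuted relative to the labels
print(clustering_accuracy(y_true, y_pred))           # 1.0
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
print(adjusted_rand_score(y_true, y_pred))            # 1.0
```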
### _ImageNet clustering_
**ImageNet - subset.** We first test our method on ImageNet subsets of 100 and 200 classes, which is consistent with SCAN. All compared methods apply the same pre-trained weights from MoCo v2 [20]. We fix the ResNet-50 (R50) backbone and train the MLP projection head for 400 epochs with the same settings as MoCo v2. Table II shows that our method can further improve the clustering accuracy, where the clustering results of K-means and SCAN are from SCAN. For example, it has achieved a much higher accuracy of 61.4% than SCAN (56.3%) on the subset of 200 classes. This also implies that CDC is applicable to large-scale datasets with a large number of clusters.
**ImageNet - full.** We further consider the clustering evaluation on the full ImageNet dataset. We apply our method to the latest contrastive learning studies, such as MoCo v3 [33], to uncover its potential in clustering tasks. We load the pretrained R50 backbone from MoCo v3 (1000 epochs) and only train two MLP heads for 200 epochs with the same settings as MoCo V3. We use the LARS optimizer with a learning rate of 0.3, a weight decay of 1e-6 and a momentum of 0.9.
Table II compares our method against SCAN on three metrics. CDC consistently outperforms the baseline method in all metrics. In particular, it achieves significant performance improvements in terms of accuracy (53.4%) compared to SCAN (43.2%). Although previous studies [7] find that there may be multiple reasonable ways to cluster images in ImageNet based on their semantics, without a priori knowledge, it's still challenging to cluster images in ImageNet according to their true labels. But CDC still achieves promising clustering results, which demonstrates the advantages of the proposed method. Figure 2 shows the training efficiency of our method, which can converge in a small number of epochs.
### _Linear evaluation_
We follow the same settings as MoCo v2 to enable a reasonable evaluation of the benefits due to the introduction of \(\mathbf{z}^{c}\). We perform the same data augmentation and training strategy to train the model on ImageNet training data for 200 epochs from scratch. Then, we fix the R50 backbone and train a linear classifier to evaluate the learned feature encoder on three datasets: ImageNet, VOC07 [34], and Places205 [35]. Table III shows that the proposed contrastive objective achieves competitive results on these linear classification tasks. Especially, for the transfer learning on VOC07 and Places205, it demonstrates that our method achieves better generalizability to the downstream tasks than other methods. PCL v2 [2] is another method that utilizes clustering to improve representation learning. Although our main purpose of introducing \(\mathbf{z}^{c}\) is for clustering, it can also improve the quality of feature representation. This demonstrates that the expressiveness of the model is improved by the negative coefficients due to the introduction of \(\mathbf{z}^{c}\). We continue to evaluate the pretrained ResNet-50 backbone on object detection tasks in the following Section.
### _Transfer to Object Detection_
We fine-tune the whole network following the experiment settings in detectron2, which are consistent with the other methods [2, 10]. Table IV and Table V show that CDC is overall better than MoCo v2 on COCO and VOC datasets.
### _Ablation studies_
\(\mathbf{z}^{c}\) plays a crucial role in our method. We perform ablation studies on \(\mathbf{z}^{c}\) to understand the importance of each technique. The evaluation results are reported in Table VI, where the training is performed from scratch for 1200 epochs. First, we find that the lack of the equipartition constraint leads to a degenerate solution for cluster assignment, but has almost
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Dataset** & \multicolumn{3}{c}{**CIFARI10**} & \multicolumn{3}{c}{**CIFARI100-20**} & \multicolumn{3}{c}{**STL10**} \\ \cline{2-9}
**Metric** & ACC & NMI & ARI & ACC & NMI & ARI & ACC & NMI & ARI \\ \hline K-means [23] & 22.9 & 8.7 & 4.9 & 13.0 & 8.4 & 2.8 & 19.2 & 12.5 & 6.1 \\ SC [24] & 24.7 & 10.3 & 8.5 & 13.6 & 9.0 & 2.2 & 15.9 & 9.8 & 4.8 \\ Triplets [25] & 20.5 & – & – & 9.94 & – & – & 24.4 & – & – \\ JULE [26] & 27.2 & 19.2 & 13.8 & 13.7 & 10.3 & 3.3 & 27.7 & 18.2 & 16.4 \\ AEVB [27] & 29.1 & 24.5 & 16.8 & 15.2 & 10.8 & 4.0 & 28.2 & 20.0 & 14.6 \\ DAE [28] & 29.7 & 25.1 & 16.3 & 15.1 & 11.1 & 4.6 & 30.2 & 22.4 & 15.2 \\ SWWAE [29] & 28.4 & 23.3 & 16.4 & 14.7 & 10.3 & 3.9 & 27.0 & 19.6 & 13.6 \\ AE [30] & 31.4 & 23.4 & 16.9 & 16.5 & 10.0 & 4.7 & 30.3 & 25.0 & 16.1 \\ GAN [31] & 31.5 & 26.5 & 17.6 & 15.1 & 12.0 & 4.5 & 29.8 & 21.0 & 13.9 \\ DEC [4] & 30.1 & 25.7 & 16.1 & 18.5 & 13.6 & 5.0 & 35.9 & 27.6 & 18.6 \\ ADC [32] & 32.5 & – & – & 16.0 & – & – & 53.0 & – & – \\ DeepCluster [12] & 37.4 & – & – & 18.9 & – & – & 33.4 & – & – \\ DAC [19] & 52.2 & 40.0 & 30.1 & 23.8 & 18.5 & 8.8 & 47.0 & 36.6 & 25.6 \\ IC [1] & 51.1 & 41.1 & 25.7 & 22.5 & 11.7 & 59.6 & 49.6 & 39.7 \\ Pretext [11] + K-means & \(65.9\pm 5.7\) & \(59.8\pm 2.0\) & \(50.9\pm 3.7\) & \(39.5\pm 1.9\) & \(40.2\pm 1.1\) & \(23.9\pm 1.1\) & \(65.8\pm 5.1\) & \(60.4\pm 2.5\) & \(50.6\pm 4.1\) \\ SCAN [7] & 81.8 \(\pm 0.3\) & \(71.2\pm 0.4\) & \(66.5\pm 0.4\) & \(42.2\pm 3.0\) & \(44.1\pm 1.0\) & \(26.7\pm 1.3\) & \(75.5\pm 2.0\) & \(65.4\pm 1.2\) & \(59.0\pm 1.6\) \\ SCAN [7] + selfabel & \(87.6\pm 0.4\) & \(78.7\pm 0.5\) & \(75.8\pm 0.7\) & \(45.9\pm 2.7\) & \(46.8\pm 1.3\) & \(30.1\pm 2.1\) & \(76.7\pm 1.9\) & \(68.0\pm 1.2\) & \(61.6\pm 1.8\) \\ \hline Supervised & 93.8 & 86.2 & 87.0 & 80.0 & 68.0 & 63.2 & 80.6 & 65.9 & 63.1 \\
**CDC** & \(83.1\pm 0.6\) & \(73.4\pm 0.7\) & \(68.7\pm 0.6\) & \(44.0\pm 1.2\) & \(46.3\pm 1.1\) & \(28.2\pm 1.0\) & \(71.2\pm 1.5\) & \(64.3\pm 1.3\) & \(55.3\pm 1.5\) \\
**CDC** + selflabel & \(\mathbf{89.0\pm 0.3}\) & \(\mathbf{80.6\pm 0.4}\) & \(\mathbf{78.4\pm 0.5}\) & \(\mathbf{49.2\pm 0.8}\) & \(\mathbf{50.4\pm 0.5}\) & \(\mathbf{34.6\pm 0.6}\) & \(75.2\pm 0.8\) & \(66.8\pm 0.5\) & \(59.4\pm 0.6\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: State-of-the-art comparison: We report the averaged results (Avg \(\pm\) Std) for 5 different runs after the clustering and self-labeling steps. All the baseline results are from [7]. We train and evaluate the model using the train and val split respectively, which is consistent with the SCAN [7].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & \begin{tabular}{c} \#pretrain \\ epochs \\ \end{tabular} & \begin{tabular}{c} Dataset \\ ImageNet \\ \end{tabular} & \begin{tabular}{c} VOC07 \\ \end{tabular} &
\begin{tabular}{c} Places205 \\ \end{tabular} \\ \hline Supervised & R50 & - & 76.5 & 87.5 & 53.2 \\ \hline SimCLR [11] & R50-MLP & 200 & 61.9 & – & – \\ MoCo v2 [10] & R50-MLP & 200 & 67.5 & 84.0 & 50.1 \\ PCL v2 [12] & R50-MLP & 200 & 67.6 & 85.4 & 50.3 \\ CDC (Ours) & R50-MLP & 200 & **68.0** & **91.8** & **52.0** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Image classification with linear classifiers. We report the top-1 accuracy for ImageNet and Places205, mAP for VOC dataset. All the baseline results are from [2] and [3]. Our results are obtained by directly applying the evaluation code [36] on the pretrained R50 backbone.
Fig. 3: Comparison of clustering accuracy at different temperatures on full ImageNet dataset.
Fig. 2: Clustering results of CDC (\(t=0.1\)) on full ImageNet dataset (1000 classes) up to 200 epochs.
no effect on representation learning. Second, we avoid the use of normalization on \(\mathbf{z}^{c}\) and treat it as a regular logit as in classification. Our experiment shows that the loss becomes NaN and the training fails, so normalizing \(\mathbf{z}^{c}\) ensures that the corresponding dot product is bounded. Finally, we set the temperature \(t\) to 1.0 (without \(t\)) and find that its value has an important impact on the clustering performance, which is analyzed in detail in Section 4.2. We also train the model using only the equipartition constraint, and find that the model cannot achieve optimal clustering results without the weighted InfoNCE loss. This also indicates that the weighted InfoNCE loss adjusts the weights of negatives to encourage \(\mathbf{z}^{c}\) to capture more categorical information.
### _Temperature analysis_
Our method involves two temperatures: \(\tau\) and \(t\). Previous work [37] shows that temperature \(\tau\) plays an important role in controlling the penalty strength of negative samples. Here, we consider \(\tau\) as a constant and focus on the effect of \(t\) on the clustering results, as shown in Figure 3. Since \(t\) plays a role in the scaling of logits in the calculation of cross-entropy loss, a small \(t\) reduces the difficulty of matching pseudo-assignment, and achieves better clustering results faster. In contrast, a larger \(t\) value leads to the training of \(\mathbf{z}^{c}\) becoming difficult. It is the presence of \(t\) that balances the degree of difficulty between the instance-level discrimination and cluster assignment tasks. \(t\) also allows \(\mathbf{z}^{c}\) to be converted into proper softmax probabilities, as verified in the self-labeling experiments in Section 3.2. The results demonstrate that similar clustering results can be achieved for a range of \(t\), such as 0.1 or 0.2.
### _Latent space analysis_
Table VII shows the statistics of similarity scores (dot product), where MoCo v2 has only \(\mathbf{z}\) to compute the contrastive loss, and our method has \(\mathbf{z}^{c}\) and \(\mathbf{z}^{n}\). Both methods satisfy the requirements of instance-wise contrastive learning well, where positive samples (Augmented) have high similarity scores of \(\mathbf{z}\) / \(\mathbf{z}^{n}\) and negative samples (Same or Different) have low similarity scores of \(\mathbf{z}\) / \(\mathbf{z}^{n}\). In CDC, since \(\mathbf{z}^{c}\) encodes the categorical information, the similarity score of \(\mathbf{z}^{c}\) is also able to distinguish well between samples from different categories. As analyzed in Section 2.1, it provides a weight adjustment mechanism for different negative samples so that it can handle negative samples of different hardness well. In contrast, MoCo sets the weight of all negative samples to \(1\), which tends to learn larger dot products for positive samples. The previous study [38] summarizes two key properties of contrastive loss: alignment and uniformity. In other words, MoCo is more concerned with alignment for the optimization purpose.
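Statistics of this kind can be obtained with a sketch such as the one below, assuming the similarities are collected over pairs of augmented views within a batch and grouped by pair type; the feature dimensions and labels are toy placeholders.

```
import torch
import torch.nn.functional as F

def similarity_stats(z_a, z_b, labels):
    # Mean/std of dot products grouped as in Table VII: 'Augmented' (same instance,
    # different views), 'Same' (same category, different instances), and 'Different'
    # (different categories). z_a, z_b: two augmented views, L2-normalised, shape (N, d).
    sim = z_a @ z_b.t()
    same_instance = torch.eye(len(labels), dtype=torch.bool)
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    groups = {
        "Augmented": sim[same_instance],
        "Same": sim[same_class & ~same_instance],
        "Different": sim[~same_class],
    }
    return {k: (v.mean().item(), v.std().item()) for k, v in groups.items()}

# toy usage with random features and labels
z_a, z_b = [F.normalize(torch.randn(64, 128), dim=1) for _ in range(2)]
labels = torch.randint(0, 10, (64,))
print(similarity_stats(z_a, z_b, labels))
```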
We further analyze the uniformity properties of \(\mathbf{z}\) / \(\mathbf{z}^{n}\) of MoCo and CDC using t-SNE [39] and the results are shown in Figure 4. Compared to \(\mathbf{z}\) learned by MoCo, our method tends to uniformly distribute points over the latent space without preserving any category-related information. Although MoCo achieves instance-level differentiation, we can still observe that points of the same category are clustered together. The main reason is that the typical contrastive loss cannot deal with the hard negative problem well and samples of the same category aren't distributed evenly. In contrast, the proposed method learns a uniformly distributed space due to the mechanism of self-adjusting negative weights. This also shows that the MLP projection head in our method effectively transforms between the two representation spaces. Therefore, we decompose the original \(\mathbf{z}\) into two separate parts: \(\mathbf{z}^{c}\) related to the categorical information, thus focusing on clustering, and \(\mathbf{z}^{n}\) related to instance-wise information, thus focusing on alignment
\begin{table}
\begin{tabular}{l c c c} \hline \hline Settings & Clustering (\%) & Linear evaluation (\%) \\ \hline Without constraint & 27.3 & 86.0 \\ Without norm & - & - \\ Without \(t\) & 71.1 & 88.0 \\ Without InfoNCE & 67.0 & 86.0 \\ Full setup & **83.0** & **88.8** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Ablation studies of \(\mathbf{z}^{c}\) on CIFAR10 including without the equipartition constraint, the normalization, temperature \(t\) (default 1.0) and the InfoNCE loss. We compare the clustering results and linear evaluation on the pretrained backbone separately.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Arch} & \multirow{2}{*}{
\begin{tabular}{c} pretrain \\ epochs \\ \end{tabular} } & \multicolumn{3}{c}{bbox} & \multicolumn{2}{c}{segm} \\ & & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline Supervised & R50 & - & 40.0 & 59.9 & 43.1 & 34.7 & 56.5 & 36.9 \\ \hline MoCo v2 [10] & R50 & 200 & 40.7 & 60.5 & 44.1 & 35.4 & 57.3 & 37.6 \\ CDC (Ours) & R50 & 200 & **40.8** & **60.6** & **44.3** & **35.5** & 57.3 & **38.0** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Transfer learning results to object detection tasks on COCO dataset. The detection model is fine-tuned on COCO train2017 dataset and evaluated on COCO val2017 dataset. All the baseline results are from [2].
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Category} & MoCo v2 & \multicolumn{2}{c}{CDC} \\ \cline{2-4} & \(\mathbf{z}\) & \(\mathbf{z}^{c}\) & \(\mathbf{z}^{n}\) \\ \hline Augmented & 0.781 \(\pm\) 0.188 & 0.872 \(\pm\) 0.154 & 0.720 \(\pm\) 0.199 \\ Same & 0.062 \(\pm\) 0.175 & 0.504 \(\pm\) 0.237 & 0.004 \(\pm\) 0.183 \\ Different & -0.006 \(\pm\) 0.115 & -0.033 \(\pm\) 0.323 & -0.001 \(\pm\) 0.167 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: The mean and standard deviation of similarity scores for \(\mathbf{z}^{c}\) and \(\mathbf{z}^{n}\) / \(\mathbf{z}\) from a category perspective. Augmented: samples from the same instance, Same: samples from the same category, Different: samples from different categories.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Method & Architecture & \#pretrain & \multicolumn{3}{c}{VOC} \\ & & epochs & AP\({}_{50}\) & AP & AP\({}_{75}\) \\ \hline Supervised & R50 & - & 81.3 & 53.5 & 58.8 \\ \hline MoCo v2 [10] & R50 & 200 & 82.4 & **57.0** & 63.6 \\ CDC (Ours) & R50 & 200 & **82.6** & 56.8 & **63.7** \\ \hline \hline \end{tabular}
\end{table} TABLE V: Transfer learning results for object detection tasks on the VOC dataset. The detection model is fine-tuned on the VOC07+12 trainval set and evaluated on the VOC07 test set. The baseline results are from [10].
and uniformity. Our method also demonstrates that the MLP projection head plays a role in the transformation from the linearly separable feature space to the instance-wise representation space.
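The qualitative comparison in Figure 4 can be reproduced with an off-the-shelf t-SNE projection of the stored \(\mathbf{z}\) / \(\mathbf{z}^{n}\) vectors. The sketch below is a generic visualisation recipe; the perplexity, figure size, and colour map are our choices and are not settings reported for the paper's figure.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, title):
    # features: (N, d) array of z or z^n vectors; labels: (N,) integer category ids
    xy = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(features)
    plt.figure(figsize=(4, 4))
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=2, cmap="tab10")
    plt.title(title)
    plt.axis("off")
    plt.tight_layout()
    plt.show()
```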
## 5 Related work
**Disentanglement.** Learning disentangled representations can be used to uncover the underlying variation factors within the data [40]. Generally, existing disentangling methods can be classified into two distinct categories: The first category involves separating the latent representations into two [41, 42, 43] or three [44] parts. For instance, Mathieu _et al._[41] introduced a conditional variational autoencoder (VAE) combined with adversarial training to disentangle the latent representations into factors relevant to labels and unspecified factors. In these two-step disentanglement methods [42], the initial step involves extracting label-relevant representations by training a classifier, followed by obtaining label-irrelevant representations primarily through reconstruction loss. These approaches aim to enhance disentanglement performance by leveraging (partial) label information and minimizing cross-entropy loss. The second disentanglement category, exemplified by methods like \(\beta\)-VAE [45], FactorVAE [46], and \(\beta\)-TCVAE [47], focuses on learning to disentangle each dimension in the latent space without manual annotations. Unlike many disentanglement learning methods (such as those based on generative models like VAEs and GANs), which have been proposed in previous studies [43, 48, 49], this paper introduces a novel approach. The proposed method utilizes contrastive learning to train the encoder and achieves the disentanglement of latent variables.
**Clustering.** Recent research has focused on the joint learning of feature representation and clustering, either in an alternating or a simultaneous manner. Earlier studies (_e.g._ IMSAT [6], IIC [1]) learn a clustering method from a mutual information maximization perspective. Since these methods may only focus on low-level features, such as color and texture, they do not achieve excellent clustering results. DeepCluster [12] performs clustering and representation learning alternately, which is further improved with online clustering [50]. Self-labelling [13] is another simultaneous learning method that maximizes the information between labels and data indices. ContrastiveClustering [51] proposes two MLP heads and handles instance- and cluster-level contrastive learning separately, which results in the use of two distinct contrastive losses. In this work, we propose a self-adjusting contrastive loss to directly disentangle the feature representation for clustering.
**Contrastive learning.** Instance-wise contrastive learning considers each sample as its own class. Wu _et al._[9] first propose to utilize a memory bank and noise contrastive estimation for learning. He _et al._[10] further improve the training strategy by introducing momentum updates. SimCLR [11] is another representative work that applies a large batch instead of the memory bank. SwAV [3] and PCL [2] bridge contrastive learning with clustering to improve representation learning. There are also some recent works [52, 53, 54] that consider only the similarity between positive samples. Although the typical contrastive loss enables learning a latent space where each instance is well-differentiated, it cannot deal well with hard negative samples and hence with uniformity. In our work, the proposed contrastive loss with self-adjusting negative weights solves this problem well. Once the feature representation is obtained, the cluster assignment is obtained either by K-means clustering [2, 4, 12] or by training an additional component [55, 7]. However, these approaches still cannot achieve promising clustering results on a large-scale dataset, _e.g._, ImageNet, which calls for an efficient objective function that jointly learns the cluster assignment and the feature representation.
Fig. 4: The t-SNE visualization of \(\mathbf{z}\) / \(\mathbf{z}^{n}\) on the CIFAR10 test set. (a): MoCo v2, (b): CDC (ours). Colors indicate different categories.
## 6 Conclusions
This work introduces a novel approach to learning disentangled representations for clustering using contrastive learning. The proposed method decomposes the representation into two distinct parts: one dedicated to clustering and the other focused on instance-wise learning. This enables end-to-end training, distinguishing it from existing state-of-the-art clustering methods that typically involve two separate steps, where representation learning and clustering are decoupled. By avoiding this decoupling, our method has the potential to achieve superior results, especially on large-scale datasets. Experiments on multiple benchmarks demonstrate that our disentanglement method not only achieves excellent clustering performance, but also improves contrastive learning. Note that CDC can be combined with over-clustering, vision transformers, advanced augmentation and training strategies. Due to the limitations of our computational resources, we will explore these techniques in future work.
## 7 Acknowledgments
The work of Feng Luo was supported in part by the U.S. National Science Foundation (NSF) under Grant ABI-1759856, Grant MRI-2018069, and Grant MTM2-2025541, and by the U.S. National Institute of Food and Agriculture (NIFA) under Grant 2017-70016-26051.
|
2304.08764 | How Infrared Singularities Affect Nambu-Goldstone Bosons at Finite
Temperatures | Ordered phases realized through broken continuous symmetries embrace
long-range order-parameter fluctuations as manifest in the power-law decays of
both the transverse and longitudinal correlations, which are similar to those
at the second-order transition point. We calculate the transverse one-loop
correction to the dispersion relation of nonrelativistic Nambu-Goldstone (NG)
bosons at finite temperatures, assuming that they have well-defined dispersion
relations with some integer exponents. It is found that the transverse
correlations make the lifetime of the NG bosons vanish right on the assumed
dispersion curve below three dimensions at finite temperatures. Combined with
the vanishing of the longitudinal "mass", the result indicates that the
correlations generally bring both an intrinsic lifetime to the excitations and
a qualitative change in the dispersion relation at long wavelengths below three
dimensions at finite temperatures, unless it is protected by some conservation
law. A more detailed calculation performed specifically on a single-component
Bose-Einstein condensate reveals that the gapless Bogoliubov mode predicted by
the perturbative treatment is modified substantially after incorporating the
correlations to have (i) an intrinsic lifetime and (ii) a peak shift at long
wavelengths from the linear dispersion relation. Thus, the NG bosons may be
fluctuating intrinsically into and out of the ordered "vacuum" to maintain the
order temporally. | Takafumi Kita | 2023-04-18T06:39:36Z | http://arxiv.org/abs/2304.08764v1 | # How Infrared Singularities Affect Nambu-Goldstone Bosons
###### Abstract
Ordered phases realized through broken continuous symmetries embrace long-range order-parameter fluctuations as manifest in the power-law decays of both the transverse and longitudinal correlations, which are similar to those at the second-order transition point. We calculate the transverse one-loop correction to the dispersion relation of nonrelativistic Nambu-Goldstone (NG) bosons at finite temperatures, assuming that they have well-defined dispersion relations with some integer exponents. It is found that the transverse correlations make the lifetime of the NG bosons vanish right on the assumed dispersion curve below three dimensions at finite temperatures. Combined with the vanishing of the longitudinal "mass", the result indicates that the correlations generally bring both an intrinsic lifetime to the excitations and a qualitative change in the dispersion relation at long wavelengths below three dimensions at finite temperatures, unless it is protected by some conservation law. A more detailed calculation performed specifically on a single-component Bose-Einstein condensate reveals that the gapless Bogoliubov mode predicted by the perturbative treatment is modified substantially after incorporating the correlations to have (i) an intrinsic lifetime and (ii) a peak shift at long wavelengths from the linear dispersion relation. Thus, the NG bosons may be fluctuating intrinsically into and out of the ordered "vacuum" to maintain the order temporally.
## I Introduction
Goldstone's theorem [1; 2; 3; 4] states that broken continuous symmetries necessarily accompany gapless excitations called Nambu-Goldstone (NG) bosons in the ordered phases. Examples include spin waves of isotropic ferromagnets [5; 6; 7] and antiferromagnets, [8; 9] and the Bogoliubov mode of single-component Bose-Einstein condensates (BECs). [10; 11; 12; 13; 14] Their existence has been elucidated theoretically prior to the proof of the theorem [3] based on perturbative treatments around zero temperature. [5; 6; 7; 8; 9; 10; 11; 12; 13; 14] The connection among the number of broken continuous symmetries, that of emergent NG bosons, and their dispersion relations has been clarified for non-relativistic systems [15; 16; 17; 18; 19] by implicitly assuming existence of well-defined (long-lived) excitations and analyticity in their dispersion relations.
On the other hand, the gapless excitations have been known to cause infrared-singular behaviors in the ordered phases similar to those at the second-order transition point. [20; 21; 22] Most notably, the transverse order-parameter correlation function diverges as \(k^{-2}\) in the wave vector space for \(\mathbf{k}\to\mathbf{0}\), as can be shown rigorously based on the Bogoliubov inequality, [23; 24] to make any long-range order impossible below _two_ dimensions at finite temperatures; the exponent \(-2\) without the anomalous dimension is crucial in reaching the conclusion. [24] Moreover, the longitudinal correlation function also diverges as \(k^{d-4}\) in \(d\) dimensions for \(2\leq d<4\) due to the transverse correlations, as can be concluded based on various techniques. [25] Hence, the longitudinal order-parameter susceptibility, which is predicted to recover a finite value in the ordered phases by the mean-field analysis, [20; 21] remains divergent from the second-order transition point down to zero temperature. [21; 25] Indeed, this fact has been found separately and independently for specific systems such as isotropic ferromagnets [26; 27; 28; 29] and single-component BECs. [30] Thus, the transverse correlations bring a qualitative change in those of the longitudinal branch from _massive to massless_ in the terminology of relativistic quantum field theory. [4; 25]
The purpose of the present paper is to study how the infrared singularities affect the NG bosons themselves, which have been regarded as well-defined quasiparticles in the transverse channel having definite dispersion relations with some integer exponents. [15; 16; 17; 18; 19] For example, the Bogoliubov theory on the single-component BECs with a single broken symmetry in the phase angle predicts that there should be a gapless excitation with a linear dispersion relation at long wavelengths, [10; 31] in agreement with the general argument of assuming analyticity. [18] However, the predicted linearity results from the assumption inherent in the perturbative treatment that the longitudinal susceptibility is finite, which no longer holds after the correlations through the transverse channel are incorporated, as mentioned above. Hence, one may expect a substantial modification of the long-wavelength dispersion relation from the linear one with an infinite lifetime.
It should be noted that the topic on the BECs has already been studied at zero temperature [30; 32] to conclude that the Bogoliubov mode no longer exists because of the infrared singularities, being replaced by another gapless branch with a linear dispersion relation. An implicit and crucial assumption in reaching the result is again the analyticity of the inverse Green's function \(\hat{G}^{-1}(\vec{k})\) at \(\vec{k}\!=\!\vec{0}\), on which it is expanded in \(\vec{k}\!=\!(\mathbf{k},\varepsilon)\) up to the second order, especially up to \(\varepsilon^{2}\) in terms of the frequency \(\varepsilon\) beyond the standard linear order to find the new mode. However, the presumed analyticity is not self-evident at all in view of the divergence of the longitudinal susceptibility; hence, the existence of the distinct phonon-like mode at \(T\!=\!0\) in the single-particle branch has not yet been established.
This is more so at finite temperatures, where the Bose distribution function \(f(\varepsilon)\) with a pole at \(\varepsilon\!=\!0\) is relevant, so that the singular \(\varepsilon\!=\!0\) branch should be treated separately from those with \(\varepsilon\!\neq\!0\), similarly to the treatment of critical phenomena. [20; 21; 22]
This paper is organized as follows. In Sec. 2, we give a general estimate on the lifetime of a Nambu-Goldstone boson assuming a well-defined dispersion relation initially and incorporating the transverse correlations perturbatively. In Sec. 3, we present a more detailed calculation on the excitation spectrum of a single-component Bose-Einstein condensate. In Sec. 4, we provide concluding remarks. We use the units of \(\hbar=k_{\rm B}=1\).
## II Analytic estimation of lifetime
In this section, we present an analytic one-loop estimation of the lifetime of an NG boson at finite temperatures, which is assumed to have a well-defined dispersion relation \(E_{k}\propto k^{n}\) with \(n\geq 1\). Unlike previous studies, [33; 34; 35] we carry it out by retaining the full spectral resolution in terms of the wave vector; see the last paragraph of this section on this point.
The corresponding transverse Green's function \(G_{t}\) can be expressed in the Lehmann representation [36; 37] as
\[G_{t}(k,z)\!=\!\int_{-\infty}^{\infty}\frac{d\varepsilon}{2\pi}\frac{\rho_{t }(k,\varepsilon)}{z-\varepsilon}. \tag{1}\]
where \(\rho_{t}\) is the spectral function
\[\rho_{t}(k,\varepsilon)=2\pi[w_{k}^{+}\delta(\varepsilon-E_{k})-w_{k}^{-} \delta(\varepsilon+E_{k})], \tag{2}\]
given in terms of the excitation energy \(E_{k}>0\) and its spectral weight \(w_{k}^{\pm}\geq 0\). The weight \(w_{k}^{-}\) becomes finite when the relevant broken symmetry yields a connection between the particle and hole channels, as in the cases of BEC and antiferromagnetism. It has been shown generally [23; 24] that \(G_{t}(k,z=0)\propto k^{-2}\) holds in the limit of \(k\to 0\), which for Eq. (1) with Eq. (2) can be expressed alternatively as
\[\frac{w_{k}^{\pm}}{E_{k}}\propto k^{-2}. \tag{3}\]
We now consider the transverse one-loop process that is responsible for the infrared singularities of the continuous-symmetry-breaking phases. It can be expressed analytically in terms of \(G_{t}(k,z)\) by
\[\chi_{tt}(q,i\omega_{\ell})\equiv-\frac{T}{V}\!\sum_{\mathbf{k}}\!\sum_{n}G_{t}(| \mathbf{k}+\mathbf{q}|,i\varepsilon_{n}+i\omega_{\ell})G_{t}(k,i\varepsilon_{n}), \tag{4}\]
where \(T\) and \(V\) are the temperature and volume, respectively, and \(\varepsilon_{n}\!\equiv\!2n\pi T\) and \(\omega_{\ell}\!\equiv\!2\ell\pi T\) (\(n,\ell\!=\!0,\pm 1,\cdots\)) are boson Matsubara frequencies (\(\hbar\!=\!k_{\rm B}\!=\!1\)). Substituting Eq. (1) and using the Bose distribution function \(f(\varepsilon)\equiv(e^{\varepsilon/T}-1)^{-1}\), we can transform the sum over \(n\) in Eq. (4) into a contour integral, collect residues on the real axis, and perform the analytic continuation \(i\omega_{\ell}\to\omega+i0_{+}\). [36; 37] We thereby obtain the imaginary part of the retarded response function as
\[{\rm Im}\chi_{tt}^{\rm R}(q,\omega) = \frac{1}{2V}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}\frac{d\varepsilon }{2\pi}\rho_{t}(k,\varepsilon)\rho_{t}(|\mathbf{k}+\mathbf{q}|,\varepsilon+\omega) \tag{5}\] \[\times[f(\varepsilon+\omega)-f(\varepsilon)].\]
Substitution of Eq. (2) into Eq. (5) yields
\[{\rm Im}\chi_{tt}^{\rm R}(q,\omega) \tag{6}\] \[= - \frac{\pi}{V}\sum_{\mathbf{k}}\sum_{\sigma=\pm}\biggl{\{}f(E_{k})w_{ k}^{\sigma}w_{|\mathbf{k}+\mathbf{q}|}^{\sigma}\delta(\omega-E_{|\mathbf{k}+\mathbf{q}|}+E_{k})\] \[+\left[f(E_{k})+\frac{1}{2}\right]w_{k}^{\sigma}w_{|\mathbf{k}+\mathbf{q} |}^{-\sigma}\delta(\omega-E_{|\mathbf{k}+\mathbf{q}|}-E_{k})\] \[-\;(\omega\to-\omega)\biggr{\}},\]
where we have transformed \(f(-E)\!=\!-1-f(E)\) for \(E>0\) and subsequently made a change of variables \(\mathbf{k}+\mathbf{q}\!=\!-\mathbf{k}^{\prime}\) for terms with the factor \(f(E_{|\mathbf{k}+\mathbf{q}|})\) or \(f(E_{|\mathbf{k}+\mathbf{q}|})+\frac{1}{2}\).
Let us focus on the first term in the curly brackets of Eq. (6), where we can estimate
\[f(E_{k})w_{k}^{\sigma}\propto(T/E_{k})w_{k}^{\sigma}\propto k^{-2} \tag{7a}\] for \(k\to 0\) at \(T>0\) based on Eq. (3). Note that it is the pole \(\varepsilon\!=\!0\) of \(f(\varepsilon)\) that brings the singularity. Moreover, integration of \(w_{|\mathbf{k}+\mathbf{q}|}^{\sigma}\delta(\omega-E_{|\mathbf{k}+\mathbf{q}|}+E_{k})\) over the angle \(\theta\equiv\arccos(\mathbf{k}\cdot\mathbf{q}/kq)\) yields another factor \(k^{-1}\) at \(\omega=E_{q}\), \[\int_{0}^{\pi}d\theta\sin^{d-2}\theta\,w_{|\mathbf{k}+\mathbf{q}|}^{\sigma}\delta(\omega-E_{|\mathbf{k}+\mathbf{q}|}+E_{k})\] \[=\frac{1}{kq}\int_{E_{|\mathbf{k}-\mathbf{q}|}}^{E_{k+q}}dE_{k_{1}}\frac{k_{1}\sin^{d-3}\theta}{dE_{k_{1}}/dk_{1}}w_{k_{1}}^{\sigma}\delta(\omega-E_{k_{1}}+E_{k})\] \[\stackrel{{\omega=E_{q}}}{{\longrightarrow}}\frac{1}{kq}\left[\frac{k_{1}\sin^{d-3}\theta}{dE_{k_{1}}/dk_{1}}w_{k_{1}}^{\sigma}\right]_{E_{k_{1}}=E_{q}+E_{k}}\propto\frac{1}{k}, \tag{7b}\]
where we have made a successive change of variables \(\theta\to k_{1}\equiv|\mathbf{k}+\mathbf{q}|\to E_{k_{1}}\). Since \(E_{|\mathbf{k}-\mathbf{q}|}<E_{q}+E_{k}\leq E_{k+q}\) holds for \(E_{k}\propto k^{n}\) with \(n\geq 1\), the factor in the square brackets in the third line is finite and positive; see the original expression of the first line on the latter point.
The same transformation can be performed for the other terms with \(f(E_{k})\) in the curly brackets of Eq. (6). They yield additional contributions for which \(E_{k_{1}}\!=\!E_{q}+E_{k}\) in the third line of Eq. (7b) is replaced by \(E_{k_{1}}\!=\!E_{q}-E_{k}\), \(E_{k_{1}}\!=\!-E_{q}+E_{k}\), and \(E_{k_{1}}\!=\!-E_{q}-E_{k}\), respectively. However, the latter two conditions cannot be satisfied in the limit of \(k\to 0\), as seen from the fact that they reduce to \(E_{q}=-E_{q}\). Hence, the latter two terms vanish for \(k\to 0\), leaving the first two finite contributions that are additive.
It follows from Eq. (7) that the integrand of Eq. (6) at \(\omega=E_{q}\) with \(q>0\) behaves as \(k^{(d-1)-3}\) for \(k\to 0\).
Hence, \(\mathrm{Im}\chi^{\mathrm{R}}_{tt}(q,\omega)\) at \(\omega\!=\!E_{q}\) is concluded to diverge for \(d\leq 3\) at finite temperatures. We also note that, with the emergence of three-point vertices in the ordered phases, \(\chi^{\mathrm{R}}_{tt}(q,\omega)\) contributes directly to the self-energy; see also Fig. 1(b) on this point. Hence, the divergence of \(\mathrm{Im}\chi^{\mathrm{R}}_{tt}(q,\omega)\) at \(\omega=E_{q}\) implies that the lifetime of the quasiparticles with the dispersion relation \(\omega=E_{q}\) is zero right on the dispersion curve, in contradiction to the original assumption of Eq. (2). This fact implies that the long-range transverse correlations in the continuous-symmetry-breaking phases generally prevent any gapless excitation from having a well-defined dispersion relation with an integer exponent below three dimensions at finite temperatures. There can be exceptional cases where well-defined dispersion relations are protected by some conservation laws. Otherwise, cancellation of the infrared singularities to recover a sharp dispersion relation is unlikely to happen; no such mechanism at finite temperatures has been known to date.
It should be noted that the first and second terms in the curly brackets of Eq. (6), known as the Landau and Beliaev dampings, respectively, have certainly been investigated theoretically.[33; 34; 35] However, the damping rate in those finite-temperature studies is defined by a sum of Eq. (6) over \(\mathbf{q}\) for a given \(\omega\) with no spectral resolution in terms of \(\mathbf{q}\). Hence, the result obtained above has been overlooked. Whereas the results of Ref. [33; 34; 35] show the stability of the entire system against the thermal activation of quasiparticle excitations, the present one reveals the presence of an intrinsic lifetime in each excitation which is not describable perturbatively. Combined with the vanishing of the longitudinal mass, it tells us that the dispersion relation of the NG bosons may also be modified qualitatively at long wavelengths from that of the mean-field analysis.
## III Detailed consideration on BEC
In this section, we study in more detail how the \(k^{-2}\) correlations modify the spectral function of Eq. (2) based on a formalism where the longitudinal susceptibility vanishes manifestly. Specifically, we consider an isotropic homogenous Bose-Einstein condensate composed of identical particles with mass \(m\) and spin \(0\) at \(T>0\) interacting via a two-body potential \(U(\mathbf{r})\), which can be described in terms of the scalar fields \((\psi,\psi^{*})\!\equiv\!(\psi_{1},\psi_{2})\).
### Vanishing of transverse and longitudinal masses
Let us introduce Green's functions by \(G_{jj^{\prime}}(\vec{r},\vec{r}\,^{\prime})\!\equiv\!-\!\langle T_{r}\psi_{j }(\vec{r})\psi_{j^{\prime}}(\vec{r}\,^{\prime})\rangle+\langle\psi_{j}(\vec{ r})\rangle\langle\psi_{j^{\prime}}(\vec{r}\,^{\prime})\rangle\) with \(j\!=\!1,2\) and \(\vec{r}\!\equiv\!(\mathbf{r},\tau)\), where \(\tau\) lies in \([0,T^{-1}]\) with the periodic boundary conditions.[36; 37] The Fourier transform of \(\hat{G}\!\equiv\!(G_{jj^{\prime}})\) can be parametrized in units of \(m\!=\!1/2\) as
\[\hat{G}(\vec{k})\!=\!\left[\!\!\!\!\begin{array}{cc}-\Delta(\vec{k})&-i \varepsilon_{n}\!-\!k^{2}\!-\!\Sigma(-\vec{k})\!+\!\mu\\ i\varepsilon_{n}\!-\!k^{2}\!-\!\Sigma(\vec{k})\!+\!\mu&-\Delta(\vec{k})\end{array} \!\!\!\right]^{-1} \tag{8}\]
where \(\mu\) is the chemical potential and \(\vec{k}\!\equiv\!(\mathbf{k},i\varepsilon_{n})\) in this expression.[38] The anomalous self-energy satisfies \(\Delta(-\vec{k})\!=\!\Delta(\vec{k})\), and the Hugenholtz-Pines relation[13]\(\Sigma(\vec{0})-\mu=\Delta(\vec{0})\) ensures existence of a gapless excitation branch in the system. Indeed, choosing the condensate wave function \(\Psi_{j}\!\equiv\!\langle\psi_{j}\rangle\) as real (\(\Psi_{1}\!=\!\Psi_{2}\!\equiv\!\Psi\)) and unitary-transforming
\[\psi_{j}\!=\!\frac{1}{\sqrt{2}}\{\phi_{l}+i(-1)^{j-1}\phi_{t}\}, \tag{9}\]
we can decompose the field \(\psi_{j}\) into the longitudinal and transverse ones (\(\phi_{l},\phi_{t}\)) with \(\langle\phi_{l}\rangle\!=\!\sqrt{2}\Psi\) and \(\langle\phi_{t}\rangle\!=\!0\). Elements of the transformed inverse Green's function are given by
\[G_{ll}^{-1}(\vec{k}) = -\xi_{ll}(\vec{k}), \tag{10a}\] \[G_{tt}^{-1}(\vec{k}) = -\xi_{tt}(\vec{k}),\] (10b) \[G_{lt}^{-1}(\vec{k}) = -G_{tl}^{-1}(\vec{k})=i\left\{i\varepsilon_{n}-\frac{\Sigma(\vec{ k})-\Sigma(-\vec{k})}{2}\right\}, \tag{10c}\]
with
\[\xi_{ll}(\vec{k})\atop\xi_{tt}(\vec{k})\right\}\equiv k^{2}+\frac{\Sigma( \vec{k})+\Sigma(-\vec{k})}{2}-\mu\pm\Delta(\vec{k}), \tag{11}\]
so that \(\xi_{ll}(\vec{0})=2\Delta(\vec{0})\) and \(\xi_{tt}(\vec{0})=0\) according to the Hugenholtz-Pines relation. Thus, the transverse branch is gapless and should behave as \(\xi_{tt}(\mathbf{k},0)\propto k^{2}\) for \(k\to 0\).[24] The transformation (9) also clarifies that the Bogoliubov mode with the linear dispersion relation, which is non-analytic in \(k^{2}=\mathbf{k}\cdot\mathbf{k}\), is realized by the mixing of the gapless transverse branch \(\xi_{tt}\propto k^{2}\) with the apparently gapped longitudinal one \(\xi_{ll}(\vec{0})\propto 2\Delta(\vec{0})\) through a finite frequency \(i\varepsilon_{n}\). However, the longitudinal branch also becomes gapless as \(\Delta(\mathbf{k},0)\propto k^{d-4}\) for \(\mathbf{k}\to\mathbf{0}\) due to the transverse correlations.[25; 30]
Both the Hugenholtz-Pines relation and vanishing of the longitudinal _mass_\(\Delta(\vec{0})=0\) can be derived based on Goldstone's theorem.[3; 4] Specifically, the grand potential (or _effective action_) \(\Gamma\) as a functional of the condensate wave function is invariant under the global gauge transformation
\[\delta\Psi_{j}(\vec{r})\!\equiv\!i\,\delta\theta\sum_{j^{\prime}}(\hat{\sigma}_ {3})_{jj^{\prime}}\Psi_{j^{\prime}}(\vec{r})\]
by the phase angle \(\delta\theta\), where \(\hat{\sigma}_{3}\) denotes the third Pauli matrix and the \(\vec{r}\) dependence originates from a coupling
of \(\psi_{j}\) to an external source field \(J_{j}(\vec{r})\).[21; 25; 4] This invariance of \(\Gamma\) reads
\[\sum_{jj^{\prime}}\int d^{4}r\frac{\delta\Gamma}{\delta\Psi_{j^{\prime}}(\vec{r} )}(\hat{\sigma}_{3})_{j^{\prime}j}\Psi_{j}(\vec{r})\!=\!0.\]
Differentiating the equality \(n\) times in terms of \(\Psi_{j_{i}}(\vec{r}_{j_{i}})\) (\(i\!=\!1,2,\cdots,n\)), switching off the source field, and adopting the \(\vec{k}\) representation, we obtain[21; 25; 38]
\[\sum_{i=1}^{n}\sum_{j^{\prime}_{i}}\Gamma^{(n)}_{j_{1}\cdots j_{ i}^{\prime}\cdots j_{n}}(\vec{k}_{1},\cdots,\vec{k}_{n})(\hat{\sigma}_{3})_{j^{ \prime}j_{i}}\] \[+\sum_{j^{\prime}j}\Gamma^{(n+1)}_{j_{1}\cdots j_{n}j^{\prime}}( \vec{k}_{1},\cdots,\vec{k}_{n},\vec{0})(\hat{\sigma}_{3})_{j^{\prime}j}\Psi_ {j}=0, \tag{12}\]
where \(\Gamma^{(n)}_{j_{1}\cdots j_{n}}(\vec{k}_{1},\cdots,\vec{k}_{n})\propto\delta _{\vec{k}_{1}+\cdots+\vec{k}_{n}\vec{0}}\) is the \(n\)-point vertex defined by the Fourier transform of \(\delta^{n}\Gamma/\delta\Psi_{j_{1}}(\vec{r}_{1})\cdots\delta\Psi_{j_{n}}(\vec {r}_{n})\). The \(n\!=\!1\) case of Eq. (12) is the identity known standardly as Goldstone's theorem[4] and also equivalent to the Hugenholtz-Pines relation.[13] The latter fact can be seen by substituting the stationarity condition \(\Gamma^{(1)}_{j}(\vec{0})=0\) of the grand potential and also the connection \(\hat{\Gamma}^{(2)}(-\vec{k},\vec{k})=-\hat{G}^{-1}(\vec{k})\) between \(\hat{\Gamma}^{(2)}\equiv(\Gamma^{(2)}_{jj^{\prime}})\) and Eq. (8).[4; 21] On the other hand, the case of \(n=2\) in Eq. (12) yields the vanishing of \(\Delta(\vec{0})\). To see this, it is convenient to transform Eq. (12) into the \((\phi_{l},\phi_{t})\) representation through \(\hat{\sigma}_{3}\rightarrow\hat{\sigma}_{2}\) and \(\Psi_{j}\rightarrow\delta_{jl}\sqrt{2}\Psi\), where the Hugenholtz-Pines relation is given by
\[\Gamma^{(2)}_{tt}(\vec{0},\vec{0})=0. \tag{13}\]
The choice \((j_{1},j_{2})=(l,t)\) then yields
\[\Gamma^{(2)}_{ll}(-\vec{k},\vec{k})-\Gamma^{(2)}_{tt}(-\vec{k},\vec{k})= \sqrt{2}\Gamma^{(3)}_{tt}(-\vec{k},\vec{k},\vec{0})\Psi\]
so that
\[\Gamma^{(2)}_{ll}(\vec{0},\vec{0})=\sqrt{2}\Gamma^{(3)}_{ltt}(\vec{0},\vec{0},\vec{0})\Psi \tag{14}\]
holds according to Eq. (13). On the other hand, the self-energies can be expressed graphically as Fig. 1.[39] The graph (b) contains the transverse contribution \(c_{1}\chi_{tt}(\vec{0})\Gamma^{(3)}_{ltt}(\vec{0},\vec{0},\vec{0})\) to \(\Gamma^{(2)}_{ll}(\vec{0},\vec{0})\) given in terms of some constant \(c_{1}\), whereas the other processes yield a finite value \(c_{0}\). Hence, \(\Delta(\vec{0})=\Gamma^{(2)}_{ll}(\vec{0},\vec{0})/2\) can be written as
\[\Delta(\vec{0})=\frac{c_{0}/2}{1-c_{1}\chi_{tt}(\vec{0})/\sqrt{2}\Psi}, \tag{15}\]
where \(\chi_{tt}(\vec{0})\) diverges for \(d<4\) at finite temperatures as seen by inspecting Eq. (4) with \(G_{tt}(k,0)\propto k^{-2}\) for \(k\to 0\). Thus, we conclude that \(\Delta(\vec{0})=0\) holds for \(d<4\) at \(T>0\).[38] This argument is the finite-temperature extension of the one given by Nepomnyashchii and Nepomnyashchii[30] that found \(\Delta(\vec{0})=0\) for \(d\leq 3\) at \(T=0\).
### Numerical study
The qualitative argument given above clarifies the crucial importance of the identities of Eq. (12) for studying the excitations of BECs. We now perform a more detailed calculation of its spectral function by adopting a contact potential \(U(\mathbf{r})=U\delta(\mathbf{r})\). The formalism developed previously [39] so as to satisfy Eq. (12) yields the following expressions for the self-energies in the weak-coupling regime
\[\Sigma(\vec{k})-\mu=\Delta(\vec{k})=\Psi^{2}\tilde{U}(\vec{k}) \tag{16}\]
given in terms of the effective potential
\[\tilde{U}(\vec{k})=\frac{U}{1-U[2\chi_{GG}(\vec{k})\!+\!2\chi_{FF}(\vec{k})\!- \!\chi_{GF}(\vec{k})\!-\!\chi_{FG}(\vec{k})]}, \tag{17}\]
where \(\chi_{GF}(\vec{k})\), for example, is obtained from Eq. (4) by the replacement \((G_{t},G_{t})\rightarrow(G,F)\equiv(G_{12},-G_{11})\) given in terms of the upper elements of Eq. (8). Equation (17) is compatible with all the qualitative results mentioned in the previous paragraph, as seen by noting \(G(k,0)\approx F(k,0)\approx\frac{1}{2}G_{tt}(k,0)\) for \(k\to 0\). The Bogoliubov theory is reproduced from Eq. (16) by omitting the correlation effects as \(\tilde{U}(\vec{k})\to U\). Indeed, substituting the corresponding self-energy into Eq. (8), we obtain the Bogoliubov spectrum
\[E_{\rm B}(k)=k\sqrt{k^{2}+2\Delta_{\rm B}},\hskip 28.452756pt\Delta_{\rm B} \equiv\Psi^{2}U, \tag{18}\]
which has a linear dispersion relation for \(k\xi\!\lesssim\!0.5\) where \(\xi\equiv\Delta_{\rm B}^{-\frac{1}{2}}\). Below we present results of the first iteration based on Eqs. (8), (16) and (17) starting from the Bogoliubov theory, in which \(\chi_{GG}(\vec{k})\) etc. are estimated by the one-loop approximation given by Eq. (4). We have adopted a weak-coupling interaction of \(\bar{n}U/T_{0}=0.1\) (\(\bar{n}\): particle density), where we can also approximate \(\Psi^{2}\approx[1-(T/T_{0})^{3/2}]\bar{n}\) using the non-interacting condensation temperature \(T_{0}\).
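As a quick numerical check of Eq. (18) alone (not of the full first-iteration scheme described above), the sketch below evaluates the Bogoliubov spectrum in the units \(m=1/2\) used here and locates the regime where it stays close to the linear phonon form; the value of \(\Delta_{\rm B}\) and the grid are arbitrary illustrative choices.

```python
import numpy as np

# Units of m = 1/2; healing length xi = Delta_B**(-1/2) as in the text.
Delta_B = 1.0                      # Psi^2 U, illustrative value
xi = Delta_B ** -0.5
c_s = np.sqrt(2.0 * Delta_B)       # slope of the long-wavelength linear part

k = np.linspace(1e-3, 2.0, 400) / xi
E_B = k * np.sqrt(k**2 + 2.0 * Delta_B)   # Eq. (18)
E_lin = c_s * k                           # linear (phonon) approximation

rel_dev = (E_B - E_lin) / E_B
# The deviation from linearity stays below roughly 6% for k*xi <= 0.5,
# consistent with the linear regime quoted after Eq. (18).
print(rel_dev[k * xi <= 0.5].max())
```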
Figure 2 shows the normalized effective potential \(\tilde{U}(q,\omega)/U\) calculated at \(T/T_{0}=0.3\). In Fig. 2(a), a vertical fault with a deep steep valley is seen to develop in the plateau of \({\rm Re}\,\tilde{U}\) along the Bogoliubov dispersion \(\omega\!=\!E_{\rm B}(q)\). The imaginary part \({\rm Im}\,\tilde{U}\) of Fig. 2(b), on the other hand, is characterized by sharp mountainous peaks
Figure 1: (Color online) Self-energy diagrams. Small and medium circles represent symmetrized bare and renormalized vertices, respectively.
along \(\omega\!=\!E_{\rm B}(q)\) that rise and broaden increasingly towards the origin. A closer look at the region of \(\omega\!=\!E_{\rm B}(q)\) reveals that both \({\rm Re}\,\tilde{U}\) and \({\rm Im}\,\tilde{U}\) vanish along the Bogoliubov dispersion due to the divergence of the denominator in Eq. (17), as seen in Fig. 2(c) and (d) calculated for \(q\xi=0.005\). The approaches to \(0\) are logarithmic and non-analytic in three dimensions. Thus, the first iteration predicts that the vanishing of the longitudinal mass \(\Delta(0,0)\!=\!0\) extends to the finite \((q,\omega)\) region along the Bogoliubov dispersion curve. Figure 3 shows the corresponding spectral function \(\rho(k,\varepsilon)\equiv-2{\rm Im}G^{\rm R}(k,\varepsilon)\). The spectral weight vanishes along the Bogoliubov dispersion to produce a double-peak structure for each \(k\) as a function of \(\varepsilon\). Moreover, the higher peak shifts towards zero frequency as \(k\to 0\) due to \(\tilde{U}(0,0)=0\). The fault in Fig. 2(a) decreases in magnitude as \(T\to 0\) to disappear completely at zero temperature, but \(\tilde{U}(0,0)\) still vanishes logarithmically (i.e., non-analytically) even in the limit.[30]
The above results of the first iteration elucidate essential modifications to the mean-field dispersion relation by the transverse correlations. Most notably, the spectral weight has disappeared completely along the mean-field dispersion curve with the \(\delta\)-function spectrum.
Figure 2: (Color online) Normalized effective potential \(\tilde{U}(q,\omega)/U\) at \(T/T_{0}=0.3\). Three-dimensional plots of (a) the real part and (b) the imaginary part. Detailed plots of (c) the real part and (d) the imaginary part as functions of \(\omega/E_{\rm B}(q)\) at \(q\xi=0.005\).
They indicate the presence of an intrinsic lifetime in the excitations and a qualitative change in the dispersion relation from the mean-field analytic one at long wavelengths due to the vanishing of the longitudinal mass, as seen by setting \(\Delta_{\rm B}\to 0\) for \(k\to 0\) in Eq. (18). Note that the longitudinal mass vanishes logarithmically even at zero temperature, so that the latter statement holds true even at zero temperature. An intrinsic lifetime in the excitations may also persist down to \(T\!=\!0\), as is the case for some magnon systems.[40]
Experiments on the Bogoliubov excitations of Eq. (18) have been performed for trapped Bose-Einstein condensates.[31; 41; 42] The particle-hole mixing predicted by the Bogoliubov theory has been observed clearly at \(k\xi=0.38\),[31; 41] and a reasonable agreement between the Bogoliubov theory and measured excitation energies has been reported for \(k\xi\gtrsim 0.5\).[31; 42] However, the observed excitation spectra are accompanied by substantial broadenings of the order of the excitation energies themselves,[31; 41] which may be caused partly by the inhomogeneity of the trap potential. Hence, they cannot tell whether the dispersion relation is really a sharp, distinct one or is accompanied by some essential broadening and deviation from Eq. (18), especially at the low excitation energies of \(k\xi\lesssim 0.1\). It remains a task, both theoretical and experimental, to develop methods that detect the excitations at high resolution and thereby clarify the excitation spectrum definitively.
## IV Concluding remarks
We have studied how the long-range transverse correlations affect the dispersion relation of Nambu-Goldstone bosons at finite temperatures by incorporating the correlations perturbatively at the one-loop level. It is found that the divergence of the longitudinal susceptibility (i.e., vanishing of the longitudinal _mass_), which is characteristic of systems with broken continuous symmetries, extends to the finite energy-momentum region as vanishing of the spectral weight right on the assumed dispersion curve below three dimensions at finite temperatures.
We finally point out the necessity of distinguishing the various correlation functions that have been the focus of the respective studies of Goldstone's theorem. Specifically, its first proof,[3; 4] given around Eq. (12), is relevant to the two-point function of the field \(\psi_{j}(\vec{r})\), whereas the second proof[3; 4] concerns the correlations of \(\psi_{j}(\vec{r})\) with the field \(j_{\alpha}^{0}(\vec{r})\) that constitutes the conserved charge \(Q_{\alpha}\equiv\int j_{\alpha}^{0}(\vec{r})\,d^{3}r\). Finally, the dispersion relations of the NG bosons have been discussed in terms of the two-point functions of \(j_{\alpha}^{0}(\vec{r})\).[15; 16; 17; 18; 19] The three kinds of correlation functions are generally different and can have different poles. This can be seen clearly for the single-component BECs, where \(j_{\alpha}^{0}(\vec{r})\) is the density operator \(n(\vec{r})=\psi_{2}(\vec{r})\psi_{1}(\vec{r})\). The fact that its four-point vertices have different \(q\)- and \(\omega\)-limits,[39] contrary to the assumption made by Gavoret and Nozieres,[14] implies that the system can embrace collective modes in the correlation functions of \((n,\psi)\) and \((n,n)\) that are distinct from the single-particle excitations in \((\psi,\psi)\), as in the case of Fermi liquids and superfluid Fermi liquids.[36; 43] The topic here is relevant to the poles of \((\psi,\psi)\) concerning the first proof.
## Acknowledgment
This work was supported by JSPS KAKENHI Grant Number JP20K03848.
|
2301.04958 | Multifractal analysis of measures arising from random substitutions | We study regularity properties of frequency measures arising from random
substitutions, which are a generalisation of (deterministic) substitutions
where the substituted image of each letter is chosen independently from a fixed
finite set. In particular, for a natural class of such measures, we derive a
closed-form analytic formula for the $L^q$-spectrum and prove that the
multifractal formalism holds. This provides an interesting new class of
measures satisfying the multifractal formalism. More generally, we establish
results concerning the $L^q$-spectrum of a broad class of frequency measures.
We introduce a new notion called the inflation word $L^q$-spectrum of a random
substitution and show that this coincides with the $L^q$-spectrum of the
corresponding frequency measure for all $q \geq 0$. As an application, we
obtain closed-form formulas under separation conditions and recover known
results for topological and measure theoretic entropy. | Andrew Mitchell, Alex Rutar | 2023-01-12T12:06:40Z | http://arxiv.org/abs/2301.04958v2 | # Multifractal analysis of measures arising from random substitutions
###### Abstract.
We study regularity properties of frequency measures arising from random substitutions, which are a generalisation of (deterministic) substitutions where the substituted image of each letter is chosen independently from a fixed finite set. In particular, for a natural class of such measures, we derive a closed-form analytic formula for the \(L^{q}\)-spectrum and prove that the multifractal formalism holds. This provides an interesting new class of measures satisfying the multifractal formalism. More generally, we establish results concerning the \(L^{q}\)-spectrum of a broad class of frequency measures. We introduce a new notion called the _inflation word \(L^{q}\)-spectrum_ of a random substitution and show that this coincides with the \(L^{q}\)-spectrum of the corresponding frequency measure for all \(q\geq 0\). As an application, we obtain closed-form formulas under separation conditions and recover known results for topological and measure theoretic entropy.
Key words and phrases: random substitution, multifractal analysis, \(L^{q}\)-spectrum, multifractal formalism. 2020 Mathematics Subject Classification: 37B10, 37C45, 52C23.
###### Contents
* 1 Introduction
* 1.1 Entropy and \(L^{q}\)-spectra
* 1.2 Random substitutions
* 1.3 Statement and discussion of main results
* 1.4 Discussion and further work
* 2 Preliminaries
* 2.1 Symbolic notation
* 2.2 Dynamics, entropy and dimension
* 2.3 \(L^{q}\)-spectra and smoothness
* 2.4 Multifractal spectrum and multifractal formalism
* 2.5 Random substitutions and frequency measures
* 2.6 Primitive random substitutions
* 2.7 Compatible random substitutions
* 2.8 Frequency measures
* 2.9 Separation conditions and recognisability
* 3 \(L^{q}\)-spectra of frequency measures
* 3.1 Inflation word \(L^{q}\)-spectra
* 3.2 \(L^{q}\)-spectra for non-negative \(q\)
* 3.3 \(L^{q}\)-spectra for negative \(q\): lower bounds
* 3.4 Proof of general bounds for the \(L^{q}\) spectrum
* 3.5 \(L^{q}\)-spectra for negative \(q\) under recognisability
* 3.6 Recovering entropy from the \(L^{q}\)-spectrum
* 4 Recognisability and the multifractal formalism
* 4.1 Non-typical local dimensions
* 4.2 Proof of the multifractal formalism
* 5 Examples, counterexamples and applications
* 5.1 Failure of bounds for \(q<0\) without recognisability
* 5.2 Examples with recognisability
* 5.3 Examples without recognisability
## 1. Introduction
A _substitution_ is a combinatorial object consisting of a finite alphabet \(\mathcal{A}\) along with a set of _transformation rules_. The theory of substitutions, along with statistical properties of the system under repeated iteration, is a large and actively researched field at the interface of combinatorics and symbolic dynamics. A thorough introduction to the statistical properties and dynamics of substitutions can be found in [2, 25]. Associated with a (deterministic) substitution is a _frequency measure_, which encodes the frequency of subwords under repeated iteration. Notably, the corresponding subshift supporting this measure has zero topological entropy, and the frequency measure is the unique ergodic measure supported on this subshift.
Random substitutions are a generalisation of (deterministic) substitutions [13] in which the image of each letter is chosen randomly and independently from a finite set of possibilities. Similarly to the deterministic case, subshifts associated with random substitutions support ergodic frequency measures which capture the expected rate of occurrence of subwords under random iteration. But in contrast to the deterministic case, the corresponding subshift typically has positive topological entropy and supports uncountably many ergodic measures. Random substitutions include examples exhibiting deterministic behaviour, as well as examples whose subshifts are subshifts of finite type. Moreover, there is a large amount of intermediate behaviour: subshifts of random substitutions can simultaneously exhibit long range correlation [3] (an indication of order) and positive topological entropy [14] (an indication of disorder).
In this paper, we study the fine scaling properties of frequency measures associated with random substitutions from the perspective of _multifractal analysis_. This perspective is relevant in a wide variety of contexts, such as the geometry of fractal sets and measures and in dynamical systems, with typical applications to geometric
measure theory and number theory. In our setting, our primary objects of study are the \(L^{q}\)-_spectrum_, which is a parametrised family of quantities which capture the inhomogeneous scaling properties of a measure, and the _local dimension_, which capture the exponential growth rate of a measure around a point. The \(L^{q}\)-spectrum and local dimensions are related through a heuristic relationship known as the _multifractal formalism_[18]. It is an important and well-studied question to determine settings in which the multifractal formalism holds, and to determine qualitative conditions describing its failure.
Much of the work on multifractal analysis has been done in the setting of local dimensions of self-similar measures (for some examples, see [1, 10, 11, 19, 29]) and Birkhoff sums of potentials in dynamical systems with a finite type property (see, for example, [8, 24] and the reference therein). As a notable recent example, in P. Shmerkin's recent proof of the Furstenberg \(\times 2\times 3\) conjecture [29], he computes the \(L^{q}\)-spectrum of a large class of dynamically self-similar measures and relates such results to the multifractal analysis of slices of sets. This information about \(L^{q}\)-spectra also implies \(L^{p}\)-smoothness properties in the question of absolute continuity of Bernoulli convolutions (see [30] for some background on this classic problem). For more detail on the geometry of measures and multifractal analysis, we refer the reader to the foundational work by L. Olsen [22] and the classic texts of K. Falconer [7] and Ya. Pesin [23].
Returning to our setting, substitution dynamical systems have characteristic features of (dynamical) self-similarity, but in many cases are far from being ergodic measures on shifts of finite type. More generally, frequency measures provide a rich family of shift-invariant ergodic measures which exhibit interesting and unique properties in symbolic dynamics in a natural way. For example, it was proved in [15, Theorem 4.8] that for a certain class of random substitutions, the corresponding subshift supports a frequency measure that is the unique ergodic measure of maximal entropy. However, this measure is not a Gibbs measure with respect to the zero potential, and therefore the system does not satisfy the common _specification property_, which is a well-known strategy for proving intrinsic ergodicity of symbolic dynamical systems (see [5] and the references therein). Moreover, there are examples of random substitutions such that the corresponding subshift supports multiple ergodic measures of maximal entropy [17, Example 6.5]. More generally, many key properties of frequency measures associated with random substitutions are poorly understood.
In this paper, we derive symbolic expressions for the \(L^{q}\)-spectrum of frequency measures associated with random substitutions under certain weak assumptions. Then under an additional assumption (recognisability), we prove a closed-form analytic expression for the \(L^{q}\)-spectrum and a variational formula which together imply the multifractal formalism. We emphasise that this class of examples has novel properties not witnessed before: in general, the unique frequency measure of maximal dimension is not a Gibbs measure with respect to the zero potential and the
corresponding subshift is not sofic. The techniques and results provide important new perspectives on the geometry and dynamics of the respective measures.
### Entropy and \(L^{q}\)-spectra
For a Borel probability measure in a compact metric space, the _\(L^{q}\)-spectrum_ is a well-studied quantity which encodes the scaling properties of the measure, in a weak sub-exponential sense. Specifically, the \(L^{q}\)-spectrum of \(\mu\) is given by
\[\tau_{\mu}(q)=\liminf_{r\to 0}\frac{\log\sup\sum_{i}\mu\big{(}B(x_{i},r) \big{)}^{q}}{\log r}.\]
where the supremum is taken over \(2r\)-separated subsets \(\{x_{i}\}_{i}\) of the support of \(\mu\).
The \(L^{q}\)-spectrum encodes information about the local scaling of the measure \(\mu\). We define the _local dimension_ of \(\mu\) at \(x\) by
\[\dim_{\mathrm{loc}}(\mu,x)=\lim_{r\to 0}\frac{\log\mu\big{(}B(x,r)\big{)}}{\log r}\]
when the limit exists. We then define the _multifractal spectrum_ of \(\mu\) by
\[f_{\mu}(\alpha)=\dim_{\mathrm{H}}\left\{x\in X:\dim_{\mathrm{loc}}(\mu,x)= \alpha\right\}.\]
In general, the structure of the set of local dimensions can be very complex--for example, the level sets are often dense uncountable subsets of the support of \(\mu\). However, the "multifractal miracle" is the phenomenon that, even though the level sets are very complex, the multifractal spectrum is often a concave analytic function of \(\alpha\).
In fact, the multifractal spectrum and the \(L^{q}\)-spectrum are related through a heuristic relationship called the _multifractal formalism_[18], which speculates that under certain regularity assumptions, the multifractal spectrum is given by the _concave conjugate_ of the \(L^{q}\)-spectrum, that is the quantity
\[\tau_{\mu}^{*}(\alpha)=\inf_{q\in\mathbb{R}}(q\alpha-\tau_{\mu}(q)).\]
Generally speaking, \(\tau_{\mu}^{*}(\alpha)\geq f_{\mu}(\alpha)\)[19]: in particular, the slopes of the asymptotes of the \(L^{q}\)-spectrum bound the exponential scaling of measures of balls \(B(x,r)\) uniformly for all \(x\in\operatorname{supp}\mu\).
In our specific setting, where our metric space is the two-sided shift \(\mathcal{A}^{\mathbb{Z}}\) and the measure \(\mu\) is ergodic, the local dimension is precisely the scaling rate of the _information function_ of \(\mu\). In fact, the Shannon-McMillan-Breiman theorem states that the local dimension of the measure (with an appropriate choice of the metric) is almost surely the entropy of the measure. Thus the \(L^{q}\)-spectrum provides _uniform_ control over the scaling rate of the information function. More detail about these notions is given in Section 2.
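Once \(\tau_{\mu}\) is known (or estimated) on a grid of \(q\) values, the concave conjugate appearing in the multifractal formalism can be approximated numerically. The sketch below is a generic finite-grid Legendre-type transform, not tied to any particular frequency measure; the test function in the example is the standard closed form for a Bernoulli(\(p\)) measure on the one-sided binary shift with the metric \(2^{-n}\), included purely for illustration.

```python
import numpy as np

def concave_conjugate(tau, alphas, q_grid=np.linspace(-20.0, 20.0, 4001)):
    """Approximate tau*(alpha) = inf_q (q*alpha - tau(q)) on a finite q-grid.

    tau    : callable accepting a numpy array of q values
    alphas : array of alpha values at which to evaluate the conjugate
    """
    t = tau(q_grid)
    return np.array([np.min(q_grid * a - t) for a in alphas])

# Test function: L^q-spectrum of a Bernoulli(p) measure on the binary full shift.
p = 0.3
tau = lambda q: -np.log2(p ** q + (1.0 - p) ** q)
alphas = np.linspace(-np.log2(max(p, 1 - p)), -np.log2(min(p, 1 - p)), 50)
print(np.round(concave_conjugate(tau, alphas), 3))
```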
### Random substitutions
A (deterministic) substitution is a rule which replaces each symbol in a finite or infinite string over an alphabet \(\mathcal{A}\) with a finite word over the same alphabet. Random substitutions generalise this notion by substituting a randomly chosen word (according to a fixed finite distribution) independently for each letter. We can also think of a random substitution as a (deterministic) set-valued substitution \(\vartheta\), together with a choice of probabilities.
For example, given \(p\in(0,1)\), the _random Fibonacci substitution_\(\vartheta_{p}\) is defined by
\[\vartheta_{p}\colon\begin{cases}a\mapsto\begin{cases}ab&\text{with probability }p,\\ ba&\text{with probability }1-p,\end{cases}\\ b\mapsto a.\end{cases}\]
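A few lines of code make the action of \(\vartheta_{p}\) concrete: each occurrence of \(a\) is replaced independently by \(ab\) or \(ba\), and each \(b\) by \(a\). This is only an illustrative sketch (the function name and the choice \(p=0.5\) are ours), but it produces realisations of the inflation words used throughout.

```python
import random

def random_fibonacci(word, p, rng=random):
    """One application of the random Fibonacci substitution:
    a -> ab (probability p) or ba (probability 1-p), chosen independently
    for each occurrence of a; b -> a deterministically."""
    out = []
    for letter in word:
        if letter == "a":
            out.append("ab" if rng.random() < p else "ba")
        else:  # letter == "b"
            out.append("a")
    return "".join(out)

random.seed(0)
w = "a"
for _ in range(5):
    w = random_fibonacci(w, p=0.5)
print(w)  # one realisation of the level-5 inflation word of a (length 13)
```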
To a given (primitive) random substitution \(\vartheta_{\boldsymbol{P}}\), one can canonically associate a subshift \(X_{\vartheta}\) of the two-sided shift \(\mathcal{A}^{\mathbb{Z}}\), along with an ergodic _frequency measure_\(\mu_{\boldsymbol{P}}\), which quantifies the relative occurrence of a given word under repeated application of the random substitution.
As highlighted in the introduction, primitive random substitutions give rise to subshifts and measures with a wide variety of properties. As a result, we will impose additional conditions.
Our main assumption, which we call _compatibility_, asserts that for each \(a\in\mathcal{A}\), the number of occurrences of each \(b\in\mathcal{A}\) is identical in every possible substituted image of \(a\). For example, the random Fibonacci substitution is compatible since in all the possible images of \(a\), \(a\) occurs once and \(b\) occurs once. The key feature of compatibility is that the one can define a deterministic _substitution matrix_, such that the Perron-Frobenius eigenvalue is the asymptotic growth rate of lengths of words, and the corresponding right eigenvector encodes the asymptotic frequency with which the individual letters appear. Compatibility is a common assumption: for example, it is assumed in the main results of [3, 14, 21, 27].
Another standard assumption is that of _recognisability_, which heuristically states that each element of the subshift is the _unique_ image of another element of the subshift. Recognisability precludes the existence of periodic points [27] and is one of the assumptions required to establish intrinsic ergodicity in [15]. It is also assumed in the main results of [12, 21].
### Statement and discussion of main results
We now give concise statements of the main results in this paper. We refer the reader to Section 2 for full statements of the notation and definitions used in this section.
Fix a random substitution \(\vartheta_{\boldsymbol{P}}\) and let \(\lambda\) and \(\boldsymbol{R}\) denote the Perron-Frobenius eigenvalue and corresponding right eigenvector of the substitution matrix of \(\vartheta_{\boldsymbol{P}}\), respectively. Given \(q\in\mathbb{R}\) and \(k\in\mathbb{N}\), define
\[\varphi_{\vartheta_{\boldsymbol{P}},k}(q)=\varphi_{k}(q)=-\sum_{a\in\mathcal{ A}}R_{a}\log\left(\sum_{s\in\vartheta^{k}(a)}\mathbb{P}[\vartheta^{k}_{ \boldsymbol{P}}(a)=s]^{q}\right).\]
We define the _inflation word_\(L^{q}\)-_spectrum of \(\vartheta_{\boldsymbol{P}}\)_ by
\[T_{\vartheta,\boldsymbol{P}}(q)=\liminf_{k\to\infty}\frac{1}{\lambda^{k}} \varphi_{k}(q).\]
We similarly define the upper variant \(\overline{T}_{\vartheta,\boldsymbol{P}}\) by taking a limit superior in place of the limit inferior. Throughout, \(\mu_{\boldsymbol{P}}\) will denote the frequency measure associated with \(\vartheta_{\boldsymbol{P}}\).
Heuristically, the inflation word spectrum approximates the frequency measure \(\mu_{\boldsymbol{P}}\) by the probability distribution on the iterated system. The normalisation factors follow from the observation that the words in the \(k^{\text{th}}\) iteration have length approximately \(\lambda^{k}\) when normalised by the left Perron-Frobenius eigenvector.
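Because each \(\varphi_{k}(q)\) is a finite sum, the quantities \(\lambda^{-k}\varphi_{k}(q)\) can be computed exactly for small \(k\) by enumerating the distribution of \(\vartheta_{\boldsymbol{P}}^{k}(a)\). The sketch below does this for the random Fibonacci substitution; the normalisation of \(\boldsymbol{R}\) to unit sum is our reading of the convention, and the brute-force enumeration is exponential in \(k\), so it is meant only as an illustration.

```python
from itertools import product
import numpy as np

p = 0.4                                      # probability of a -> ab
images = {"a": [("ab", p), ("ba", 1.0 - p)], "b": [("a", 1.0)]}

def iterate_distribution(dist):
    """Apply the random substitution once to a distribution over words,
    aggregating realisations that produce the same word."""
    new = {}
    for word, prob in dist.items():
        for combo in product(*(images[letter] for letter in word)):
            s = "".join(w for w, _ in combo)
            pr = prob * float(np.prod([weight for _, weight in combo]))
            new[s] = new.get(s, 0.0) + pr
    return new

# Perron-Frobenius data of the (Fibonacci) substitution matrix, with R
# normalised to sum to one (assumed convention): R_a = 1/lambda, R_b = 1/lambda^2.
lam = (1.0 + np.sqrt(5.0)) / 2.0
R = {"a": 1.0 / lam, "b": 1.0 / lam ** 2}

def phi_over_lambda_k(q, k):
    total = 0.0
    for a in "ab":
        dist = {a: 1.0}
        for _ in range(k):
            dist = iterate_distribution(dist)
        total += R[a] * np.log(sum(pr ** q for pr in dist.values()))
    return -total / lam ** k

# By Theorem A, for q >= 1 these values increase monotonically towards tau_{mu_P}(q).
print([round(phi_over_lambda_k(2.0, k), 4) for k in (1, 2, 3, 4)])
```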
Our main general result bounding the \(L^{q}\)-spectrum is the following, which states for \(q\geq 0\) that \(T_{\vartheta,\boldsymbol{P}}\) and \(\tau_{\mu_{\boldsymbol{P}}}\) coincide, and moreover provides bounds on \(\tau_{\mu_{\boldsymbol{P}}}\) in terms of the functions \(\varphi_{k}\) for all \(q\in\mathbb{R}\).
**Theorem A**.: _Let \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) be a primitive and compatible random substitution with corresponding frequency measure \(\mu_{\boldsymbol{P}}\). Then the limits defining \(\tau_{\mu_{\boldsymbol{P}}}(q)\) and \(T_{\vartheta,\boldsymbol{P}}(q)\) exist and coincide for all \(q\geq 0\). Moreover,_
1. _For all_ \(0\leq q\leq 1\)_,_ (1.1) \[\frac{1}{\lambda^{k}-1}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q)\leq \frac{1}{\lambda^{k}}\varphi_{k}(q)\] _and_ \((\lambda^{-k}\varphi_{k}(q))_{k=1}^{\infty}\) _converges monotonically to_ \(\tau_{\mu_{\boldsymbol{P}}}(q)\) _from above._
2. _For all_ \(q\geq 1\)_,_ (1.2) \[\frac{1}{\lambda^{k}}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q)\leq \frac{1}{\lambda^{k}-1}\varphi_{k}(q)\] _and_ \((\lambda^{-k}\varphi_{k}(q))_{k=1}^{\infty}\) _converges monotonically to_ \(\tau_{\mu_{\boldsymbol{P}}}(q)\) _from below._
3. _For all_ \(q<0\)_,_ (1.3) \[\frac{1}{\lambda^{k}-1}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q).\]
The notion of compatibility is defined in Section 2.7, which is key in order to obtain the uniform estimate given in Lemma 2.9. For \(q<0\), it is _not_ true in general that \(\tau_{\mu_{\boldsymbol{P}}}(q)\) and \(T_{\vartheta,\boldsymbol{P}}(q)\) coincide (a counterexample is given in Example 5.2): the problem is essentially "non-uniqueness of cutting points", as highlighted by the averaging procedure in the construction of the measure (see Lemma 2.12). In other words, the corresponding upper bound in (1.3) does not hold in general. Despite this, it still follows from (1.3) that \(T_{\vartheta,\boldsymbol{P}}(q)\) provides a lower bound for \(\tau_{\mu_{\boldsymbol{P}}}(q)\).
In Proposition 3.1, we prove that if \(\vartheta_{\mathbf{P}}\) also satisfies the _disjoint set condition_, or the _identical set condition with identical production probabilities_ (see Definition 2.13), then a closed-form expression can be obtained for the \(L^{q}\)-spectrum of the corresponding frequency measure. By combining this result with Theorem A, we obtain the following corollary.
**Corollary B**.: _Let \(\vartheta_{\mathbf{P}}\) be a primitive and compatible random substitution with corresponding frequency measure \(\mu_{\mathbf{P}}\). Then for all \(q\geq 0\):_
1. _If_ \(\vartheta_{\mathbf{P}}\) _satisfies the disjoint set condition, then_ \[\tau_{\mu_{\mathbf{P}}}(q)=\frac{1}{\lambda-1}\varphi_{1}(q).\]
2. _If_ \(\vartheta_{\mathbf{P}}\) _satisfies the identical set condition and has identical production probabilities, then_ \[\tau_{\mu_{\mathbf{P}}}(q)=\frac{1}{\lambda}\varphi_{1}(q).\]
In particular, under the disjoint set condition or identical set condition with identical production probabilities, the \(L^{q}\)-spectrum is analytic on \((0,\infty)\).
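For a concrete illustration of how Corollary B is used, consider a hypothetical primitive and compatible substitution on \(\mathcal{A}=\{a,b\}\) which satisfies the disjoint set condition and in which \(a\) has exactly two distinct realisations, chosen with probabilities \(p\) and \(1-p\), while \(b\) has a single realisation. The letter \(b\) then contributes \(\log(1^{q})=0\) to \(\varphi_{1}\), so for all \(q\geq 0\),
\[\tau_{\mu_{\mathbf{P}}}(q)=\frac{1}{\lambda-1}\varphi_{1}(q)=-\frac{R_{a}}{\lambda-1}\log\bigl(p^{q}+(1-p)^{q}\bigr).\]
Whether a particular example (such as the random Fibonacci substitution above) satisfies the disjoint set condition has to be verified separately against Definition 2.13, so the formula above should be read as a template calculation rather than a claim about any specific example.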
In [15], a result analogous to Theorem A is obtained for entropy in terms of the _inflation word entropy_, without the compatibility assumption. In fact, to highlight the generality of our results on \(L^{q}\)-spectra, as a consequence of Theorem A, we obtain new proofs of this result (as well as a result on topological entropy).
1. We obtain the main result of [14] on topological entropy, which states that for subshifts of primitive and compatible random substitutions, the topological entropy can be characterised in terms of the asymptotic growth rate of inflation words.
2. For frequency measures corresponding to primitive and compatible random substitutions, we also recover the characterisation of (measure-theoretic) entropy obtained in [15, Theorem 3.3], albeit under the additional hypothesis that the substitution is compatible.
This is described in the following corollary.
**Corollary C**.: _Let \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) be a primitive and compatible random substitution with associated subshift \(X_{\vartheta}\) and frequency measure \(\mu_{\mathbf{P}}\)._
1. _The limit_ \[\lim_{k\to\infty}\frac{1}{\lambda^{k}}\sum_{a\in\mathcal{A}}R_{a}\log(\# \vartheta^{k}(a))\] _exists and is equal to_ \(h_{\mathrm{top}}(X_{\vartheta})\)_._
2. _The_ \(L^{q}\)_-spectrum of_ \(\mu_{\mathbf{P}}\) _is differentiable at_ \(1\)_. Moreover, the limit_ \[\lim_{k\to\infty}\frac{1}{\lambda^{k}}\sum_{a\in\mathcal{A}}R_{a}\sum_{v\in \vartheta^{k}(a)}-\mathbb{P}[\vartheta^{k}_{\mathbf{P}}(a)=v]\log(\mathbb{P}[ \vartheta^{k}_{\mathbf{P}}(a)=v])\] _exists and is equal to_ \(h_{\mu_{\mathbf{P}}}(X_{\vartheta})=\dim_{\mathrm{H}}\mu_{\mathbf{P}}=\tau^{\prime}_{ \mu_{\mathbf{P}}}(1)\)_._
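The limit in part (1) can be approximated directly. The sketch below (our own code, not part of the paper) enumerates the realisation sets \(\vartheta^{k}(a)\) of the random Fibonacci substitution exactly for small \(k\) and evaluates \(\lambda^{-k}\sum_{a\in\mathcal{A}}R_{a}\log(\#\vartheta^{k}(a))\); the values of \(\lambda\) and \(\mathbf{R}\) are those recorded for this substitution in Section 2.7, and the cut-off \(k\leq 6\) is an arbitrary choice to keep the enumeration small.

```python
# Illustrative sketch (ours): approximating the inflation-word characterisation
# of topological entropy for the random Fibonacci substitution.
from itertools import product
from math import log

theta = {"a": ["ab", "ba"], "b": ["a"]}      # random Fibonacci, set-valued part
tau = (1 + 5 ** 0.5) / 2                     # Perron-Frobenius eigenvalue
R = {"a": 1 / tau, "b": 1 / tau ** 2}        # normalised PF eigenvector (Section 2.7)

def inflate(words):
    """The set of realisations obtained by applying theta to each word once."""
    out = set()
    for w in words:
        for choice in product(*(theta[c] for c in w)):
            out.add("".join(choice))
    return out

level = {a: {a} for a in theta}              # theta^0(a) = {a}
for k in range(1, 7):
    level = {a: inflate(ws) for a, ws in level.items()}
    approx = sum(R[a] * log(len(level[a])) for a in theta) / tau ** k
    print(k, approx)                         # should approach h_top(X_theta)
```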
We now turn our attention to the multifractal spectrum. Firstly, while \(\tau_{\mu_{\mathbf{P}}}(q)\) and \(T_{\vartheta,\mathbf{P}}(q)\) do not coincide in general for \(q<0\), if the random substitution that gives rise to the frequency measure \(\mu_{\mathbf{P}}\) is additionally assumed to be recognisable (see Definition 2.14), then the limits defining \(\tau_{\mu_{\mathbf{P}}}(q)\) and \(T_{\vartheta,\mathbf{P}}(q)\) both exist and coincide for all \(q\in\mathbb{R}\). Moreover, under recognisability, we prove that the multifractal spectrum is the concave conjugate of the \(L^{q}\)-spectrum: in other words, the multifractal formalism holds for any associated frequency measure \(\mu_{\mathbf{P}}\). In particular, we conclude that \(f_{\mu_{\mathbf{P}}}\) is a concave analytic function.
In fact, in Proposition 4.5 we prove a stronger variational formula for the multifractal spectrum. For each \(\alpha\in\mathbb{R}\), we construct measures \(\nu\) such that \(\dim_{\mathrm{H}}\nu\geq\tau^{*}(\alpha)\) and \(\dim_{\mathrm{loc}}(\mu_{\mathbf{P}},x)=\alpha\) for \(\nu\)-a.e. \(x\in X_{\vartheta}\). In particular, we can take the measures to be frequency measures associated with permissible probabilities for the substitution \(\vartheta\).
**Theorem D**.: _Let \(\vartheta_{\mathbf{P}}\) be a primitive, compatible, and recognisable random substitution with corresponding frequency measure \(\mu_{\mathbf{P}}\). Then for all \(q\in\mathbb{R}\),_
\[\tau_{\mu_{\mathbf{P}}}(q)=T_{\vartheta,\mathbf{P}}(q)=\frac{1}{\lambda-1}\varphi_{1} (q).\]
_Moreover, \(f_{\mu_{\mathbf{P}}}(\alpha)=\tau^{*}_{\mu_{\mathbf{P}}}(\alpha)\) is an analytic and concave function. In fact, for each \(\alpha\in\mathbb{R}\) such that \(f_{\mu_{\mathbf{P}}}(\alpha)\geq 0\), there are permissible probabilities \(\mathbf{Q}\) such that \(f_{\mu_{\mathbf{P}}}(\alpha)=\dim_{\mathrm{H}}\mu_{\mathbf{Q}}\) and \(\dim_{\mathrm{loc}}(\mu_{\mathbf{P}},x)=\alpha\) for \(\mu_{\mathbf{Q}}\)-a.e. \(x\in X_{\vartheta}\)._
To conclude this section, we observe that our results on \(L^{q}\)-spectra also give uniform bounds on the exponential scaling rate of the frequency measures. The following result is a direct application of Theorem A and Theorem D, combined with Proposition 4.4.
**Corollary E**.: _Let \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) be a primitive, compatible, and recognisable random substitution. Then_
\[\alpha_{\min}\coloneqq\lim_{q\to\infty}\frac{\tau_{\mu_{\mathbf{P}}}(q)}{q}=-\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\log\left(\max_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]\right)\] \[\alpha_{\max}\coloneqq\lim_{q\to-\infty}\frac{\tau_{\mu_{\mathbf{P}}}(q)}{q}=-\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\log\left(\min_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]\right).\]
_and for all \(x\in X_{\vartheta}\), \(\alpha_{\min}\leq\underline{\dim}_{\mathrm{loc}}(\mu_{\mathbf{P}},x)\leq\overline {\dim}_{\mathrm{loc}}(\mu_{\mathbf{P}},x)\leq\alpha_{\max}\). Moreover,_
\[\{\dim_{\mathrm{loc}}(\mu_{\mathbf{P}},x):x\in X_{\vartheta}\}=[\alpha_{\min}, \alpha_{\max}].\]
In particular, when the probabilities \(\mathbf{P}\) are chosen so that for each \(a\in\mathcal{A}\), \(\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]=1/\#\vartheta(a)\) for all \(s\in\vartheta(a)\), then the \(L^{q}\)-spectrum is the line with slope \(h_{\mathrm{top}}(X_{\vartheta})\) passing through \((1,0)\). Thus the local dimension of \(\mu_{\mathbf{P}}\) exists at every \(x\in X_{\vartheta}\) and is given by the constant value \(\alpha_{\min}=\alpha_{\max}\). This can be rephrased in terms of a weak Gibbs-type property, which says that for every \(\epsilon>0\), all \(n\) sufficiently large (depending on \(\epsilon\)), and \(u\in\mathcal{L}^{n}\),
\[\exp(-|u|(h_{\mathrm{top}}(X_{\vartheta})+\epsilon))\leq\mu_{\mathbf{P}}([u])\leq \exp(-|u|(h_{\mathrm{top}}(X_{\vartheta})-\epsilon)); \tag{1.4}\]
see, for example, [29, Lemma 1.4] for the short argument. In general, the error term \(\epsilon\) cannot be removed, even at the cost of a constant factor. Under certain assumptions, one can show that there are infinitely many words with \(\mu_{\boldsymbol{P}}([u])\approx|u|^{-1}\exp(-|u|\,h_{\mathrm{top}}(X_{\vartheta}))\), as explained in [15, Lemma 4.12]. These assumptions are satisfied, for example, in Example 5.6.
Of course, similar one-sided results hold for \(q\geq 0\) under the assumption of compatibility alone, by iterating the formula for \(\varphi_{k}\) and taking an appropriate maximum at each level. In fact, since \(\tau_{\mu_{\boldsymbol{P}}}(q)\) is differentiable at \(1\), with derivative giving the entropy, and since \(h_{\mathrm{top}}(X_{\vartheta})=-\tau_{\mu_{\boldsymbol{P}}}(0)\), it follows that \(\mu_{\boldsymbol{P}}\) is a measure of maximal entropy if and only if \(\tau^{\prime}_{\mu_{\boldsymbol{P}}}(q)\) exists and is constant on the interval \((0,1)\).
### Discussion and further work
We conclude the introduction with a list of comments and potentially interesting questions.
1. What is the \(L^{q}\)-spectrum for a compatible substitution when \(q<0\)? We do not know this even for the random substitution given in Example 5.1, which satisfies the identical set condition with identical production probabilities. Obtaining results for \(q<0\) is substantially more challenging, since the sum in \(\tau_{\mu_{\boldsymbol{P}}}(q)\) depends on the measure of cylinders with very small (but non-zero) measure. For example, in the self-similar case, without the presence of strong separation assumptions, little is known (in contrast to the \(q\geq 0\) case).
2. What happens without compatibility? Do the formulas in Theorem A hold in general? In [15], it suffices to use an almost sure version of Lemma 2.9. However, since the \(L^{q}\)-spectrum is sensitive to scaling at individual points as \(q\) tends to \(\pm\infty\), such an almost sure result in our case is insufficient.
3. Outside the disjoint set condition and the identical set condition, what can be said about differentiability of the \(L^{q}\)-spectrum? For \(q\geq 0\), we give the \(L^{q}\)-spectrum as a uniform limit of analytic functions: however, aside from the exceptional point \(q=1\) where we can say more, this is not enough to give information about differentiability.
4. Can our results on \(L^{q}\)-spectra and multifractal spectra (which hold for recognisable substitutions) be relaxed to a weaker condition such as the disjoint set condition (see Definition 2.13)?
5. Can the error term in (1.4) be determined precisely, up to a constant? The approximate Gibbs-type bounds discussed following Corollary E are closely related to the bounds used in the proof of intrinsic ergodicity given in [15, Theorem 4.8]. It could be worth exploring the relationship between intrinsic ergodicity and Gibbs-type properties given by the \(L^{q}\)-spectrum.
## 2. Preliminaries
In this section we introduce the key notation and definitions that we will use throughout the paper. After introducing some basic notation, in Section 2.2 we introduce symbolic dynamics on the two-sided shift, as well as our notions of entropy
and dimension. In Section 2.3 we present the key definitions and basic results from multifractal analysis that we work with throughout, including the definitions of the \(L^{q}\)-spectrum and local dimensions of a measure. Then, in the following sections we provide an introduction to random substitutions and their associated dynamical systems. In Section 2.5 we give the definition of a random substitution via its action on words, and define the subshift associated to a random substitution. Then, in Section 2.6 and Section 2.7, we define what it means for a random substitution to be _primitive_ and _compatible_ and present the key properties of such random substitutions. In Section 2.8, we give the definition of the frequency measure associated to a random substitution and state a key result used in the proof of our main results which relates the measures of cylinder sets via the action of the random substitution. Finally, in Section 2.9, we define what it means for a substitution to satisfy the disjoint or identical set condition, and introduce recognisable random substitutions.
### Symbolic notation
Throughout, we use the following symbolic notation, which is essentially the same as the notation used in [2, 20].
For a set \(B\), we let \(\#B\) be the cardinality of \(B\) and let \(\mathcal{F}(B)\) be the set of non-empty finite subsets of \(B\).
We fix an _alphabet_\(\mathcal{A}=\{a_{1},\ldots,a_{d}\}\), for some \(d\in\mathbb{N}\), which is a finite set of _letters_\(a_{i}\), and equip it with the discrete topology. Then a _word_\(u\) with letters in \(\mathcal{A}\) is a finite concatenation of letters, namely \(u=a_{i_{1}}\cdots a_{i_{n}}\) for some \(n\in\mathbb{N}\). We write \(|u|=n\) for the length of the word \(u\), and for \(m\in\mathbb{N}\), we let \(\mathcal{A}^{m}\) denote the set of all words of length \(m\) with letters in \(\mathcal{A}\).
We set \(\mathcal{A}^{+}=\bigcup_{m\in\mathbb{N}}\mathcal{A}^{m}\) and let
\[\mathcal{A}^{\mathbb{Z}}=\{(a_{i_{n}})_{n\in\mathbb{Z}}:a_{i_{n}}\in\mathcal{ A}\text{ for all }n\in\mathbb{Z}\}\]
denote the set of all bi-infinite sequences with elements in \(\mathcal{A}\), and endow \(\mathcal{A}^{\mathbb{Z}}\) with the product topology. We also fix a metric on \(\mathcal{A}^{\mathbb{Z}}\) as follows. Given points \(x=(x_{n})_{n\in\mathbb{Z}}\) and \(y=(y_{n})_{n\in\mathbb{Z}}\), let \(N(x,y)=\sup\{n\in\mathbb{Z}:x_{j}=y_{j}\text{ for all }|j|\leq n\}\) and let \(d(x,y)=e^{-2N(x,y)-1}\). The space \(\mathcal{A}^{\mathbb{Z}}\) is compact, with topology generated by this metric.
We will frequently write sequences \((x_{n})_{n\in\mathbb{Z}}\in\mathcal{A}^{\mathbb{Z}}\) as \(\cdots x_{-1}x_{0}x_{1}\cdots\), with the corresponding notation for finite sequences.
If \(i\) and \(j\in\mathbb{Z}\) with \(i\leq j\), and \(x=\cdots x_{-1}x_{0}x_{1}\cdots\in\mathcal{A}^{\mathbb{Z}}\), then we let \(x_{[i,j]}=x_{i}x_{i+1}\cdots x_{j}\). We use the same notation if \(v\in\mathcal{A}^{+}\) and \(1\leq i\leq j\leq|v|\). For \(u\) and \(v\in\mathcal{A}^{+}\) (or \(v\in\mathcal{A}^{\mathbb{Z}}\)), we write \(u\triangleleft v\) if \(u\) is a subword of \(v\), namely if there exist \(i\) and \(j\in\mathbb{Z}\) with \(i\leq j\) so that \(u=v_{[i,j]}\). For \(u\) and \(v\in\mathcal{A}^{+}\), we set \(|v|_{u}\) to be the number of (possibly overlapping) occurrences of \(u\) as a subword of \(v\). If \(u=a_{i_{1}}\cdots a_{i_{n}}\) and \(v=a_{j_{1}}\cdots a_{j_{m}}\in\mathcal{A}^{+}\), for some \(n\) and \(m\in\mathbb{N}\), we write \(uv\) for the _concatenation_ of \(u\) and \(v\). The _abelianisation_ of a word \(u\in\mathcal{A}^{+}\) is the vector \(\Phi(u)\in\mathbb{N}_{0}^{\#\mathcal{A}}\), defined by \(\Phi(u)_{a}=|u|_{a}\) for all \(a\in\mathcal{A}\).
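For concreteness, the following minimal Python sketch (ours; the function names are our own) implements the overlapping subword count \(|v|_{u}\) and the abelianisation \(\Phi(u)\) exactly as defined above.

```python
# Illustrative sketch (not from the paper): |v|_u and Phi(u).
from collections import Counter

def occurrences(u: str, v: str) -> int:
    """Number of (possibly overlapping) occurrences of u as a subword of v."""
    return sum(1 for i in range(len(v) - len(u) + 1) if v[i:i + len(u)] == u)

def abelianisation(u: str, alphabet: str) -> tuple:
    """The vector Phi(u), listing |u|_a for each letter a of the alphabet."""
    counts = Counter(u)
    return tuple(counts[a] for a in alphabet)

print(occurrences("aba", "ababa"))        # 2: occurrences may overlap
print(abelianisation("abaab", "ab"))      # (3, 2)
```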
### Dynamics, entropy and dimension
We equip the space \(\mathcal{A}^{\mathbb{Z}}\) with invertible dynamics from the _left-shift map_ \(S\colon\mathcal{A}^{\mathbb{Z}}\to\mathcal{A}^{\mathbb{Z}}\), defined by \(S(x)_{i}=x_{i+1}\) for all \(i\in\mathbb{Z}\). Throughout, we will work with a _subshift_ \(X\subset\mathcal{A}^{\mathbb{Z}}\), which is compact and _shift-invariant_, that is \(S^{-1}(X)=X\). Then \(\mu\) will denote an ergodic and \(S\)-invariant Borel probability measure with support contained in \(X\).
The metric structure on \(\mathcal{A}^{\mathbb{Z}}\) enables us to define the _Hausdorff dimension_ of Borel subsets of \(X\). Using this, we define the _Hausdorff dimension of \(\mu\)_ to be the quantity
\[\dim_{\mathrm{H}}\mu=\inf\{\dim_{\mathrm{H}}E:\mu(E)>0\}\]
where the infimum is taken over Borel sets \(E\). Here, we use Hausdorff dimension as inherited from the underlying metric; though it would also be appropriate to use Bowen's generalisation of topological entropy to non-compact sets [4]. We also define the _lower local dimension_ of \(\mu\) at \(x\) by
\[\underline{\dim}_{\mathrm{loc}}(\mu,x)=\liminf_{r\to 0}\frac{\log\mu\big{(}B(x,r) \big{)}}{\log r}\]
We define the _upper local dimension_\(\overline{\dim}_{\mathrm{loc}}(\mu,x)\) analogously using the limit superior in place of the limit inferior, and when the limits coincide, we refer to the shared quantity as the _local dimension_ and denote it by \(\dim_{\mathrm{loc}}(\mu,x)\).
Local dimensions and Hausdorff dimension are closely related: it follows from, for instance, [7, Proposition 10.1] that
\[\dim_{\mathrm{H}}\mu=\sup\{s:\underline{\dim}_{\mathrm{loc}}(\mu,x)\geq s\text { for }\mu\text{-a.e. }x\}. \tag{2.1}\]
Now fix a partition \(\xi\) so that with \(\xi_{k}=\bigvee_{i=-k}^{k}S^{-i}(\xi)\), \(\{\xi_{k}\}_{k=1}^{\infty}\) generates the Borel \(\sigma\)-algebra on \(X\). We recall that the _entropy_ of \(\mu\) with respect to \(S\) is given by
\[h_{\mu}(X)=\lim_{k\to\infty}\frac{1}{2k+1}\sum_{A\in\xi_{k}}-\mu(A)\log\bigl{(} \mu(A)).\]
where, by the classical Kolmogorov-Sinai theorem, the quantity does not depend on the choice of generating partition.
Now given \(x\in X\), let \(\xi_{k}(x)\) denote the unique element in the partition \(\xi_{k}\) containing \(x\). Then the Shannon-McMillan-Breiman theorem states that the entropy of \(\mu\) is almost surely the _information rate_ of \(\mu\), that is for \(\mu\)-a.e. \(x\in X\),
\[\lim_{k\to\infty}\frac{-\log\mu\bigl{(}\xi_{k}(x)\bigr{)}}{2k+1}=h_{\mu}(X).\]
We refer the reader to [6] for greater detail.
Now suppose \(\xi=\{E_{a}\}_{a\in\mathcal{A}}\) is the partition of \(X\) where \(E_{a}=\{(x_{n})_{n\in\mathbb{Z}}\in X:x_{0}=a\}\). For the remainder of this paper, \(\xi\) will always denote this partition. Then given \(x=(x_{n})_{n\in\mathbb{Z}}\in X\),
\[\xi_{k}(x)=\{y\in X:x_{j}=y_{j}\text{ for all }|j|\leq k\}=B(x,e^{-(2k+1)})\]
and therefore
\[\dim_{\mathrm{loc}}(\mu,x)=\lim_{k\to\infty}\frac{-\log\mu\bigl{(}\xi_{k}(x) \bigr{)}}{2k+1}\]
where both limits exist if either limit exists. Since the limit on the right is \(\mu\) almost surely \(h_{\mu}(X)\), it follows from (2.1) that \(\dim_{\mathrm{H}}\mu=h_{\mu}(X)\).
Finally, the _topological entropy_ of \(X\) is given by
\[h_{\mathrm{top}}(X)=\lim_{k\to\infty}\frac{\log\#\{E\in\xi_{k}:E\cap X\neq\varnothing\}}{2k+1}.\]
Of course, \(h_{\mathrm{top}}(X)=\dim_{\mathrm{B}}X\), the box counting dimension of \(X\).
### \(L^{q}\)-spectra and smoothness
Given \(q\in\mathbb{R}\), we define
\[\overline{S}_{\mu,r}(q)=\sup_{\{x_{i}\}_{i}\in\mathcal{P}(r)}\sum_{i}\mu \big{(}B(x_{i},r)\big{)}^{q}\]
where \(\mathcal{P}(r)\) is the set of discrete \(2r\)-separated subsets of \(X\), that is \(\mathcal{P}(r)=\{\{x_{i}\}_{i}:x_{i}\in X,d(x_{i},x_{j})\geq 2r\text{ for }i\neq j\}\). We then define the _\(L^{q}\)-spectrum of \(\mu\)_ to be the function
\[\tau_{\mu}(q)=\liminf_{r\to 0}\frac{\log\overline{S}_{\mu,r}(q)}{\log r}.\]
For convenience, we also define the upper variant \(\overline{\tau}_{\mu}(q)\) by taking a limit superior in place of the limit inferior. It is a standard consequence of Hölder's inequality that \(\tau_{\mu}(q)\) is a concave increasing function of \(q\) (note that this need not hold for \(\overline{\tau}_{\mu}(q)\)).
Of course, the preceding definitions hold more generally in an arbitrary metric space, but in our particular setting we can rephrase the \(L^{q}\)-spectrum in terms of more familiar sums over cylinders. Recall that \(\xi\) denotes the partition of \(X\) into cylinders at \(0\) corresponding to the letters in \(\mathcal{A}\). Then set
\[S_{\mu,n}(q)=\sum_{E\in\xi_{n}}\mu(E)^{q}.\]
Since distinct elements in the partition \(\xi_{n}\) are \(e^{-(2n+1)}\)-separated,
\[S_{\mu,n}(q)=\overline{S}_{\mu,e^{-(2n+1)}}(q).\]
It follows immediately that
\[\tau_{\mu}(q)=\liminf_{n\to\infty}\frac{-\log S_{\mu,n}(q)}{2n+1}\]
with the analogous result for \(\overline{\tau}_{\mu}(q)\). In particular, we observe that \(\tau_{\mu}(0)=-h_{\mathrm{top}}(X)\) assuming \(\mu\) is fully supported on \(X\).
Finally, by shift invariance, we can characterise the subshift \(X\) in terms of a _language_ on \(X\). Given \(n\in\mathbb{N}\), we set
\[\mathcal{L}^{n}(X)=\{w\in\mathcal{A}^{n}:w\triangleleft x\text{ for some }x\in X\}.\]
Given \(w\in\mathcal{L}^{n}(X)\), we let \([w]=\{(x_{n})_{n\in\mathbb{Z}}\in X:x_{i}=w_{i}\text{ for all }1\leq i\leq n\}\). Of course, by shift invariance, there is a measure-preserving bijection between \(\mathcal{L}^{2n+1}(X)\) and \(\xi_{n}\)
so it follows again that
\[\tau_{\mu}(q)=\liminf_{n\to\infty}-\frac{1}{n}\log\sum_{u\in\mathcal{L}^{n}(X)} \mu([u])^{q}.\]
We will primarily use this characterisation throughout the paper.
We first list some basic properties of the \(L^{q}\)-spectrum of the measure \(\mu\). Here, (a) is a direct consequence of Hölder's inequality, (b) is standard (see, for example, [29, Lemma 1.4]) and (c) was proven in [9, Theorem 1.4].
**Lemma 2.1**.: _Let \(\mu\) be a shift-invariant measure on \(X\)._
1. _The_ \(L^{q}\)_-spectrum_ \(\tau_{\mu}(q)\) _is continuous, increasing and concave on_ \(\mathbb{R}\)_._
2. _Let_ \(\alpha_{\min}=\lim_{q\to\infty}\tau_{\mu}(q)/q\) _and_ \(\alpha_{\max}=\lim_{q\to-\infty}\tau_{\mu}(q)/q\)_. Then for every_ \(s<\alpha_{\min}\leq\alpha_{\max}<t\)_, all_ \(n\) _sufficiently large and_ \(u\in\mathcal{L}^{n}\)_,_ \(e^{-tn}\leq\mu([u])\leq e^{-sn}\)_. In particular, the local dimensions satisfy_ \[\alpha_{\min}\leq\inf_{x\in X}\underline{\dim}_{\rm loc}(\mu,x)\leq\sup_{x\in X }\overline{\dim}_{\rm loc}(\mu,x)\leq\alpha_{\max}.\]
3. _The left and right derivatives of_ \(\tau_{\mu}\) _at_ \(q=1\) _bound the Hausdorff dimension of_ \(\mu\)_, that is_ \(\tau_{\mu}^{+}(1)\leq\dim_{\rm H}\mu\leq\tau_{\mu}^{-}(1)\)_._
In fact, (b) gives intuition for why the \(L^{q}\)-spectrum encodes smoothness: rather than obtaining almost sure information on local dimensions, the \(L^{q}\)-spectrum contains _uniform_ information about local dimensions.
Finally, we prove a simple result concerning the \(L^{q}\)-spectrum which will be useful later in the paper.
**Lemma 2.2**.: _Let \(\zeta>1\) be arbitrary. Then_
\[\tau_{\mu}(q)=\frac{1}{\zeta}\liminf_{n\to\infty}-\frac{1}{n}\log\left(\sum_{ u\in\mathcal{L}^{\lfloor\zeta n\rfloor}(X)}\mu([u])^{q}\right) \tag{2.2}\]
_and_
\[\overline{\tau}_{\mu}(q)=\frac{1}{\zeta}\limsup_{n\to\infty}-\frac{1}{n}\log \left(\sum_{u\in\mathcal{L}^{\lfloor\zeta n\rfloor}(X)}\mu([u])^{q}\right). \tag{2.3}\]
Proof.: Of course, it always holds that
\[\tau_{\mu}(q) \leq\frac{1}{\zeta}\liminf_{n\to\infty}-\frac{1}{n}\log\left( \sum_{u\in\mathcal{L}^{\lfloor\zeta n\rfloor}(X)}\mu([u])^{q}\right)\] \[\overline{\tau}_{\mu}(q) \geq\frac{1}{\zeta}\limsup_{n\to\infty}-\frac{1}{n}\log\left(\sum _{u\in\mathcal{L}^{\lfloor\zeta n\rfloor}(X)}\mu([u])^{q}\right).\]
First, let \(q<0\) and let \(n\in\mathbb{N}\) be arbitrary. Let \(k_{n}\) be minimal so that \(\lfloor\zeta k_{n}\rfloor\geq n\). Observe that there is some \(M\in\mathbb{N}\) (independent of \(n\)) so that \(\lfloor\zeta k_{n}\rfloor\leq n+M\): it follows that \(\lim_{n\to\infty}n/k_{n}=\zeta\). Then if \(v\in\mathcal{L}^{\lfloor\zeta k_{n}\rfloor}(X)\) is arbitrary, \([v]\subset[u]\) for some \(u\in\mathcal{L}^{n}(X)\) and \(\mu([v])^{q}\geq\mu([u])^{q}\). Thus
\[S_{\lfloor\zeta k_{n}\rfloor,\mu}(q)\geq S_{n,\mu}(q).\]
which gives (2.2) for \(q<0\) since \(\lim n/k_{n}=\zeta\).
Similarly, for \(q\geq 0\), since there are at most \((\#\mathcal{A})^{M}\) words \(v\in\mathcal{L}^{\lfloor\zeta k_{n}\rfloor}(X)\) with \([v]\subset[u]\), pigeonholing, for each \(u\in\mathcal{L}^{n}(X)\) there is some \(v(u)\in\mathcal{L}^{\lfloor\zeta k_{n}\rfloor}(X)\) such that \(\mu([v(u)])^{q}\geq(\#\mathcal{A})^{-qM}\mu([u])^{q}\). Thus
\[S_{\lfloor\zeta k_{n}\rfloor,\mu}(q)\geq(\#\mathcal{A})^{-qM}S_{n,\mu}(q).\]
This gives (2.2) for \(q\geq 0\).
The arguments for (2.3) follow analogously by choosing \(k_{n}\) maximal so that \(\lfloor\zeta k_{n}\rfloor\leq n\).
### Multifractal spectrum and multifractal formalism
The \(L^{q}\)-spectrum of a measure is related to the _(fine) multifractal spectrum_. Let \(\mu\) be a shift-invariant measure on a subshift \(X\). We recall that the local dimension of \(\mu\) at \(x\in X\) is given by
\[\dim_{\mathrm{loc}}(\mu,x)=\lim_{n\to\infty}-\frac{1}{2n+1}\log\mu([x_{[-n,n]}])\]
when the limit exists. Given \(\alpha\in\mathbb{R}\), set
\[F_{\mu}(\alpha)=\left\{x\in X:\dim_{\mathrm{loc}}(\mu,x)=\alpha\right\}.\]
We then define the _multifractal spectrum_ of \(\mu\) by
\[f_{\mu}(\alpha)=\dim_{\mathrm{H}}F_{\mu}(\alpha)\]
using the convention that \(\dim_{\mathrm{H}}\varnothing=-\infty\).
The multifractal spectrum is related to the \(L^{q}\)-spectrum by the following result. Let \(g\colon\,\mathbb{R}\to\mathbb{R}\cup\{-\infty\}\) be a concave function. For \(x\in\mathbb{R}\), we let \(g^{+}(x)\) (resp. \(g^{-}(x)\)) denote the right (resp. left) derivative of \(g\) at \(x\). Such limits necessarily exist by concavity. We denote the _subdifferential_ of \(g\) at \(x\) by \(\partial g(x)=[g^{+}(x),g^{-}(x)]\). We then recall that the _concave conjugate_ of \(g\) is given by
\[g^{*}(\alpha)=\inf_{q\in\mathbb{R}}\{q\alpha-g(q)\}.\]
Note that \(g^{*}\) is always concave since it is the infimum of a family of affine functions. For more detail concerning the theory of concave functions, we refer the reader to [26].
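Numerically, a concave conjugate can be approximated by minimising \(q\alpha-g(q)\) over a finite grid of \(q\) values; this is how one would plot \(\tau_{\mu}^{*}\) from sampled values of \(\tau_{\mu}\). The sketch below is our own illustration only: the test function, grid bounds and grid sizes are arbitrary choices and are not taken from the paper.

```python
# Illustrative sketch (ours): grid approximation of a concave conjugate.
import numpy as np

def concave_conjugate(g, alphas, q_grid):
    """Grid approximation of g*(alpha) = inf_q ( q*alpha - g(q) )."""
    gq = np.array([g(q) for q in q_grid])
    return np.array([np.min(q_grid * a - gq) for a in alphas])

# Example: g(q) = log(q) is concave on (0, infinity); for alpha > 0 its
# concave conjugate is 1 + log(alpha), which the grid values approximate.
def g(q):
    return np.log(q)

qs = np.linspace(0.05, 50.0, 5000)
alphas = np.linspace(0.5, 3.0, 6)
print(concave_conjugate(g, alphas, qs))
print(1 + np.log(alphas))   # exact values, for comparison
```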
Now, we say that \(\mu\) satisfies the _multifractal formalism_ when \(f_{\mu}=\tau_{\mu}^{*}\). In general, the multifractal formalism need not hold, but it is well-known that the concave conjugate of the \(L^{q}\)-spectrum is an upper bound for the multifractal spectrum. For the convenience of the reader, we provide a short self-contained proof, which follows the main ideas of [19, Theorem 4.1].
**Proposition 2.3**.: _Let \(\mu\) be a shift-invariant measure on a subshift \(X\). Then \(f_{\mu}(\alpha)\leq\tau^{*}(\alpha)\) for all \(\alpha\in\mathbb{R}\)._
Proof.: Recall that \(\xi_{n}\) denotes the partition of \(X\) into cylinders corresponding to words of length \(2n+1\), each of which has diameter precisely \(e^{-2n-1}\). For \(\alpha\in\mathbb{R}\), \(n\in\mathbb{N}\) and \(\epsilon>0\), let
\[\mathcal{M}_{n,\epsilon}(\alpha)=\Big{\{}I\in\xi_{n}:e^{-(2n+1)(\alpha+ \epsilon)}\leq\mu(I)\leq e^{-(2n+1)(\alpha-\epsilon)}\Big{\}}.\]
In other words, \(\mathcal{M}_{n,\epsilon}(\alpha)\) is an \(\epsilon\)-approximation of \(F_{\mu}(\alpha)\) at level \(n\). Our strategy is to control the size of the sets \(\mathcal{M}_{n,\epsilon}(\alpha)\) in terms of the \(L^{q}\)-spectrum of \(\mu\), and then use these sets to build a good cover of \(F_{\mu}(\alpha)\). Let \(q\in\partial\tau^{*}(\alpha)\): we prove this in the case that \(q\geq 0\); the case \(q<0\) is analogous.
First, observe that
\[S_{2n+1,\mu}(q)=\sum_{I\in\xi_{n}}\mu(I)^{q}\geq\sum_{I\in\mathcal{M}_{n,\epsilon}(\alpha)}\mu(I)^{q}\geq e^{-(2n+1)(\alpha+\epsilon)q}\#\mathcal{M}_{n,\epsilon}(\alpha). \tag{2.4}\]
Since \(\tau_{\mu}(q)=\liminf_{n\to\infty}(\log S_{2n+1,\mu}(q))/(-2n-1)\) by Lemma 2.2, there is some \(N_{\epsilon}\in\mathbb{N}\) so that for all \(n\geq N_{\epsilon}\), \(S_{2n+1,\mu}(q)\leq e^{-(2n+1)(\tau_{\mu}(q)-\epsilon)}\). Combining this with (2.4),
\[\#\mathcal{M}_{n,\epsilon}(\alpha)\leq e^{-(2n+1)(\tau(q)-\epsilon)}\cdot e^{( 2n+1)(\alpha+\epsilon)q}=e^{(2n+1)(\tau^{*}(\alpha)+(q+1)\epsilon)} \tag{2.5}\]
for all \(n\geq N_{\epsilon}\) where we have used the fact that \(q\in\partial\tau^{*}(\alpha)\).
Now for each \(x\in F_{\mu}(\alpha)\), we can find some \(n_{x}\in\mathbb{N}\) so that for all \(n\geq n_{x}\), \(e^{-(2n+1)(\alpha+\epsilon)}\leq\mu(\xi_{n}(x))\leq e^{-(2n+1)(\alpha-\epsilon)}\), that is, \(\xi_{n}(x)\in\mathcal{M}_{n,\epsilon}(\alpha)\). In particular,
\[\mathcal{G}_{\epsilon}\coloneqq\bigcup_{n=N_{\epsilon}}^{\infty}\mathcal{M}_{ n,\epsilon}(\alpha)\]
is a Vitali cover for \(F_{\mu}(\alpha)\).
Now suppose \(\{I_{j}\}_{j=1}^{\infty}\) is any disjoint subcollection of \(\mathcal{G}_{\epsilon}\): then with \(s=\tau^{*}(\alpha)+2\epsilon(1+q)\),
\[\sum_{j=1}^{\infty}(\operatorname{diam}I_{j})^{s} \leq\sum_{n=N_{\epsilon}}^{\infty}\sum_{I\in\mathcal{M}_{n, \epsilon}(\alpha)}(\operatorname{diam}I)^{s}\leq\sum_{n=N_{\epsilon}}^{ \infty}e^{-(2n+1)s}\#\mathcal{M}_{n,\epsilon}(\alpha)\] \[\leq\sum_{n=N_{\epsilon}}^{\infty}e^{-(2n+1)s}e^{(2n+1)(\tau^{*} (\alpha)+(q+1)\epsilon)}\] \[=\sum_{n=N_{\epsilon}}^{\infty}(e^{-(1+q)\epsilon})^{2n+1}<\infty\]
by (2.5). Thus by the Vitali covering theorem for Hausdorff measure, there is a cover \(\{E_{i}\}_{i=1}^{\infty}\) for \(F_{\mu}(\alpha)\) such that
\[\mathcal{H}^{s}(F_{\mu}(\alpha))\leq\sum_{i=1}^{\infty}(\operatorname{diam}E_ {i})^{s}<\infty\]
and thus \(\dim_{\mathrm{H}}F_{\mu}(\alpha)\leq\tau^{*}(\alpha)+2\epsilon(1+q)\). But \(\epsilon>0\) was arbitrary, so the desired result follows.
### Random substitutions and frequency measures
We now introduce our primary objects of interest: _random substitutions_, and their associated _frequency measures_. In a similar manner to [14, 15], we define a random substitution by the data required to determine its action on letters. We then extend this to a random map on words.
**Definition 2.4**.: Let \(\mathcal{A}=\{a_{1},\ldots,a_{d}\}\) be a finite alphabet. A _random substitution_\(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) is a _set-valued substitution_\(\vartheta\colon\mathcal{A}\to\mathcal{F}(\mathcal{A}^{+})\) together with a set of non-degenerate probability vectors
\[\boldsymbol{P}=\left\{\boldsymbol{p}_{i}=(p_{i,1},\ldots,p_{i,r_{i}}):r_{i}= \#\vartheta(a_{i});\,\boldsymbol{p}_{i}\in(0,1]^{r_{i}};\,\sum_{j=1}^{r_{i}}p _{i,j}=1\text{ for all }i=1,\ldots,d\right\},\]
such that
\[\vartheta_{\boldsymbol{P}}\colon a_{i}\mapsto\begin{cases}s^{(i,1)}&\text{ with probability }p_{i,1},\\ \vdots&\vdots\\ s^{(i,r_{i})}&\text{with probability }p_{i,r_{i}},\end{cases}\]
for every \(1\leq i\leq d\), where \(\vartheta(a_{i})=\{s^{(i,j)}\}_{1\leq j\leq r_{i}}\).
We call each \(s^{(i,j)}\) a _realisation_ of \(\vartheta_{\boldsymbol{P}}(a_{i})\). If \(r_{i}=1\) for all \(i\in\{1,\ldots,d\}\), then we call \(\vartheta_{\boldsymbol{P}}\)_deterministic_.
**Example 2.5** (Random Fibonacci).: Let \(\mathcal{A}=\{a,b\}\), and let \(p\in(0,1)\). The _random Fibonacci substitution_\(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) is the random substitution given by
\[\vartheta_{\boldsymbol{P}}\colon\begin{cases}a\mapsto\begin{cases}ab&\text{ with probability }p,\\ ba&\text{with probability }1-p,\end{cases}\\ b\mapsto a\end{cases}\]
with defining data \(r_{a}=2\), \(r_{b}=1\), \(s^{(a,1)}=ab\), \(s^{(a,2)}=ba\), \(s^{(b,1)}=a\), \(\boldsymbol{P}=\{\boldsymbol{p}_{a}=(p,1-p),\boldsymbol{p}_{b}=(1)\}\) and corresponding set-valued substitution \(\vartheta\colon a\mapsto\{ab,ba\},b\mapsto\{a\}\).
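As an informal illustration (our own code; the function name is ours), a realisation of \(\vartheta_{\boldsymbol{P}}^{k}(a)\) for the random Fibonacci substitution with \(p=1/2\) can be sampled by replacing each letter independently by one of its realisations, as in Definition 2.4.

```python
# Illustrative sketch (not part of the paper): sampling one realisation of the
# random Fibonacci substitution applied repeatedly to the letter a.
import random

theta = {"a": ["ab", "ba"], "b": ["a"]}          # set-valued substitution
probs = {"a": [0.5, 0.5], "b": [1.0]}            # probability vectors P

def apply_random(word: str) -> str:
    """One application of theta_P: each letter is replaced independently."""
    return "".join(random.choices(theta[c], weights=probs[c])[0] for c in word)

random.seed(1)
w = "a"
for _ in range(5):
    w = apply_random(w)
print(w)   # one realisation of theta_P^5(a)
```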
In the following we describe how a random substitution \(\vartheta_{\boldsymbol{P}}\) determines a (countable state) Markov matrix \(Q\), indexed by \(\mathcal{A}^{+}\times\mathcal{A}^{+}\). We interpret the entry \(Q_{u,v}\) as the probability of mapping a word \(u\) to a word \(v\) under the random substitution. Formally, \(Q_{a_{i},s^{(i,j)}}=p_{i,j}\) for all \(j\in\{1,\ldots,r_{i}\}\) and \(Q_{a_{i},v}=0\) if \(v\notin\vartheta(a_{i})\). We extend the action of \(\vartheta_{\boldsymbol{P}}\) to finite words by mapping each letter _independently_ to one of its realisations. More precisely, given \(n\in\mathbb{N}\), \(u=a_{i_{1}}\cdots a_{i_{n}}\in\mathcal{A}^{n}\) and \(v\in\mathcal{A}^{+}\) with \(|v|\geq n\), we let
\[\mathcal{D}_{n}(v)=\{(v^{(1)},\ldots,v^{(n)})\in(\mathcal{A}^{+})^{n}:v^{(1) }\cdots v^{(n)}=v\}\]
denote the set of all decompositions of \(v\) into \(n\) individual words and set
\[Q_{u,v}=\sum_{(v^{(1)},\ldots,v^{(n)})\in\mathcal{D}_{n}(v)}\prod_{j=1}^{n}Q_{a_{ i_{j}},v^{(j)}}.\]
In words, \(\vartheta_{\boldsymbol{P}}(u)=v\) with probability \(Q_{u,v}\).
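The probability \(Q_{u,v}\) can be computed directly from the displayed formula by recursing on the first letter of \(u\) and summing over those inflation words which can begin \(v\). The following Python sketch (ours, purely illustrative) does this for the random Fibonacci substitution with \(p=1/2\).

```python
# Illustrative sketch (not from the paper): Q_{u,v} = P[theta_P(u) = v],
# computed by summing over all decompositions of v into inflation words.
theta = {"a": ["ab", "ba"], "b": ["a"]}
probs = {"a": {"ab": 0.5, "ba": 0.5}, "b": {"a": 1.0}}

def Q(u: str, v: str) -> float:
    """P[theta_P(u) = v], by recursion over the first letter of u."""
    if not u:
        return 1.0 if not v else 0.0
    total = 0.0
    for s, p in probs[u[0]].items():
        if v.startswith(s):
            total += p * Q(u[1:], v[len(s):])
    return total

print(Q("ab", "aba"))   # 0.5: only the decomposition (ab)(a) contributes
print(Q("ba", "aba"))   # 0.5: only the decomposition (a)(ba) contributes
```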
For \(u\in\mathcal{A}^{+}\), let \((\vartheta^{n}_{\boldsymbol{P}}(u))_{n\in\mathbb{N}}\) be a stationary Markov chain on some probability space \((\Omega_{u},\mathcal{F}_{u},\mathbb{P}_{u})\), with Markov matrix given by \(Q\); that is,
\[\mathbb{P}_{u}[\vartheta^{n+1}_{\boldsymbol{P}}(u)=w\mid\vartheta^{n}_{ \boldsymbol{P}}(u)=v]=\mathbb{P}_{v}[\vartheta_{\boldsymbol{P}}(v)=w]=Q_{v,w}\]
for all \(v\) and \(w\in\mathcal{A}^{+}\), and \(n\in\mathbb{N}\). In particular,
\[\mathbb{P}_{u}[\vartheta^{n}_{\boldsymbol{P}}(u)=v]=(Q^{n})_{u,v}\]
for all \(u\) and \(v\in\mathcal{A}^{+}\), and \(n\in\mathbb{N}\). We often write \(\mathbb{P}\) for \(\mathbb{P}_{u}\) if the initial word is understood. In this case, we also write \(\mathbb{E}\) for the expectation with respect to \(\mathbb{P}\). As before, we call \(v\) a _realisation_ of \(\vartheta^{n}_{\boldsymbol{P}}(u)\) if \((Q^{n})_{u,v}>0\) and set
\[\vartheta^{n}(u)=\{v\in\mathcal{A}^{+}:(Q^{n})_{u,v}>0\}\]
to be the set of all realisations of \(\vartheta^{n}_{\boldsymbol{P}}(u)\). Conversely, we may regard \(\vartheta^{n}_{\boldsymbol{P}}(u)\) as the set \(\vartheta^{n}(u)\) endowed with the additional structure of a probability vector. If \(u=a\in\mathcal{A}\) is a letter, we call a word \(v\in\vartheta^{k}(a)\) a _level-\(k\) inflation word_, or _exact inflation word_.
To a given random substitution \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) one can associate a subshift. First, we say that a word \(u\in\mathcal{A}^{+}\) is _(\(\vartheta\)-)legal_ if there exists an \(a_{i}\in\mathcal{A}\) and \(k\in\mathbb{N}\) such that \(u\) appears as a subword of some word in \(\vartheta^{k}(a_{i})\). We define the _language_ of \(\vartheta\) by \(\mathcal{L}_{\vartheta}=\{u\in\mathcal{A}^{+}:u\text{ is }\vartheta\text{-legal}\}\) and, for \(w\in\mathcal{A}^{+}\cup\mathcal{A}^{\mathbb{Z}}\), we let \(\mathcal{L}(w)=\{u\in\mathcal{A}^{+}:u\triangleleft w\}\) denote the language of \(w\).
**Definition 2.6**.: The _random substitution subshift_ of a random substitution \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) is the system \((X_{\vartheta},S)\), where \(X_{\vartheta}=\{w\in\mathcal{A}^{\mathbb{Z}}:\mathcal{L}(w)\subseteq\mathcal{ L}_{\vartheta}\}\) and \(S\) denotes the (left) shift map, defined by \(S(w)_{i}=w_{i+1}\) for each \(w\in X_{\vartheta}\).
Under very mild assumptions, the space \(X_{\vartheta}\) is non-empty [17]. This holds, for example, if the generating random substitution is primitive (we give a definition in Section 2.6). We endow \(X_{\vartheta}\) with the subspace topology inherited from \(\mathcal{A}^{\mathbb{Z}}\), and since \(X_{\vartheta}\) is defined in terms of a language, it is a compact \(S\)-invariant subspace of \(\mathcal{A}^{\mathbb{Z}}\). Hence, \(X_{\vartheta}\) is a subshift. For \(n\in\mathbb{N}\), we write \(\mathcal{L}^{n}_{\vartheta}=\mathcal{L}_{\vartheta}\cap\mathcal{A}^{n}\) and \(\mathcal{L}^{n}(w)=\mathcal{L}(w)\cap\mathcal{A}^{n}\) to denote the subsets of \(\mathcal{L}_{\vartheta}\) and \(\mathcal{L}(w)\), respectively, consisting of words of length \(n\). The set-valued function \(\vartheta\) naturally extends to \(X_{\vartheta}\), where for \(w=\cdots w_{-1}w_{0}w_{1}\cdots\in X_{\vartheta}\) we let \(\vartheta(w)\) denotes the (infinite) set of sequences of the form \(v=\cdots v_{-2}v_{-1}.v_{0}v_{1}\cdots\), with \(v_{j}\in\vartheta(w_{j})\) for all \(j\in\mathbb{Z}\). It is easily verified that \(\vartheta(X_{\vartheta})\subset X_{\vartheta}\).
The notation \(X_{\vartheta}\) reflects the fact that the random substitution subshift does not depend on the choice of (non-degenerate) probabilities \(\boldsymbol{P}\). In fact, this is the case for many structural properties of \(\vartheta_{\boldsymbol{P}}\). In these cases, one sometimes refers to \(\vartheta\) instead of \(\vartheta_{\boldsymbol{P}}\) as a random substitution, see for instance [14, 16, 27, 28]. On the other hand, for
some applications, one needs additional structure on the probability space. In fact, there is an underlying branching process, similar to a Galton-Watson process, that allows one to construct more refined random variables, see [17] for further details. The measure theoretic properties we consider are typically dependent on the choice of probabilities; however, some of the auxiliary results we use only depend on the set-valued substitution \(\vartheta\). To avoid confusion, for results where there is no dependence on the choice of probabilities we will give the statement in terms of the set-valued substitution \(\vartheta\) and omit the dependence on \(\mathbf{P}\) in the notation.
### Primitive random substitutions
A standard assumption in the study of substitutions (both deterministic and random) is that of _primitivity_. Given a random substitution \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) over an alphabet \(\mathcal{A}=\{a_{1},\ldots,a_{d}\}\) with cardinality \(d\in\mathbb{N}\), we define the _substitution matrix_\(M=M_{\vartheta_{\mathbf{P}}}\in\mathbb{R}^{d\times d}\) of \(\vartheta_{\mathbf{P}}\) by
\[M_{i,j}=\mathbb{E}[|\vartheta_{\mathbf{P}}(a_{j})|_{a_{i}}]=\sum_{k=1}^{r_{j}}p_{ j,k}|s^{(j,k)}|_{a_{i}}.\]
Since \(M\) has only non-negative entries, it has a real eigenvalue \(\lambda\) of maximal modulus. Observe that \(\lambda\geq 1\), with \(\lambda=1\) precisely if \(M\) is column-stochastic, so that the random substitution is non-expanding. To avoid this degenerate situation, we will assume that \(\lambda>1\). If the matrix \(M\) is _primitive_ (that is if there exists a \(k\in\mathbb{N}\) such that all the entries of \(M^{k}\) are positive), the Perron-Frobenius theorem gives that \(\lambda\) is a simple eigenvalue and that the corresponding (right) eigenvector \(\mathbf{R}=(R_{1},\ldots,R_{d})\) can be chosen to have strictly positive entries. We will normalise this eigenvector so that \(\left\lVert\mathbf{R}\right\rVert_{1}=1\). We will refer to \(\lambda\) as the _Perron-Frobenius eigenvalue_ of the random substitution, \(\vartheta_{\mathbf{P}}\), with corresponding _Perron-Frobenius eigenvector \(\mathbf{R}\)_.
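As a concrete illustration (our own sketch, not from the paper), the substitution matrix, its Perron-Frobenius eigenvalue and the normalised right eigenvector can be computed numerically; we use the random Fibonacci substitution of Example 2.5 as input.

```python
# Illustrative sketch (ours): substitution matrix and Perron-Frobenius data.
import numpy as np

alphabet = ["a", "b"]
theta = {"a": ["ab", "ba"], "b": ["a"]}
probs = {"a": [0.5, 0.5], "b": [1.0]}

# M[i, j] = expected number of occurrences of letter a_i in theta_P(a_j)
M = np.array([[sum(p * s.count(ai) for s, p in zip(theta[aj], probs[aj]))
               for aj in alphabet] for ai in alphabet])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
R = np.abs(eigvecs[:, k].real)
R = R / R.sum()                      # normalise so the entries sum to 1

print(M)            # [[1. 1.] [1. 0.]]
print(lam)          # the golden ratio, approximately 1.618
print(R)            # approximately (0.618, 0.382), i.e. (1/tau, 1/tau^2)
```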
**Definition 2.7**.: We say that \(\vartheta_{\mathbf{P}}\) is _primitive_ if \(M=M_{\vartheta_{\mathbf{P}}}\) is primitive and its Perron-Frobenius eigenvalue satisfies \(\lambda>1\).
We emphasise that for a random substitution \(\vartheta_{\mathbf{P}}\), being primitive is independent of the (non-degenerate) choice of probabilities \(\mathbf{P}\). In this sense, primitivity is a property of \(\vartheta\) rather than \(\vartheta_{\mathbf{P}}\).
Since \(M_{\vartheta_{\mathbf{P}}}^{k}=M_{\vartheta_{\mathbf{P}}^{k}}\), the Perron-Frobenius eigenvalue of \(\vartheta_{\mathbf{P}}^{k}\) is \(\lambda^{k}\).
### Compatible random substitutions
Another standard assumption in the study of random substitutions is that of compatibility, which gives that exact inflation words have a well-defined abelianisation. In particular, the matrix of a compatible random substitution is independent of the choice of probabilities, so the letter frequencies are uniform and do not depend on the realisation. As discussed in the introduction, the existence of uniform letter frequencies is fundamental in the proofs of our main results.
**Definition 2.8**.: We say that a random substitution \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) is _compatible_ if for all \(a\in\mathcal{A}\), and \(u,v\in\vartheta(a)\), we have \(\Phi(u)=\Phi(v)\).
Observe that compatibility is independent of the choice of probabilities, and that a random substitution \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) is compatible if and only if for all \(u\in\mathcal{A}^{+}\), we have that \(|s|_{a}=|t|_{a}\) for all \(s\) and \(t\in\vartheta(u)\), and \(a\in\mathcal{A}\). We write \(|\vartheta(u)|_{a}\) to denote this common value, and let \(|\vartheta(u)|\) denote the common length of words in \(\vartheta(u)\). For convenience, we also set \(|\vartheta|=\max_{a\in\mathcal{A}}|\vartheta(a)|\). For a random substitution that is both primitive and compatible, the (uniform) letter frequencies are encoded by the right Perron-Frobenius eigenvector of the substitution matrix, which by compatibility is independent of the choice of probabilities. In particular, we have the following (see [25] for a proof in the deterministic case, which also holds in the random case by compatibility).
**Lemma 2.9** (Letter frequency bounds).: _If \(\vartheta_{\boldsymbol{P}}\) is a primitive and compatible random substitution, then for all \(\varepsilon>0\) there is an integer \(N\) such that every word \(v\) of length at least \(N\) satisfies_
\[|v|(R_{a}-\varepsilon)<|v|_{a}<|v|(R_{a}+\varepsilon)\]
_for all \(a\in\mathcal{A}\)._
The random Fibonacci substitution defined in Example 2.5 is compatible, since \(\Phi(ab)=\Phi(ba)=(1,1)\). It is also primitive, since the square of its substitution matrix is positive. For any choice of probabilities, the right Perron-Frobenius eigenvector is given by \((\tau^{-1},\tau^{-2})\), where \(\tau\) denotes the golden ratio. In terms of letter frequencies, this means that in all sufficiently long legal words, approximately \(\tau^{-1}\) proportion of the letters are \(a\) and \(\tau^{-2}\) proportion are \(b\).
The following consequence of Lemma 2.9 is useful in the proof of Theorem A.
**Lemma 2.10**.: _Let \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) be a primitive and compatible random substitution and let \(q\geq 1\). For all \(\varepsilon>0\), there is an \(M\in\mathbb{N}\) such that for every \(m\geq M\) and \(v\in\mathcal{L}_{\vartheta}^{m}\),_
\[\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{m(R_{a}+\varepsilon)} \leq\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}} (v)=w]^{q}\] \[\leq\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{ P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{m(R_{a}-\varepsilon)}.\]
_For \(q\leq 1\), the same result holds with reversed inequalities._
Proof.: Since \(\vartheta_{\boldsymbol{P}}\) is compatible, the cutting points of inflation tiles are well-defined, so breaking the sum into inflation tiles we obtain
\[\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=w]^{q}=\sum_{w^{1}\in\vartheta(v_{1})}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{1})=w^{1}]^{q}\sum_{w^{2}\in\vartheta(v_{2})}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{2})=w^{2}]^{q}\cdots\sum_{w^{m}\in\vartheta(v_{m})}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{m})=w^{m}]^{q}\] \[=\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{|v|_{a}}.\]
The result then follows by applying Lemma 2.9 to bound \(|v|_{a}\), noting that for all \(a\in\mathcal{A}\) we have \(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\leq 1\) if \(q\geq 1\) and \(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\geq 1\) if \(q\leq 1\).
### Frequency measures
The main object that we associate with a given primitive random substitution \(\vartheta_{\boldsymbol{P}}\) is the _frequency measure_\(\mu_{\boldsymbol{P}}\). This measure quantifies the relative occurrence of a given word in a random substitution. We now define this measure precisely.
First, we define the _expected frequency_ of a word \(v\in\mathcal{L}_{\vartheta}\) by
\[\text{freq}(v)=\lim_{k\to\infty}\frac{\mathbb{E}[|\vartheta_{\boldsymbol{P}} ^{k}(a)|_{v}]}{\mathbb{E}[|\vartheta_{\boldsymbol{P}}^{k}(a)|]},\]
where, by primitivity, this limit is independent of the choice of \(a\in\mathcal{A}\). In fact, we have the stronger property that the word frequencies exist \(\mathbb{P}\)-almost surely in the limit of large inflation words and are given by \(\text{freq}(v)\) for all \(v\in\mathcal{L}_{\vartheta}\) (see [17] for further details). Recalling that \(\xi(X_{\vartheta})\) is the algebra of cylinder sets on \(X_{\vartheta}\) that specify the origin, we define \(\mu_{\boldsymbol{P}}\colon\xi(X_{\vartheta})\cup\{\varnothing\}\to[0,1]\) by \(\mu_{\boldsymbol{P}}(\varnothing)=0\), \(\mu_{\boldsymbol{P}}(X_{\vartheta})=1\), and \(\mu_{\boldsymbol{P}}([v]_{m})=\text{freq}(v)\) for \(v\in\mathcal{L}_{\vartheta}\) and \(m\in\{1-|v|,2-|v|,\ldots,0\}\). This set function extends to a unique measure.
**Proposition 2.11** ([17, Proposition 5.3 and Theorem 5.9]).: _The set function \(\mu_{\boldsymbol{P}}\) is a content with mass one which extends uniquely to a shift-invariant ergodic Borel probability measure on \(X_{\vartheta}\)._
We call the measure \(\mu_{\boldsymbol{P}}\) defined in Proposition 2.11 the _frequency measure_ corresponding to the random substitution \(\vartheta_{\boldsymbol{P}}\). Observe that frequency measures are dependent on the probabilities of the substitution. As such, for the subshift of a primitive random substitution that is non-deterministic, there exist uncountably many frequency measures supported on this subshift [17]. In contrast, the subshift of a primitive deterministic substitution has precisely one frequency measure, which is the unique ergodic measure [25].
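The expected frequency \(\operatorname{freq}(v)\) can be estimated by direct simulation. The following crude Monte Carlo sketch (ours; the level \(k\) and the number of trials are arbitrary choices) averages \(|\vartheta_{\boldsymbol{P}}^{k}(a)|_{v}/|\vartheta_{\boldsymbol{P}}^{k}(a)|\) over independent realisations of the random Fibonacci substitution with \(p=1/2\).

```python
# Illustrative sketch (ours): Monte Carlo estimate of freq(v).
import random

theta = {"a": ["ab", "ba"], "b": ["a"]}
probs = {"a": [0.5, 0.5], "b": [1.0]}

def apply_random(word):
    return "".join(random.choices(theta[c], weights=probs[c])[0] for c in word)

def occurrences(u, v):
    return sum(1 for i in range(len(v) - len(u) + 1) if v[i:i + len(u)] == u)

def estimate_freq(v, k=12, trials=200, seed=0):
    random.seed(seed)
    num = den = 0
    for _ in range(trials):
        w = "a"
        for _ in range(k):
            w = apply_random(w)
        num += occurrences(v, w)
        den += len(w)
    return num / den

print(estimate_freq("a"))    # expect a value near 1/tau, the letter frequency of a
print(estimate_freq("ab"))
```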
Frequency measures corresponding to primitive and compatible random substitutions satisfy the following renormalisation lemma, which relates the measure of a cylinder set of a legal word to measures of cylinder sets of shorter words via the production probabilities of the random substitution. This result first appeared in [17] and is central to the proof of the main result in [15].
**Lemma 2.12** (Renormalisation).: _Let \(\vartheta_{\boldsymbol{P}}\) be a primitive and compatible random substitution with corresponding frequency measure \(\mu_{\boldsymbol{P}}\). Let \(n\in\mathbb{N}\) and let \(k\) be an integer such that every \(v\in\mathcal{L}_{\vartheta}^{k}\) has \(|\vartheta(v)|\geq n+|\vartheta(v_{1})|\). Then for every \(u\in\mathcal{L}_{\vartheta}^{n}\),_
\[\mu_{\boldsymbol{P}}([u])=\frac{1}{\lambda}\sum_{v\in\mathcal{L}_{\vartheta}^{k}}\mu_{\boldsymbol{P}}([v])\sum_{j=1}^{|\vartheta(v_{1})|}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u].\]
Lemma 2.12 is key to the proof of Theorem A, as it relates the sums \(\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])\) to sums over smaller words via the production probabilities. This in turn allows us
to obtain relations between \(\tau_{\mu_{\mathbf{P}}}\) and \(\varphi_{k}\). Under additional assumptions, simplified reformulations of Lemma 2.12 can be obtained (see, for example, Lemma 2.18, which is used in Theorem D).
### Separation conditions and recognisability
In this section, we introduce additional common assumptions which either (1) impose a certain separation on inflation words, or (2) impose a certain uniformity of the inflation and the probabilities. Under these conditions, we can obtain closed-form formulas for the \(L^{q}\)-spectrum.
**Definition 2.13**.: A random substitution \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) satisfies the _disjoint set condition_ if
\[u\text{ and }v\in\vartheta(a)\text{ with }u\neq v\implies\vartheta^{k}(u) \cap\vartheta^{k}(v)=\varnothing\]
for all \(a\in\mathcal{A}\) and \(k\in\mathbb{N}\). It satisfies the _identical set condition_ if
\[u\text{ and }v\in\vartheta(a)\implies\vartheta^{k}(u)=\vartheta^{k}(v)\]
for all \(a\in\mathcal{A}\) and \(k\in\mathbb{N}\). Moreover, we say that \(\vartheta_{\mathbf{P}}\) has _identical production probabilities_ if for all \(a\in\mathcal{A}\), \(k\in\mathbb{N}\) and \(v\in\vartheta^{k}(a)\),
\[\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(u_{1})=v]=\mathbb{P}[\vartheta_{\mathbf{P}}^{ k-1}(u_{2})=v]\]
for all \(u_{1}\) and \(u_{2}\in\vartheta(a)\).
A consequence of the disjoint set condition is that for every \(a\in\mathcal{A}\), \(k\in\mathbb{N}\) and \(w\in\vartheta^{k}(a)\), there is a unique \(v\in\vartheta^{k-1}(a)\) such that \(w\in\vartheta(v)\). In other words, every exact inflation word can be uniquely de-substituted. The following definition extends this idea of unique de-substitution from inflation words to all elements in the subshift.
**Definition 2.14**.: Let \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) be a primitive and compatible random substitution. We call \(\vartheta_{\mathbf{P}}\)_recognisable_ if for every \(x\in X_{\vartheta}\) there exists a unique \(y=\cdots y_{-1}y_{0}y_{1}\cdots\in X_{\vartheta}\) and a unique integer \(k\in\{0,\ldots,|\vartheta(y_{0})|-1\}\) with \(S^{-k}(x)\in\vartheta(y)\).
The following follows routinely from the definition of recognisability (a proof is given in [15, Lemma 4.5]).
**Lemma 2.15**.: _If \(\vartheta_{\mathbf{P}}\) is a primitive, compatible and recognisable random substitution, then \(\vartheta_{\mathbf{P}}\) satisfies the disjoint set condition._
In contrast to the disjoint set condition, recognisability is stable under taking powers (see [15, Lemma 4.6]).
**Lemma 2.16**.: _Let \(\vartheta_{\mathbf{P}}\) be a primitive and compatible random substitution and \(m\in\mathbb{N}\). If \(\vartheta_{\mathbf{P}}\) is recognisable, then so is \(\vartheta_{\mathbf{P}}^{m}\)._
An alternative characterisation of recognisability is the following _local_ version. Intuitively, local recognisability means that applying a _finite window_ to a sequence is enough to determine the position and the type of the inflation word in the middle of that window. The following result is given in [15, Lemma 4.4] (see also [12, Proposition 5.7]).
**Lemma 2.17**.: _Let \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) be a primitive and compatible random substitution. If \(\vartheta_{\mathbf{P}}\) is recognisable, then there exists a smallest natural number \(\kappa(\vartheta)\), called the recognisability radius of \(\vartheta_{\mathbf{P}}\), with the following property: if \(x\in\vartheta([a])\) for some \(a\in\mathcal{A}\) and \(x_{[-\kappa(\vartheta),\kappa(\vartheta)]}=y_{[-\kappa(\vartheta),\kappa( \vartheta)]}\) for some \(y\in X_{\vartheta}\), then \(y\in\vartheta([a])\)._
As a consequence of this local characterisation of recognisability, for every legal word \(u\) with length greater than twice the radius of recognisability there exists an inflation word \(w\), appearing as a subword of \(u\), which has a unique decomposition into exact inflation words. We call the largest such \(w\) the _recognisable core_ of \(u\).
Local recognisability allows us to obtain a stronger version of Lemma 2.12 for recognisable random substitutions. This result is key to obtaining the coincidence of the \(L^{q}\)-spectrum and its inflation word analogue under recognisability for \(q<0\), and thus the conclusion of Theorem D.
**Lemma 2.18**.: _Let \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) be a primitive and compatible random substitution, with corresponding frequency measure \(\mu_{\mathbf{P}}\) and \(u\in\mathcal{L}_{\vartheta}\). If \(v\in\mathcal{L}_{\vartheta}\) and \(w\in\vartheta(v)\) contains \(u\) as a subword, then_
\[\mu_{\mathbf{P}}([u])\geq\frac{1}{\lambda}\mu_{\mathbf{P}}([v])\mathbb{P}[\vartheta_{ \mathbf{P}}(v)=w].\]
_If, additionally, \(\vartheta_{\mathbf{P}}\) is recognisable, \(|u|>2\kappa(\vartheta)\) and \(w^{\prime}\) is the recognisable core of \(u\) with \(v^{\prime}\in\mathcal{L}_{\vartheta}\) the unique legal word such that \(w^{\prime}\in\vartheta(v^{\prime})\), then_
\[\mu_{\mathbf{P}}([u])\leq\frac{\kappa(\vartheta)}{\lambda}\mu_{\mathbf{P}}([v^{\prime }])\mathbb{P}[\vartheta_{\mathbf{P}}(v^{\prime})=w^{\prime}].\]
Proof.: If \(u\) is a subword of \(w\in\vartheta(v)\), then \(\mu_{\mathbf{P}}([u])\geq\mu_{\mathbf{P}}([w])\). Thus by Lemma 2.12 applied to \(\mu_{\mathbf{P}}([w])\),
\[\mu_{\mathbf{P}}([u])\geq\frac{1}{\lambda}\mu_{\mathbf{P}}([v])\mathbb{P}[\vartheta_{ \mathbf{P}}(v)=w].\]
Now, assume that \(\vartheta_{\mathbf{P}}\) is recognisable, \(|u|>2\kappa(\vartheta)\) and \(w^{\prime}\in\vartheta(v^{\prime})\) is the recognisable core of \(u\). Let \(k\) be an integer such that every \(t\in\mathcal{L}_{\vartheta}^{k}\) has \(|\vartheta(t)|\geq|u|+|\vartheta(t_{1})|\). Since there are at most \(\kappa(\vartheta)\) letters of \(u\) preceding the recognisable core, if \(t\in\mathcal{L}_{\vartheta}^{k}\) is a word for which \(u\in\vartheta(t)_{[j,j+|u|-1]}\) for some \(j\in\{1,\ldots,|\vartheta(t_{1})|\}\), then \(t_{i}\cdots t_{i+|v^{\prime}|-1}=v^{\prime}\) for some \(i\in\{1,\ldots,\kappa(\vartheta)\}\). Moreover, since there is a unique way to decompose \(w^{\prime}\) into exact inflation words, for each \(t\in\mathcal{L}_{\vartheta}^{k}\) there can be at most one \(j\in\{1,\ldots,|\vartheta(t_{1})|\}\) such that
\(u\in\vartheta(t)_{[j,j+|u|-1]}\). Hence, it follows by Lemma 2.12 that
\[\mu_{\boldsymbol{P}}([u]) =\frac{1}{\lambda}\sum_{t\in\mathcal{L}^{k}}\mu_{\boldsymbol{P}}([ t])\sum_{j=1}^{|\vartheta(t_{1})|}\mathbb{P}[\vartheta_{\boldsymbol{P}}(t)_{ [j,j+|u|-1]}=u]\] \[\leq\frac{1}{\lambda}\sum_{i=1}^{\kappa(\vartheta)}\sum_{ \begin{subarray}{c}t\in\mathcal{L}^{k}_{\vartheta}\\ t_{i}\cdots t_{i+|v|-1}=v^{\prime}\end{subarray}}\mu_{\boldsymbol{P}}([t]) \mathbb{P}[\vartheta_{\boldsymbol{P}}(v^{\prime})=w^{\prime}]\] \[=\frac{\kappa(\vartheta)}{\lambda}\mu_{\boldsymbol{P}}([v^{ \prime}])\mathbb{P}[\vartheta_{\boldsymbol{P}}(v^{\prime})=w^{\prime}],\]
which completes the proof.
## 3. \(L^{q}\)-spectra of frequency measures
In this section, we prove our main results on \(L^{q}\)-spectra of frequency measures. Here, we relate the \(L^{q}\)-spectrum to a certain "symbolic" \(L^{q}\)-spectrum, which we call the _inflation word \(L^{q}\)-spectrum_. Heuristically, the inflation word \(L^{q}\)-spectrum is the natural guess for the \(L^{q}\)-spectrum if you do not account for non-uniqueness in the positions in which legal words can appear in inflation words. This notion is introduced in Section 3.1, where we also state and prove some of its key properties. In particular, in Proposition 3.1, we prove a simple closed-form formula for the inflation word \(L^{q}\)-spectrum under the disjoint set condition or the identical set condition with identical production probabilities. In Proposition 3.2, we establish basic monotonicity results.
Then, in Section 3.2 and Section 3.3, we establish the general bounds for the \(L^{q}\)-spectrum in terms of the inflation word \(L^{q}\)-spectrum, giving Theorem A (the proof is given in Section 3.4). We also prove that this bound is sharp in Section 3.5, under the recognisability assumption. This proves the first part of Theorem D. However this bound need not hold in general: we discuss a counterexample in Example 5.2. Finally, in Section 3.6, we prove differentiability of the \(L^{q}\)-spectrum at \(q=1\) and show how to recover known results for measure theoretic and topological entropy from our results concerning \(L^{q}\)-spectra.
### Inflation word \(L^{q}\)-spectra
Given a primitive random substitution \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\), we can define an analogue of the \(L^{q}\)-spectrum in terms of its production probabilities, in a similar manner to the inflation word analogue of entropy introduced in [15]. In many cases, this notion coincides with the \(L^{q}\)-spectrum of the frequency measure associated to \(\vartheta_{\boldsymbol{P}}\). For each \(k\in\mathbb{N}\) and \(q\in\mathbb{R}\), define
\[\varphi_{k}(q)=-\sum_{a\in\mathcal{A}}R_{a}\log\left(\sum_{s\in\vartheta^{k}( a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}^{k}(a)=s]^{q}\right),\]
where \(\mathbf{R}=(R_{a})_{a\in\mathcal{A}}\) is the right Perron-Frobenius eigenvector of the substitution matrix of \(\vartheta_{\mathbf{P}}\). We define the _inflation word \(L^{q}\)-spectrum of \(\vartheta_{\mathbf{P}}\)_ by
\[T_{\vartheta,\mathbf{P}}(q)=\liminf_{k\to\infty}\frac{\varphi_{k}(q)}{\lambda^{k}}.\]
We similarly define the upper variant \(\overline{T}_{\vartheta,\mathbf{P}}\) by taking a limit superior in place of the limit inferior.
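For small \(k\), the quantity \(\varphi_{k}(q)\) can be computed exactly by pushing the distribution of \(\vartheta_{\boldsymbol{P}}^{k}(a)\) forward one level at a time. The sketch below (our own, purely illustrative) does this for the random Fibonacci substitution with \(p=1/2\) and prints the normalised values \(\lambda^{-k}\varphi_{k}(q)\), which can be compared with the bounds in Theorem A.

```python
# Illustrative sketch (ours): exact computation of phi_k(q) for small k.
from math import log

probs = {"a": {"ab": 0.5, "ba": 0.5}, "b": {"a": 1.0}}
tau = (1 + 5 ** 0.5) / 2
lam = tau
R = {"a": 1 / tau, "b": 1 / tau ** 2}

def inflate_distribution(dist):
    """Push a distribution {word: probability} one level through theta_P."""
    out = {}
    for w, pw in dist.items():
        partial = {"": pw}
        for c in w:
            new = {}
            for u, pu in partial.items():
                for s, ps in probs[c].items():
                    new[u + s] = new.get(u + s, 0.0) + pu * ps
            partial = new
        for v, pv in partial.items():
            out[v] = out.get(v, 0.0) + pv
    return out

def phi_k(q, k):
    """phi_k(q) = -sum_a R_a log( sum_{v in theta^k(a)} P[theta_P^k(a) = v]^q )."""
    total = 0.0
    for a in R:
        dist = {a: 1.0}
        for _ in range(k):
            dist = inflate_distribution(dist)
        total -= R[a] * log(sum(p ** q for p in dist.values()))
    return total

q = 2.0
for k in range(1, 6):
    print(k, phi_k(q, k) / lam ** k)
```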
We first state some key properties of \(T_{\vartheta,\mathbf{P}}(q)\) which follow easily from the definition. Firstly, if the random substitution \(\vartheta_{\mathbf{P}}\) is compatible and satisfies either the disjoint set condition or the identical set condition with identical production probabilities, then the limit defining \(T_{\vartheta,\mathbf{P}}(q)\) exists for all \(q\in\mathbb{R}\) and is given by a closed form expression. For \(q\geq 0\), these properties transfer to the \(L^{q}\)-spectrum by Theorem A.
**Proposition 3.1**.: _Let \(\vartheta_{\mathbf{P}}\) be a primitive and compatible random substitution and \(q\in\mathbb{R}\). If \(\vartheta_{\mathbf{P}}\) satisfies the disjoint set condition, then the limit defining \(T_{\vartheta,\mathbf{P}}(q)\) exists and_
\[T_{\vartheta,\mathbf{P}}(q)=\frac{1}{\lambda-1}\varphi_{1}(q).\]
_If \(\vartheta_{\mathbf{P}}\) satisfies the identical set condition and has identical production probabilities, then the limit defining \(T_{\vartheta,\mathbf{P}}(q)\) exists and_
\[T_{\vartheta,\mathbf{P}}(q)=\frac{1}{\lambda}\varphi_{1}(q).\]
Proof.: Fix \(q\in\mathbb{R}\). By the Markov property of \(\vartheta_{\mathbf{P}}\), for all \(a\in\mathcal{A}\), \(k\in\mathbb{N}\) and \(v\in\vartheta^{k}(a)\),
\[\mathbb{P}[\vartheta_{\mathbf{P}}^{k}(a)=v]=\sum_{s\in\vartheta(a)}\mathbb{P}[ \vartheta_{\mathbf{P}}(a)=s]\,\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=v]. \tag{3.1}\]
If \(\vartheta_{\mathbf{P}}\) satisfies the disjoint set condition, then for every \(v\in\vartheta^{k}(a)\) there is a unique \(s(v)\in\vartheta(a)\) such that \(v\in\vartheta^{k-1}(s(v))\). Thus, for all \(s\in\vartheta(a)\) such that \(s\neq s(v)\), we have \(\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=v]=0\), and so it follows by (3.1) that
\[\sum_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k}(a)=v]^ {q} =\sum_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s(v )]^{q}\,\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s(v))=v]^{q}\] \[=\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]^{q} \sum_{u\in\vartheta^{k-1}(s)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=u]^{q}\] \[=\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s] ^{q}\right)\cdot\prod_{b\in\mathcal{A}}\left(\sum_{u\in\vartheta^{k-1}(b)} \mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(b)=u]^{q}\right)^{|\vartheta(a)|_{b}},\]
where in the final equality we use compatibility to split the second sum into inflation tiles. Thus
\[\varphi_{k}(q)= -\sum_{a\in\mathcal{A}}R_{a}\sum_{b\in\mathcal{A}}\lvert\vartheta(a )\rvert_{b}\log\left(\sum_{u\in\vartheta^{k-1}(b)}\mathbb{P}[\vartheta_{\mathbf{P}}^ {k-1}(b)=u]^{q}\right)\] \[-\sum_{a\in\mathcal{A}}R_{a}\log\left(\sum_{s\in\vartheta(a)} \mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]^{q}\right)\] \[= \lambda\varphi_{k-1}(q)+\varphi_{1}(q),\]
noting that \(\sum_{a\in\mathcal{A}}R_{a}\lvert\vartheta(a)\rvert_{b}=\lambda R_{b}\). It follows inductively that
\[\frac{1}{\lambda^{k}}\varphi_{k}(q)=\sum_{j=1}^{k}\frac{1}{\lambda^{j}} \varphi_{1}(q)\xrightarrow{k\to\infty}\frac{1}{\lambda-1}\varphi_{1}(q),\]
so the limit defining \(T_{\vartheta,\mathbf{P}}(q)\) exists and is equal to \((\lambda-1)^{-1}\varphi_{1}(q)\).
On the other hand, if \(\vartheta_{\mathbf{P}}\) satisfies the identical set condition and has identical production probabilities, then \(\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s^{1})=u]=\mathbb{P}[\vartheta_{\mathbf{P}}^ {k-1}(s^{2})=u]\) for all \(s^{1},s^{2}\in\vartheta(a)\). Hence, it follows by (3.1) that
\[\sum_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k}(a)=v]^{q}=\sum_{ v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=v]^{q}\]
for any choice of \(s\in\vartheta(a)\). By compatibility and the independence of the action,
\[\sum_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k}(a)=v]^{q}=\prod_ {b\in\mathcal{A}}\left(\sum_{u\in\vartheta^{k-1}(b)}\mathbb{P}[\vartheta_{\bm {P}}^{k-1}(b)=u]^{q}\right)^{\lvert\vartheta(a)\rvert_{b}},\]
and thus
\[\varphi_{k}(q)=-\sum_{b\in\mathcal{A}}\sum_{a\in\mathcal{A}}R_{a}\lvert\vartheta(a)\rvert_{b}\log\left(\sum_{v\in\vartheta^{k-1}(b)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(b)=v]^{q}\right)=\lambda\varphi_{k-1}(q),\]
noting that \(\sum_{a\in\mathcal{A}}R_{a}\lvert\vartheta(a)\rvert_{b}=\lambda R_{b}\). It follows by induction that \(\varphi_{k}(q)/\lambda^{k}=\varphi_{1}(q)/\lambda\) for all \(k\in\mathbb{N}\), so we conclude that \(T_{\vartheta,\mathbf{P}}(q)\) exists and equals \(\lambda^{-1}\varphi_{1}(q)\).
**Proposition 3.2**.: _Let \(\vartheta_{\mathbf{P}}\) be a primitive and compatible random substitution. For all \(q>1\) and \(q<0\), the sequence \((\lambda^{-k}\varphi_{k}(q))_{k}\) is non-decreasing; and for all \(0<q<1\), the sequence is non-increasing._
Proof.: This is largely a consequence of Jensen's inequality. Note that on the interval \((0,1]\), the function \(x\mapsto x^{q}\) is convex if \(q>1\) or \(q<0\), and concave if \(0<q<1\). We
first prove this for the case when \(q>1\) or \(q<0\). Observe that for all \(a\in\mathcal{A}\), \(k\in\mathbb{N}\) with \(k\geq 2\) and \(v\in\vartheta^{k}(a)\), it follows by the Markov property of \(\vartheta_{\boldsymbol{P}}\) that
\[\sum_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k}(a)=v]^{q} =\sum_{v\in\vartheta^{k}(a)}\left(\sum_{s\in\vartheta(a)\,:\,v\in\vartheta^{k-1}(s)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=v]\right)^{q}\] \[\leq\sum_{v\in\vartheta^{k}(a)}\left(\frac{\sum_{s\in\vartheta(a)\,:\,v\in\vartheta^{k-1}(s)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=v]^{q}}{\sum_{s\in\vartheta(a)\,:\,v\in\vartheta^{k-1}(s)}\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]}\right)\] \[\leq\prod_{b\in\mathcal{A}}\left(\sum_{w\in\vartheta^{k-1}(b)}\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(b)=w]^{q}\right)^{|\vartheta(a)|_{b}}.\]
In the second line, we apply Jensen's inequality, and in the last line, we use compatibility to decompose each probability \(\mathbb{P}[\vartheta_{\mathbf{P}}^{k-1}(s)=w]\) into inflation tiles. It follows that
\[\frac{1}{\lambda^{k}}\varphi_{k}(q)\geq-\frac{1}{\lambda^{k}}\sum_{b\in\mathcal{A}}\sum_{a\in\mathcal{A}}R_{a}|\vartheta(a)|_{b}\log\left(\sum_{w\in\vartheta^{k-1}(b)}\mathbb{P}[\vartheta_{\boldsymbol{P}}^{k-1}(b)=w]^{q}\right)=\frac{1}{\lambda^{k-1}}\varphi_{k-1}(q),\]
noting that \(\sum_{a\in\mathcal{A}}R_{a}|\vartheta(a)|_{b}=\lambda R_{b}\).
The \(0<q<1\) case follows similarly, with Jensen's inequality giving the opposite inequality since \(x\mapsto x^{q}\) is concave.
An analogous monotonicity result does not hold in general for the \((\lambda^{k}-1)^{-1}\varphi_{k}(q)\) bounds, even when \(q\geq 0\). A counterexample is given by the random period doubling substitution (Example 5.7) with non-uniform probabilities.
### \(L^{q}\)-spectra for non-negative \(q\)
The majority of the work in proving Theorem A lies in proving the bounds in (1.1), (1.2) and (1.3). Observe that it suffices to prove the bound for the case \(k=1\), since we then obtain the bound for other \(k\in\mathbb{N}\) by considering higher powers of the random substitution. We first prove the upper bound for the case \(q>1\).
Throughout this section, we assume that the random substitution is primitive and compatible.
**Proposition 3.3**.: _For all \(q>1\),_
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-1}\varphi_{1}(q).\]
Proof.: Fix \(q>1\). Let \(\varepsilon>0\) and, for each \(n\in\mathbb{N}\), let \(m(n)\) be the integer defined by
\[m(n)=\left\lceil\frac{n}{\lambda-\varepsilon}\right\rceil.\]
Then the integers \(n\) and \(m(n)\) satisfy the conditions of Lemma 2.12, so it follows that
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}=\sum_{u\in \mathcal{L}_{\vartheta}^{n}}\left(\frac{1}{\lambda}\sum_{v\in\mathcal{L}_{ \vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])\sum_{j=1}^{|\vartheta(v_{1})|} \mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u]\right)^{q}.\]
Since \(q>1\), the function \(x\mapsto x^{q}\) is superadditive on the interval \([0,1]\), so
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q} \geq\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\sum_{v\in\mathcal{L}_ {\vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])^{q}\left(\frac{1}{\lambda}\sum_{ j=1}^{|\vartheta(v_{1})|}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u] \right)^{q}\] \[\geq\frac{1}{\lambda^{q}}\sum_{v\in\mathcal{L}_{\vartheta}^{m(n) }}\mu_{\boldsymbol{P}}([v])^{q}\sum_{j=1}^{|\vartheta(v_{1})|}\sum_{u\in \mathcal{L}_{\vartheta}^{n}}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n -1]}=u]^{q}.\]
We now bound the probability on the right of this expression by the production probability of an inflation word. Observe that if \(w(u)\in\vartheta(v)\) contains \(u\) as a subword in position \(j\), then \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u]\geq\mathbb{P}[ \vartheta_{\boldsymbol{P}}(v)=w(u)].\) Hence,
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v )_{[j,j+n-1]}=u]^{q}\geq\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{ \boldsymbol{P}}(v)=w]^{q}\]
for all \(j\in\{1,\ldots,|\vartheta(v_{1})|\}\).
Since \(\vartheta_{\boldsymbol{P}}\) is compatible, by Lemma 2.10 there exists an \(N\in\mathbb{N}\) such that for all \(n\geq N\) and all \(v\in\mathcal{L}_{\vartheta}^{m(n)}\)
\[\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=w]^{q}\geq \prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{ \boldsymbol{P}}(a)=s]^{q}\right)^{m(n)(R_{a}+\varepsilon)}.\]
Hence,
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}\geq\frac{ 1}{\lambda^{q}}\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{m(n)(R_{a}+\varepsilon)}\sum_{v \in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])^{q}.\]
Taking logarithms, rearranging and dividing by \(n\) gives
\[-\frac{1}{n}\log\left(\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_ {\boldsymbol{P}}([u])^{q}\right) \leq -\frac{1}{n}\log\left(\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}} \mu_{\boldsymbol{P}}([v])^{q}\right)+\frac{1}{n}\log\lambda^{q}\] \[-\frac{m(n)}{n}\sum_{a\in\mathcal{A}}(R_{a}+\varepsilon)\log \left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q} \right).\]
Noting that \(m(n)/n\to(\lambda-\varepsilon)^{-1}\) as \(n\to\infty\), it follows by Lemma 2.2 that
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-\varepsilon}\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)-\frac{1}{\lambda-\varepsilon}\sum_{a\in\mathcal{A}}R_{a}\log\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)+c\varepsilon\]
where \(c\coloneqq(\#\mathcal{A})\max_{a\in\mathcal{A}}\lvert\log(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q})\rvert\). But \(\varepsilon>0\) was arbitrary; letting \(\varepsilon\to 0\) and rearranging we obtain
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-1}\varphi_{1}(q),\]
which completes the proof.
We now prove the corresponding lower bound.
**Proposition 3.4**.: _For all \(q>1\),_
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda}\varphi_{1}(q).\]
Proof.: Let \(\varepsilon>0\) and, for each \(n\in\mathbb{N}\), let \(m(n)\) be the integer defined by
\[m(n)=\left\lceil\frac{n}{\lambda-\varepsilon}\right\rceil.\]
Since \(q>1\), the function \(x\mapsto x^{q}\) is convex on the interval \([0,1]\). Hence, it follows by Lemma 2.12 and two applications of Jensen's inequality that
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^ {q} =\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\left(\frac{1}{\lambda}\sum _{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])\sum_{j=1}^{| \vartheta(v_{1})|}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u] \right)^{q}\] \[\leq\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{\boldsymbol{P}} ([v])\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\left(\frac{1}{\lambda}\sum_{j=1}^{ |\vartheta(v_{1})|}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u] \right)^{q}\] \[\leq\frac{|\vartheta|^{q-1}}{\lambda^{q}}\sum_{v\in\mathcal{L}_{ \vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])\sum_{j=1}^{|\vartheta(v_{1})|} \sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v )_{[j,j+n-1]}=u]^{q}.\]
We bound the probability on the right of this expression above by the production probability of a sufficiently large inflation word contained in \(u\). By compatibility, there is an integer \(k(n)\) such that \(j+n\leq|\vartheta(v_{[1,m(n)-k(n)]})|\) for all \(n\in\mathbb{N}\) and \(v\in\mathcal{L}_{\vartheta}^{m(n)}\), where \(\lim k(n)/n=0\). In particular, for every \(v\in\mathcal{L}_{\vartheta}^{m(n)}\), a realisation of \(\vartheta(v_{[2,m(n)-k(n)]})\) is contained in \(u\) as an inflation word, so
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v) _{[j,j+n-1]}=u]^{q}\leq\sum_{w\in\vartheta(v_{2}\cdots v_{m(n)-k(n)})} \mathbb{P}[\vartheta(v_{2}\cdots v_{m(n)-k(n)})=w]^{q}.\]
We now bound this quantity uniformly for all \(v\in\mathcal{L}_{\vartheta}^{m(n)}\). By Lemma 2.10 and the above, there is an \(N\in\mathbb{N}\) such that for all \(n\geq N\)
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}\leq\frac{| \vartheta|^{q-1}}{\lambda^{q}}\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta (a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{(m(n)-k(n)-1)(R_{ a}-\varepsilon)}.\]
Taking logarithms, rearranging and dividing by \(n\) gives
\[-\frac{1}{n}\log\left(\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}\right) \geq-\frac{m(n)-k(n)-1}{n}\sum_{a\in\mathcal{A}}(R_{a}-\varepsilon)\log\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)\] \[\qquad-\frac{\log(|\vartheta|^{q-1}/\lambda^{q})}{n}\] \[\xrightarrow{n\to\infty}-\frac{1}{\lambda-\varepsilon}\sum_{a\in\mathcal{A}}(R_{a}-\varepsilon)\log\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right),\]
But \(\varepsilon>0\) was arbitrary, so
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda}\varphi_{1}(q),\]
which completes the proof.
We now state the bounds for the \(q\in(0,1)\) case. We do not give a proof here since the arguments mirror the proofs of Proposition 3.3 and Proposition 3.4, except with reversed inequalities since \(x\mapsto x^{q}\) is concave rather than convex and subadditive as opposed to superadditive.
**Proposition 3.5**.: _If \(q\in(0,1)\), then_
\[\frac{1}{\lambda-1}\varphi_{1}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q)\leq \overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda}\varphi_{1}(q).\]
### \(L^{q}\)-spectra for negative \(q\): lower bounds
For \(q<0\), there exist primitive and compatible random substitutions for which \(\tau_{\mu_{\boldsymbol{P}}}(q)\) and \(T_{\vartheta,\boldsymbol{P}}(q)\) do not coincide (see, for instance, Example 5.2). However, we still obtain that \(\tau_{\mu_{\boldsymbol{P}}}(q)\geq T_{\vartheta,\boldsymbol{P}}(q)\) for all \(q<0\). To prove this, it suffices to show the sequence of bounds in (1.3) holds. Again, we only need to prove the bound for \(k=1\) since the remaining bounds follow by considering powers of the random substitution.
**Proposition 3.6**.: _If \(\vartheta_{\boldsymbol{P}}\) is a primitive and compatible random substitution, then for all \(q<0\),_
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda-1}\varphi_{1}(q).\]
Proof.: Let \(\varepsilon>0\) be sufficiently small and for \(n\) sufficiently large, let \(m(n)\) be the integer defined by
\[m(n)=\left\lceil\frac{n}{\lambda-\varepsilon}\right\rceil.\]
To avoid division by zero, we rewrite Lemma 2.12 in a form where we do not sum over elements equal to zero. Here, we write \(u\blacktriangleleft\vartheta(v)\) to mean there is a realisation \(w\) of \(\vartheta(v)\) for which \(u\) appears as a subword of \(w\). For each \(v\in\mathcal{L}_{\vartheta}^{m(n)}\) and \(u\in\mathcal{L}_{\vartheta}^{n}\), let \(\mathcal{J}(v,u)=\{j\in\{1,\ldots,|\vartheta(v_{1})|\}:u\in\vartheta(v)_{[j,j+n-1]}\}\). Observe that if \(j\notin\mathcal{J}(v,u)\), then \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u]=0\), and if \(u\) does not appear as a subword of any realisation of \(\vartheta(v)\), then \(\mathcal{J}(v,u)=\varnothing\). Therefore, we can rewrite Lemma 2.12 as
\[\mu_{\boldsymbol{P}}([u])=\frac{1}{\lambda}\sum_{\begin{subarray}{c}v\in \mathcal{L}_{\vartheta}^{m(n)}\\ u\blacktriangleleft\vartheta(v)\end{subarray}}\mu_{\boldsymbol{P}}([v])\sum_ {j\in\mathcal{J}(v,u)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u].\]
Hence, by the subadditivity of the function \(x\mapsto x^{q}\) on the domain \((0,1]\),
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q} =\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\left(\frac{1}{\lambda} \sum_{\begin{subarray}{c}v\in\mathcal{L}_{\vartheta}^{m(n)}\\ u\blacktriangleleft\vartheta(v)\end{subarray}}\mu_{\boldsymbol{P}}([v])\sum_ {j\in\mathcal{J}(v,u)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u] \right)^{q}\right.\] \[\leq\frac{1}{\lambda^{q}}\sum_{u\in\mathcal{L}_{\vartheta}^{n}} \sum_{\begin{subarray}{c}v\in\mathcal{L}_{\vartheta}^{m(n)}\\ u\blacktriangleleft\vartheta(v)\end{subarray}}\mu_{\boldsymbol{P}}([v])^{q} \sum_{j\in\mathcal{J}(v,u)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n- 1]}=u]^{q}\] \[=\frac{1}{\lambda^{q}}\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}} \mu_{\boldsymbol{P}}([v])^{q}\sum_{\begin{subarray}{c}u\in\mathcal{L}_{ \vartheta}^{n}\\ u\blacktriangleleft\vartheta(v)\end{subarray}}\sum_{j\in\mathcal{J}(v,u)} \mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u]^{q}.\]
For each \(j\in\mathcal{J}(v,u)\), let \(w_{j}(u)\in\vartheta(v)\) be a word such that \(w_{j}(u)_{[j,j+n-1]}=u\). Note that there are at most \(K\coloneqq 2|\vartheta|(\#\mathcal{A})^{|\vartheta|}\) different \(u\in\mathcal{L}_{\vartheta}^{n}\) such that \(w_{j}(u)_{[j,j+n-1]}=u\). Hence,
\[\sum_{\begin{subarray}{c}u\in\mathcal{L}_{\vartheta}^{n}\\ u\blacktriangleleft\vartheta(v)\end{subarray}}\sum_{j\in\mathcal{J}(v,u)} \mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j,j+n-1]}=u]^{q} \leq\sum_{\begin{subarray}{c}u\in\mathcal{L}_{\vartheta}^{n} \\ u\blacktriangleleft\vartheta(v)\end{subarray}}\sum_{j\in\mathcal{J}(v,u)} \mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=w_{j}(u)]^{q}\] \[\leq K\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{\boldsymbol{ P}}(v)=w]^{q}\]
and it follows that
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}\leq \lambda^{-q}K\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v]) ^{q}\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=w]^{q}.\]
Thus, by Lemma 2.10, for all \(\varepsilon>0\) there is an integer \(N\) such that for all \(n\geq N\)
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{\boldsymbol{P}}([u])^{q}\leq\lambda ^{-q}K\prod_{a\in\mathcal{A}}\left(\sum_{s\in\vartheta(a)}\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{m(n)(R_{a}+\varepsilon)}\left( \sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{\boldsymbol{P}}([v])^{q} \right).\]
Taking logarithms, rearranging and dividing by \(n\) gives
\[-\frac{1}{n}\log\left(\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{ \boldsymbol{P}}([u])^{q}\right)\geq \ -\frac{1}{n}\log\left(\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{ \boldsymbol{P}}([v])^{q}\right)+\frac{1}{n}\log(\lambda^{-q}K)\] \[\ -\frac{m(n)}{n}\sum_{a\in\mathcal{A}}(R_{a}+\varepsilon)\log \left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q} \right).\]
Noting that \(m(n)/n\to(\lambda-\varepsilon)^{-1}\) as \(n\to\infty\), it follows by Lemma 2.2 that
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda-\varepsilon}\tau_{\mu_{\boldsymbol{P}}}(q)-\frac{1}{\lambda-\varepsilon}\sum_{a\in\mathcal{A}}R_{a}\log\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)-c\varepsilon\]
where \(c\coloneqq(\#\mathcal{A})\max_{a\in\mathcal{A}}\log(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q})\). Letting \(\varepsilon\to 0\) and rearranging, we obtain
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda-1}\varphi_{1}(q),\]
which completes the proof.
### Proof of general bounds for the \(L^{q}\) spectrum
Using the bounds proven in the prior two sections, we can now complete the proof of Theorem A.
Proof of Theorem A.: Since, for each \(k\in\mathbb{N}\), the random substitution \(\vartheta_{\boldsymbol{P}}^{k}\) gives rise to the same frequency measure as \(\vartheta_{\boldsymbol{P}}\), applying Proposition 3.3, Proposition 3.4 and Proposition 3.5 to \(\vartheta_{\boldsymbol{P}}^{k}\),
\[\frac{1}{\lambda^{k}}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q)\leq \overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda^{k}-1}\varphi_{k }(q)\]
for all \(q>1\) and
\[\frac{1}{\lambda^{k}-1}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q)\leq \overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda^{k}}\varphi_{k} (q)\]
for \(0<q<1\). Since \(\lambda^{-k}\varphi_{k}(q)\) and \((\lambda^{k}-1)^{-1}\varphi_{k}(q)\) differ by a factor \(\lambda^{k}(\lambda^{k}-1)^{-1}\to 1\), the upper and lower bounds converge to a common limit, so letting \(k\to\infty\) gives
\[\tau_{\mu_{\boldsymbol{P}}}(q)=\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)=T_{ \vartheta,\boldsymbol{P}}(q)=\overline{T}_{\vartheta,\boldsymbol{P}}(q)\]
for all \(q\in(0,1)\cup(1,\infty)\), so the limits defining \(\tau_{\mu_{\boldsymbol{P}}}(q)\) and \(T_{\vartheta,\boldsymbol{P}}(q)\) both exist and coincide. The same holds for \(q=0\) and \(q=1\) by continuity. The monotonicity of the bounds \(\lambda^{-k}\varphi_{k}(q)\) follows by Proposition 3.2. Finally for \(q<0\), for each \(k\in\mathbb{N}\), applying Proposition 3.6 to \(\vartheta_{\boldsymbol{P}}^{k}\) gives that
\[\tau_{\mu_{\boldsymbol{P}}}(q)\geq\frac{1}{\lambda^{k}-1}\varphi_{k}(q).\]
Passing to the limit completes the proof.
### \(L^{q}\)-spectra for negative \(q\) under recognisability
While the upper bound does not hold in general for \(q<0\), for recognisable random substitutions we can obtain this using Lemma 2.18, which we recall is a refinement of Lemma 2.12 using recognisability.
**Proposition 3.7**.: _If \(\vartheta_{\boldsymbol{P}}\) is a primitive, compatible and recognisable random substitution, then for all \(q<0\),_
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-1}\varphi_{1}(q).\]
Proof.: Let \(\varepsilon>0\) be sufficiently small and, for each \(n\in\mathbb{N}\) sufficiently large, let \(m(n)\) be the integer defined by
\[m(n)=\left\lfloor\frac{n}{\lambda-\varepsilon}\right\rfloor.\]
For each \(u\in\mathcal{L}_{\vartheta}^{n+2\kappa(\vartheta)}\), let \(w(u)\) denote the recognisable core of \(u\). Further, let \(v(u)\) denote the unique legal word such that \(w(u)\in\vartheta(v(u))\). Then, by Lemma 2.18, we have
\[\mu_{\boldsymbol{P}}([u])\leq\frac{\kappa(\vartheta)}{\lambda}\mu_{ \boldsymbol{P}}([v(u)])\mathbb{P}[\vartheta(v(u))=w(u)]. \tag{3.2}\]
Observe that for all \(u\in\mathcal{L}_{\vartheta}^{n+2\kappa(\vartheta)}\), the recognisable core \(w(u)\) has length at least \(n\) so, by compatibility, there is an integer \(N\) such that if \(n\geq N\), then \(|v(u)|\geq m(n)\) for all \(u\in\mathcal{L}_{\vartheta}^{n+2\kappa(\vartheta)}\). In particular, for every \(u\) there exists a \(v\in\mathcal{L}_{\vartheta}^{m(n)}\) such that \(\mu_{\boldsymbol{P}}([v(u)])\leq\mu_{\boldsymbol{P}}([v])\) and a \(w\in\vartheta(v)\) such that \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(v(u))=w(u)]\leq\mathbb{P}[\vartheta_{ \boldsymbol{P}}(v)=w]\). Hence, it follows by (3.2) and Lemma 2.10 that
\[\sum_{u\in\mathcal{L}_{\vartheta}^{n+2\kappa(\vartheta)}}\mu_{ \boldsymbol{P}}([u])^{q} \geq\frac{1}{\lambda^{q}}\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}} \mu_{\boldsymbol{P}}([v])^{q}\sum_{w\in\vartheta(v)}\mathbb{P}[\vartheta_{ \boldsymbol{P}}(v)=w]^{q}\] \[\geq\frac{1}{\lambda^{q}}\prod_{a\in\mathcal{A}}\left(\sum_{s \in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)^{m(R_{ a}-\varepsilon)}\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}}\mu_{ \boldsymbol{P}}([v])^{q},\]
noting that since \(q<0\), the function \(x\mapsto x^{q}\) is decreasing on \((0,1]\). Taking logarithms, rearranging and dividing by \(n\) gives
\[-\frac{1}{n}\log\left(\sum_{u\in\mathcal{L}_{\vartheta}^{n}}\mu_{ \boldsymbol{P}}([u])^{q}\right) \leq -\frac{1}{n}\log\left(\sum_{v\in\mathcal{L}_{\vartheta}^{m(n)}} \mu_{\boldsymbol{P}}([v])^{q}\right)+\frac{1}{n}\log\lambda^{q}\] \[-\frac{m(n)}{n}\sum_{a\in\mathcal{A}}(R_{a}-\varepsilon)\log \left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q} \right).\]
Noting that \(m(n)/n\to(\lambda-\varepsilon)^{-1}\) as \(n\to\infty\), it follows by Lemma 2.2 that
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-\varepsilon}\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)-\frac{1}{\lambda-\varepsilon}\sum_{a\in\mathcal{A}}R_{a}\log\left(\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q}\right)+c\varepsilon\]
where \(c\coloneqq(\#\mathcal{A})\max_{a\in\mathcal{A}}\log(\sum_{s\in\vartheta(a)} \mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=s]^{q})\). Letting \(\varepsilon\to 0\) and rearranging, we obtain
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda-1}\varphi_{1}(q),\]
which completes the proof.
### Recovering entropy from the \(L^{q}\)-spectrum
Since the \(L^{q}\)-spectrum encodes both topological and measure theoretic entropy, Theorem A provides an alternative means of proving the coincidence of these quantities with the inflation word analogues introduced in [14, 15].
For notational simplicity, set
\[\rho_{k}=-\sum_{a\in\mathcal{A}}R_{a}\sum_{s\in\vartheta^{k}(a)}\mathbb{P}[ \vartheta_{\boldsymbol{P}}^{k}(a)=s]\log(\mathbb{P}[\vartheta_{\boldsymbol{P }}^{k}(a)=s]).\]
Proof of Corollary C.: We first establish the result for topological entropy. By Theorem A, the limit defining \(T_{\vartheta,\boldsymbol{P}}(0)\) exists; in particular,
\[\lim_{m\to\infty}\frac{1}{\lambda^{m}}\sum_{a\in\mathcal{A}}R_{a}\log(\#\vartheta^{m}(a))\]
exists. Since \(h_{\mathrm{top}}(X_{\vartheta})=-\tau_{\mu_{\boldsymbol{P}}}(0)=-T_{\vartheta,\boldsymbol{P}}(0)\), we conclude that
\[h_{\mathrm{top}}(X_{\vartheta})=\lim_{m\to\infty}\frac{1}{\lambda^{m}}\sum_{a\in\mathcal{A}}R_{a}\log(\#\vartheta^{m}(a))\]
as claimed.
Now we consider measure theoretic entropy. We first make the following elementary observation: if \(f\) and \(g\) are concave functions with \(f(1)=g(1)\) and \(f(x)\leq g(x)\) for all \(x\geq 1\), then \(f^{+}(1)\leq g^{+}(1)\). Indeed, for all \(\epsilon>0\),
\[\frac{f(1+\epsilon)-f(1)}{\epsilon}\leq\frac{g(1+\epsilon)-g(1)}{\epsilon},\]
and taking the limit as \(\epsilon\) goes to \(0\) (which always exists by concavity) yields the desired inequality.
Recall that \(\tau_{\mu_{\boldsymbol{P}}}\) and \(\lambda^{-k}\varphi_{k}\) are concave functions with \(\tau_{\mu_{\boldsymbol{P}}}(1)=\varphi_{k}(1)=0\) for all \(k\in\mathbb{N}\). Moreover, \(\varphi_{k}\) is differentiable for all \(k\in\mathbb{N}\) with \(\varphi_{k}^{\prime}(1)=\rho_{k}\) and by Proposition 3.2 and Theorem A, \(\left(\lambda^{-k}\varphi_{k}\right)_{k=1}^{\infty}\) converges monotonically to \(\tau_{\mu_{\boldsymbol{P}}}\) from
below. In particular, \(\rho_{k}/\lambda^{k}\) is a monotonically increasing sequence bounded above by \(\tau_{\mu_{\boldsymbol{P}}}^{+}(1)\), so that the limit indeed exists. Thus
\[\tau_{\mu_{\boldsymbol{P}}}^{+}(1)=\lim_{k\to\infty}\frac{\rho_{k}}{\lambda^{k}}\]
since \(\varphi_{k}(q)/(\lambda^{k}-1)\geq\tau_{\mu_{\boldsymbol{P}}}(q)\) for all \(q\in(1,\infty)\), using the preceding observation.
The result for \(\tau_{\mu_{\boldsymbol{P}}}^{-}(1)\) follows by an identical argument, instead using monotonicity and the corresponding bounds for \(q\in(0,1)\). Thus \(\tau_{\mu_{\boldsymbol{P}}}^{\prime}(1)=\lim_{k\to\infty}\rho_{k}/\lambda^{k}\), so the desired result follows by Lemma 2.1(c).
## 4. Recognisability and the multifractal formalism
In this section we establish the multifractal formalism as stated in Theorem D. To do this, we prove a variational principle by considering typical local dimensions of one frequency measure \(\mu_{\boldsymbol{P}}\) relative to another frequency measure \(\mu_{\boldsymbol{Q}}\). Our strategy is to prove the almost-sure existence of _relative letter frequencies_ in Lemma 4.3: this result, combined with recognisability, gives Proposition 4.5. The multifractal formalism then follows from this dimensional result combined with the formula for the \(L^{q}\)-spectrum proven in Proposition 3.7--the proof is given in Section 4.2.
### Non-typical local dimensions
To prove the multifractal formalism for a given frequency measure \(\mu_{\boldsymbol{P}}\), we show that for every \(\alpha\in[\alpha_{\min},\alpha_{\max}]\), there exists another frequency measure \(\mu_{\boldsymbol{Q}}\) such that \(\dim_{\mathrm{H}}\mu_{\boldsymbol{Q}}\geq\tau_{\mu_{\boldsymbol{P}}}^{*}(\alpha)\) and \(\dim_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)=\alpha\) for \(\mu_{\boldsymbol{Q}}\)-almost every \(x\in X_{\vartheta}\). Given a primitive set-valued substitution \(\vartheta\), permissible probabilities \(\boldsymbol{P}\) and \(\boldsymbol{Q}\), \(m\in\mathbb{N}\) and \(a\in\mathcal{A}\), define the quantity \(H^{m,a}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta)\) by
\[H^{m,a}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta)=\sum_{v\in\vartheta^{m}(a )}-\mathbb{P}[\vartheta^{m}_{\boldsymbol{Q}}(a)=v]\log\mathbb{P}[\vartheta^{ m}_{\boldsymbol{P}}(a)=v].\]
Further, let \(\boldsymbol{H}^{m}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta)\) denote the vector \((H^{m,a}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta))_{a\in\mathcal{A}}\). We first prove some properties of the quantity \(\boldsymbol{H}^{m}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta)\) which we will use in the proof of Proposition 4.5.
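Note that when \(\boldsymbol{Q}=\boldsymbol{P}\), the quantity \(H^{m,a}_{\boldsymbol{P},\boldsymbol{P}}(\vartheta)\) is simply the Shannon entropy of the random inflation word \(\vartheta^{m}_{\boldsymbol{P}}(a)\), and \(\sum_{a\in\mathcal{A}}R_{a}H^{m,a}_{\boldsymbol{P},\boldsymbol{P}}(\vartheta)=\rho_{m}\) is precisely the quantity used in the proof of Corollary C.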
**Lemma 4.1**.: _If \(\vartheta\) is a primitive and compatible set-valued substitution and \(\boldsymbol{P}\) and \(\boldsymbol{Q}\) are permissible probabilities, then for all \(m\in\mathbb{N}\), \(a\in\mathcal{A}\) and \(s\in\vartheta(a)\),_
\[\sum_{v\in\vartheta^{m}(s)}-\mathbb{P}[\vartheta^{m}_{\boldsymbol{Q}}(s)=v]\log\mathbb{P}[\vartheta^{m}_{\boldsymbol{P}}(s)=v]=\sum_{b\in\mathcal{A}}\lvert\vartheta(a)\rvert_{b}\,H^{m,b}_{\boldsymbol{P},\boldsymbol{Q}}(\vartheta).\]
Proof.: Since \(\vartheta\) is compatible, we can decompose each \(v\in\vartheta^{m}(s)\) into inflation words \(v=v^{1}\cdots v^{|\vartheta(a)|}\). By the Markov property of \(\vartheta_{\boldsymbol{P}}\) (respectively \(\vartheta_{\boldsymbol{Q}}\)), we have that
\[\mathbb{P}[\vartheta^{m}_{\boldsymbol{P}}(s)=v]=\mathbb{P}[\vartheta^{m}_{ \boldsymbol{P}}(s_{1})=v^{1}]\cdots\mathbb{P}[\vartheta^{m}_{\boldsymbol{P}}(s _{|\vartheta(a)|})=v^{|\vartheta(a)|}].\]
Therefore
\[\sum_{v\in\vartheta^{m}(s)}-\mathbb{P}[\vartheta^{m}_{\mathbf{Q}}(s)=v]\log\mathbb{P}[\vartheta^{m}_{\mathbf{P}}(s)=v] =\sum_{b\in\mathcal{A}}\lvert\vartheta(a)\rvert_{b}\sum_{w\in\vartheta^{m}(b)}-\mathbb{P}[\vartheta^{m}_{\mathbf{Q}}(b)=w]\log\mathbb{P}[\vartheta^{m}_{\mathbf{P}}(b)=w]\] \[=\sum_{b\in\mathcal{A}}\lvert\vartheta(a)\rvert_{b}H^{m,b}_{\mathbf{P},\mathbf{Q}}(\vartheta),\]
which completes the proof.
**Lemma 4.2**.: _If \(\vartheta\) is a primitive and compatible set-valued substitution satisfying the disjoint set condition, with right Perron-Frobenius eigenvector \(\mathbf{R}\), and \(\mathbf{P}\) and \(\mathbf{Q}\) are permissible probabilities, then the random substitutions \(\vartheta_{\mathbf{P}}=(\vartheta,\mathbf{P})\) and \(\vartheta_{\mathbf{Q}}=(\vartheta,\mathbf{Q})\) satisfy_
\[\frac{1}{\lambda^{m}}\mathbf{H}^{m}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}\to\frac {1}{\lambda-1}\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}\]
_as \(m\to\infty\)._
Proof.: Since \(\vartheta\) satisfies the disjoint set condition, for all \(m\in\mathbb{N}\) and \(a\in\mathcal{A}\),
\[\mathbf{H}^{m+1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R} =-\sum_{a\in\mathcal{A}}R_{a}\sum_{v\in\vartheta^{m+1}(a)}\mathbb{P}[\vartheta^{m+1}_{\mathbf{Q}}(a)=v]\log\mathbb{P}[\vartheta^{m+1}_{\mathbf{P}}(a)=v]\] \[=-\sum_{a\in\mathcal{A}}R_{a}\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{Q}}(a)=s]\log\mathbb{P}[\vartheta_{\mathbf{P}}(a)=s]\] \[\quad-\sum_{a\in\mathcal{A}}R_{a}\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\mathbf{Q}}(a)=s]\sum_{v\in\vartheta^{m}(s)}\mathbb{P}[\vartheta^{m}_{\mathbf{Q}}(s)=v]\log\mathbb{P}[\vartheta^{m}_{\mathbf{P}}(s)=v]\] \[=\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}+\sum_{b\in\mathcal{A}}H^{m,b}_{\mathbf{P},\mathbf{Q}}(\vartheta)\sum_{a\in\mathcal{A}}\lvert\vartheta(a)\rvert_{b}R_{a}\] \[=\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}+\lambda\sum_{b\in\mathcal{A}}R_{b}H^{m,b}_{\mathbf{P},\mathbf{Q}}(\vartheta)\] \[=\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}+\lambda\mathbf{H}^{m}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}.\]
In the second equality we use the Markov property of \(\vartheta_{\mathbf{P}}\) and \(\vartheta_{\mathbf{Q}}\), laws of logarithms, and that \(\sum_{v\in\vartheta^{m}(s)}\mathbb{P}[\vartheta^{m}_{\mathbf{Q}}(s)=v]=1\) for all \(s\in\vartheta(a)\); in the third we apply Lemma 4.1; in the fourth we use that \(M_{\vartheta}\mathbf{R}=\lambda\mathbf{R}\). Applying the above inductively,
\[\frac{1}{\lambda^{m}}\mathbf{H}^{m}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R}=\sum_{j =1}^{m}\frac{1}{\lambda^{j}}\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}(\vartheta)\cdot\mathbf{R} \xrightarrow{m\to\infty}\frac{1}{\lambda-1}\mathbf{H}^{1}_{\mathbf{P},\mathbf{Q}}( \vartheta)\cdot\mathbf{R},\]
which completes the proof.
Any bi-infinite sequence \(x\) in the subshift of a recognisable random substitution can be written as a bi-infinite concatenation of exact inflation words \((w^{n,a_{n}})\), where \(w^{n,a_{n}}\) is an inflation word generated from the letter \(a_{n}\). Given a recognisable set-valued
substitution \(\vartheta\), \(a\in\mathcal{A}\) and \(w\in\vartheta(a)\), we define the inflation word frequency of \((a,w)\) in \(x\in X_{\vartheta}\) by
\[f_{x}(a,w) =\lim_{n\to\infty}f_{x}^{n}(a,w)\] \[f_{x}^{n}(a,w) =\frac{1}{2n+1}\#\{m\colon a_{m}=a,w^{m,a_{m}}=w,w^{m,a_{m}}\text { in }x_{[-n,n]}\},\]
provided the limit exists. For a given frequency measure \(\mu_{\boldsymbol{P}}\), the inflation word frequency of a \(\mu_{\boldsymbol{P}}\)-typical word is determined by the production probabilities. More specifically, we have the following.
**Lemma 4.3**.: _Let \(\vartheta_{\boldsymbol{P}}=(\vartheta,\boldsymbol{P})\) be a primitive, compatible and recognisable random substitution with corresponding frequency measure \(\mu_{\boldsymbol{P}}\). For \(\mu_{\boldsymbol{P}}\)-almost every \(x\in X_{\vartheta}\), the inflation word frequency exists and is given by_
\[f_{x}(a,w)=\frac{1}{\lambda}R_{a}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w],\]
_for all \(a\in\mathcal{A}\) and \(w\in\vartheta(a)\)._
Proof.: Let \(A_{a,w}\) be the set of points \(x\in X_{\vartheta}\) such that the above does _not_ hold. We show that \(A_{a,w}\) is a null set. Taking the complement and then the intersection over all \(a,w\) gives a full-measure set with the required property. Given \(\varepsilon>0\), let \(E(n,\varepsilon)\) be the set of \(x\in X_{\vartheta}\) such that
\[|f_{x}^{n}(a,w)-\frac{1}{\lambda}R_{a}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a )=w]|>\varepsilon.\]
By the Borel-Cantelli lemma, it suffices to show that
\[\sum_{n\in\mathbb{N}}\mu_{\boldsymbol{P}}(E(n,\varepsilon))<\infty\]
for all \(\varepsilon>0\) in order to conclude that \(A_{a,w}\) is a nullset. To this end, we show that \(\mu_{\boldsymbol{P}}(E(n,\varepsilon))\) decays exponentially with \(n\). Given \(u\) with \(|u|=2n+1>2\kappa(\vartheta)\), let \(u^{R}\) denote the recognisable core of \(u\), which has length at least \(|u|-2\kappa(\vartheta)\). Lemma 2.18 gives that
\[\mu_{\boldsymbol{P}}([u])\leq\frac{\kappa(\vartheta)}{\lambda}\mu_{\boldsymbol {P}}([v])\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=u^{R}]=\frac{\kappa( \vartheta)}{\lambda}\mu_{\boldsymbol{P}}([v])\prod_{i=1}^{|v|}\mathbb{P}[ \vartheta_{\boldsymbol{P}}(v_{i})=w^{i,v_{i}}]\]
where each \(w^{i,v_{i}}\) is the inflated image of \(v_{i}\) in \(u^{R}\). By compatibility, we can choose an integer \(N\) such that every \(v\) of length at least \(N\) satisfies \(|v|(R_{a}-\varepsilon/3)\leq|v|_{a}\leq|v|(R_{a}+\varepsilon/3)\) for all \(a\in\mathcal{A}\). For each \(v\), let \(A(v)\) denote the set of \(u^{\prime}\in\vartheta(v)\) such that the frequency of indices \(i\in\{j:v_{j}=a\}\) with \(w^{i,a}=w\) deviates from \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w]\) by more than \(\varepsilon/3\). Since \(\vartheta_{\boldsymbol{P}}\) acts independently on letters, it follows by Cramér's theorem that the sum \(\sum_{u^{\prime}\in A(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=u^{\prime}]\) decays exponentially with \(|v|_{a}\)
(and hence with \(|v|\)). In particular, there is a constant \(C>0\), independent of the choice of \(v\), such that
\[\sum_{u^{\prime}\in A(v)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)=u^{\prime}] \leq e^{-Cn}. \tag{4.1}\]
Note that if \(u\) is a sufficiently long legal word with \([u]\cap E(n,\varepsilon)\neq\varnothing\), then necessarily \(u^{R}\in A(v)\). Indeed, if \(u^{R}\notin A(v)\) and \(|v|\geq N\), then the relative inflation word frequency of \(w\) is bounded above by
\[\frac{|v|_{a}}{|v|}\frac{|v|}{|u|}\left(\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w]+\frac{\varepsilon}{3}\right) \leq\frac{1}{\lambda}\left(R_{a}+\frac{\varepsilon}{3}\right)\left(\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w]+\frac{\varepsilon}{3}\right)\] \[\leq\frac{1}{\lambda}R_{a}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w]+\varepsilon\]
and, similarly, bounded below by \(R_{a}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=w]/\lambda-\varepsilon\); hence, \([u]\cap E(n,\varepsilon)=\varnothing\). Let \(\mathcal{V}_{n}\) denote the set of all words which appear as the (unique) preimage of the recognisable core of a word of length \(n\). It then follows by Lemma 2.18 that
\[\mu_{\boldsymbol{P}}(E(n,\varepsilon))\leqslant\sum_{\begin{subarray}{c}u\in \mathcal{L}_{\vartheta}^{n}\\ [u]\cap E(n,\varepsilon)\neq\varnothing\end{subarray}}\mu_{\boldsymbol{P}}([u] )\leq\frac{\kappa(\vartheta)}{\lambda}\sum_{v\in\mathcal{V}_{n}}\mu_{ \boldsymbol{P}}([v])\sum_{u^{\prime}\in A(v)}\mathbb{P}[\vartheta_{\boldsymbol {P}}(v)=u^{\prime}]\leqslant e^{-Cn},\]
where in the final inequality we have used (4.1) and that
\[\sum_{v\in\mathcal{V}_{n}}\mu_{\boldsymbol{P}}([v])\leq\sum_{j=1}^{n}\sum_{v \in\mathcal{L}_{\vartheta}^{j}}\mu_{\boldsymbol{P}}([v])\leq n,\]
absorbing this contribution and the \(\kappa(\vartheta)/\lambda\) factor into the constant \(C\). It follows that
\[\sum_{n=1}^{\infty}\mu_{\boldsymbol{P}}(E(n,\varepsilon))\leq\sum_{n=1}^{ \infty}e^{-Cn}<\infty,\]
and the result then follows by the Borel-Cantelli lemma.
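For instance, summing the frequencies in Lemma 4.3 over all pairs gives \(\sum_{a\in\mathcal{A}}\sum_{w\in\vartheta(a)}f_{x}(a,w)=\lambda^{-1}\sum_{a\in\mathcal{A}}R_{a}=\lambda^{-1}\), reflecting the fact that \(x_{[-n,n]}\) decomposes into approximately \((2n+1)/\lambda\) exact inflation words.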
Finally, we require the following bounds on the exponential scaling rate of measures of cylinders, which is essentially a consequence of Theorem A. In particular, these give bounds on the possible local dimensions of the measure.
**Proposition 4.4**.: _If \(\vartheta_{\boldsymbol{P}}\) is a primitive and compatible random substitution, then there are values \(0<s_{1}<s_{2}<\infty\) and \(c_{1},c_{2}>0\) such that for all \(n\in\mathbb{N}\) and \(v\in\mathcal{L}^{n}(X_{\vartheta})\),_
\[s_{1}\cdot n+c_{1}\leq-\log\mu_{\boldsymbol{P}}([v])\leq s_{2}\cdot n+c_{2}.\]
Proof.: By Theorem A, for all \(k\in\mathbb{N}\) and \(q>1\),
\[\tau_{\mu_{\boldsymbol{P}}}(q)\leq\frac{1}{\lambda^{k}-1}\varphi_{k}(q);\]
and for \(q<0\),
\[\frac{1}{\lambda^{k}-1}\varphi_{k}(q)\leq\tau_{\mu_{\boldsymbol{P}}}(q).\]
Moreover, for each \(k\), with
\[\beta_{k,\min}\coloneqq\lim_{q\to\infty}\frac{\varphi_{k}(q)}{q(\lambda^{k}-1)}=-\frac{1}{\lambda^{k}-1}\sum_{a\in\mathcal{A}}R_{a}\log\left(\max_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta^{k}_{\boldsymbol{P}}(a)=v]\right)\] \[\beta_{k,\max}\coloneqq\lim_{q\to-\infty}\frac{\varphi_{k}(q)}{q(\lambda^{k}-1)}=-\frac{1}{\lambda^{k}-1}\sum_{a\in\mathcal{A}}R_{a}\log\left(\min_{v\in\vartheta^{k}(a)}\mathbb{P}[\vartheta^{k}_{\boldsymbol{P}}(a)=v]\right),\]
it follows that \([\beta_{k,\min},\beta_{k,\max}]\subset(0,\infty)\) is a decreasing nested sequence of intervals, so with \(\beta_{\min}=\lim_{k\to\infty}\beta_{k,\min}\) and \(\beta_{\max}=\lim_{k\to\infty}\beta_{k,\max}\),
\[0<\beta_{\min}\leq\lim_{q\to\infty}\frac{\tau_{\mu_{\boldsymbol{P}}}(q)}{q}\leq\lim_{q\to-\infty}\frac{\tau_{\mu_{\boldsymbol{P}}}(q)}{q}\leq\beta_{\max}<\infty.\]
Applying Lemma 2.1(b) gives the result.
Finally, we obtain our main conclusion concerning relative local dimensions.
**Proposition 4.5**.: _Let \(\vartheta\) be a primitive, compatible and recognisable set-valued substitution, let \(\boldsymbol{P}\) and \(\boldsymbol{Q}\) be permissible probabilities, and let \(\mu_{\boldsymbol{P}}\) and \(\mu_{\boldsymbol{Q}}\) denote the frequency measures corresponding to \(\vartheta\) endowed with the probabilities \(\boldsymbol{P}\) and \(\boldsymbol{Q}\), respectively. Then, for \(\mu_{\boldsymbol{Q}}\)-almost all \(x\in X_{\vartheta}\),_
\[\dim_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)=\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\sum_{v\in\vartheta(a)}-\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=v]\log\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=v]. \tag{4.2}\]
Proof.: Fix \(m\in\mathbb{N}\). It follows by Lemma 2.16 that since \(\vartheta_{\boldsymbol{P}}\) is recognisable, so is \(\vartheta^{m}_{\boldsymbol{P}}\). For each \(x\in X_{\vartheta}\) and \(n\in\mathbb{N}\) with \(n>\kappa(\vartheta^{m})\), let \(u^{n}_{-}(x)\) denote the recognisable core of \(x_{[-n,n]}\) and let \(u^{n}_{+}(x)\) denote an inflation word of minimal length that contains \(x_{[-n,n]}\). By compatibility, \(|u^{n}_{-}(x)|/(2n+1)\to\lambda^{-m}\) and \(|u^{n}_{+}(x)|/(2n+1)\to\lambda^{-m}\) as \(n\to\infty\). Further, let \(v^{n}_{-}(x)\) be the legal word such that \(u^{n}_{-}(x)\in\vartheta^{m}(v^{n}_{-}(x))\) and \(v^{n}_{+}(x)\) be the legal word such that \(u^{n}_{+}(x)\in\vartheta^{m}(v^{n}_{+}(x))\). Then, it follows by Lemma 2.18 and the definition of local dimension that
\[\liminf_{n\to\infty} \left(-\frac{1}{2n+1}\log\mu_{\boldsymbol{P}}([u^{n}_{-}(x)])- \frac{1}{2n+1}\log\mathbb{P}[\vartheta_{\boldsymbol{P}}(v^{n}_{-}(x))=u^{n}_ {-}(x)]\right)\] \[\leq\underline{\dim}_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)\leq \overline{\dim}_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)\] \[\leq\limsup_{n\to\infty}\left(-\frac{1}{2n+1}\log\mu_{\boldsymbol {P}}([u^{n}_{+}(x)])-\frac{1}{2n+1}\log\mathbb{P}[\vartheta_{\boldsymbol{P}}( v^{n}_{+}(x))=u^{n}_{+}(x)]\right).\]
By Proposition 4.4, there exists a constant \(C\geq 0\) such that for all \(x\in X_{\vartheta}\),
\[0\leq\liminf_{n\to\infty}-\frac{1}{2n+1}\log\mu_{\boldsymbol{P}}([u^{n}_{-}(x )])\leq\limsup_{n\to\infty}-\frac{1}{2n+1}\log\mu_{\boldsymbol{P}}([u^{n}_{+}( x)])\leq C.\]
Hence, it follows from the above that
\[\begin{split}\liminf_{n\to\infty}-&\frac{1}{2n+1}\log \mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{-}^{n}(x))=u_{-}^{n}(x)]\\ &\leq\underline{\dim}_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x) \leq\overline{\dim}_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)\\ &\leq\limsup_{n\to\infty}-\frac{1}{2n+1}\log\mathbb{P}[ \vartheta_{\boldsymbol{P}}(v_{+}^{n}(x))=u_{+}^{n}(x)]+\frac{C}{\lambda^{m}}. \end{split} \tag{4.3}\]
We now show that for \(\mu_{\boldsymbol{Q}}\)-almost all \(x\in X_{\vartheta}\),
\[\begin{split}\liminf_{n\to\infty}-\frac{1}{n}\log\mathbb{P}[ \vartheta_{\boldsymbol{P}}(v_{-}^{n}(x))=u_{-}^{n}(x)]&=\limsup_ {n\to\infty}-\frac{1}{n}\log\mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{+}^{n}(x ))=u_{+}^{n}(x)]\\ &=\frac{1}{\lambda^{m}}\boldsymbol{H}_{\boldsymbol{P},\boldsymbol {Q}}^{m}(\vartheta)\cdot\boldsymbol{R}.\end{split}\]
By compatibility, we can decompose the production probabilities into inflation tiles as
\[\mathbb{P}[\vartheta_{\boldsymbol{P}}^{m}(v_{-}^{n}(x))=u_{-}^{n}(x)]=\prod_{a \in\mathcal{A}}\prod_{w\in\vartheta^{m}(a)}\mathbb{P}[\vartheta_{\boldsymbol {P}}^{m}(a)=w]^{N_{a,w}(x,n)},\]
where, for each \(a\in\mathcal{A}\) and \(w\in\vartheta^{m}(a)\), \(N_{a,w}(x,n)\) denotes the number of \(a\)'s in \(v_{-}^{n}(x)\) which map to \(w\). It follows by Lemma 4.3, applied to \(\vartheta_{\boldsymbol{Q}}^{m}\), that for \(\mu_{\boldsymbol{Q}}\)-almost all \(x\in X_{\vartheta}\), we have
\[\frac{1}{2n+1}N_{a,w}(x,n)\to\frac{1}{\lambda^{m}}R_{a}\mathbb{P}[\vartheta_{ \boldsymbol{Q}}^{m}(a)=w]\]
for all \(a\in\mathcal{A}\) and \(w\in\vartheta^{m}(a)\). Hence, it follows that
\[\begin{split}\lim_{n\to\infty}-\frac{1}{2n+1}\log\mathbb{P}[&\vartheta_{\boldsymbol{P}}^{m}(v_{-}^{n}(x))=u_{-}^{n}(x)]\\ &=\frac{1}{\lambda^{m}}\sum_{a\in\mathcal{A}}R_{a}\sum_{v\in\vartheta^{m}(a)}-\mathbb{P}[\vartheta_{\boldsymbol{Q}}^{m}(a)=v]\log\mathbb{P}[\vartheta_{\boldsymbol{P}}^{m}(a)=v]\\ &=\frac{1}{\lambda^{m}}\boldsymbol{H}_{\boldsymbol{P},\boldsymbol{Q}}^{m}(\vartheta)\cdot\boldsymbol{R},\end{split}\]
with the same convergence holding for \(u_{+}^{n}(x)\) by identical arguments. Thus, it follows from (4.3) that
\[\frac{1}{\lambda^{m}}\boldsymbol{H}_{\boldsymbol{P},\boldsymbol{Q}}^{m}( \vartheta)\cdot\boldsymbol{R}\leq\underline{\dim}_{\mathrm{loc}}(\mu_{ \boldsymbol{P}},x)\leq\overline{\dim}_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x) \leq\frac{1}{\lambda^{m}}\boldsymbol{H}_{\boldsymbol{P},\boldsymbol{Q}}^{m}( \vartheta)\cdot\boldsymbol{R}+\frac{C}{\lambda^{m}}.\]
Since the above holds for all \(m\in\mathbb{N}\), by letting \(m\to\infty\) it follows by Lemma 4.2 that \(\dim_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)\) exists and
\[\dim_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)=\frac{1}{\lambda-1}\boldsymbol{H}_ {\boldsymbol{P},\boldsymbol{Q}}^{1}(\vartheta)\cdot\boldsymbol{R},\]
which completes the proof.
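In particular, taking \(\boldsymbol{Q}=\boldsymbol{P}\) in (4.2) shows that \(\dim_{\mathrm{loc}}(\mu_{\boldsymbol{P}},x)=(\lambda-1)^{-1}\rho_{1}\) for \(\mu_{\boldsymbol{P}}\)-almost every \(x\in X_{\vartheta}\), consistent with Corollary C and Theorem D.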
### Proof of the multifractal formalism
In this section, we apply the results obtained in the previous section, along with results on the \(L^{q}\)-spectrum under recognisability, to prove Theorem D.
Proof of Theorem D.: We first obtain the results for the \(L^{q}\)-spectrum. Since every recognisable random substitution satisfies the disjoint set condition, Proposition 3.1 gives that \(T_{\vartheta,\boldsymbol{P}}(q)=(\lambda-1)^{-1}\varphi_{1}(q)\) for all \(q\in\mathbb{R}\). If \(q<0\), then by Theorem A and Proposition 3.7,
\[\frac{1}{\lambda-1}\varphi_{1}(q)=T_{\vartheta,\boldsymbol{P}}(q)\leq\tau_{ \mu_{\boldsymbol{P}}}(q)\leq\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq \frac{1}{\lambda-1}\varphi_{1}(q),\]
so we conclude that \(\tau_{\mu_{\boldsymbol{P}}}(q)\) exists and equals \((\lambda-1)^{-1}\varphi_{1}(q)\). For \(q\geq 0\), the result follows already from Corollary B.
We now obtain the results on the multifractal spectrum. In light of Proposition 2.3, it remains to show that \(f_{\mu_{\boldsymbol{P}}}(\alpha)\geq\tau_{\mu_{\boldsymbol{P}}}^{*}(\alpha)\) for each \(\alpha\in\mathbb{R}\). As proven above, for all \(q\in\mathbb{R}\),
\[\tau_{\mu_{\boldsymbol{P}}}(q)=\frac{1}{\lambda-1}\varphi_{1}(q)=\frac{1}{ \lambda-1}\sum_{a\in\mathcal{A}}R_{a}T_{a}(q)\]
where for each \(a\in\mathcal{A}\)
\[T_{a}(q)=-\log\sum_{s\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a )=s]^{q}.\]
First, fix \(\alpha\in(\alpha_{\min},\alpha_{\max})\) and let \(q\in\mathbb{R}\) be chosen so that \(\tau_{\mu_{\boldsymbol{P}}}^{\prime}(q)=\alpha\). Observe that \(q\alpha-\tau_{\mu_{\boldsymbol{P}}}(q)=\tau_{\mu_{\boldsymbol{P}}}^{*}(\alpha)\). Then define \(\boldsymbol{Q}\) by the rule
\[\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=s]=\mathbb{P}[\vartheta_{\boldsymbol {P}}(a)=s]^{q}e^{T_{a}(q)}\]
for all \(a\in\mathcal{A}\) and \(s\in\vartheta(a)\). Then by Corollary C,
\[\dim_{\mathrm{H}}\mu_{\boldsymbol{Q}} =\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\Big{(}-\sum_{v \in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=v]\log\mathbb{P}[ \vartheta_{\boldsymbol{Q}}(a)=v]\Big{)}\] \[=q\cdot\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\Big{(}-\sum_ {v\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=v]\log\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=v]\Big{)}\] \[\quad-\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}T_{a}(q)\sum _{v\in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=v]\] \[=q\alpha-\tau_{\mu_{\boldsymbol{P}}}(q)=\tau_{\mu_{\boldsymbol{P }}}^{*}(\alpha)\]
since
\[\tau_{\mu_{\boldsymbol{P}}}^{\prime}(q) =\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\frac{-\sum_{v\in \vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{P}}(a)=v]^{q}\log\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=v]}{e^{-T_{a}(q)}}\] \[=\frac{1}{\lambda-1}\sum_{a\in\mathcal{A}}R_{a}\Big{(}-\sum_{v \in\vartheta(a)}\mathbb{P}[\vartheta_{\boldsymbol{Q}}(a)=v]\log\mathbb{P}[ \vartheta_{\boldsymbol{P}}(a)=v]\Big{)}.\]
In fact, this shows that \(\dim_{\mathrm{loc}}(\mu_{\mathbf{P}},x)=\alpha\) for \(\mu_{\mathbf{Q}}\)-almost all \(x\in X_{\vartheta}\) by Proposition 4.5. Thus \(f_{\mu_{\mathbf{P}}}(\alpha)\geq\dim_{\mathrm{H}}\mu_{\mathbf{Q}}=\tau_{\mu_{\mathbf{P}}}^{*}(\alpha)\), as required.
The result for \(\alpha=\alpha_{\min}\) (resp. \(\alpha=\alpha_{\max}\)) follows similarly by taking a degenerate probability vector \(\mathbf{Q}\) assigning equal value to the realisations of \(\vartheta(a)\) with maximal (resp. minimal) probabilities given by \(\mathbf{P}\), and zero otherwise. The corresponding non-degenerate sub-substitution is also compatible and recognisable, so the same arguments yield the corresponding bounds.
## 5. Examples, counterexamples and applications
### Failure of bounds for \(q<0\) without recognisability
In the following two examples, we show the results in Theorem A do not extend in general to give an upper bound for the \(L^{q}\)-spectrum in terms of the inflation word \(L^{q}\)-spectrum, for \(q<0\). In Example 5.1, we construct a class of frequency measures on the full-shift on two letters for which the \(L^{q}\)-spectrum and inflation word analogue differ in the \(q<0\) case. The random substitutions that give rise to these frequency measures are not compatible, but in Example 5.2 we present a compatible analogue.
In contrast, in Example 5.3 we give an example showing that, under the identical set condition with identical production probabilities, the equality \(\tau_{\mu_{\boldsymbol{P}}}(q)=T_{\vartheta,\boldsymbol{P}}(q)\) can nevertheless hold for all \(q\in\mathbb{R}\).
**Example 5.1**.: Let \(p_{1}<p_{2}\in(0,1)\) such that \(p_{1}+3p_{2}=1\) and let \(\vartheta_{\mathbf{P}}\) be the random substitution defined by
\[\vartheta_{\mathbf{P}}\colon a,b\mapsto\begin{cases}ab&\text{with probability }p_{1}\\ ba&\text{with probability }p_{2}\\ aa&\text{with probability }p_{2}\\ bb&\text{with probability }p_{2}\end{cases}\]
We will show for all sufficiently small \(q<0\) that \(\tau_{\mu_{\mathbf{P}}}(q)>T_{\vartheta,\mathbf{P}}(q)\). Observe that, for each \(k\in\mathbb{N}\), the word \(v^{k}=(ab)^{2^{k}}\in\vartheta^{k+1}(a)\cap\vartheta^{k+1}(b)\) occurs with probability
\[\mathbb{P}[\vartheta_{\mathbf{P}}^{k+1}(a)=v^{k}]=\mathbb{P}[\vartheta_{\mathbf{P}}^{ k+1}(b)=v^{k}]=p_{1}^{2^{k}}.\]
Clearly, this is the minimal possible probability with which a level-\(k\) inflation word can occur, so it follows that
\[\lim_{q\to-\infty}\frac{T_{\vartheta,\mathbf{P}}(q)}{q}=-\frac{1}{2}\log p_{1}.\]
Now, let \(u\in\mathcal{L}_{\vartheta}^{2^{k+1}}\) be arbitrary. We show that \(\mu_{\mathbf{P}}([u])\geq p_{1}^{2^{k-1}}p_{2}^{2^{k-1}}/2\). Since \(\vartheta(a)=\vartheta(b)\) with identical production probabilities, it follows by Lemma2.12 that for any choice of \(w\in\mathcal{L}_{\vartheta}^{2^{k}+1}\)
\[\mu_{\mathbf{P}}([u])=\frac{1}{2}\left(\mathbb{P}[\vartheta_{\mathbf{P}}(w)_{[1,2^{k+1}]}=u]+\mathbb{P}[\vartheta_{\mathbf{P}}(w)_{[2,2^{k+1}+1]}=u]\right).\]
If \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(w)_{[1,2^{k+1}]}=u]\geq p_{1}^{2^{k-1}}p_{2}^{2^{k-1}}\), then we are done, otherwise at least half of the letters in \(w\) must be sent to \(ab\). But then for \(u\) to appear from the second letter, at least half of the letters in \(w\) must be sent to \(ba\) or \(bb\), so \(\mathbb{P}[\vartheta_{\boldsymbol{P}}(w)_{[2,2^{k+1}+1]}=u]\geq p_{1}^{2^{k-1}}p_{2}^{2^{k-1}}\). Hence, \(\mu_{\boldsymbol{P}}([u])\geq p_{1}^{2^{k-1}}p_{2}^{2^{k-1}}/2\) so, in particular,
\[\min_{u\in\mathcal{L}_{\vartheta}^{2^{k+1}}}\mu_{\boldsymbol{P}}([u])\geq \frac{1}{2}p_{1}^{2^{k-1}}p_{2}^{2^{k-1}}.\]
It follows that
\[\lim_{q\to-\infty}\frac{\tau_{\mu_{\boldsymbol{P}}}(q)}{q}\leq-\frac{1}{4}( \log p_{1}+\log p_{2})<-\frac{1}{2}\log p_{1}=\lim_{q\to-\infty}\frac{T_{ \vartheta,\boldsymbol{P}}(q)}{q}.\]
By a slight modification of this example, we can construct a compatible random substitution for which the two notions do not coincide.
**Example 5.2**.: Let \(p_{1}<p_{2}\in(0,1)\) such that \(p_{1}+3p_{2}=1\) and let \(\vartheta_{\boldsymbol{P}}\) be the random substitution defined by
\[\vartheta_{\boldsymbol{P}}\colon a,b\mapsto\begin{cases}ab\,ba&\text{with probability }p_{1}\\ ba\,ab&\text{with probability }p_{2}\\ ab\,ab&\text{with probability }p_{2}\\ ba\,ba&\text{with probability }p_{2}\end{cases}\]
By similar arguments to the previous example, we obtain
\[\lim_{q\to-\infty}\frac{\tau_{\mu_{\boldsymbol{P}}}(q)}{q}\leq-\frac{1}{8}( \log p_{1}+\log p_{2})<-\frac{1}{4}\log p_{1}=\lim_{q\to-\infty}\frac{T_{ \vartheta,\boldsymbol{P}}(q)}{q}.\]
The random substitution in Example 5.2 satisfies the identical set condition with identical production probabilities. These conditions are also satisfied by the following example. However, here the \(L^{q}\)-spectrum and inflation word analogue coincide for all \(q\in\mathbb{R}\) by a direct argument.
**Example 5.3**.: We will show that for the random substitution
\[\vartheta_{\boldsymbol{P}}\colon a,b\mapsto\begin{cases}ab&\text{with probability }p\\ ba&\text{with probability }1-p\end{cases}\]
the limit defining \(\tau_{\mu_{\boldsymbol{P}}}(q)\) exists for all \(q\in\mathbb{R}\), and
\[\tau_{\mu_{\boldsymbol{P}}}(q)=T_{\vartheta,\boldsymbol{P}}(q)=\frac{1}{ \lambda}\varphi_{1}(q)=-\frac{1}{2}\log(p^{q}+(1-p)^{q}).\]
Corollary B gives the result for all \(q>0\) and that \(\tau_{\mu_{\boldsymbol{P}}}(q)\geq T_{\vartheta,\boldsymbol{P}}(q)=2^{-1} \varphi_{1}(q)\) for all \(q<0\), so it only remains to verify for all \(q<0\) that
\[\overline{\tau}_{\mu_{\boldsymbol{P}}}(q)\leq T_{\vartheta,\boldsymbol{P}}(q).\]
Since \(\vartheta(v^{1})=\vartheta(v^{2})\) for all \(v^{1},v^{2}\in\mathcal{L}_{\vartheta}\), it follows from Lemma 2.12 that for all \(u\in\mathcal{L}_{\vartheta}^{2m}\) and any \(v\in\mathcal{L}_{\vartheta}^{m+1}\),
\[\mu_{\boldsymbol{P}}([u])=\frac{1}{2}\left(\mathbb{P}[\vartheta(v)_{[1,1+2m-1]} =u]+\mathbb{P}[\vartheta(v)_{[2,2+2m-1]}=u]\right).\]
Let \(\mathcal{V}_{2m}=\{(ab)^{m},(ba)^{m}\}\). If \(u\in\mathcal{L}_{\vartheta}^{2m}\setminus\mathcal{V}_{2m}\), then \(u\) must contain \(aa\) or \(bb\) as a subword. This uniquely determines the cutting points in any inflation word decomposition, so there exists a unique \(v\) and \(j(u)\in\{1,2\}\) such that \(u\in\vartheta(v)_{[j(u),2m+j(u)-1]}\). It follows that
\[\sum_{u\in\mathcal{L}_{\vartheta}^{2m}}\mu_{\boldsymbol{P}}([u])^ {q} \geq\sum_{u\in\mathcal{L}_{\vartheta}^{2m}\setminus\mathcal{V}_{ 2m}}\left(\frac{1}{2}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v)_{[j(u),j(u)+2m- 1]}=u]\right)^{q}\] \[\geq\frac{1}{2^{q}}\sum_{u\in\mathcal{L}_{\vartheta}^{2m} \setminus\mathcal{V}_{2m}}\mathbb{P}[\vartheta_{\boldsymbol{P}}(v_{2}\cdots v _{m})=u_{[3-j(u),2-j(u)+2m]}]^{q}.\]
Now, for every \(w\in\vartheta(v_{2}\cdots v_{m})\) there is a \(u\) such that \(w=u_{[3-j(u),2-j(u)+2m]}\). Hence,
\[\sum_{u\in\mathcal{L}_{\vartheta}^{2m}}\mu_{\boldsymbol{P}}([u])^{q}\geq\frac{ 1}{2^{q}}\sum_{w\in\vartheta(v_{2}\cdots v_{m})}\mathbb{P}[\vartheta_{ \boldsymbol{P}}(v_{2}\cdots v_{m})=w]^{q}\]
and the conclusion follows by similar arguments to those used in the proofs of the main theorems.
### Examples with recognisability
We first provide examples of random substitutions for which the multifractal formalism holds.
**Example 5.4**.: Let \(p>0\) and let \(\vartheta_{p}\) be the random substitution defined by
\[\vartheta_{p}\colon\begin{cases}a\mapsto\begin{cases}abb&\text{with probability }p\\ bab&\text{with probability }1-p\end{cases}\\ b\mapsto aa\end{cases}\]
Certainly \(\vartheta_{p}\) is compatible, with corresponding primitive substitution matrix
\[M=\begin{pmatrix}1&2\\ 2&0\end{pmatrix},\]
Perron-Frobenius eigenvalue \((1+\sqrt{17})/2\), and (normalised) right Perron-Frobenius eigenvector
\[\left(\frac{-3+\sqrt{17}}{2},\frac{5-\sqrt{17}}{2}\right).\]
One can verify that \(\vartheta\) is recognisable since every occurrence of \(aa\) intersects an image of \(b\) and the adjacent letters then determine the cutting points. Thus by Theorem D, for all \(q\in\mathbb{R}\)
\[\tau_{\mu_{p}}(q)=T_{\vartheta,\boldsymbol{P}}(q)=\frac{1}{\lambda-1}\varphi_ {1}(q)=-\frac{7-\sqrt{17}}{8}\log(p^{q}+(1-p)^{q})\]
and measure \(\mu_{p}\) satisfies the multifractal formalism. The asymptotes have slopes \(-(7-\sqrt{17})\log(p)/8\) and \(-(7-\sqrt{17})\log(1-p)/8\). A plot of the \(L^{q}\)-spectra and multifractal spectra for two choices of \(p\) is given in Figure 1.
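By Corollary C, the measure \(\mu_{p}\) has Hausdorff dimension
\[\dim_{\mathrm{H}}\mu_{p}=\frac{7-\sqrt{17}}{8}\bigl(-p\log p-(1-p)\log(1-p)\bigr),\]
which is maximised at \(p=1/2\), where it coincides with \(h_{\mathrm{top}}(X_{\vartheta})=\frac{7-\sqrt{17}}{8}\log 2\).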
For \(p=1/2\), the \(L^{q}\)-spectrum of the measure \(\mu_{p}\) is a straight line and the multifractal spectrum is equal to \(h_{\mathrm{top}}(X_{\vartheta})\) at \(\alpha=h_{\mathrm{top}}(X_{\vartheta})\), and \(-\infty\) otherwise.
In the following example, we highlight that the multifractal spectrum need not have value \(0\) at the endpoints.
**Example 5.5**.: Let \(\vartheta_{p}\) be the random substitution defined by
\[\vartheta_{p}\colon\begin{cases}a\mapsto\begin{cases}abb&\text{with probability }p\\ bab&\text{with probability }p\\ bba&\text{with probability }1-2p\end{cases}\\ b\mapsto aaa\end{cases}\]
Similarly to Example 5.4, \(\vartheta_{p}\) is primitive, compatible and recognisable. Hence, Theorem D gives that
\[\tau_{\mu_{p}}(q)=-\frac{3}{10}\log(2p^{q}+(1-2p)^{q}).\]
The asymptotes have slopes \(-3\log(p)/10\) and \(-3\log(1-2p)/10\). For \(p=1/5\) and \(p=2/5\), the \(L^{q}\)-spectrum and multifractal spectrum of \(\mu_{p}\) are plotted in Figure 2. Here, we highlight that the endpoints of the multifractal spectrum need not be equal to zero.
Figure 1. \(L^{q}\)-spectra and multifractal spectra corresponding to a recognisable substitution for \(p\in\{1/5,2/5\}\).
**Example 5.6**.: Consider the random substitution on the three-letter alphabet \(\mathcal{A}=\{a,b,c\}\) defined by
\[\vartheta_{\mathbf{P}}\colon\begin{cases}a\mapsto\begin{cases}bbc&\text{with probability }p_{1}\\ cbb&\text{with probability }1-p_{1}\\ b\mapsto\begin{cases}cca&\text{with probability }p_{2}\\ acc&\text{with probability }1-p_{2}\\ c\mapsto\begin{cases}aab&\text{with probability }p_{3}\\ baa&\text{with probability }1-p_{3}\end{cases}\end{cases}\end{cases}\]
for \(p_{1}\), \(p_{2}\), and \(p_{3}\) in \((0,1)\). It is immediate that this substitution is compatible, and by considering the occurrences of 2, 3, or 4 letter repetitions, we observe that this substitution is also recognisable. Moreover, the hypotheses of [15, Theorem 4.8] are satisfied since \(\vartheta\) is constant length and \(\#\vartheta(a)=\#\vartheta(b)=\#\vartheta(c)\). In particular, the corresponding subshift \(X_{\vartheta}\) is intrinsically ergodic with unique measure of maximal entropy given by taking \(p_{1}=p_{2}=p_{3}=1/2\).
It follows from [15, Lemma 4.12] that the measure of maximal entropy is not a Gibbs measure with respect to the zero potential, so the system does not satisfy the usual specification property. For this choice of uniform probabilities, the \(L^{q}\)-spectrum is a straight line passing through the point \((1,0)\) with slope \(h_{\mathrm{top}}(X_{\vartheta})=\log(2)/2\). More generally, the \(L^{q}\)-spectrum is given for all \(q\in\mathbb{R}\) by the formula
\[\tau_{\mu_{\mathcal{P}}}(q)=-\frac{1}{6}\left(\log\bigl{(}(1-p_{1})^{q}+p_{1}^{ q}\bigr{)}+\log\bigl{(}(1-p_{2})^{q}+p_{2}^{q}\bigr{)}+\log\bigl{(}(1-p_{3})^{q}+p_{3}^ {q}\bigr{)}\right)\]
and the multifractal formalism is satisfied.
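In particular, \(\dim_{\mathrm{H}}\mu_{\boldsymbol{P}}=\tau_{\mu_{\boldsymbol{P}}}^{\prime}(1)=\frac{1}{6}\sum_{i=1}^{3}\bigl(-p_{i}\log p_{i}-(1-p_{i})\log(1-p_{i})\bigr)\), which attains its maximal value \(\log(2)/2\) precisely for the measure of maximal entropy.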
Figure 2. \(L^{q}\)-spectra and multifractal spectra corresponding to a recognisable substitution for \(p\in\{1/5,2/5\}\).
For an example on an alphabet of size two, one may consider the random substitution
\[\vartheta_{\mathbf{P}}\colon\begin{cases}a\mapsto\begin{cases}ababbb&\text{with probability }p_{1}\\ abbabb&\text{with probability }1-p_{1}\\ baabaa&\text{with probability }p_{2}\\ babaaa&\text{with probability }1-p_{2}\end{cases}\end{cases}\]
for \(p_{1}\) and \(p_{2}\) in \((0,1)\). The analysis of this example proceeds identically as above.
### Examples without recognisability
Finally, we consider the two most commonly studied examples of random substitutions: random period doubling and random Fibonacci.
**Example 5.7**.: Given \(p\in(0,1)\), let \(\vartheta_{p}\) be the random period doubling substitution defined by
\[\vartheta_{p}\colon\begin{cases}a\mapsto\begin{cases}ab&\text{with probability }p\\ ba&\text{with probability }1-p\\ b\mapsto aa\end{cases}\]
and let \(\mu_{p}\) denote the corresponding frequency measure. The substitution \(\vartheta_{p}\) satisfies the disjoint set condition, so for all \(q\in[0,\infty)\),
\[\tau_{\mu_{p}}(q)=-\frac{2}{3}\log(p^{q}+(1-p)^{q}).\]
The asymptote as \(q\to\infty\) has slope \(-2\log(\max\{p,1-p\})/3\), which gives a sharp lower bound on the local dimensions of \(\mu_{p}\).
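In particular, the measure-theoretic entropy of \(\mu_{p}\) is \(h(\mu_{p})=\tau_{\mu_{p}}^{\prime}(1)=\frac{2}{3}\bigl(-p\log p-(1-p)\log(1-p)\bigr)\) (see Corollary C), which is maximised at \(p=1/2\).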
If \(p=1/2\), then the measure \(\mu_{p}\) has linear \(L^{q}\)-spectrum for \(q\geq 0\) given by
\[\tau_{\mu_{1/2}}(q)=\frac{2}{3}(q-1)\log 2.\]
Since the substitution satisfies the disjoint set condition but is not recognisable, our results do not give the \(L^{q}\)-spectrum for \(q<0\).
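For illustration, a small sketch (ours, with illustrative names) evaluates the closed form above for \(q\geq 0\), checks the linear \(p=1/2\) case, and compares the asymptotic slope with a finite difference at large \(q\):

```python
import numpy as np

def tau_pd(q, p):
    # L^q-spectrum of the random period doubling measure (disjoint set condition, q >= 0)
    return -(2.0/3.0) * np.log(p**q + (1-p)**q)

q = np.linspace(0, 20, 201)
# p = 1/2 reduces to the linear spectrum (2/3)(q - 1) log 2
assert np.allclose(tau_pd(q, 0.5), (2.0/3.0) * (q - 1) * np.log(2))

p = 0.3
slope_inf = -(2.0/3.0) * np.log(max(p, 1 - p))   # sharp lower bound on the local dimensions
print(slope_inf, tau_pd(20.0, p) - tau_pd(19.0, p))   # the two values agree closely
```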
**Example 5.8**.: The random Fibonacci substitution \(\vartheta_{p}\) defined by
\[\vartheta_{p}\colon\begin{cases}a\mapsto\begin{cases}ab&\text{with probability }p\\ ba&\text{with probability }1-p\end{cases}\\ b\mapsto a\end{cases}\]
satisfies neither the identical set condition nor the disjoint set condition. Hence, we cannot apply Corollary B to obtain a closed-form formula for \(\tau_{\mu_{p}}(q)\). However, we can still apply Theorem A to obtain a sequence of lower and upper bounds. The case \(k=1\) gives the following bounds for all \(0<q<1\):
\[-\frac{1}{\phi^{2}}\log(p^{q}+(1-p)^{q})=\frac{1}{\phi}\varphi_{1}(q)\leq\tau_ {\mu_{p}}(q)\leq\frac{1}{\phi-1}\varphi_{1}(q)=-\log(p^{q}+(1-p)^{q}),\]
where \(\phi\) denotes the golden ratio. Reversing the inequalities yields the corresponding bounds for \(q>1\). Of course, by considering larger \(k\) we can obtain better bounds.
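To make the \(k=1\) bounds concrete, the following sketch (our own; `varphi1` is read off from the displayed identities rather than quoted from Theorem A) evaluates the two bounding expressions and reports the enclosing interval, without committing to which expression plays the role of lower or upper bound on either side of \(q=1\):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2   # golden ratio

def varphi1(q, p):
    # the k = 1 quantity read off from the displayed identities:
    # (1/phi) varphi_1(q) = -(1/phi^2) log(p^q + (1-p)^q)
    return -np.log(p**q + (1 - p)**q) / phi

def k1_band(q, p):
    # the two bounding expressions; tau_{mu_p}(q) lies between them,
    # with their roles swapping at q = 1
    b1 = varphi1(q, p) / phi          # = -(1/phi^2) log(p^q + (1-p)^q)
    b2 = varphi1(q, p) / (phi - 1)    # = -log(p^q + (1-p)^q), using phi(phi - 1) = 1
    return np.minimum(b1, b2), np.maximum(b1, b2)

q = np.linspace(0.1, 3.0, 30)
low, high = k1_band(q, 0.5)
print(np.column_stack([q, low, high])[:5])
```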
For \(p=1/2\), the bounds given by Theorem A for \(k=3,5,7\) are shown in Figure 3.
## Acknowledgements
The authors are grateful to Philipp Gohlke for his detailed comments on a draft version of this manuscript, which helped to remove some technical assumptions from Theorem D. They also thank Dan Rust and Tony Samuel for valuable input. AM thanks SFB1283 and the Universität Bielefeld for supporting a research visit during the summer of 2022, where some of the work on this project was undertaken. AM was supported by EPSRC DTP and the University of Birmingham. AR was supported by EPSRC Grant EP/V520123/1 and the Natural Sciences and Engineering Research Council of Canada. The authors thank the organisers of the Junior Ergodic Theory Meeting hosted at the ICMS in Edinburgh in March 2022, where this project began.
|
2303.03696 | The study on the structure of exotic states $χ_{c 1}(3872)$ via
beauty-hadron decays in $pp$ collisions at $\sqrt{s}=8\,\mathrm{TeV}$ | A dynamically constrained phase-space coalescence (DCPC) model was introduced
to study the exotic state $\chi_{c 1}(3872)$ yield for three possible
structures: tetraquark state, nuclear-like state, and molecular state
respectively, where the hadronic final states generated by the parton and
hadron cascade model (PACIAE). The $\chi_{c 1}(3872)$/$\psi (2S)$ cross-section
ratio from beauty-hadron decays (non-prompt) based on the $\chi_{c 1}(3872)$ or
$\psi (2S)\to J/\psi{\pi^+}{\pi^-}$ bound state in the decay chains as a
function of charged-particle multiplicity and transverse momentum in $pp$
collisions at $\sqrt{s}=8\,\mathrm{TeV}$ are calculated. A tetraquark state
scenario from PACIAE+DCPC model shows better agreement with the LHCb and ATLAS
measurements for the non-prompt $\chi_{c 1}(3872)$/$\psi(2S)$ cross-section
ratio distributions, indicating that the $\chi_{c 1}(3872)$ is more likely to
be a compact tetraquark state. | Chun-tai Wu, Zhi-Lei She, Xin-Ye Peng, Xiao-Lin Kang, Hong-Ge Xu Dai-Mei Zhou, Gang Chen, Ben-Hao Sa | 2023-03-07T07:18:49Z | http://arxiv.org/abs/2303.03696v1 | The study on the structure of exotic states \(\chi_{c1}(3872)\) via beauty-hadron decays in \(pp\) collisions at \(\sqrt{s}=8\,\mathrm{TeV}\)
###### Abstract
A dynamically constrained phase-space coalescence (DCPC) model was introduced to study the exotic state \(\chi_{c1}(3872)\) yield for three possible structures: tetraquark state, nuclear-like state, and molecular state, respectively, where the hadronic final states were generated by the parton and hadron cascade model (PACIAE). The \(\chi_{c1}(3872)/\psi(2S)\) cross-section ratio from beauty-hadron decays (non-prompt), based on the \(\chi_{c1}(3872)\) or \(\psi(2S)\to J/\psi\pi^{+}\pi^{-}\) bound state in the decay chains, is calculated as a function of charged-particle multiplicity and transverse momentum in \(pp\) collisions at \(\sqrt{s}=8\,\mathrm{TeV}\). A tetraquark state scenario from the PACIAE+DCPC model shows better agreement with the LHCb and ATLAS measurements for the non-prompt \(\chi_{c1}(3872)/\psi(2S)\) cross-section ratio distributions, indicating that the \(\chi_{c1}(3872)\) is more likely to be a compact tetraquark state.
pacs: 25.75.-q, 24.85.+p, 24.10.Lx
## I Introduction
In addition to mesons composed of quark-antiquark pairs and baryons consisting of three quarks, many bound states that are incompatible with traditional hadron frameworks have been observed in the decades since the quark model proposed by Gell-Mann in 1964 [1]. These bound states, also known as exotic states, including multiquark states [2; 3; 4], hadron molecular states [5], hybrid states [6; 7], and glueballs [8], are allowed and expected by the quantum chromodynamics (QCD) and the quark model. While many unconventional hadron candidates containing heavy quarks have been discovered experimentally in recent years [9], the exact nature of even the most well-studied \(\chi_{c1}(3872)\) particle, also known as X(3872), is still unclear.
The \(\chi_{c1}(3872)\) particle as an exotic charmonium state was observed in the exclusive decay process \(B^{\pm}\to K^{\pm}J/\psi\pi^{+}\pi^{-}\) by the Belle Collaboration in 2003, which decays into \(J/\psi\pi^{+}\pi^{-}\)[10]. Later, CDF II, D0, BE-SIII, and BABAR Collaboration confirmed this exotic state's discovery experimentally [11; 12; 13; 14]. Among them, CDF Collaboration proposed that the quantum number of \(\chi_{c1}(3872)\) particle may be \(J^{PC}=1^{++}\) or \(2^{-+}\)[15], and D0 Collaboration suggested that \(\psi(2S)\) state and \(\chi_{c1}(3872)\) state with the same decay mode have the same production and decay properties, which can provide a good benchmark for studying the properties of \(\chi_{c1}(3872)\) particle [12]. Finally, the spin and parity of the \(\chi_{c1}(3872)\) state are determined by the LHCb Collaboration to \(J^{PC}=1^{++}\)[16]. Although there are several measurements on \(\chi_{c1}(3872)\) particle, due to the lack of the understanding of its exact properties, various models have emerged to describe \(\chi_{c1}(3872)\) state as a \(D^{*0}\bar{D}^{0}\) molecular state with small binding energy [17; 18], a compact tetraquark state [19; 20], a hybrid meson [21; 22], or a charmonium-molecule [23; 24].
Recently, the prompt \(\chi_{c1}(3872)/\psi(2S)\) cross-section ratio was measured at midrapidity by the CMS Collaboration as a function of transverse momentum (\(p_{\mathrm{T}}\)) in Pb-Pb collisions at \(\sqrt{s_{\mathrm{NN}}}=5\,\mathrm{TeV}\). The central value for the ratio is close to unity and enhanced with respect to the one measured in pp collisions [25; 26]. This provides a unique experimental input to theoretical models of the \(\chi_{c1}(3872)\) production mechanism and the nature of its state, since a modification of the hadronization mechanism is predicted when a color-deconfined state of matter called the quark-gluon plasma is formed in heavy-ion collisions. The AMPT transport model [27] with instantaneous coalescence, the TAMU model [28] considering only the regeneration processes, and the statistical hadronization model (SHM) [29] based on the assumption of thermal equilibrium predict different magnitudes of the ratio for different scenarios of the structure.
In high-multiplicity pp collisions at LHC energies, the charged-particle densities can reach values comparable with those measured in peripheral heavy-ion collisions. Measurements under such conditions in pp collisions showed features that resemble those in heavy-ion collisions [30; 31; 32]. Recently, a multiplicity dependence of the \(p_{\mathrm{T}}\)-differential \(\Lambda_{c}^{+}/\mathrm{D}^{0}\) ratio was observed by the ALICE Collaboration, evolving smoothly from pp to Pb-Pb collisions [33; 34]. The prompt \(\chi_{c1}(3872)/\psi(2S)\) cross-section ratio is found by the LHCb Collaboration to decrease as the charged-particle multiplicity increases [35], which is well described by the comover interaction model [36]. The \(\chi_{c1}(3872)/\psi(2S)\) cross-section ratio from beauty-hadron decays (non-prompt) showed a slightly increasing trend as the charged-particle multiplicity increases, no the
2306.09203 | Transferring Knowledge for Food Image Segmentation using Transformers
and Convolutions | Food image segmentation is an important task that has ubiquitous
applications, such as estimating the nutritional value of a plate of food.
Although machine learning models have been used for segmentation in this
domain, food images pose several challenges. One challenge is that food items
can overlap and mix, making them difficult to distinguish. Another challenge is
the degree of inter-class similarity and intra-class variability, which is
caused by the varying preparation methods and dishes a food item may be served
in. Additionally, class imbalance is an inevitable issue in food datasets. To
address these issues, two models are trained and compared, one based on
convolutional neural networks and the other on Bidirectional Encoder
representation for Image Transformers (BEiT). The models are trained and
valuated using the FoodSeg103 dataset, which is identified as a robust
benchmark for food image segmentation. The BEiT model outperforms the previous
state-of-the-art model by achieving a mean intersection over union of 49.4 on
FoodSeg103. This study provides insights into transfering knowledge using
convolution and Transformer-based approaches in the food image domain. | Grant Sinha, Krish Parmar, Hilda Azimi, Amy Tai, Yuhao Chen, Alexander Wong, Pengcheng Xi | 2023-06-15T15:38:10Z | http://arxiv.org/abs/2306.09203v1 | # Transferring Knowledge for Food Image Segmentation using Transformers and Convolutions
###### Abstract
Food image segmentation is an important task that has ubiquitous applications, such as estimating the nutritional value of a plate of food. Although machine learning models have been used for segmentation in this domain, food images pose several challenges. One challenge is that food items can overlap and mix, making them difficult to distinguish. Another challenge is the degree of inter-class similarity and intra-class variability, which is caused by the varying preparation methods and dishes a food item may be served in. Additionally, class imbalance is an inevitable issue in food datasets. To address these issues, two models are trained and compared, one based on convolutional neural networks and the other on Bidirectional Encoder representation for Image Transformers (BEiT). The models are trained and evaluated using the FoodSeg103 dataset, which is identified as a robust benchmark for food image segmentation. The BEiT model outperforms the previous state-of-the-art model by achieving a mean intersection over union of 49.4 on FoodSeg103. This study provides insights into transferring knowledge using convolution and Transformer-based approaches in the food image domain.
## 1 Introduction
Malnutrition causes a $10 billion annual burden on the healthcare system in the United States [8]. There is a strong link between poor nutrition and disease, mortality, and impaired quality of life [17, 24]. Older adults are at the greatest risk of health complications due to nutritional deficiencies, being over four times more likely to be hospitalized due to malnutrition. Further, they are at higher risk because one in four adults aged 65 years or older is malnourished [9, 10].
Nutrition monitoring has been proposed as a solution for combating malnutrition among the at-risk aging population [15].
Figure 1: Comparison of downstream task performance of CNNs and ViTs on food image segmentation for the purpose of evaluating feature transferability.
Methods exist for monitoring and measuring nutritional intake, but most of them are not reliable or accurate [18]. For example, food diaries, questionnaires, weighing, and photography each suffer from slowness, the need for trained personnel, or reporting biases.
Recognizing and breaking down the nutritional contents of a plate of food is one of the core tasks in food imaging for nutrition monitoring. Various pipelines using computer vision models have been designed [23, 28]. One promising approach [16] combines semantic segmentation with depth imaging to generate segmentation masks. The depth map is then used to compute estimates for food volumes.
Convolutional Neural Networks (CNN) are one of the mainstream methods for semantic segmentation. CNNs are mostly lightweight in terms of computational and memory requirements and benefit from strong inductive biases. Numerous variations of CNNs have been proposed, which introduce advanced architectural designs and complex convolutional operations. For example, deformable convolutions endow CNNs with adaptive receptive fields [5], and pyramid pooling modules allow CNNs to better make use of global context clues in the scene [29].
More recently, Transformer-based models have achieved great success across computer vision tasks including semantic segmentation [6]. Equipped with multi-headed self-attention mechanisms, these models have a global receptive field that enables them to make better use of global context information, although this comes at the expense of a higher computational and memory cost [25].
In addition to the ubiquity of Transformer-based models, recent years have also seen a shift towards large-scale vision models. BEiT [1, 14] and InternImage [27] are both representatives of this paradigm. They are large-scale models pre-trained on ImageNet and capable of being fine-tuned for numerous downstream vision tasks [7]. After pre-training, a task-specific layer or network is appended, and the whole network is fine-tuned for the desired downstream task.
Semantic segmentation on food images presents a unique set of challenges. One of them is that foods can occlude and mix with one another in complex ways, preventing the model from accurately delineating between them [15]. Another challenge is the presence of inter-class similarity and intra-class variability. For example, chicken and ham can be prepared to have a very similar appearance. Different cooking techniques applied to chicken can result in vastly different visual characteristics. An effective food segmentation model must be able to recognize the diverse appearances of chicken without confusing it with other similar-looking foods like ham [28].
Food image datasets are not as large or robust as those in other domains, especially when it comes to the need for image masks for training semantic segmentation models. UNIMIB2016 [4], UECFoodPixComplete [13], and FoodSeg103 [28] are datasets with fine-grained segmentation masks. Other existing datasets are mainly for food recognition and coarse-grained segmentation tasks. For instance, Recipe1M comprises one million food images with corresponding recipes [12], and Food101 contains approximately one hundred thousand food images across 101 classes [11]. However, neither of them includes annotation masks for segmentation. Among the food datasets that support semantic segmentation, we identify FoodSeg103 as the most robust [28]. This is due to its detailed, pixel-level annotations, plus empirical evidence of its relatively high difficulty. Previous works have achieved a mean intersection over union (mIoU) over all classes of up to 45.1 using a ViT-B/16-based Segmentation Transformer [28, 30].
In this study, we aim to gain insight into how convolutional and Transformer-based architectures differ in knowledge transfer to the food image domain. Convolution-based methods are one of the main-stream approaches for segmentation, including the recent InternImage model [27], which is based on a deformable convolution operation, DCN-V3. It has achieved state-of-the-art results on the challenging ADE20k dataset for semantic segmentation [31]. Therefore, we include InternImage [27] as a baseline model.
In this work, we propose utilizing the BEiT v2 model [14], which is a recent Transformer-based model that has yet to be applied to food image segmentation. The BEiT v2 encoder is more sophisticated than prior works, owing to its pre-training procedure, which involves training an image tokenizer and visual codebook (see Figure 1). This promotes the learning of richer, semantic-level information, rather than lower-level, pixel-based information.
Through exploring the distinctions between InternImage and BEiT v2 on knowledge transfer to the domain of food images, we make the following findings and contributions:
* We survey the landscape of food image datasets for the purpose of semantic segmentation. We identify the most robust dataset and explain what makes it strong for semantic segmentation. Additionally, we explain why other datasets may not be suitable for this task.
* We train a tokenizer to investigate the power of vector-quantized knowledge distillation in the food image domain. We find that the tokenizer is learning semantic concepts in the food image dataset.
* We evaluate state-of-the-art models in the new domain of food images. Our experiments will provide insight into how knowledge transferability differs between convolution and Transformer-based architectures.
* We establish top results on the FoodSeg103 dataset. In particular, a BEiT v2 model achieves a mIoU of 49.4 on FoodSeg103, outperforming the previous state-of-the-art model's mIoU of 45.1.
## 2 Methods
### Dataset
The field of food image datasets has expanded in recent years, yet few datasets are specifically designed for semantic segmentation. Furthermore, each of these datasets has its own unique strengths and weaknesses. The datasets we investigate for semantic segmentation include UNIMIB2016, UECFoodPixComplete, and FoodSeg103 [4, 13, 28]. We choose these datasets because they all provide segmentation masks. Larger food image datasets exist but they do not contain segmentation masks. We include a sample image and mask pair from each of UNIMIB2016, UECFoodPixComplete, and FoodSeg103 in Figure 2.
UNIMIB2016 was one of the first datasets in the food domain with ground truth food masks to support food segmentation. It was published in 2017 by researchers from the University of Milano-Bicocca for the purpose of investigating food segmentation, food consumption, and dietary monitoring. The dataset consists of 1027 food images spanning 73 food categories. Food image segmentation masks are created manually with polygonal bounding boxes. Further, masks are not per-item, so all food classes are clubbed into one category in the segmentation masks [4].
UECFoodPixComplete was released in 2020 by researchers from the University of Electro-Communications. It includes 102 dishes and comprises 10,000 food images. The segmentation masks were obtained semi-automatically using GrabCut, which segments images based on user-initialized seeding [21]. The automatically-generated masks were further refined by human annotators based on a set of predefined rules [13].
FoodSeg103 is a recent dataset designed for food image segmentation, consisting of 7,118 images depicting 730 dishes. It was released in 2021 and the dataset's masks are pixel-level. They were obtained through manual annotations. The masks were then inspected for further refinement. Compared to UECFoodPixComplete, FoodSeg103 proves to be a more challenging benchmark for food image segmentation. In experiments conducted with a deeplabv3+ model [2], a lower mIoU of 34.2 was achieved on FoodSeg103 than UECFoodPixComplete [28]. Further, unlike UECFoodPixComplete, which covers entire dishes but lacks fine-grained annotation for individual dish components, FoodSeg103 aims to annotate dishes at a more fine-grained level, capturing the characteristics of each dish's individual ingredients.
We conclude that FoodSeg103 is the most suitable dataset for training and evaluating a model for food image segmentation. In Table 1, we summarize various datasets, including some larger datasets that lack annotation masks, which are essential for training segmentation models using a supervised paradigm.
### InternImage
Neural networks that make use of convolutions have been a longstanding standard in computer vision. The new InternImage model, which uses a variation of convolutions called Deformable Convolution V3 (DCN-V3), has achieved state-of-the-art results in semantic segmentation on notable benchmarks such as ADE20k [31]. The DCN-V3 operation uses a \(3\times 3\) convolution kernel with learnable receptive fields and modulation scalars [27]. By equipping convolutions with a learnable receptive field, they can overcome their weaknesses. In particular, they can learn to utilise long-range dependencies and adaptive spatial aggregation from the data. Further, InternImage also retains the benefits of computational and memory efficiency that most convolutional models have over Transformer-based models.
InternImage processes images in several stages using blocks (see Figure 1). The input is down-scaled between the blocks so that each feature map has a different resolution. Each DCN-V3 block consists of a number of basic blocks, where the basic block is designed to resemble a Vision Transformer, at a high level. The basic block consists of two sub-layers. The first uses the DCN-V3 operation as its core operator, and the second is a simple feedforward network. Both outputs have layer normalization applied.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & Classes & Images & Masks \\ \hline UNIMIB2016 & 73 & 1027 & Yes \\ UECFoodPixComplete & 102 & 10,000 & Yes \\ FoodSeg103 & 103 & 7118 & Yes \\ Recipe1M & & 887,706 & No \\ Food101 & 101 & 101,000 & No \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of several robust food datasets, their number of classes, number of images, and whether it includes masks for segmentation ground truths.
Figure 2: Image, mask pair from UNIMIB2016, UECFoodPixComplete, and FoodSeg103.
Finally, each sub-layer has a residual connection surrounding it [27]. InternImage is trained on ADE20k and equipped with Mask2Former (a unified framework for all segmentation tasks based on masked attention) [3]. For the task layer, an UperNet decoder is appended for semantic segmentation.
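As a purely schematic illustration of the block structure just described (not the reference implementation: a plain \(3\times 3\) convolution stands in for the DCN-V3 operator, and the exact normalization placement in InternImage may differ), a minimal PyTorch sketch could look as follows:

```python
import torch
import torch.nn as nn

class BasicBlockSketch(nn.Module):
    """Schematic InternImage-style basic block: a spatial-aggregation sub-layer
    followed by a feed-forward sub-layer, each with layer normalization and a
    residual connection. A plain 3x3 convolution stands in for DCN-V3."""
    def __init__(self, channels, ffn_ratio=4):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # stand-in for DCN-V3
        self.norm1 = nn.LayerNorm(channels)
        self.ffn = nn.Sequential(
            nn.Linear(channels, ffn_ratio * channels),
            nn.GELU(),
            nn.Linear(ffn_ratio * channels, channels),
        )
        self.norm2 = nn.LayerNorm(channels)

    def forward(self, x):                                 # x: (B, C, H, W)
        y = self.spatial(x).permute(0, 2, 3, 1)           # to (B, H, W, C) for LayerNorm
        x = x.permute(0, 2, 3, 1) + self.norm1(y)         # residual around sub-layer 1
        x = x + self.norm2(self.ffn(x))                   # residual around sub-layer 2
        return x.permute(0, 3, 1, 2)                      # back to (B, C, H, W)

out = BasicBlockSketch(64)(torch.randn(2, 64, 32, 32))    # shape-preserving: (2, 64, 32, 32)
```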
### BeiT
The BEiT models [1, 14, 26] are Vision Transformer-based architectures capable of encoding images. They use masked image modeling as the pre-training task. In this process, images with corrupted patches are fed into a vision encoder, with the reconstruction target being visual tokens obtained from an image tokenizer.
There are three versions of the BEiT model. The first version introduces masked image modeling as a pre-training task for ViTs, using a pre-trained tokenizer from DALL-E [14, 20]. The second version improves upon the first by training its own visual tokenizer using ImageNet-1K [1]. Finally, the third version is multi-modal and significantly scales up the number of parameters (1B parameters used on vision tasks) [26].
We chose to use BEiT v2 for several reasons. Although BEiT v3 achieves stronger results via scale and multi-modality, its approach to vision is not upgraded from BEiT v2. We also selected BEiT v2 over the first BEiT due to its learned tokenizer, which provides a richer reconstruction target for the encoder than the pre-trained tokenizer used in the first BEiT model. The BEiT v2 model uses a teacher model to train a tokenizer using vector-quantized knowledge distillation [1]. The tokenizer consists of a Transformer encoder and quantizer. The quantizer works by mapping each vector output from the Transformer encoder to its nearest neighbour in a codebook, which serves as a visual vocabulary. In this way, the continuous semantic space that the Transformer maps to is quantized into discrete codes. A decoder then learns to construct teacher model outputs from the sequence of codebook vectors. Through this, the tokenizer learns a meaningful codebook of semantic concepts from the training set. This is an improvement over the first version of BEiT, which used a pretrained tokenizer from DALL-E [20].
The aim of the tokenizer is to discretize the continuous semantic space into codebook categories. Discretizing the continuous semantic space into compact codes using the codebook allows us to create a semantic-aware tokenizer. Each code in the codebook represents a semantic concept from the dataset used (e.g., unique codes for plates, eyes, foods, etc.). This approach will promote the mask-image modeling from pixel to semantic-level learning, resulting in richer representations. In the food domain, where the same ingredient can have varied appearances and contexts in different images, these stronger representations will prove to be useful.
After training the tokenizer, a network of stacked Transformer blocks is trained as a vision encoder. The encoder is trained using the task of masked image modeling. During this process, the input image is split into patches, and some of the patches are masked using a learnable embedding. This corrupted image is then fed as a sequence into the Transformer blocks, with the goal of recovering visual codebook tokens from the input image. The tokenizer trained in the previous step is used to generate the ground truth values used by the BEiT v2 encoder in this step. Once the encoder has been trained, we append an UperNet decoder as the task layer for semantic segmentation, then fine-tune the model.
## 3 Results
### InternImage Model
Among several sizes of the InternImage model that we considered for experiments, we use the base InternImage-B model to compare against the large BEiT v2 model. They use the same crop size and have a comparable number of parameters. The relative sizes of the various InternImage and BEiT models are summarized in Table 2.
The InternImage encoder is pre-trained before having
\begin{table}
\begin{tabular}{l c c} \hline \hline Model Name & Number of Parameters & Crop Size \\ \hline BEiT Large & 441M & 640\(\times\)640 \\ BEiT v2 Base & 163M & 512\(\times\)512 \\ BEiT v2 Large & 441M & 512\(\times\)512 \\ BEiT-3 & 1B & 896\(\times\)896 \\ InternImage-B & 128M & 512\(\times\)512 \\ InternImage-L & 256M & 640\(\times\)640 \\ InternImage-XL & 368M & 640\(\times\)640 \\ InternImage-H & 1.31B & 896\(\times\)896 \\ \hline \hline \end{tabular}
\end{table}
Table 2: A summary of the model versions considered in experiments, including their sizes and input resolutions.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model Name & Number of Parameters & mIoU \\ \hline
**BEiT v2 Large** & **441M** & **49.4** \\ InternImage-B & 128M & 41.1 \\ SeTR-MLA & 711M & 45.1 \\ SeTR-Naive & 723M & 43.9 \\ Swin-S & 931M & 41.6 \\ CCNet & 381M & 35.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: An overview of the models applied to the FoodSeg103 dataset, along with their mean intersection over union. InternImage and CCNet are based on convolutions; the remainder are based on Transformers.
a task-specific layer appended for semantic segmentation. Pre-training for InternImage-B is done using classification on ImageNet-1K for 300 epochs [27].
Following the pre-training, the InternImage models are fine-tuned for semantic segmentation on FoodSeg103. In particular, we append an UperNet decoder as a task layer for semantic segmentation. Subsequently, the model undergoes end-to-end fine-tuning using the AdamW optimizer for 160K iterations. We use a learning rate of \(6e-05\), no weight decay, and betas of \((0.9,0.999)\).
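A minimal sketch of this optimizer configuration in PyTorch is shown below; the one-layer model is only a stand-in for the full InternImage-B + UperNet network, and the output channel count assumes 103 ingredient classes plus background:

```python
import torch
import torch.nn as nn

# stand-in for the full InternImage-B backbone with an UperNet decoder
model = nn.Conv2d(3, 104, kernel_size=1)   # 103 ingredient classes + background (assumed)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=6e-5,                # learning rate quoted in the text
    betas=(0.9, 0.999),
    weight_decay=0.0,       # "no weight decay" for this fine-tuning run
)
```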
### BEiT Model
#### 3.2.1 BEiT Tokenizer Training
We trained a BEiT v2 tokenizer on Food101 to investigate the power of vector-quantized knowledge distillation. The visual tokenizer is initialized as a ViT-B/16 Transformer and taught by a CLIP-B/16 model [19]. The codebook size is 8192, and the dataset used is Food101, which has 101,000 images. The model is trained for 100 epochs, and the input image resolution is \(224\times 224\), which gives \(14\times 14\) total image patches each with resolution \(16\times 16\).
To evaluate the trained tokenizer, we run inference on food images from several categories in Food101, obtaining their corresponding token sequences. We compare tokenized images by considering the IoU of their sets of tokens. In particular, given tokenized images \(\mathcal{I}_{i}\), \(\mathcal{I}_{j}\subseteq\{1,...,8192\}\), we compute
\[\frac{|\mathcal{I}_{i}\cap\mathcal{I}_{j}|}{|\mathcal{I}_{i}\cup\mathcal{I}_{ j}|}\]
to measure the similarity of the tokenized images. We find that images belonging to the same food category typically have a higher degree of similarity, indicating that the tokenizer is successfully learning semantic concepts for the codebook. Figure 3 and Table 4 show a collection of food images and their corresponding token IoUs. Encouragingly, we see that the highest IoUs come from foods of the same class. However, we also see the inter-class similarity of the food domain in the similarity of the tokenized pork chop and filet mignon.
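The token-set IoU used here is straightforward to compute; the following sketch (with made-up token indices) illustrates it:

```python
def token_iou(tokens_i, tokens_j):
    """Set-based IoU of two tokenized images (sequences of codebook indices)."""
    si, sj = set(tokens_i), set(tokens_j)
    return len(si & sj) / len(si | sj)

# toy illustration with made-up token indices
print(token_iou([12, 7, 7, 301, 42], [7, 42, 999]))   # 0.4
```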
#### 3.2.2 BEiT model tuning
After learning the visual tokenizer, pre-training of the BEiT v2 encoder is conducted. For the large-size model, it is initialized as a ViT-L/16 large-size Vision Transformer with a 16-pixel patch size. The tokenizer that generates ground truths for the pre-training of masked image modeling is used here. Masked image modeling is performed using ImageNet-1k at a resolution of \(224\times 224\) for 1600 epochs, resulting in \(14\times 14\) image patches. A masking ratio of 40% is used, which means that up to 75 patches will be masked per image.
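The patch-count arithmetic can be illustrated with a simplified sketch that samples the masked patches uniformly at random (the actual BEiT masking strategy is more structured, so this is only a stand-in):

```python
import numpy as np

num_patches = 14 * 14        # 224x224 input with 16-pixel patches
max_masked = 75              # cap quoted in the text (~40% of 196 patches)

rng = np.random.default_rng(0)
masked = rng.choice(num_patches, size=max_masked, replace=False)
mask = np.zeros(num_patches, dtype=bool)
mask[masked] = True
print(mask.sum(), mask.mean())   # 75 masked patches, ~38% of the image
```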
Given the pre-trained large-size BEiT v2 encoder, we append an UperNet decoder for the task layer. The entire model is then fine-tuned for 160K iterations (approximately 50 epochs) at an input resolution \(512\times 512\) using the AdamW optimizer. We use a learning rate of \(3e-05\), weight decay of 0.05, and betas of \((0.9,0.999)\). The best-performing model during training achieved a mIoU of 49.4
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & Prime Rib I & Prime Rib II & Filet Mignon I & Filet Mignon II & Pork Chop I \\ \hline Prime Rib I & - & **0.281** & 0.121 & 0.153 & 0.131 \\ Prime Rib II & 0.281 & - & 0.123 & 0.126 & 0.152 \\ Filet Mignon I & 0.121 & 0.123 & - & **0.242** & **0.233** \\ Filet Mignon II & 0.153 & 0.126 & 0.242 & - & 0.171 \\ Pork Chop I & 0.131 & 0.152 & 0.233 & 0.171 & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Token intersection over unions for images in Figure 3
Figure 3: Food images to be tokenized using the learned tokenizer and codebook. Same-category foods have the highest IoUs, but similarly-looking cross-category foods can also yield high IoUs.
after 160k iterations.
### Performance of Models on FoodSeg103
For the BEiT v2 model, during training, the best-performing model achieved a mIoU of 49.4 at iteration 160K. This is the strongest result reported on FoodSeg103, representing a new state-of-the-art for food segmentation on the dataset. On the other hand, for the InternImage-B model, during training, the best-performing model achieved a mIoU of 41.1 at iteration 48k. Table 3 shows the performance of these models in contrast to the other models previously applied to FoodSeg103. BEiT v2 Large outperforms the prior state-of-the-art model while using fewer parameters.
Figure 4 shows a visual comparison of segmentation mask predictions made by our trained models. Although both models make good segmentation prediction masks,
Figure 4: A comparison of inference on FoodSeg103 by the best-performing BEiT v2 and InternImage-B model.
these randomly selected testing cases prove to be challenging in terms of detecting food items. For example, in the second row, the InternImage model failed to segment the food item in the top left corner. In the third example, the BEiT model failed to segment meat chunks from neighboring food items. In the last example, both models were only able to segment parts of the bacon in the burger.
## 4 Discussions
Several challenges affect the performance of the model in food image segmentation. The distribution of ingredients in food image segmentation is long-tailed, resulting in sparse data for ingredients in the long-tail, and thus, the model poorly predicts these categories. In the FoodSeg103 dataset used, we identified several problematic foods in the long-tail that the model struggled to segment correctly. hamburger appeared 7 times in the training set and 1 time in the test set, pudding appeared 5 times in the train set and 1 time in the test set, and kelp appeared 4 times in the train set and 5 times in the test set. These foods posed difficulty for each model. Figure 4 shows an example of a hamburger, which both models struggle to accurately segment.
A second challenge unique to food is the degree of inter-class similarity and intra-class variability. Each class in food images represents a food ingredient that can be prepared in numerous ways, resulting in a wide variation in appearance. This can also lead to visually similar-looking foods being mistaken for each other. This can be seen in Figure 3 and Table 4, where a visually similar filet mignon and pork chop are tokenized similarly. The models must be capable of understanding that a food ingredient prepared in any way is still the same food, while also differentiating between similar-looking foods [15, 16, 28].
Owing to the superior abilities of Transformers, BEiT exhibited a strong improvement over the state-of-the-art, while the performance of InternImage plateaued at a lower mIoU. One factor we can attribute this to is the global receptive field of Transformers. Even though InternImage uses learnable deformable convolutions, its kernel size is still restricted to \(3\times 3\), which prevents InternImage from having the same level of global understanding as BEiT. Global context undoubtedly gives valuable insight into categorizing pixels, as certain food items commonly occur alongside other specific food items. Another factor we may attribute BEiT v2's performance to is the use of vector-quantized knowledge distillation to train a tokenizer for BEiT's pretraining reconstruction targets. This pretraining routine endows BEiT v2 with a stronger grasp of the dataset semantics than InternImage, as the continuous, high-dimensional semantic space is quantized into codes, enabling the model to learn a visual vocabulary through its codebook.
Although BEiT outperformed InternImage in mIoU, there are instances where BEiT struggles to distinguish between classes where InternImage does not. For instance, in the third sample of Figure 4(b), BEiT groups the mashed potato and meat together, whereas InternImage was able to differentiate between them.
## 5 Conclusion
In this work, we sought to evaluate the knowledge transfer capability of Transformer and convolution-based vision backbones on the downstream task of food image segmentation. We select BEiT v2 and InternImage as strong representatives for Transformer and convolutional approaches respectively. Using these representatives and the benchmark FoodSeg103, we found that Vision Transformers have superior downstream task transferability to convolutional networks for the task of food image segmentation.
For future direction, pre-training BEiT v2 on Food101 could obtain stronger representations for food images than if it were trained on ImageNet-1K [1, 22]. Furthermore, the multi-modal capabilities of BEiT-3 could be used to obtain even stronger food representations [26].
## Acknowledgment
The authors would like to acknowledge support from Aging in Place Challenge Program at the National Research Council of Canada (AiP-006).
|
2307.06219 | Double magnetic transitions and exotic field induced phase in the
triangular lattice antiferromagnets Sr$_3$Co(Nb,Ta)$_2$O$_9$ | Two triangular lattice antiferromagnets Sr$_3$Co(Nb,Ta)$_2$O$_9$ with an
effective $j_{\rm eff}=1/2$ of Co$^{2+}$ are synthesized and their magnetic
properties are investigated via magnetization and heat capacity measurements.
The leading in-plane antiferromagnetic exchange coupling is estimated to be
$J/k_{\rm B} \simeq 4.7$ K and 5.8 K, respectively. Both the compounds feature
two-step magnetic transitions at low temperatures [($T_{\rm N1} \simeq 1.47$ K
and $T_{\rm N2} \simeq 1.22$ K) and ($T_{\rm N1} \simeq 0.88$ K and $T_{\rm N2}
\simeq 0.67$ K), respectively], driven by weak easy-axis anisotropy. Under
magnetic field Sr$_3$CoNb$_2$O$_9$ evinces a plateau at $1/3$ magnetization.
Interestingly, the high field magnetization of Sr$_3$CoTa$_2$O$_9$ reveals an
exotic regime (between $H_{\rm S1}$ and $H_{\rm S2}$), below the fully
polarized state in which the heat capacity at low temperatures is governed by a
power law ($C_{\rm p} \propto T^{\alpha}$) with a reduced exponent $\alpha
\simeq 2$. These results demonstrate an unusual field induced state with
gapless excitations in the strongly frustrated magnet Sr$_3$CoTa$_2$O$_9$. The
complete $T-H$ phase diagram is discussed for both the compounds. | Surender Lal, Sebin J Sebastian, S. S. Islam, M. P. Saravanan, M. Uhlarz, Y. Skourski, R. Nath | 2023-07-12T15:07:34Z | http://arxiv.org/abs/2307.06219v1 | Double magnetic transitions and exotic field induced phase in the triangular lattice antiferromagnets Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\)
###### Abstract
Two triangular lattice antiferromagnets Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\) with an effective \(j_{\rm eff}=1/2\) of Co\({}^{2+}\) are synthesized and their magnetic properties are investigated via magnetization and heat capacity measurements. The leading in-plane antiferromagnetic exchange coupling is estimated to be \(J/k_{\rm B}\simeq 4.7\) K and 5.8 K, respectively. Both the compounds feature two-step magnetic transitions at low temperatures [(\(T_{\rm N1}\simeq 1.47\) K and \(T_{\rm N2}\simeq 1.22\) K) and (\(T_{\rm N1}\simeq 0.88\) K and \(T_{\rm N2}\simeq 0.67\) K), respectively], driven by weak easy-axis anisotropy. Under magnetic field, Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) evinces a plateau at 1/3 magnetization. Interestingly, the high field magnetization of Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) reveals an exotic regime (between \(H_{\rm S1}\) and \(H_{\rm S2}\)), below the fully polarized state, in which the heat capacity at low temperatures is governed by a power law (\(C_{\rm p}\propto T^{\alpha}\)) with a reduced exponent \(\alpha\simeq 2\). These results demonstrate an unusual field induced state with gapless excitations in the strongly frustrated magnet Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The complete \(T-H\) phase diagram is discussed for both the compounds.
## I Introduction
Geometrically frustrated magnets have attracted a revived interest since frustration effect may cause the absence of magnetic long-range-order (LRO), leading to an abundance of novel states of matter [1; 2]. The size of local magnetic moment is also a convenient tuning parameter that controls the magnitude of quantum fluctuations and has broad implications on the ground state properties. For instance, a reduced spin value, especially spin-1/2 amplifies the effect of quantum fluctuations and precipitates more non-trivial ground states. A renowned testimony of magnetic frustration and quantum fluctuations is the quantum spin-liquid (QSL), a highly entangled and dynamically disordered many-body state [3]. Over the years, relentless efforts are made to experimentally devise appropriate model compounds with spin-1/2 that may promote this disordered state.
The frustrated spin-1/2 triangular lattice antiferromagnet (TLAF) is widely believed to be a model system to host a QSL driven by ground state degeneracy in two dimensions (2D) [4]. In an isotropic Heisenberg TLAF with only nearest-neighbour (NN) interaction, the spins order antiferromagnetically forming a 120\({}^{\circ}\) spin structure in zero magnetic field, known as the 3-sublattice Néel state [5; 6; 7]. When an external magnetic field is applied, the Néel order is subverted and an "up-up-down" (\(uud\)) configuration is stabilized over a wide field range before reaching saturation. This results in a magnetization plateau at 1/3 of the magnetic saturation stemming from quantum and/or thermal fluctuations [8; 9]. Magnetic anisotropy and interactions beyond nearest neighbours, inherently present in the majority of experimental systems, are two further crucial parameters that influence the ground state significantly and give rise to more complex low-temperature phases, including QSL [10; 11; 12; 13].
The Co\({}^{2+}\) (\(3d^{7}\))-based TLAFs are a special class of compounds that manifests various exotic phases of matter similar to the \(4f\) systems (e.g. Ce\({}^{3+}\) and Yb\({}^{3+}\)) [14; 15; 16]. In most of these compounds, the impact of crystal electric field (CEF) and spin-orbit coupling (SOC) in a non-cubic environment lead to a Kramers doublet and the effective magnetic moment of Co\({}^{2+}\) ions, which possess the true spin \(S=3/2\), can be described by the pseudo-spin \(j_{\rm eff}=1/2\) at low temperatures, well below the energy scale of the spin-orbit coupling constant (\(\lambda/k_{\rm B}\)) [17; 18]. This allows one to study the combined effects of magnetic frustration and quantum fluctuations due to reduced spin, at very low temperatures [19]. In the past few years, a series of triple perovskites with general formula \(A_{3}\)Co\(B_{2}\)O\({}_{9}\) (\(A\)= Sr, Ba and \(B\) = Sb, Ta, Nb) have been rigorously investigated in which the magnetic Co\({}^{2+}\) ions are embedded onto 2D triangular layers, separated by layers of non-magnetic \(A\) and \(B\) atoms [17; 18; 20; 21; 22]. The most celebrated compound in this family is Ba\({}_{3}\)CoSb\({}_{2}\)O\({}_{9}\) which shows successive magnetic transitions, magnetization plateau at 1/3 and 3/5 of saturation magnetization, and very recently the QSL is claimed [9; 17; 23]. Several other Co\({}^{2+}\) based \(j_{\rm eff}=1/2\) TLAFs are also reported to show diverse physics with complex magnetic orderings [24; 25; 26; 27; 28; 29]. Because of the localized nature of the \(3d\) electrons, cobaltates featuring honeycomb lattice also offer a promising ground to look for Kitaev spin-liquid [30; 31; 32]. Indeed, a field induced Kitaev spin-liquid like behaviour has been observed in Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[33] and BaCo\({}_{2}\)(AsO\({}_{4}\))\({}_{2}\)[34]. However, their complete phase diagram remains obscure.
In this work, we report a comprehensive study of the
synthesis and thermodynamic properties of two new frustrated \(j_{\rm eff}=1/2\) iso-structural TLAFs Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) is reported to crystallize in a monoclinic structure with space group \(P2_{1}/c\)[35; 36]. Its crystal structure is illustrated in Fig. 1. There are two inequivalent Co atoms residing at the Co(1) 2\(a\)(0,0,0) and Co(2) 2\(d\)(1/2,1/2,0) sites, respectively which are coordinated with O atoms forming slightly distorted CoO\({}_{6}\) octahedra. Two-dimensional (2D) triangular layers are formed by the corner sharing of magnetic CoO\({}_{6}\) and nonmagnetic NbO\({}_{6}\) octahedra in the \(ab\)-plane. Figure 1(b) presents nearly parallel edge-sharing CoO\({}_{6}\) octahedra (NbO\({}_{6}\) omitted) displaying possible superimposed honeycomb lattices. The non-magnetic Sr atoms are located at the interstitial positions. In each layer, an isosceles triangular unit is made up of either one Co(1)\({}^{2+}\) and two Co(2)\({}^{2+}\) or two Co(1)\({}^{2+}\) and one Co(2)\({}^{2+}\) ions. Moreover, the Co\({}^{2+}\) triangular layers are arranged in an \(AAA\)-type stacking perpendicular to the \(c\)-axis which results in minuscule inter-layer frustration [37]. Magnetic measurements reveal double transitions at low temperatures typical for compounds with easy-axis anisotropy. Despite structural similarity, Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) exhibits a 1/3 magnetization plateau which is absent for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). Further, Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) manifests an extended field induced critical regime where the ground state appears to be of QSL type while this regime is found to be narrow for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\).
## II Experimental details
Polycrystalline samples of Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) were synthesized by the traditional solid-state reaction method. Stoichiometric amounts of SrCO\({}_{3}\) (99.99%, Sigma Aldrich), CoO (99.999%, Sigma Aldrich), and Nb\({}_{2}\)O\({}_{5}\)/Ta\({}_{2}\)O\({}_{5}\) (99.999%, Sigma Aldrich) were mixed and ground thoroughly for three hours. The mixtures were pressed into disc-shaped pellets and sintered at 1100 \({}^{\circ}\)C for 24 hours. In the next step, the heat treatment was repeated at 1250 \({}^{\circ}\)C for 24 hours after regrinding and re-pelletization. The powder x-ray diffraction (XRD) was recorded using a PANalytical powder diffractometer (Cu\(K_{\alpha}\) radiation, \(\lambda=1.54182\) Å) at room temperature. Rietveld refinement of the powder XRD data for both the compounds was carried out using the FULLPROF software package [38] to check the phase purity of the samples and to calculate the structural parameters.
The \(dc\) magnetization (\(M\)) measurement was performed with a superconducting quantum interference device (SQUID, MPMS-3, Quantum Design) magnetometer as a function of temperature (1.8 K\(\leq T\leq\) 350 K) and magnetic field (0 \(\leq H\leq\) 7 T). High-field magnetization \(M(H)\) was measured in a pulsed magnetic field at the Dresden High Magnetic Field Laboratory [39]. Heat capacity (\(C_{\rm P}\)) as a function of \(T\) (0.1 K\(\leq T\leq\) 300 K) and \(H\) (0 \(\leq H\leq\) 9 T) was measured using thermal relaxation technique in a physical property measurement system (PPMS, Evercool-II, Quantum Design). For Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), the measurements were performed down to 0.4 K and 0.1 K using \({}^{3}\)He and dilution inserts, respectively in PPMS.
## III Results
### Powder x-ray Diffraction
Powder XRD data collected at room temperature for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) are shown in Fig. 2(a) and (b), respectively. With the help of Rietveld refinement,
\begin{table}
\begin{tabular}{c c c} Parameters & Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) & Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) \\ \hline \(a\) (Å) & 9.7867(5) & 9.7790(5) \\ \(b\) (Å) & 5.6460(2) & 5.6643(4) \\ \(c\) (Å) & 17.0057(4) & 16.957(1) \\ \(\beta\) (\({}^{\circ}\)) & 125.32(3) & 125.20(3) \\ \(V_{\rm cell}\) (Å\({}^{3}\)) & 766.67(5) & 767.46(8) \\ Bragg R-factor & 2.2 & 11 \\ Rf-factor & 6.5 & 6.0 \\ \(\chi^{2}\) & 2.5 & 7.7 \\ Co(1) - Co(1) (Å) & 5.6460(1) & 5.6644(5) \\ Co(2) - Co(2) (Å) & 5.6460(1) & 5.6644(5) \\ Co(1) - Co(2) (Å) & 5.6493(3) & 5.6505(3) \\ \end{tabular}
\end{table}
Table 1: Lattice parameters obtained from the Rietveld refinement of the room temperature powder XRD data of Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\) (Monoclinic, \(P2_{1}/c\)).
Figure 1: (a) A three-dimensional view of the crystal structure of Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\). The dashed lines guide the triangular layers composed of corner shared CoO\({}_{6}\) and NbO\({}_{6}\) octahedra. The Co(1)O\({}_{6}\) and Co(2)O\({}_{6}\) octahedra are shown in different colours. (b) A section of the triangular layer that depicts nearly parallel edge-shared arrangement of CoO\({}_{6}\) octahedra and presents the honeycomb lattice, after removing NbO\({}_{6}\) octahedra.
all the peaks could be modeled assuming monoclinic structure (\(P2_{1}/c\)) for both the compounds and taking initial parameters from Refs. [35; 36]. This suggests that the new compound obtained replacing Nb by Ta also stabilizes in the same crystal structure. During refinement, the positions of oxygen atoms for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) couldn't be refined and kept fixed to the values of Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\). For a comparison, the refined lattice parameters and atomic positions for both the compounds are tabulated in Table 1 and 2, respectively. The obtained structural parameters for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) are in close agreement with the previous reports [35; 36].
Upon replacing Nb by Ta, the lattice constants \(a\) and \(c\) are found to decrease while \(b\) increases. This results in an overall increase in the unit cell volume (\(V_{\rm cell}\)). In the crystal structure, all the magnetic Co\({}^{2+}\) layers are equally spaced with an inter-layer separation of \(\sim 17.0057\) Å and \(\sim 16.9574\) Å for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively.
### Magnetization
Temperature dependent magnetic susceptibility \(\chi\) (\(\equiv M/H\)) of Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) measured in a magnetic field of \(H=10\) kOe is shown in Fig. 3(a) and (b), respectively. As the temperature is lowered, \(\chi(T)\) increases in a Curie-Weiss (CW) manner. No clear indication of any magnetic long-range ordering (LRO) is observed down to 2 K. When the inverse susceptibility (\(1/\chi\)) is plotted against temperature, it exhibits a linear behaviour at high temperatures and a change in slope at around \(\sim 50\) K. This change in slope is a possible indication of the crossover of spin state of Co\({}^{2+}\) from high temperature \(S=3/2\) state to an effective \(j_{\rm eff}=1/2\) ground state [24; 28; 40]. In order to extract the mag
\begin{table}
\begin{tabular}{c c c c c c} Atom & Wyckoff & \(x\) & \(y\) & \(z\) & Occ. \\ \hline Sr1 & 4e & 0.2500 & 0.500 & 0.08321(6) & 1.0 \\ & & 0.2500 & 0.500 & 0.0803(7) & 1.0 \\ Sr2 & 4e & 0.7500 & 0.000 & 0.08262(3) & 1.0 \\ & & 0.7500 & 0.000 & 0.07348(3) & 1.0 \\ Sr3 & 4e & 0.2500 & 0.00 & 0.2500 & 1.0 \\ & & 0.2500 & 0.00 & 0.2500 & 1.0 \\ Co1 & 2a & 0.000 & 0.000 & 0.000 & 0.5 \\ & & 0.000 & 0.000 & 0.000 & 0.5 \\ Co2 & 2d & 0.500 & 0.500 & 0.000 & 0.5 \\ & & 0.500 & 0.500 & 0.000 & 0.5 \\ Nb1 & 4e & 0.50949(5) & 0.500 & 0.33735(3) & 1.0 \\ Ta1 & & 0.51011(3) & 0.500 & 0.33661(7) & 1.0 \\ Nb2 & 4e & 0.00710(4) & 0.500 & 0.16336(8) & 1.0 \\ Ta2 & & 0.00975(4) & 0.500 & 0.16713(6) & 1.0 \\ O1 & 4e & 0.97200(3) & 0.74250(1) & 0.2430(6) & 1.0 \\ & & 0.97200(3) & 0.74250(1) & 0.2430(6) & 1.0 \\ O2 & 4e & 0.5260(3) & 0.78710(5) & 0.27160(6) & 1.0 \\ & & 0.5260(3) & 0.78710(5) & 0.27160(6) & 1.0 \\ O3 & 4e & 0.2500(0) & 0.5528(4) & 0.26260(2) & 1.0 \\ & & 0.2500(0) & 0.5528(4) & 0.26260(2) & 1.0 \\ O4 & 4e & 0.97210(9) & 0.7330(7) & 0.90670(4) & 1.0 \\ & & 0.97210(9) & 0.7330(7) & 0.90670(4) & 1.0 \\ O5 & 4e & 0.0279(6) & 0.2518(6) & 0.92020(5) & 1.0 \\ & & 0.0279(6) & 0.2518(6) & 0.92020(5) & 1.0 \\ O6 & 4e & 0.4721(6) & 0.2777(0) & 0.89190(1) & 1.0 \\ & & 0.4721(6) & 0.2777(0) & 0.89190(1) & 1.0 \\ O7 & 4e & 0.52790(9) & 0.79650(6) & 0.9352(5) & 1.0 \\ & & 0.52790(9) & 0.79650(6) & 0.9352(5) & 1.0 \\ & & 0.52790(9) & 0.79650(6) & 0.9352(5) & 1.0 \\ O8 & 4e & 0.74060(8) & 0.04280(5) & 0.9110(3) & 1.0 \\ & & 0.74060(8) & 0.04280(5) & 0.9110(3) & 1.0 \\ O9 & 4e & 0.24060(7) & 0.5428(0) & 0.90780(3) & 1.0 \\ & & 0.24060(7) & 0.5428(0) & 0.90780(3) & 1.0 \\ & & 0.24060(7) & 0.5428(0) & 0.90780(3) & 1.0 \\ \end{tabular}
\end{table}
Table 2: Atomic positions obtained from the Rietveld refinement of the room temperature powder XRD data of Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\).
Figure 2: Room temperature powder XRD patterns of (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The black circles show the XRD data and the red line represents the calculated pattern. The blue small vertical lines represent the Bragg positions. The green line at the bottom represents the difference between observed and calculated intensities.
netic parameters, \(1/\chi\) in the low and high temperature linear regions is fitted by the modified CW law
\[\chi(T)=\chi_{0}+\frac{C}{T-\theta_{\rm CW}}, \tag{1}\]
where \(\chi_{0}\) represents the temperature independent susceptibility, \(C\) is the CW constant, and \(\theta_{\rm CW}\) is the characteristic CW temperature. The fit in high temperature (\(T>200\) K) range yields (\(\chi_{0}\simeq 5.315\times 10^{-4}\) cm\({}^{3}\)/mol, \(C\simeq 3.37\) cm\({}^{3}\)K/mol, and \(\theta_{\rm CW}\simeq-24\) K) and (\(\chi_{0}\simeq 2.84\times 10^{-4}\) cm\({}^{3}\)/mol, \(C\simeq 3.24\) cm\({}^{3}\)K/mol, and \(\theta_{\rm CW}\simeq-21.2\) K) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. These values of \(C\) correspond to an effective moment (\(\mu_{\rm eff}=\sqrt{3k_{\rm B}C/N_{\rm A}}\) where, \(N_{\rm A}\) is the Avogadro number and \(k_{\rm B}\) is the Boltzmann constant) of \(\mu_{\rm eff}\simeq 5.2\)\(\mu_{\rm B}\) and \(\sim 5.1\)\(\mu_{\rm B}\), respectively which are close to the expected spin-only value for a \(S=3/2\) Co\({}^{2+}\) ion.
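A minimal sketch of such a fit is shown below; since the measured data are not reproduced here, it fits a synthetic curve generated from the quoted high-temperature parameters for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), and converts the Curie constant to an effective moment via \(\mu_{\rm eff}\simeq 2.828\sqrt{C}\,\mu_{\rm B}\):

```python
import numpy as np
from scipy.optimize import curve_fit

def chi_cw(T, chi0, C, theta):
    # modified Curie-Weiss law, Eq. (1): chi(T) = chi0 + C / (T - theta_CW)
    return chi0 + C / (T - theta)

# synthetic high-temperature data built from the quoted Sr3CoNb2O9 parameters
T_data = np.linspace(200.0, 350.0, 40)
chi_data = chi_cw(T_data, 5.315e-4, 3.37, -24.0)

popt, _ = curve_fit(chi_cw, T_data, chi_data, p0=(1e-4, 3.0, -20.0))
chi0_fit, C_fit, theta_fit = popt

mu_eff = np.sqrt(8.0 * C_fit)   # mu_eff/mu_B = sqrt(3 k_B C / (N_A mu_B^2)) ~ 2.828 sqrt(C)
print(chi0_fit, C_fit, theta_fit, mu_eff)   # ~5.3e-4 cm^3/mol, ~3.37 cm^3 K/mol, ~-24 K, ~5.2 mu_B
```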
Similarly, the CW fit [Eq. (1)] to the \(1/\chi\) data was performed in the low temperature region by varying the fitting range between 20 K and 60 K. The obtained parameters are [\(C\simeq 1.85(5)\) cm\({}^{3}\)K/mol and \(\theta_{\rm CW}\simeq-7.5(5)\) K] and [\(C\simeq 1.93(3)\) cm\({}^{3}\)K/mol and \(\theta_{\rm CW}\simeq-8(1)\) K], for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. During the fitting procedure, \(\chi_{0}\) was fixed to the Van-Vleck susceptibility (\(\chi_{\rm VV}\)) obtained from the high-field magnetization data (discussed later). To visualize the low temperature linear behaviour, we have plotted \(1/(\chi-\chi_{\rm VV})\) vs \(T\) in the right \(y\)-axis for both the compounds. These values of \(C\) provide the effective magnetic moment of \(\mu_{\rm eff}\simeq 3.84(1)\)\(\mu_{\rm B}\) and \(3.92(2)\)\(\mu_{\rm B}\) for Nb and Ta compounds, respectively which are indeed close to the value expected for \(j_{\rm eff}=1/2\), assuming \(g=4\). Such a large value of \(g\) is not unusual and is typically observed for Co\({}^{2+}\) systems from the electron-spin-resonance (ESR) experiments due to dominant spin-orbit coupling, at low temperatures [41; 17]. The negative value of \(\theta_{\rm CW}\) indicates that the dominant interaction between the spins is AFM in nature.
In a spin system, \(\theta_{\rm CW}\) is a measure of the exchange coupling and is given by \(\theta_{\rm CW}=[-zJS(S+1)]/3k_{\rm B}\), where \(J\) is the nearest-neighbour(NN) exchange coupling with the Heisenberg Hamiltonian \(H=J\sum S_{i}\cdot S_{j}\) and \(z\) is the number of NN spins [42]. As both the compounds are having the triangular geometry, we have \(z=6\). Thus, using the value of \(\theta_{\rm CW}\), \(z\), and \(j_{\rm eff}\) in place of \(S\) in the above expression, we obtained \(J/k_{\rm B}\simeq 5\) K and 5.3 K for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively.
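The conversion from \(\theta_{\rm CW}\) to the exchange coupling amounts to a one-line rearrangement of the mean-field expression above, sketched here with the quoted values:

```python
def j_from_theta(theta_cw, z=6, s=0.5):
    # theta_CW = -z J S(S+1) / (3 k_B)  =>  J/k_B = 3 |theta_CW| / (z S (S+1))
    return 3.0 * abs(theta_cw) / (z * s * (s + 1))

print(j_from_theta(-7.5))   # ~5.0 K  (Sr3CoNb2O9, j_eff = 1/2)
print(j_from_theta(-8.0))   # ~5.3 K  (Sr3CoTa2O9)
```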
Further validation of the \(j_{\rm eff}=1/2\) ground state and the estimation of the exchange coupling come from
Figure 3: \(\chi\) vs \(T\) measured in \(\mu_{0}H=1\) T for (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). \(1/\chi\) after subtracting \(\chi_{\rm VV}\) is plotted as a function of \(T\) in the right \(y\)-axis to emphasize the low-\(T\) linear portion. Solid line represents the Curie-Weiss fit in the low temperature linear regime, as discussed in the text.
Figure 4: Magnetization (\(M\)) vs \(H\) measured at \(T=1.4\) K using pulsed magnetic field for (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The dash-dotted line represents the linear fit to the data in high fields which is extrapolated down to zero field to obtain the Van-Vleck magnetization. The corrected magnetization \(M_{\rm corr}\) after the subtraction of Van-Vleck contribution is also plotted vs \(H\). The dashed lines mark the critical fields. \(dM_{\rm corr}/dH\) vs \(H\) is plotted in the right \(y\)-axis to highlight the features at the critical fields.
the high-field magnetization data. The high-field magnetization measured as a function of field upto 60 T using the pulsed magnetic field at the base temperature of \(T=1.4\) K is shown in Fig. 4(a) and (b) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. The pulsed field data are scaled with respect to the magnetization data measured in the SQUID magnetometer up to 7 T at \(T=1.8\) K [43]. For Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), \(M\) increases linearly with \(H\) and then show a sharp bend at the saturation field \(H_{\rm S}\simeq 7.2\) T. Such a sharp bend in the powder sample demonstrates isotropic \(g\)-factor and/or exchange interaction, as in the case of Ba\({}_{3}\)Co(Nb,Sb)\({}_{2}\)O\({}_{9}\)[17; 21]. For \(H>8\) T, though it shows the tendency of saturation, still there is a slow increase. This slow increase of magnetization above \(H_{\rm S}\) is attributed to the temperature independent Van-Vleck paramagnetism associated with the Co\({}^{2+}\) ion in the non-cubic environment [18]. The slope of a linear fit for \(H>15\) T and its intercept in the \(y\)-axis result in the Van-Vleck paramagnetic susceptibility \(\chi_{\rm V}\simeq 7.32\times 10^{-3}\) cm\({}^{3}\)/mol and saturation magnetization \(M_{\rm S}\simeq 2.13\)\(\mu_{\rm B}\)/Co\({}^{2+}\), respectively. Using this value of \(M_{\rm S}\), the \(g\)-factor (\(M_{\rm S}=gS\mu_{\rm B}\)) is calculated to be \(g\simeq 4.26\), assuming \(j_{\rm eff}=1/2\). Similar \(g\)-value is also reported for other Co\({}^{2+}\) based \(j_{\rm eff}=1/2\) TLAFs [22; 28; 18; 20]. The corrected magnetization (\(M_{\rm corr}\)) after subtracting the Van-Vleck contribution is also plotted in the same graph. To precisely pinpoint the saturation field we have plotted the derivative \(dM_{\rm corr}/dH\) vs \(H\) in the right \(y\)-axis. The curve exhibits a valley at \(H\simeq 2\) T which corresponds to 1/3 of \(M_{\rm S}\) and then a sharp drop at \(\mu_{0}H_{\rm S}\simeq 7.2\) T (or, a slope change in the \(M_{\rm corr}\) vs \(H\) curve) indicating the saturation field or critical field above which the spin system attains the fully polarized state [18].
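A sketch of this high-field analysis (using synthetic data built from the quoted numbers, since the measured curve is not reproduced here) extracts the Van Vleck slope, the saturation moment, and the \(g\)-factor from a linear fit above 15 T:

```python
import numpy as np

def vv_analysis(H, M, h_min=15.0, s=0.5):
    """Linear fit of M(H) (in mu_B per Co) above h_min (in T): the slope gives the
    Van Vleck contribution and the intercept the saturation moment M_S = g*S*mu_B."""
    sel = H >= h_min
    slope, intercept = np.polyfit(H[sel], M[sel], 1)
    chi_vv = slope * 5585.0 / 1.0e4   # mu_B/T per f.u. -> cm^3/mol (N_A*mu_B = 5585 emu/mol, 1 T = 1e4 Oe)
    g = intercept / s
    return chi_vv, intercept, g

# synthetic illustration built from the quoted Sr3CoNb2O9 numbers
H = np.linspace(15, 60, 10)
M = 2.13 + (7.32e-3 * 1.0e4 / 5585.0) * H
print(vv_analysis(H, M))   # ~(7.3e-3 cm^3/mol, 2.13 mu_B, g ~ 4.26)
```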
Unlike Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), the magnetic behaviour of Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) is found to be somewhat different. Magnetization shows a broad bend between two critical fields \(\mu_{0}H_{\rm S1}\simeq 3.6\) T and \(\mu_{0}H_{\rm S2}\simeq 10.5\) T. Above \(H_{\rm S2}\), the magnetization nearly saturates but still shows a slow increase, similar to Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\). A straight line fit above 20 T yields \(\chi_{\rm VV}\simeq 7.2\times 10^{-3}\) cm\({}^{3}\)/mol and \(M_{\rm S}\simeq 1.8\)\(\mu_{\rm B}\)/Co\({}^{2+}\). This value of \(M_{\rm S}\) corresponds to \(g\simeq 3.6\) with \(j_{\rm eff}=1/2\). The Van-Vleck corrected magnetization \(M_{\rm corr}\) and its derivative \(dM_{\rm corr}/dH\) as a function of \(H\) are plotted in the left and right \(y\)-axes, respectively, in Fig. 4(b), which show pronounced features at \(H_{\rm S1}\) and \(H_{\rm S2}\). These features are quite different from the broad feature expected due to \(g\)-factor anisotropy. This is an indication of the existence of an exotic field-induced state between \(H_{\rm S1}\) and \(H_{\rm S2}\), similar to Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[33]. No obvious feature is seen at the field corresponding to the 1/3 magnetization.
The saturation field defines the energy required to overcome the antiferromagnetic exchange energy and polarize the spins in the direction of the magnetic field. In particular, in a Heisenberg TLAF, \(H_{\rm S}\) can be written in terms of the intralayer exchange coupling as \(\mu_{0}H_{\rm S}=9JS/g\mu_{\rm B}\)[44]. For Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), our experimental value of \(\mu_{0}H_{\rm S}\simeq 7.2\) T yields an average exchange coupling of \(J/k_{\rm B}\simeq 4.7\) K. Similarly, for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), \(\mu_{0}H_{\rm S2}\simeq 10.5\) T gives \(J/k_{\rm B}\simeq 5.8\) K. These values of \(J/k_{\rm B}\) are in reasonable agreement with the ones obtained from the analysis of \(\theta_{\rm CW}\). The slight difference in magnitude can be attributed to the magnetic anisotropy present in the compounds.
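A short sketch of this conversion, using the quoted saturation fields and \(g\)-factors, is given below; small differences with respect to the quoted \(J/k_{\rm B}\) values reflect rounding of \(g\) and of the physical constants.

```python
# Exchange coupling from the saturation field of a Heisenberg TLAF,
# mu0*H_S = 9*J*S/(g*mu_B)  =>  J/k_B = g*(mu_B/k_B)*mu0*H_S/(9*S).
MU_B_OVER_K_B = 0.67171  # K/T (Bohr magneton divided by Boltzmann constant)

def exchange_from_saturation_field(mu0_HS_T, g, S=0.5):
    """J/k_B (K) from the saturation field (T) of a spin-S triangular-lattice AFM."""
    return g * MU_B_OVER_K_B * mu0_HS_T / (9.0 * S)

print(exchange_from_saturation_field(7.2, g=4.26))   # ~4.6 K for Sr3CoNb2O9 (quoted: ~4.7 K)
print(exchange_from_saturation_field(10.5, g=3.6))   # ~5.6 K for Sr3CoTa2O9 (quoted: ~5.8 K)
```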
Heisenberg TLAFs typically show a 1/3 magnetization plateau in an intermediate field range where the spins evolve from a conventional 120\({}^{\circ}\) spin structure to a \(uud\) state [45; 46; 47]. While a weak anomaly is visible for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), it is completely smeared for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The absence of the 1/3 magnetization plateau is likely due to the polycrystalline nature of the sample, where the random orientation of the crystallites averages out this effect. Moreover, our \(M(H)\) is measured at \(T=1.4\) K (above \(T_{\rm N1}\)) at which the impact of magnetic anisotropy is minimal. Therefore, \(M(H)\) measurements below \(T_{\rm N2}\) would reveal these features more clearly.
### Heat Capacity
To delineate the low-energy excitations, temperature-dependent heat capacity [\(C_{\rm p}(T)\)] measured in different applied fields is shown in Fig. 5(a) and (b) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. The overall temperature variation of \(C_{\rm p}\) for both compounds is found to be nearly the same. In zero-field, as the temperature decreases, \(C_{\rm p}\) decreases monotonically and below \(\sim 10\) K it displays a weak and broad maximum, a possible indication of short-range ordering due to the two-dimensionality of the spin-lattice. With further decrease in temperature, two well defined peaks appear at \(T_{\rm N1}\simeq 1.47\) K and \(T_{\rm N2}\simeq 1.22\) K for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and at \(T_{\rm N1}\simeq 0.88\) K and \(T_{\rm N2}\simeq 0.67\) K for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), indicating the onset of two successive magnetic transitions at low temperatures. When an external magnetic field is applied, the height of the peaks is reduced substantially and the peak positions shift towards lower temperatures. At higher fields, both transition peaks merge into a broad peak that gradually vanishes from the measurement window. This reflects the AFM nature of both transitions. In addition, as the field increases, the broad maximum initially shifts towards lower temperatures, as expected for short-range magnetic order. For higher fields, the position of the maximum shifts in the reverse direction (to higher temperatures), its height increases, and it broadens drastically, reminiscent of a Schottky anomaly due to CEF splitting. Furthermore, the zero-field \(C_{\rm p}(T)\) of Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) shows an upturn below \(\sim 0.2\) K which moves towards higher temperatures with increasing field. This is typical behaviour of a nuclear Schottky anomaly arising from quadrupole splitting of the \({}^{59}\)Co (\(I=7/2\)) nuclear levels [48].
In a magnetic insulator, \(C_{\rm p}\) in zero-field has three major contributions: lattice heat capacity (\(C_{\rm ph}\)) due to phonon vibrations, which usually dominates at high temperatures, magnetic heat capacity (\(C_{\rm mag}\)) due to spins, which becomes predominant at low temperatures, and the nuclear Schottky contribution (\(C_{\rm n}\)), which is effective only at very low temperatures (below \(\sim 10^{-2}\) K). In the absence of a non-magnetic analogue, \(C_{\rm ph}(T)\) is evaluated by fitting \(C_{\rm p}(T)\) in the high temperature region by a linear combination of one Debye and four Einstein terms [49]
\[C_{\rm ph}(T)=f_{\rm D}C_{\rm D}(\theta_{\rm D},T)+\sum_{i=1}^{4}g_{i}C_{\rm E _{i}}(\theta_{\rm E_{i}},T). \tag{2}\]
The first term in Eq. (2) is the Debye model
\[C_{\rm D}(\theta_{\rm D},T)=9nR\left(\frac{T}{\theta_{\rm D}}\right)^{3}\int_{0}^{\theta_{\rm D}/T}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx, \tag{3}\]
where \(R\) is the universal gas constant, \(n\) is the number of atoms in the formula unit, and \(\theta_{\rm D}\) is the characteristic Debye temperature. The flat optical modes in the phonon spectra are accounted for by the second term in Eq. (2), called the Einstein term
\[C_{\rm E}(\theta_{\rm E},T)=3nR\left(\frac{\theta_{\rm E}}{T}\right)^{2}\frac {e^{\left(\frac{\theta_{\rm E}}{T}\right)}}{[e^{\left(\frac{\theta_{\rm E}}{T} \right)}-1]^{2}}, \tag{4}\]
where \(\theta_{\rm E}\) is the characteristic Einstein temperature. The coefficients \(f_{\rm D}\), \(g_{1}\), \(g_{2}\), \(g_{3}\), and \(g_{4}\) are the weight factors, which take into account the number of atoms per formula unit (\(n\)) and are chosen in such a way that \(f_{\rm D}+g_{1}+g_{2}+g_{3}+g_{4}=1\) and the Dulong-Petit value (\(\sim 3nR\)) is satisfied at high temperatures.
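A minimal computational sketch of the lattice model of Eqs. (2)-(4) is given below. It assumes SciPy is available and \(n=15\) atoms per formula unit of Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\); the weight and temperature parameters in the usage example are those of the fit quoted in the next paragraph, and the function names are illustrative only.

```python
import numpy as np
from scipy.integrate import quad

R_GAS = 8.314  # J mol^-1 K^-1

def debye_term(T, theta_D, n):
    """Debye heat capacity, Eq. (3)."""
    integrand = lambda x: x**4 * np.exp(x) / np.expm1(x)**2
    val, _ = quad(integrand, 0.0, theta_D / T)
    return 9.0 * n * R_GAS * (T / theta_D)**3 * val

def einstein_term(T, theta_E, n):
    """Einstein heat capacity, Eq. (4)."""
    x = theta_E / T
    return 3.0 * n * R_GAS * x**2 * np.exp(x) / np.expm1(x)**2

def lattice_heat_capacity(T, f_D, theta_D, weights, thetas_E, n=15):
    """Eq. (2): weighted sum of one Debye and four Einstein terms."""
    c = f_D * debye_term(T, theta_D, n)
    for g_i, theta_E in zip(weights, thetas_E):
        c += g_i * einstein_term(T, theta_E, n)
    return c

# Example with the Sr3CoNb2O9 fit parameters quoted below (result in J mol^-1 K^-1 at 100 K):
print(lattice_heat_capacity(100.0, 0.066, 178.0,
                            [0.066, 0.20, 0.13, 0.533],
                            [117.0, 206.0, 328.0, 520.0]))
```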
The zero-field \(C_{\rm p}(T)\) above 40 K is fitted by Eq. (2) and the obtained fitting parameters are (\(f_{\rm D}\simeq 0.066\), \(g_{1}\simeq 0.066\), \(g_{2}\simeq 0.20\), \(g_{3}\simeq 0.13\), \(g_{4}\simeq 0.533\), \(\theta_{\rm D}\simeq 178\) K, \(\theta_{\rm E1}\simeq 117\) K, \(\theta_{\rm E2}\simeq 206\) K, \(\theta_{\rm E3}\simeq 328\) K and \(\theta_{\rm E4}\simeq 520\) K) and (\(f_{\rm D}\simeq 0.066\), \(g_{1}\simeq 0.066\), \(g_{2}\simeq 0.20\), \(g_{3}\simeq 0.13\), \(g_{4}\simeq 0.533\), \(\theta_{\rm D}\simeq 163\) K, \(\theta_{\rm E1}\simeq 117\) K, \(\theta_{\rm E2}\simeq 195\) K, \(\theta_{\rm E3}\simeq 294\) K and \(\theta_{\rm E4}\simeq 490\) K) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. The fit was extrapolated down to the lowest measured temperature (see Fig. 5) to obtain \(C_{\rm ph}(T)\). The estimation of \(C_{\rm n}\) is discussed later. The magnetic and/or Schottky contributions, \(\Delta C\), obtained by subtracting \(C_{\rm ph}\) and \(C_{\rm n}\) from total \(C_{\rm p}\) are plotted as a function of \(T\) for different fields in Fig. 6(a) and (b) for the Nb and Ta compounds, respectively. The respective change in magnetic entropies evaluated by integrating \(\Delta C/T\) with respect to \(T\) [i.e. \(\Delta S=\int\limits_{0}^{T}\frac{\Delta C(T^{\prime})}{T^{\prime}}dT^{\prime}\)] for different fields are shown in Fig. 6(c) and (d). The saturation value of \(\Delta S\) approaches \(R\ln 2\) above \(\sim 30\) K irrespective of the applied field for both the compounds, further evidencing \(j_{\rm eff}=1/2\) ground state for Co\({}^{2+}\).
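The entropy release quoted here follows from a straightforward numerical integration of \(\Delta C/T\); a minimal sketch (assuming the data are available as NumPy arrays on a common temperature grid) is:

```python
import numpy as np

def magnetic_entropy(T, delta_C):
    """Cumulative Delta_S(T) = integral of (Delta_C / T') dT' by the trapezoidal rule.
    T in K, delta_C in J mol^-1 K^-1; integration starts at the lowest measured T."""
    integrand = delta_C / T
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(steps)))

# For a j_eff = 1/2 doublet the saturation value should approach R*ln2:
print(8.314 * np.log(2))  # ~5.76 J mol^-1 K^-1
```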
\(\Delta C(T)\) in different fields can be fitted by the following
Figure 5: Temperature dependent heat capacity [\(C_{\rm p}(T)\)] measured in different magnetic fields for (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The red dash-dotted line represents the lattice heat capacity using Eq. (2). The black solid lines are the power law and power law + nuclear Schottky fits to the low-\(T\) data of Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. The dashed line in (b) marks the power law with \(\alpha=2\). Lower insets: Variation of \(\alpha\) and \(\gamma\) with \(H\) in the left and right \(y\)-axes, respectively, obtained from the low temperature data fit. Upper inset in (b) is the plot of \(A\) vs \(H^{2}\) with solid line being a linear fit.
two-level Schottky function
\[C_{\rm S}=n_{\rm S}R\left(\frac{\Delta}{k_{B}T}\right)^{2}\frac{e^{-\frac{\Delta} {k_{\rm B}T}}}{\left(1+e^{-\frac{\Delta}{k_{\rm B}T}}\right)^{2}}, \tag{5}\]
where \(n_{\rm S}\) is the number of free spins per f.u. contributing to the Schottky behaviour and \(\Delta/k_{\rm B}\) is the Zeeman gap in magnetic fields. We fitted the data in different magnetic fields by Eq. (5) with \(n\) and \(\Delta/k_{\rm B}\) as fitting parameters [Fig. 6(a) and (b)]. The fit reproduces the experimental data reasonably well, especially in the broad maximum regime. However, the deviation in the high temperature regime can be ascribed to the unreliable subtraction of the phonon contribution and the presence of a magnetic contribution. The obtained \(\Delta/k_{\rm B}\) and \(n\) are plotted as a function of field in the inset of Fig. 6(b) for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). With increasing \(H\), \(n\) increases systematically, suggesting the excitation of more free spins to the higher energy levels. It is expected to reach the maximum value (\(\sim 1\)) in the fully polarized state, above \(H_{\rm S2}\simeq 10.5\) T. Similarly, \(\Delta/k_{\rm B}\) varies linearly with \(H\). Taking the value of \(\Delta/k_{\rm B}\simeq 21.65\) K at \(H=9\) T, which is close to the saturation field, we obtained \(g\simeq 3.58\). This value of \(g\) is indeed close to the one obtained from the magnetization analysis. This further elucidates that the Schottky effect in heat capacity arises from the Co\({}^{2+}\) Kramers' doublets with \(j_{\rm eff}=1/2\).
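The consistency check between the Schottky gap and the \(g\)-factor quoted above amounts to the following short calculation, assuming a purely Zeeman-split doublet with \(\Delta=g\mu_{\rm B}\mu_{0}H\):

```python
# Two-level Schottky model of Eq. (5) and the g-factor implied by the fitted Zeeman gap.
import numpy as np

R_GAS = 8.314            # J mol^-1 K^-1
MU_B_OVER_K_B = 0.67171  # K/T

def schottky(T, n_S, gap_over_kB):
    """Eq. (5): two-level Schottky heat capacity (J mol^-1 K^-1)."""
    x = gap_over_kB / T
    return n_S * R_GAS * x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2

def g_from_gap(gap_over_kB_K, mu0_H_T):
    """g-factor from Delta = g*mu_B*mu0*H."""
    return gap_over_kB_K / (MU_B_OVER_K_B * mu0_H_T)

print(g_from_gap(21.65, 9.0))  # ~3.58, consistent with the magnetization analysis
```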
## IV Discussion
For both the compounds, double magnetic transitions are observed at \(T_{\rm N1}\) and \(T_{\rm N2}\) (\(T_{\rm N2}<T_{\rm N1}\)) in zero-field. It is predicted that double transitions can occur in TLAFs
Figure 6: Difference in heat capacity \(\Delta C\) (\(=C_{\rm p}-C_{\rm ph}-C_{\rm n}\)) vs \(T\) in different magnetic fields for (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\). The solid lines are the fits using Eq. (5). Inset of (b) shows the plot of \(\Delta/k_{\rm B}\) and \(n\) as a function of \(H\) for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) in the left and right \(y\)-axes, respectively. The change in magnetic entropy \(\Delta S\) vs \(T\) in different magnetic fields for (c) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (d) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\).
when the magnetic anisotropy is of easy-axis type, while TLAFs with easy-plane anisotropy may show only a single transition [50; 51; 52; 53]. On lowering the temperature, the collinear _uud_ state appears before the \(120^{\circ}\) state in TLAFs with easy-axis anisotropy. When a magnetic field is applied, an additional high-field 2:1 canted phase is also stabilized [46]. The temperature range of the intermediate phase, \((T_{\rm N1}-T_{\rm N2})/T_{\rm N1}\), reflects the relative strength of the easy-axis anisotropy with respect to the isotropic intralayer coupling. Similar double transitions are reported for Ba\({}_{3}\)Co(Nb,Ta,Sb)\({}_{2}\)O\({}_{9}\) and some other compounds [20; 21; 22; 23]. Thus, the two consecutive magnetic transitions at low temperatures are attributed to the easy-axis anisotropy in both our compounds, and the \(120^{\circ}\) ordering below \(T_{\rm N2}\) is preceded by a collinear order between \(T_{\rm N1}\) and \(T_{\rm N2}\). The observed narrow temperature regime between \(T_{\rm N1}\) and \(T_{\rm N2}\) in both compounds suggests a considerably weak anisotropy compared to the isotropic exchange coupling in the Hamiltonian [18]. This implies that these compounds are closer to the Heisenberg model, in contrast to the strong anisotropy anticipated for Co-based compounds [54], and that the local environment of Co\({}^{2+}\) is close to cubic symmetry, as in KCoF\({}_{3}\) and Ba\({}_{3}\)CoSb\({}_{2}\)O\({}_{9}\)[18]. Recent theoretical studies claim that the Heisenberg model on a triangular lattice with no anisotropy can also show a double phase transition [46]. It is also suggested that the transition from the _uud_ to the \(120^{\circ}\) state (at \(T_{\rm N2}\)) can be described by the conventional Berezinskii-Kosterlitz-Thouless (BKT) universality class. The magnetic \((T-H)\) phase diagrams constructed using the transition temperatures from \(C_{\rm p}(T)\) and the saturation fields from the magnetization data are presented in Fig. 7(a) and (b) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. Both compounds exhibit two common phase regimes I and II, known to be the \(120^{\circ}\) and collinear spin states, respectively, as typically observed in the majority of TLAFs. On increasing field, both transitions merge into one and are expected to approach zero temperature at around \(\sim 7\) T for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and \(\sim 3\) T for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. These fields are very close to their respective saturation fields.
The most striking difference between these two compounds is that Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) shows the appearance of an exotic phase in an extended field range [regime III in Fig. 7(b)] while for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) this phase is either absent or exists over a narrow field range. In order to understand the peculiar behaviour in the magnetization data, we fitted \(C_{\rm p}(T)\) in the low temperature regime by a power law of the form \(C_{\rm p}=\gamma T^{\alpha}\) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\). To accommodate the low-\(T\) upturn, \(C_{\rm p}(T)\) data of Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) were fitted with a sum of \(A/T^{2}\) and \(\gamma T^{\alpha}\) from the lowest temperature to 1 K [55]. Here, \(A/T^{2}\) accounts for the high temperature part of the nuclear Schottky anomaly, which is nothing but the high temperature approximation of Eq. (5) [56]. The constant \(A\) is related to the nuclear level splitting \(\Delta\) (both quadrupolar and Zeeman) with \(A\propto\Delta^{2}\) for \(T>>\Delta\). \(A\) is found to increase quadratically with \(H\) [upper inset of Fig. 5(b)], which is consistent with the theoretical prediction [48].
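The low-temperature fitting procedure described above can be summarized by the sketch below, assuming SciPy and measured arrays `T` and `Cp`; the function names are illustrative, not those of the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, gamma, alpha):
    """Low-T model for Sr3CoNb2O9: C_p = gamma * T**alpha."""
    return gamma * T**alpha

def power_law_plus_nuclear(T, A, gamma, alpha):
    """Low-T model for Sr3CoTa2O9: A/T**2 (nuclear Schottky tail) + gamma * T**alpha."""
    return A / T**2 + gamma * T**alpha

# Illustrative usage on measured arrays T (K) and Cp (J mol^-1 K^-1), restricted to T < 1 K:
# popt, pcov = curve_fit(power_law_plus_nuclear, T, Cp, p0=[1e-3, 0.3, 2.0])
```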
The obtained values of \(\gamma\) and \(\alpha\) are plotted against \(H\) in the lower insets of the respective figures. For Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\), the value of \(\alpha\) is found to be close to 3 for all the measured fields [inset of Fig. 5(a)], as expected in a 3D AFM ordered state. Only at \(H=9\) T, which is well above \(H_{\rm S}\), were the fitted parameters unreliable, possibly because of the large scatter in the low temperature data and the small heat capacity values. On the other hand, for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), the value of \(\alpha\) is found to be close to 3 (with \(\gamma\simeq 1042\) mJ/mol.K\({}^{3.75}\)) for \(H=3\) T, as expected [lower inset of Fig. 5(b)]. For \(H>4\) T, it is reduced significantly to \(\sim 2\) and remains almost field independent. Interestingly, these fields fall within the exotic regime (between \(\mu_{0}H_{\rm S1}\simeq 3.6\) T and \(\mu_{0}H_{\rm S2}\simeq 10.5\) T) found from the magnetization measurements. In these fields, the value of \(\gamma\) varies from \(\sim 300\) to 90 mJ/mol.K\({}^{3}\), which is a significant reduction from the 3 T value but is still considerably large.
Typically, one expects a \(T^{3}\) behaviour for \(C_{\rm p}(T)\) due to 3D spin-wave dispersion in the AFM ordered state [57]. On the other hand, a more conventional characteristic
Figure 7: \(T\) vs \(H\) phase diagram for (a) Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and (b) Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) constructed using the heat capacity and magnetization data. The exotic field induced regime (III) between \(H_{\rm S1}\) and \(H_{\rm S2}\) for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) is highlighted in a different color.
feature of a QSL is the linear temperature dependence of the low temperature \(C_{\rm p}(T)\), i.e. \(C_{\rm p}=\gamma T\), where \(\gamma\) is related to the spinon density of states at the spinon Fermi surface. Likewise, a quadratic temperature dependence (\(C_{\rm p}\propto T^{2}\)) is predicted theoretically for gapless Dirac QSLs at low temperatures [58; 59]. Indeed, several QSL candidates with triangular, kagome, and hyperkagome geometries are reported to evince such behaviour at low temperatures [55; 60; 61; 62; 63]. Thus, the suppression of magnetic LRO and the power law behaviour of \(C_{\rm p}\) with a reduced value of \(\alpha\sim 2\) in the critical field region unambiguously point towards an exotic phase regime, possibly a field induced QSL in Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\)[64; 65]. Moreover, the obtained large value of \(\gamma\) can also be considered a generic feature of low-energy gapless excitations, as observed for the gapless QSL candidates Ba\({}_{3}\)CuSb\({}_{2}\)O\({}_{9}\) and Sr\({}_{2}\)Cu(Te\({}_{0.5}\)W\({}_{0.5}\))O\({}_{6}\)[66].
This type of field induced behaviour is observed in some honeycomb lattices with strong Kitaev interaction. Kitaev physics demands the presence of a bond-dependent anisotropy, which can be realized in honeycomb lattices as well as spin dimers with strong spin-orbit coupling where the metal octahedra are either edge shared or parallelly edge shared [67]. Further, it is predicted that spin-orbit coupled honeycomb systems with \(j_{\rm eff}=1/2\) are an ideal platform to host a Kitaev spin liquid [68]. Indeed, the \(j_{\rm eff}=1/2\) honeycomb lattices \(\alpha\)-RuCl\({}_{3}\)[69], Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[33], BaCo\({}_{2}\)(AsO\({}_{4}\))\({}_{2}\)[34], and BaCo\({}_{2}\)(P\({}_{1-\rm x}\)V\({}_{x}\))O\({}_{8}\)[70] with strong anisotropy show the onset of conventional magnetic LRO at low temperatures. Under magnetic field, the LRO is suppressed (i.e. the anisotropic bond dependent coupling dominates over the isotropic coupling) and the system eventually crosses over to a low temperature non-magnetic or disordered state, which is understood to be the field induced Kitaev spin-liquid [67]. A triangular lattice can also be viewed as a superposition of honeycomb layers in a certain stacking sequence. This analogy can be extended to the TLAFs Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\). A careful inspection of the crystal structure reveals that the CoO\({}_{6}\) octahedra in both compounds are parallel edge shared, when the non-magnetic (Nb,Ta)O\({}_{6}\) octahedra are removed. In Fig. 1(b), we have schematized the ways in which parallel edge sharing could be feasible in Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\). One can clearly see that the identical Co octahedra are perfectly parallel edge shared, which might facilitate a bond-dependent anisotropy and hence a Kitaev interaction [10; 41]. Meanwhile, due to the low symmetry crystal structure, there is an inherent distortion in the metal octahedra, due to which the edge sharing between two dissimilar octahedra [Co(1)O\({}_{6}\) and Co(2)O\({}_{6}\)] deviates slightly from perfect parallel edge sharing. It is to be noted that recent theoretical work on TLAFs has also predicted a transition from an ordered magnetic state in the isotropic case into a QSL state in the easy-axis regime, with increasing easy-axis anisotropy [71]. Nevertheless, to conclusively establish this novel phenomenon in these TLAFs, inelastic neutron scattering and \(\mu\)SR experiments at dilution temperatures and under magnetic fields on good quality single crystals would be imperative.
Experimentally, the extent of frustration in any magnetically frustrated material is quantified by the frustration ratio \(f=|\theta_{\rm CW}|/T_{\rm N}\)[72]. From the values of \(\theta_{\rm CW}\) and \(T_{\rm N1}\), the frustration parameter is calculated to be \(f\sim 5\) and \(\sim 9\) for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. Despite the same crystal structure, the latter shows stronger frustration and a fluctuating regime over an extended field range, while for the former the extent of frustration is less and hence the fluctuating regime is narrowed. The possible origin of such a drastic difference in magnetic properties can be attributed to the hierarchy of the exchange couplings. It is to be noted that the Co\({}^{2+}\) - Co\({}^{2+}\) superexchange involves Co\({}^{2+}\)(\(3d\)) - O\({}^{2-}\)(\(2p\)) - Nb\({}^{5+}\)(\(4p\)) - O\({}^{2-}\)(\(2p\)) - Co\({}^{2+}\)(\(3d\)) and Co\({}^{2+}\)(\(3d\)) - O\({}^{2-}\)(\(2p\)) - Ta\({}^{5+}\)(\(4f\)) - O\({}^{2-}\)(\(2p\)) - Co\({}^{2+}\)(\(3d\)) pathways for Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) and Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), respectively. Hence, the difference in coupling strength is likely due to the participation of different orbitals of Nb\({}^{5+}\)(\(4p\)) and Ta\({}^{5+}\)(\(4f\)) in the interaction mechanism [73; 21].
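A short numerical cross-check of these frustration ratios, assuming the mean-field relation \(|\theta_{\rm CW}|=zJS(S+1)/3\) with \(z=6\), \(S=1/2\), and the \(T_{\rm N1}\) values from the heat capacity data, is sketched below.

```python
# Frustration ratio f = |theta_CW| / T_N1, with |theta_CW| estimated from J/k_B
# via the mean-field relation |theta_CW| = z*J*S*(S+1)/3 (z = 6, S = 1/2).
def frustration_ratio(J_over_kB_K, T_N1_K, z=6, S=0.5):
    theta_cw_abs = z * J_over_kB_K * S * (S + 1.0) / 3.0
    return theta_cw_abs / T_N1_K

print(frustration_ratio(5.0, 1.47))   # Sr3CoNb2O9: ~5
print(frustration_ratio(5.3, 0.88))   # Sr3CoTa2O9: ~9
```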
## V Conclusion
We report a detailed study of the thermodynamic properties of two new quantum magnets Sr\({}_{3}\)Co(Nb,Ta)\({}_{2}\)O\({}_{9}\) possessing Co\({}^{2+}\) triangular layers. Their low temperature properties are unequivocally described by the \(j_{\rm eff}=1/2\) Kramers doublet of Co\({}^{2+}\) ions interacting antiferromagnetically. Similar to many other TLAFs with easy-axis anisotropy, both compounds undergo two sequential magnetic transitions at low temperatures. Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) exhibits a weak plateau-type feature at the 1/3 magnetization, while this feature is smeared for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\), likely due to the effect of magnetic anisotropy and/or the random orientation of the crystallites. Despite the same crystal structure, Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) with \(f\simeq 9\) is found to be more frustrated than Sr\({}_{3}\)CoNb\({}_{2}\)O\({}_{9}\) with \(f\simeq 5\), which can be attributed to the involvement of different orbitals of Ta\({}^{5+}\) and Nb\({}^{5+}\) in the interaction paths. The existence of an exotic field induced quantum phase is detected for Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) over a wide field range below the fully polarized state, while this behaviour is either absent or exists over a narrow field range for the Nb analogue. Because of this non-trivial character, Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) is a model system for further experimental as well as theoretical investigations at low temperatures and under magnetic fields.
_Note added_: During the course of our work, we became aware of an independent study on Sr\({}_{3}\)CoTa\({}_{2}\)O\({}_{9}\) [crystal structure: trigonal (\(P\bar{3}m1\))] [74]. They have also observed double transitions at low temperatures.
## Acknowledgements
We would like to acknowledge SERB, India, for financial support under sanction Grant No. CRG/2022/000997. We also acknowledge the support of HLD-HZDR, a member of the European Magnetic Field Laboratory (EMFL). S.J.S. is supported by the Prime Minister's Research Fellowship (PMRF) scheme, Government of India.
|
2305.10202 | A characterization result of cofinite local cohomology modules | Let $R$ be a commutative Noetherian ring, $\fa$ an ideal of $R$, $M$ an
arbitrary $R$-module and $N$ a finite $R$-module. We prove that \cite[Theorem
2.1]{Mel} and \cite[Proposition 3.3 (i)$\Leftrightarrow$(ii)]{B1} are true for
any Serre subcategory of $R$-modules. We also prove a characterization theorem
for $\lc_{\fa}^{i}(M)$ and $\lc_{\fa}^{i}(N,M)$ to be $\fa$-cofinite for all
$i$, whenever one of the following cases holds: (a) $\ara (\fa)\leq 1$, (b)
$\dim R/\fa \leq 1$ or (c) $\dim R\leq 2$. In the end we study Artinianness and
Artinian $\fa$-cofiniteness of local cohomology modules. | Moharram Aghapournahr, Leif Melkersson | 2023-05-17T13:25:13Z | http://arxiv.org/abs/2305.10202v1 | # A characterization result of cofinite local cohomology modules
###### Abstract.
Let \(R\) be a commutative Noetherian ring, \(\mathfrak{a}\) an ideal of \(R\), \(M\) an arbitrary \(R\)-module and \(N\) a finite \(R\)-module. We prove that [23, Theorem 2.1] and [2, Proposition 3.3 (i)\(\Leftrightarrow\)(ii)] are true for any Serre subcategory of \(R\)-modules. We also prove a characterization theorem for \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) and \(\mathrm{H}^{i}_{\mathfrak{a}}(N,M)\) to be \(\mathfrak{a}\)-cofinite for all \(i\), whenever one of the following cases holds: (a) \(\mathrm{ara}(\mathfrak{a})\leq 1\), (b) \(\dim R/\mathfrak{a}\leq 1\) or (c) \(\dim R\leq 2\). In the end we study Artinianness and Artinian \(\mathfrak{a}\)-cofiniteness of local cohomology modules.
Key words and phrases:Local cohomology, \(\mathrm{FD}_{<n}\) modules, cofinite modules, weakly Laskerian modules 2010 Mathematics Subject Classification: 13D45, 13E05, 14B15
## 1. Introduction
Throughout this paper \(R\) is a commutative Noetherian ring with non-zero identity and \(\mathfrak{a}\) an ideal of \(R\). For an \(R\)-module \(M\), the \(i\)-th local cohomology module of \(M\) with respect to the ideal \(\mathfrak{a}\) is defined as
\[\mathrm{H}^{i}_{\mathfrak{a}}(M)\cong\varinjlim_{n}\mathrm{Ext}^{i}_{R}(R/ \mathfrak{a}^{n},M).\]
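For orientation, in the simplest (standard textbook) example of a principal ideal \(\mathfrak{a}=(x)\) generated by a nonzerodivisor \(x\), this limit is computed by the Čech complex \(0\to R\to R_{x}\to 0\), so that

\[\mathrm{H}^{0}_{(x)}(R)=0,\qquad\mathrm{H}^{1}_{(x)}(R)\cong R_{x}/R,\qquad\mathrm{H}^{i}_{(x)}(R)=0\ \text{ for }i\geq 2,\]

and \(R_{x}/R\) is in general not a finite \(R\)-module (e.g. for \(R=k[x]\)).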
Hartshorne in [13] defined a module \(M\) to be \(\mathfrak{a}\)-cofinite if \(\mathrm{Supp}_{R}(M)\subseteq\mathrm{V}(\mathfrak{a})\) and \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is finite for all \(i\geq 0\). He also asked the following question:
**Hartshorne's question:**_Let \(M\) be a finite \(R\)-module and \(\mathfrak{a}\) be an ideal of \(R\). When are \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\)\(\mathfrak{a}\)-cofinite for all \(i\geq 0\)?_
The answer is negative in general, see [13] for a counterexample, but it is true in the following cases:
**Theorem 1.1**.: _Let \(M\) be a finite \(R\)-module and suppose one of the following cases holds:_
* \(\mathrm{cd}(\mathfrak{a},R)\leq 1\) _or_ \(\dim R\leq 2\)_. See_ _[_23_]__._
* \(\dim R/\mathfrak{a}\leq 1\)_. See_ _[_5_]__._
_Then \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite for all \(i\geq 0\)._
Note that \(\mathrm{cd}(\mathfrak{a},R)\), the cohomological dimension of \(\mathfrak{a}\) in \(R\), is the smallest integer \(n\) such that the local cohomology modules \(\mathrm{H}^{i}_{\mathfrak{a}}(S)\) are zero for all \(R\)-modules \(S\) and for all \(i>n\).
The generalized local cohomology module
\[\operatorname{H}^{i}_{\mathfrak{a}}(N,M)\cong\varinjlim_{n}\operatorname{Ext}^{i}_{R}(N/\mathfrak{a}^{n}N,M)\]
for all \(R\)-modules \(M\) and \(N\) was introduced by Herzog in [16]. Clearly it is a generalization of the ordinary local cohomology module. In [25] Yassemi asked whether Hartshorne's question holds for generalized local cohomology. It is answered in the following cases:
**Theorem 1.2**.: _Let \(M,N\) be two finite \(R\)-modules and suppose one of the following cases holds:_
1. \(\operatorname{cd}(\mathfrak{a},R)\leq 1\) _or_ \(\dim R\leq 2\)_. See_ _[_20_]__._
2. \(\dim R/\mathfrak{a}\leq 1\)_. See_ _[_10_]__._
_Then \(\operatorname{H}^{i}_{\mathfrak{a}}(N,M)\) is \(\mathfrak{a}\)-cofinite for all \(i\geq 0\)._
Recall that an arbitrary \(R\)-module \(X\) is said to be weakly Laskerian if the set of associated prime ideals of any quotient module of \(X\) is finite (see [11]). Recently, in [17], the author introduced the class of extension modules of finitely generated modules by the class of all modules of finite support and named it FSF modules. Bahmanpour in [3, Theorem 3.3] proved that over a Noetherian ring \(R\) an \(R\)-module \(X\) is weakly Laskerian if and only if it is FSF.
Recall too that an arbitrary \(R\)-module \(X\) is said to be an \(\operatorname{FD}_{<n}\)\(R\)-module if there exists a finite submodule \(X^{\prime}\) of \(X\) such that \(\dim_{R}(X/X^{\prime})<n\) (see [1]). Recall also that a class of \(R\)-modules is said to be a _Serre subcategory of the category of \(R\)-modules_, when it is closed under taking submodules, quotients and extensions.
The second author of the present paper proved in [23, Theorem 2.1] that for any ideal \(\mathfrak{a}\) of \(R\) and any \(R\)-module \(M\), the \(R\)-module \(\operatorname{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is finite for all \(i\) if and only if the \(R\)-module \(\operatorname{Tor}^{R}_{i}(R/\mathfrak{a},M)\) is finite for all \(i\), if and only if the Koszul cohomology \(\operatorname{H}^{i}(x_{1},\ldots,x_{s};M)\) is finite for all \(i\), where \(\mathfrak{a}=(x_{1},\ldots,x_{s})\). Our main result in Section 2 is in this direction. More precisely, we prove the following result, which shows that [23, Theorem 2.1] and [2, Proposition 3.3 (i)\(\Leftrightarrow\)(ii)] are true for any Serre subcategory of modules:
**Theorem 1.3**.: _Let \(\mathcal{S}\) be a Serre subcategory of the category of modules over the Noetherian ring \(R\) and let \(\mathfrak{a}=(x_{1},\ldots,x_{s})\) be an ideal of \(R\). Given an \(R\)-module \(M\) the following conditions are equivalent:_
1. \(\operatorname{Ext}^{i}_{R}(R/\mathfrak{a},M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
2. \(\operatorname{Tor}^{R}_{i}(R/\mathfrak{a},M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
3. \(\operatorname{H}^{i}(x_{1},\ldots,x_{s};M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
4. \(\operatorname{H}_{i}(x_{1},\ldots,x_{s};M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
In Section 3, we study the relationship between cofiniteness of local cohomology and generalized local cohomology modules, and we prove that under the following assumptions the cofiniteness of these two kinds of local cohomology modules depends on each other. More generally, we prove a characterization result (see Corollary 3.8), in the cases (a) \(\operatorname{ara}(\mathfrak{a})\leq 1\), (b) \(\dim R/\mathfrak{a}\leq 1\) or (c) \(\dim R\leq 2\), for \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) and \(\operatorname{H}^{i}_{\mathfrak{a}}(N,M)\) to be \(\mathfrak{a}\)-cofinite for all \(i\).
One of the important questions in the study of local cohomology modules is to determine when the local cohomology is Artinian (see [14]). In this direction, in Section 4, we study the Artinianness and Artinian \(\mathfrak{a}\)-cofiniteness of local cohomology modules from an upper bound when \(\dim R/\mathfrak{a}=1\). More precisely, we show that:
**Theorem 1.4**.: _Let \(\mathfrak{a}\) be an ideal of a Noetherian ring \(R\), with \(\dim R/\mathfrak{a}=1\). For an \(R\)-module \(M\) and an integer \(r\) the following conditions are equivalent:_
1. \(\operatorname{Supp}_{R}(\operatorname{H}^{i}_{\mathfrak{a}}(M))\subseteq \operatorname{Max}(R)\) _for each_ \(i>r\)_._
2. \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) _is an Artinian_ \(R\)_-module for each_ \(i>r\)_._
3. \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) _is an Artinian_ \(\mathfrak{a}\)_-cofinite_ \(R\)_-module for each_ \(i>r\)_._
Throughout this paper, \(R\) will always be a commutative Noetherian ring with non-zero identity and \(\mathfrak{a}\) will be an ideal of \(R\). We denote \(\{\mathfrak{p}\in\operatorname{Spec}R:\,\mathfrak{p}\supseteq\mathfrak{a}\}\) by \(\operatorname{V}(\mathfrak{a})\). For any unexplained notation and terminology we refer the reader to [6], [7] and [21].
## 2. Equivalence of \(\operatorname{Ext}\) and \(\operatorname{Tor}\) in Serre subcategories
We begin this section with a useful remark that we need in the proof of main result of this section.
**Remark 2.1**.: _Observe that if \(\mathcal{S}\) is a Serre subcategory of the category of \(R\)-modules, then for any \(R\)-module \(M\) such that the submodule \(0:_{M}\mathfrak{a}\) belongs to \(\mathcal{S}\), the submodule \(0:_{M}\mathfrak{a}^{n}\) belongs to \(\mathcal{S}\) for all \(n\geq 1\). Similarly, if \(M/\mathfrak{a}M\) belongs to \(\mathcal{S}\), then \(M/\mathfrak{a}^{n}M\) belongs to \(\mathcal{S}\) for all \(n\geq 1\)._

**Theorem 2.2**.: _Let \(\mathcal{S}\) be a Serre subcategory of the category of modules over the Noetherian ring \(R\) and let \(\mathfrak{a}=(x_{1},\ldots,x_{s})\) be an ideal of \(R\). Given an \(R\)-module \(M\) the following conditions are equivalent:_

* \(\operatorname{Ext}^{i}_{R}(R/\mathfrak{a},M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
* \(\operatorname{Tor}^{R}_{i}(R/\mathfrak{a},M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
* \(\mathrm{H}^{i}(x_{1}\dots,x_{s};M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
* \(\mathrm{H}_{i}(x_{1},\dots,x_{s};M)\) _belongs to_ \(\mathcal{S}\) _for all_ \(i\)_._
The equivalence between (iii) and (iv) follows from the symmetry \(\mathrm{H}^{i}(x_{1},\dots,x_{s};M)\cong\mathrm{H}_{s-i}(x_{1},\dots,x_{s};M)\) for \(0\leq i\leq s\). The proof of the other equivalences is provided by a series of lemmas.
**Lemma 2.3**.: _Let \(\mathfrak{a}\) be an ideal of \(R\) and let \(\mathcal{C}\) be a class of \(R\)-modules with the following three properties:_
1. \(\mathcal{S}\subset\mathcal{C}\)_._
2. \(0:_{M}\mathfrak{a}\in\mathcal{S}\) _for each_ \(M\in\mathcal{C}\)_._
3. _If_ \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\) _is a short exact sequence with_ \(M^{\prime}\) _and_ \(M\) _in_ \(\mathcal{C}\)_, then also_ \(M^{\prime\prime}\) _is in_ \(\mathcal{C}\)_._
_Let \(X^{\bullet}:0\to X^{0}\to X^{1}\to X^{2}\to\dots\) be a cochain complex such that for every \(i\), \(X^{i}\in\mathcal{C}\) and \(\mathfrak{a}\subset\sqrt{\operatorname{Ann}\mathrm{H}^{i}(X^{\bullet})}\). Then \(\mathrm{H}^{i}(X^{\bullet})\in\mathcal{S}\) for all \(i\)._
In order to prove Lemma 2.3 we prove two lemmas. In those \(\mathcal{C}\) satisfies the conditions in Lemma 2.3. These lemmas are also of some independent interest.
**Lemma 2.4**.: _Let \(f:M\to N\) be \(R\)-linear, where both \(M\) and \(N\) are in \(\mathcal{C}\). Assume that \(\mathfrak{a}^{l}\operatorname{Ker}f=0\) for some \(l\). Then \(\operatorname{Ker}f\) belongs to \(\mathcal{S}\)\((\)hence also to \(\mathcal{C})\) and \(\operatorname{Coker}f\) belongs to \(\mathcal{C}\)._
Proof.: We have \(\operatorname{Ker}f\subset 0:_{M}\mathfrak{a}^{l}\), which belongs to \(\mathcal{S}\) by Remark 2.1 and the second condition in Lemma 2.3. Using the third condition in Lemma 2.3 applied to the exact sequences \(0\to\operatorname{Ker}f\to M\to\operatorname{Im}f\to 0\) and \(0\to\operatorname{Im}f\to N\to\operatorname{Coker}f\to 0\) we first get that \(\operatorname{Im}f\in\mathcal{C}\) (note that we assumed \(\mathcal{S}\subset\mathcal{C}\)) and then that \(\operatorname{Coker}f\in\mathcal{C}\).
**Lemma 2.5**.: _Let \(0\to X^{0}\overset{d^{0}}{\to}X^{1}\to\dots\to X^{n-1}\overset{d^{n-1}}{\to }X^{n}\overset{d^{n}}{\to}X^{n+1}\) be a cocomplex, where each \(X^{i}\) belongs to \(\mathcal{C}\). Suppose that there is some \(l\) such that \(\mathfrak{a}^{l}\,\mathrm{H}^{i}=0\) for \(i=0,\dots,n\). Then the cohomology modules \(\mathrm{H}^{i}\) are in \(\mathcal{S}\) for \(i=0,\dots,n\) and \(\operatorname{Coker}d^{n}\in\mathcal{C}\)._
Proof.: We use induction on \(n\). The case \(n=0\) follows from Lemma 2.4. Let \(n>0\) and assume that we have proved the result for \(n-1\). The map \(d^{n}\) induces a map \(f:\operatorname{Coker}d^{n-1}\to X^{n+1}\). Here both modules belong to \(\mathcal{C}\). The first one by the induction hypothesis and the second one by the hypothesis in the lemma. Since \(\operatorname{Ker}f=\mathrm{H}^{n}\) we can apply Lemma 2.4 to prove the case \(n\).
Lemma 2.3 now follows from Lemma 2.5.
**Lemma 2.6**.: _Let \(\mathcal{C}\) be a class of \(R\)-modules satisfying:_
1. \(\mathcal{S}\subset\mathcal{C}\)_._
2. \(M/\mathfrak{a}M\in\mathcal{S}\) _for all_ \(M\in\mathcal{C}\)_._
3. _If_ \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\) _is a short exact with_ \(M\) _and_ \(M^{\prime\prime}\) _in_ \(\mathcal{C}\)_, then also_ \(M^{\prime}\in\mathcal{C}\)_._
_Let \(X_{\bullet}:\cdots\to X_{2}\xrightarrow{d_{1}}X_{1}\xrightarrow{d_{0}}X_{0}\to 0\) be a chain complex such that for all \(i\), \(X_{i}\in\mathcal{C}\) and \(\mathfrak{a}\subset\sqrt{\operatorname{Ann}\operatorname{H}_{i}(X_{\bullet})}\). Then the homology modules \(\operatorname{H}_{i}(X_{\bullet})\) belong to \(\mathcal{S}\) for all \(i\)._
In the next two lemmas \(\mathcal{C}\) is as in Lemma 2.6.
**Lemma 2.7**.: _Let \(f:M\to N\) be an \(R\)-linear map, where \(M\) and \(N\) both are in \(\mathcal{C}\). Assume that \(\mathfrak{a}^{l}\operatorname{Coker}f=0\) for some \(l\). Then \(\operatorname{Ker}f\) and \(\operatorname{Coker}f\) belong to \(\mathcal{C}\)._
Proof.: Since \(\operatorname{Coker}f\) is a homomorphic image of \(N/\mathfrak{a}^{l}N\), it follows from (2) in Lemma 2.6 and Remark 2.1 that \(\operatorname{Coker}f\in\mathcal{S}\), and hence by (1) in Lemma 2.6 that \(\operatorname{Coker}f\in\mathcal{C}\). By property (3) in Lemma 2.6, using the exact sequences \(0\to\operatorname{Im}f\to N\to\operatorname{Coker}f\to 0\) and \(0\to\operatorname{Ker}f\to M\to\operatorname{Im}f\to 0\), we first get that \(\operatorname{Im}f\) belongs to \(\mathcal{C}\) and then that \(\operatorname{Ker}f\) belongs to \(\mathcal{C}\).
**Lemma 2.8**.: _Let \(X_{n+1}\xrightarrow{d_{n}}X_{n}\to\cdots\to X_{1}\xrightarrow{d_{0}}X_{0}\to 0\) be a complex. Assume that \(X_{i}\in\mathcal{C}\) for all \(i=0,\dots,n\) and that there is \(l\), such that \(\mathfrak{a}^{l}\) annihilates the homology modules \(\operatorname{H}_{i}\) for \(i=0,\dots,n\). Then \(\operatorname{Ker}d_{n}\) belongs to \(\mathcal{S}\)\((\)hence also to \(\mathcal{C})\) and the homology modules \(\operatorname{H}_{i}\) belong to \(\mathcal{S}\) for \(i=0,\dots,n\)._
Proof.: The proof is by induction on \(n\). The case \(n=0\) is just Lemma 2.7. Assume that \(n>0\) and that we know that \(\operatorname{Ker}d_{n-1}\) belongs to \(\mathcal{C}\) and that the homology modules \(\operatorname{H}_{i}\) belong to \(\mathcal{S}\) for \(i=0,\dots,n-1\). Consider the map \(f:X_{n+1}\to\operatorname{Ker}d_{n-1}\) induced by \(d_{n}\). Since \(\operatorname{Ker}f=\operatorname{Ker}d_{n}\) and \(\operatorname{Coker}f=\operatorname{H}_{n}\), we get by Lemma 2.7 that \(\operatorname{Ker}d_{n}\in\mathcal{C}\) and that \(\operatorname{H}_{n}\in\mathcal{S}\).
We now prove the equivalences in Theorem 2.2.
Proof of Theorem 2.2.: We have already noticed that the conditions (iii) and (iv) are equivalent. Let \(\mathcal{C}_{i}\), \(\mathcal{C}_{ii}\), \(\mathcal{C}_{iii}\) and \(\mathcal{C}_{iv}\) be the classes of modules which satisfy the respective condition in Theorem 2.2. We have already noticed that \(\mathcal{C}_{iii}\) and \(\mathcal{C}_{iv}\) are equal. Let \(F_{\bullet}\to R/\mathfrak{a}\) be a resolution of \(R/\mathfrak{a}\) consisting of finite free modules and let \(\operatorname{K}_{\bullet}=\operatorname{K}_{\bullet}(x_{1},\dots,x_{s})\) be the Koszul complex of the sequence \(x_{1},\dots,x_{s}\). The classes \(\mathcal{C}_{i}\) and \(\mathcal{C}_{iii}\) both satisfy the three conditions in 2.3. If \(M\) is in \(\mathcal{C}_{i}\) then by 2.3 the cohomology modules of the cochain complex \(X^{\bullet}=\operatorname{Hom}_{R}(\operatorname{K}_{\bullet},M)\) are in \(\mathcal{S}\) and therefore \(M\) is in \(\mathcal{C}_{iii}\). Conversely let \(M\) be in \(\mathcal{C}_{iii}\). Consider the cochain complex \(X^{\bullet}=\operatorname{Hom}_{R}(F_{\bullet},M)\). We get using Lemma 2.3, that \(M\) belongs to \(\mathcal{C}_{i}\). To prove that the classes \(\mathcal{C}_{ii}\) and \(\mathcal{C}_{iv}\) are the same we consider the complexes \(\operatorname{K}_{\bullet}\otimes M\) and \(F_{\bullet}\otimes M\) and use Lemma 2.6.
**Theorem 2.9**.: _For each \(R\)-module \(M\) which satisfies the equivalent conditions of Theorem 2.2 and each finite \(R\)-module \(N\) with support in \(\operatorname{V}(\mathfrak{a})\), the modules \(\operatorname{Ext}^{i}_{R}(N,M)\) and \(\operatorname{Tor}^{R}_{i}(N,M)\) belong to \(\mathcal{S}\)._
Proof.: Let \(\mathcal{C}\) be the class of \(R\)-modules which satisfy the equivalent conditions in Theorem 2.2. Apply Lemma 2.3 and Lemma 2.6 to the complexes \(X^{\bullet}=\operatorname{Hom}_{R}(F_{\bullet},M)\) and \(X_{\bullet}=F_{\bullet}\otimes M\).
## 3. Cofiniteness of local cohomology and generalized local cohomology
The following two result generalize and decrease the assumptions on \(M\) in [1, Theorem 3.4 (i) and Corollary 3.5 (i)].
**Theorem 3.1**.: _Let \(R\) be a Noetherian ring and \(\mathfrak{a}\) an ideal of \(R\). Let \(t\in\mathbb{N}_{0}\) be an integer and \(M\) an \(R\)-module such that \(\operatorname{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is finite for all \(i\leq t+1\). Assume that the \(R\)-modules \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) are \(\operatorname{FD}_{<2}\)\(R\)-modules for all \(i<t\). Then the \(R\)-modules \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) are \(\mathfrak{a}\)-cofinite for all \(i<t\)._
Proof.: We use induction on \(t\). The exact sequence
\[0\longrightarrow\Gamma_{\mathfrak{a}}(M)\longrightarrow M\longrightarrow M/ \,\Gamma_{\mathfrak{a}}(M)\longrightarrow 0,\quad(*)\]
induces the following exact sequence:
\[0\longrightarrow\operatorname{Hom}_{R}(R/\mathfrak{a},\Gamma_{ \mathfrak{a}}(M))\longrightarrow\operatorname{Hom}_{R}(R/\mathfrak{a},M) \longrightarrow\operatorname{Hom}_{R}(R/\mathfrak{a},M/\,\Gamma_{\mathfrak{a} }(M))\] \[\longrightarrow\operatorname{Ext}^{1}_{R}(R/\mathfrak{a},\Gamma_{ \mathfrak{a}}(M))\longrightarrow\operatorname{Ext}^{1}_{R}(R/\mathfrak{a},M).\]
Since \(\operatorname{Hom}_{R}(R/\mathfrak{a},M/\,\Gamma_{\mathfrak{a}}(M))=0\), both \(\operatorname{Hom}_{R}(R/\mathfrak{a},\Gamma_{\mathfrak{a}}(M))\) and \(\operatorname{Ext}^{1}_{R}(R/\mathfrak{a},\Gamma_{\mathfrak{a}}(M))\) are finite. Assume inductively that \(t>0\) and that we have established the result for non-negative integers smaller than \(t\). Using the inductive assumption, the rest of the proof is the same as that of [1, Theorem 3.4 (i)].
**Corollary 3.2**.: _Let \(R\) be a Noetherian ring and \(\mathfrak{a}\) an ideal of \(R\). Let \(M\) be an \(R\)-module such that the \(R\)-module \(\operatorname{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is finite for all \(i\) and the \(R\)-module \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) is an \(\operatorname{FD}_{<2}\)\((\)or weakly Laskerian\()\)\(R\)-module for all \(i\). Then the \(R\)-modules \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) are \(\mathfrak{a}\)-cofinite for all \(i\)._
Proof.: Follows by Theorem 3.1.
The following corollary is a characterization of cofiniteness of local cohomology modules which are \(\operatorname{FD}_{<2}\)\(R\)-module.
**Corollary 3.3**.: _Let \(R\) be a Noetherian ring and \(\mathfrak{a}\) an ideal of \(R\). Let \(M\) be an \(R\)-module such that the \(R\)-modules \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) are \(\mathrm{FD}_{<2}\)\((\)or weakly Laskerian\()\)\(R\)-modules for all \(i\). Then, the following conditions are equivalent:_
(i) _The \(R\)-module \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is finite for all \(i\)._
(ii) _The \(R\)-module \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite for all \(i\)._
Proof.: (i)\(\Rightarrow\)(ii) Follows by Corollary 3.2.
(ii)\(\Rightarrow\)(i) It follows by [23, Proposition 3.9].
**Theorem 3.4**.: _Let \(M\) be an \(R\)-module and suppose one of the following cases holds:_
1. \(\mathrm{ara}(\mathfrak{a})\leq 1\)_;_
2. \(\dim R/\mathfrak{a}\leq 1\)_;_
3. \(\dim R\leq 2\)_._
_Then, \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite for all \(i\) if and only if \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is a finite \(R\)-module for all \(i\)._
Proof.: The case (c) follows by [23, Theorem 7.10].
In the cases (a) and (b), suppose \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite for all \(i\). It follows from [23, Corollary 3.10] that \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is a finite \(R\)-module for all \(i\).
To prove the converse, in the case (b), it follows by [22, Theorem 2.10]. In the case (a), use [4, Theorem 3.3] and the method of proof of [22, Theorem 2.10].
**Lemma 3.5**.: _Let \(X\) be a finite \(R\)-module, \(M\) be an arbitrary \(R\)-module. Then the following statements hold true._
1. \(\Gamma_{\mathfrak{a}}(X,M)\cong\mathrm{Hom}_{R}(X,\Gamma_{\mathfrak{a}}(M))\)_._
2. _If_ \(\mathrm{Supp}_{R}(X)\cap\mathrm{Supp}_{R}(M)\subseteq\mathrm{V}(\mathfrak{a})\)_, then_ \(\mathrm{H}^{i}_{\mathfrak{a}}(X,M)\cong\mathrm{Ext}^{i}_{R}(X,M)\) _for all_ \(i\)_._
Proof.: See [24, Lemma 2.5].
**Theorem 3.6**.: _Let \(M\) be an \(R\)-module and suppose one of the following cases holds:_
1. \(\mathrm{ara}(\mathfrak{a})\leq 1\)_;_
2. \(\dim R/\mathfrak{a}\leq 1\)_;_
3. \(\dim R\leq 2\)_._
_Then, for any finite \(R\)-module \(N\), \(\mathrm{H}^{i}_{\mathfrak{a}}(N,M)\) is \(\mathfrak{a}\)-cofinite for all \(i\geq 0\) if and only if \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is a finite \(R\)-module for all \(i\geq 0\)._
Proof.: First suppose that, for any finite \(R\)-module \(N\), \(\mathrm{H}^{i}_{\mathfrak{a}}(N,M)\) is \(\mathfrak{a}\)-cofinite for all \(i\geq 0\). Let \(N=R\); then it follows by Theorem 3.4 that \(\mathrm{Ext}^{i}_{R}(R/\mathfrak{a},M)\) is a finite \(R\)-module for all \(i\geq 0\).
To prove the converse we use induction on \(i\). Let \(i=0\), then it follows by Lemma 3.5 (a) that
\[\operatorname{H}_{\mathfrak{a}}^{0}(N,M)=\Gamma_{\mathfrak{a}}(N,M)\cong \operatorname{Hom}_{R}(N,\Gamma_{\mathfrak{a}}(M)).\]
Since \(\Gamma_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite by Theorem 3.4, the assertion follows by [4, Theorem 4.1], [22, Corollary 2.7] and [23, Theorem 7.4], respectively, in the cases (a), (b) and (c). Now assume that \(i>0\) and that the claim holds for \(i-1\). Note that \(\Gamma_{\mathfrak{a}}(M)\) is \(\mathfrak{a}\)-cofinite and \(\operatorname{Ext}_{R}^{i}(R/\mathfrak{a},M)\) is a finite \(R\)-module for all \(i\geq 0\). Using the short exact sequence
\[0\longrightarrow\Gamma_{\mathfrak{a}}(M)\longrightarrow M\longrightarrow M/ \,\Gamma_{\mathfrak{a}}(M)\longrightarrow 0,\]
it is easy to see that the \(R\)-modules \(\operatorname{Ext}_{R}^{i}(R/\mathfrak{a},M/\,\Gamma_{\mathfrak{a}}(M))\) are finite for all \(i\geq 0\). Now, by applying the derived functor \(\Gamma_{\mathfrak{a}}(N,-)\) to the same short exact sequence and using Lemma 3.5 (b), we obtain the long exact sequence
\[\ldots\longrightarrow\operatorname{Ext}_{R}^{i}(N,\Gamma_{\mathfrak{a}}(M))\stackrel{f_{i}}{\longrightarrow}\operatorname{H}_{\mathfrak{a}}^{i}(N,M)\stackrel{g_{i}}{\longrightarrow}\operatorname{H}_{\mathfrak{a}}^{i}(N,M/\,\Gamma_{\mathfrak{a}}(M))\stackrel{h_{i}}{\longrightarrow}\operatorname{Ext}_{R}^{i+1}(N,\Gamma_{\mathfrak{a}}(M))\stackrel{f_{i+1}}{\longrightarrow}\operatorname{H}_{\mathfrak{a}}^{i+1}(N,M)\longrightarrow\ldots\]
which yields short exact sequences
\[0\longrightarrow\operatorname{Ker}f_{i}\longrightarrow\operatorname{Ext}_{R}^{ i}(N,\Gamma_{\mathfrak{a}}(M))\longrightarrow\operatorname{Im}f_{i} \longrightarrow 0,\]
\[0\longrightarrow\operatorname{Im}f_{i}\longrightarrow\operatorname{H}_{ \mathfrak{a}}^{i}(N,M)\longrightarrow\operatorname{Im}g_{i}\longrightarrow 0\]
and
\[0\longrightarrow\operatorname{Im}g_{i}\longrightarrow\operatorname{H}_{ \mathfrak{a}}^{i}(N,M/\,\Gamma_{\mathfrak{a}}(M))\longrightarrow\operatorname{ Ker}f_{i+1}\longrightarrow 0.\]
Since \(\operatorname{Ext}_{R}^{i}(N,\Gamma_{\mathfrak{a}}(M))\) is \(\mathfrak{a}\)-cofinite \(R\)-module for all \(i\geq 0\) by [4, Theorem 4.1], [22, Corollary 2.7] and [23, Theorem 7.4], respectively in the cases (a), (b) and (c), it follows by definition that, \(\operatorname{H}_{\mathfrak{a}}^{i}(N,M)\) is \(\mathfrak{a}\)-cofinite \(R\)-module if and only if \(\operatorname{H}_{\mathfrak{a}}^{i}(N,M/\,\Gamma_{\mathfrak{a}}(M))\) is \(\mathfrak{a}\)-cofinite \(R\)-module for all \(i\geq 0\). Therefore it suffices to show that \(\operatorname{H}_{\mathfrak{a}}^{i}(N,M/\,\Gamma_{\mathfrak{a}}(M))\) is \(\mathfrak{a}\)-cofinite \(R\)-module for all \(i\geq 0\). To this end consider the exact sequence
\[0\longrightarrow M/\,\Gamma_{\mathfrak{a}}(M)\longrightarrow E\longrightarrow L \longrightarrow 0,\ \ \ \ (\dagger)\]
in which \(E\) is an injective \(\mathfrak{a}\)-torsion free module. Since \(\Gamma_{\mathfrak{a}}(M/\,\Gamma_{\mathfrak{a}}(M))=0=\Gamma_{\mathfrak{a}}(E)\), we have \(\operatorname{Hom}_{R}(R/\mathfrak{a},E)=0\) and \(\Gamma_{\mathfrak{a}}(N,M/\,\Gamma_{\mathfrak{a}}(M))=0=\Gamma_{\mathfrak{a}}(N,E)\) by Lemma 3.5 (a). Applying the derived functors of \(\operatorname{Hom}_{R}(R/\mathfrak{a},-)\) and \(\Gamma_{\mathfrak{a}}(N,-)\) to the short exact sequence \((\dagger)\), we obtain, for all \(i>0\), the isomorphisms
\[\operatorname{H}_{\mathfrak{a}}^{i-1}(N,L)\cong\operatorname{H}_{\mathfrak{a}}^{i}(N,M/\,\Gamma_{\mathfrak{a}}(M))\qquad\text{and}\qquad\operatorname{Ext}_{R}^{i-1}(R/\mathfrak{a},L)\cong\operatorname{Ext}_{R}^{i}(R/\mathfrak{a},M/\,\Gamma_{\mathfrak{a}}(M)).\]
From what has already been proved, we conclude that \(\operatorname{Ext}_{R}^{i-1}(R/\mathfrak{a},L)\) is finite for all \(i>0\). Hence \(\operatorname{H}_{\mathfrak{a}}^{i-1}(N,L)\) is \(\mathfrak{a}\)-cofinite for all \(i>0\) by the induction hypothesis, which yields that \(\operatorname{H}_{\mathfrak{a}}^{i}(N,M/\operatorname{\Gamma}_{\mathfrak{a}}(M))\) is \(\mathfrak{a}\)-cofinite for all \(i\geq 0\). This completes the inductive step.
**Lemma 3.7**.: _Let \(N\) be a finitely generated \(R\)-module and \(M\) be an arbitrary \(R\)-module. Let \(t\) be a non-negative integer such that \(\operatorname{Ext}_{R}^{i}(N,M)\) is finite for all \(0\leq i\leq t\). Then for any finitely generated \(R\)-module \(L\) with \(\operatorname{Supp}_{R}(L)\subseteq\operatorname{Supp}_{R}(N)\), \(\operatorname{Ext}_{R}^{i}(L,M)\) is finite for all integer \(0\leq i\leq t\)._
Proof.: See [8, Proposition 1].
The following corollary is our main result of this section which is a characterization of cofinite local cohomology and generalized local cohomology modules under the assumptions (a) \(\operatorname{ara}(\mathfrak{a})\leq 1\), (b) \(\dim R/\mathfrak{a}\leq 1\) and (c) \(\dim R\leq 2\). This corollary shows that with the above assumptions, cofiniteness of local cohomology and generalized local cohomology modules are equivalent.
**Corollary 3.8**.: _Let \(M\) be an \(R\)-module and suppose one of the following cases holds:_
1. \(\operatorname{ara}(\mathfrak{a})\leq 1\)_;_
2. \(\dim R/\mathfrak{a}\leq 1\)_;_
3. \(\dim R\leq 2\)_._
_Then the following conditions are equivalent:_
1. \(\operatorname{Ext}_{R}^{i}(R/\mathfrak{a},M)\) _is finite for all_ \(i\)_._
2. \(\operatorname{Ext}_{R}^{i}(R/\mathfrak{a},M)\) _is finite for_ \(i=0,1\) _in the cases_ \((a)\) _and_ \((b)\)__\((\)_resp. for_ \(i=0,1,2\) _in the case_ \((c)\)_\()\)_;_
3. \(\operatorname{H}_{\mathfrak{a}}^{i}(M)\) _is_ \(\mathfrak{a}\)_-cofinite for all_ \(i\)_;_
4. \(\operatorname{H}_{\mathfrak{a}}^{i}(N,M)\) _is_ \(\mathfrak{a}\)_-cofinite for all_ \(i\) _and for any finite_ \(R\)_-module_ \(N\)_;_
5. \(\operatorname{Ext}_{R}^{i}(N,M)\) _is finite for all_ \(i\) _and for any finite_ \(R\)_-module_ \(N\) _with_ \(\operatorname{Supp}_{R}(N)\subseteq\operatorname{V}(\mathfrak{a})\)_;_
6. \(\operatorname{Ext}_{R}^{i}(N,M)\) _is finite for all_ \(i\) _and for some finite_ \(R\)_-module_ \(N\) _with_ \(\operatorname{Supp}_{R}(N)=\operatorname{V}(\mathfrak{a})\)_;_
7. \(\operatorname{Ext}_{R}^{i}(N,M)\) _is finite for_ \(i=0,1\) _in the cases_ \((a)\) _and_ \((b)\)__\((\)_resp. for_ \(i=0,1,2\) _in the case_ \((c)\)\()\) _and for any finite_ \(R\)_-module_ \(N\) _with_ \(\operatorname{Supp}_{R}(N)\subseteq\operatorname{V}(\mathfrak{a})\)_;_
8. \(\operatorname{Ext}_{R}^{i}(N,M)\) _is finite for_ \(i=0,1\) _in the cases_ \((a)\) _and_ \((b)\)__\((\)_resp. for_ \(i=0,1,2\) _in the case_ \((c)\)\()\) _and for some finite_ \(R\)_-module_ \(N\) _with_ \(\operatorname{Supp}_{R}(N)=\operatorname{V}(\mathfrak{a})\)_._
Proof.: In order to prove (i)\(\Leftrightarrow\)(ii), use [4, Theorem 3.3], [22, Theorem 2.3] and [23, Theorem 7.10], respectively in the cases (a), (b) and (c).
(i)\(\Leftrightarrow\)(iii) follows by Theorem 3.4.
(i)\(\Leftrightarrow\)(iv) follows by Theorem 3.6.
In order to prove (i)\(\Leftrightarrow\)(v) and (ii)\(\Leftrightarrow\)(vii) use [19, Lemma 1].
(v)\(\Rightarrow\)(vi) and (vii)\(\Rightarrow\)(viii) are trivial.
In order to prove (vi)\(\Rightarrow\)(v) and (viii)\(\Rightarrow\)(vii), let \(L\) be a finitely generated \(R\)-module with \(\operatorname{Supp}_{R}(L)\subseteq\operatorname{V}(\mathfrak{a})\). Then \(\operatorname{Supp}_{R}(L)\subseteq\operatorname{Supp}_{R}(N)\). Now the assertion follows by Lemma 3.7.
It is easy to see that \(\operatorname{cd}(\mathfrak{a},R)\leq\operatorname{ara}\mathfrak{a}\), but equality does not hold in general (see [9, Example 2.3]). We close this section by offering a question and a problem for further research.
**Question:** Let \(R\) be a commutative Noetherian ring with non-zero identity and \(\mathfrak{a}\) an ideal of \(R\). Is the characterization in Corollary 3.8 true when we replace \(\operatorname{ara}\mathfrak{a}\leq 1\) with \(\operatorname{cd}(\mathfrak{a},R)\leq 1\) in the case (a)?
## 4. Artinianness of local cohomology modules
In general, if \(M\) is a finite \(R\)-module over the Noetherian ring \(R\) such that the local cohomology modules \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) have support in \(\operatorname{Max}R\) for all \(i>r\), it is not true that they are Artinian \(R\)-modules for all \(i>r\). For this see the example by Hartshorne in [13, page 151]. However, we prove that this is true in the case when \(\dim R/\mathfrak{a}=1\). To do this we need the following lemma.
**Lemma 4.1**.: _Let \(\mathfrak{a}\) be an ideal of a Noetherian ring \(R\), such that \(\dim R/\mathfrak{a}=1\). Let \(M\) be a finite \(R\)-module and \(r\) an integer. Then \(\dim M_{\mathfrak{p}}\leq r\) for each prime ideal \(\mathfrak{p}\) minimal over \(\mathfrak{a}\) if and only if \(\operatorname{Supp}_{R}(\operatorname{H}^{i}_{\mathfrak{a}}(M))\subseteq \operatorname{Max}(R)\) for all \(i>r\)._
Proof.: This is the direct consequence of Grothendieck's vanishing theorem and the isomorphism \(\operatorname{H}^{i}_{\mathfrak{a}}(M)_{\mathfrak{p}}\cong\operatorname{H}^{ i}_{\mathfrak{a}R_{\mathfrak{p}}}(M_{\mathfrak{p}})\) for each prime ideal \(\mathfrak{p}\) minimal over \(\mathfrak{a}\).
**Theorem 4.2**.: _Let \(\mathfrak{a}\) be an ideal of a Noetherian ring \(R\), with \(\dim R/\mathfrak{a}=1\). For an \(R\)-module \(M\) and an integer \(r\) the following conditions are equivalent:_
1. \(\operatorname{Supp}_{R}(\operatorname{H}^{i}_{\mathfrak{a}}(M))\subseteq \operatorname{Max}(R)\) _for each_ \(i>r\)_._
2. \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) _is an Artinian_ \(R\)_-module for each_ \(i>r\)_._
3. \(\operatorname{H}^{i}_{\mathfrak{a}}(M)\) _is an Artinian_ \(\mathfrak{a}\)_-cofinite_ \(R\)_-module for each_ \(i>r\)_._
Proof.: The implications (iii)\(\Rightarrow\)(ii)\(\Rightarrow\)(i) always hold, so let us assume (i) and prove that (iii) holds. We do induction on \(r\). First we note that if \(N\) is a finite \(R\)-module such that \(\operatorname{Supp}_{R}(\operatorname{H}^{i}_{\mathfrak{a}}(N))\subseteq \operatorname{Max}(R)\), then \(\operatorname{H}^{i}_{\mathfrak{a}}(N)\) is \(\mathfrak{a}\)-cofinite Artinian for all \(i\). Let \(r=0\) and put \(N=M/\operatorname{\Gamma}_{\mathfrak{a}}(M)\). Then \(\operatorname{H}^{0}_{\mathfrak{a}}(N)=0\) and \(\operatorname{H}^{i}_{\mathfrak{a}}(N)\cong\operatorname{H}^{i}_{\mathfrak{a}}(M)\) for all \(i>0\). Hence by the
above remark \(\mathrm{H}^{i}_{\mathfrak{a}}(N)\) is \(\mathfrak{a}\)-cofinite Artinian for all \(i\). It follows that \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) is Artinian and \(\mathfrak{a}\)-cofinite for all \(i>0\).
Next let \(r>0\) and put \(N=M/\operatorname{\Gamma_{\mathfrak{a}}}(M)\). Then \(\mathrm{H}^{0}_{\mathfrak{a}}(N)=0\) and we can take \(x\in\mathfrak{a}\) such that \(x\) is \(N\)-regular.
Then by Lemma 4.1, \(\dim(N/xN)_{\mathfrak{p}}\leq r-1\) for each prime ideal \(\mathfrak{p}\) minimal over \(\mathfrak{a}\). Applying the induction hypothesis to \(N/xN\), we have that \(\mathrm{H}^{i}_{\mathfrak{a}}(N/xN)\) is \(\mathfrak{a}\)-cofinite Artinian for all \(i>r-1\). Next, applying the local cohomology functor to the exact sequence \(0\longrightarrow N\stackrel{x}{\longrightarrow}N\longrightarrow N/xN\longrightarrow 0\), we conclude that \((0:_{\mathrm{H}^{i}_{\mathfrak{a}}(N)}x)\) is \(\mathfrak{a}\)-cofinite Artinian for all \(i>r\). It follows that \(\mathrm{H}^{i}_{\mathfrak{a}}(N)\) is Artinian and \(\mathfrak{a}\)-cofinite for all \(i>r\), and therefore so is \(\mathrm{H}^{i}_{\mathfrak{a}}(M)\) for all \(i>r\).
|
2304.00338 | Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates
for Mesh Physics | Data-driven modeling approaches can produce fast surrogates to study
large-scale physics problems. Among them, graph neural networks (GNNs) that
operate on mesh-based data are desirable because they possess inductive biases
that promote physical faithfulness, but hardware limitations have precluded
their application to large computational domains. We show that it is
\textit{possible} to train a class of GNN surrogates on 3D meshes. We scale
MeshGraphNets (MGN), a subclass of GNNs for mesh-based physics modeling, via
our domain decomposition approach to facilitate training that is mathematically
equivalent to training on the whole domain under certain conditions. With this,
we were able to train MGN on meshes with \textit{millions} of nodes to generate
computational fluid dynamics (CFD) simulations. Furthermore, we show how to
enhance MGN via higher-order numerical integration, which can reduce MGN's
error and training time. We validated our methods on an accompanying dataset of
3D $\text{CO}_2$-capture CFD simulations on a 3.1M-node mesh. This work
presents a practical path to scaling MGN for real-world applications. | Brian R. Bartoldson, Yeping Hu, Amar Saini, Jose Cadena, Yucheng Fu, Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen | 2023-04-01T15:42:18Z | http://arxiv.org/abs/2304.00338v1 | # Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates for Mesh Physics
###### Abstract
Data-driven modeling approaches can produce fast surrogates to study large-scale physics problems. Among them, graph neural networks (GNNs) that operate on mesh-based data are desirable because they possess inductive biases that promote physical faithfulness, but hardware limitations have precluded their application to large computational domains. We show that it is _possible_ to train a class of GNN surrogates on 3D meshes. We scale MeshGraphNets (MGN), a subclass of GNNs for mesh-based physics modeling, via our domain decomposition approach to facilitate training that is mathematically equivalent to training on the whole domain under certain conditions. With this, we were able to train MGN on meshes with _millions_ of nodes to generate computational fluid dynamics (CFD) simulations. Furthermore, we show how to enhance MGN via higher-order numerical integration, which can reduce MGN's error and training time. We validated our methods on an accompanying dataset of 3D CO\({}_{2}\)-capture CFD simulations on a 3.1M-node mesh. This work presents a practical path to scaling MGN for real-world applications.
## 1 Introduction
Understanding physical systems and engineering processes often requires extensive numerical simulations of their underlying models. However, these simulations are typically computationally expensive to generate, which can hinder their applicability to large-scale problems. In such cases, surrogates can be used to emulate the original models and produce computationally efficient simulations.
Surrogate approaches can suggest efficiency-fidelity tradeoffs (Willard et al., 2020; Wang, 2021). For example, while physically faithful and generalizable to unseen domains, mesh-based GNNs offer smaller speedups (11-290x) (Pfaff et al., 2020) than those of less generalizable CNNs (125-716x) (Kim et al., 2019). Moreover, hardware limitations and memory requirements of mesh-based GNNs prevent their application to large graphs. To the best of our knowledge, they have not been applied to graphs with more than 20K nodes (Fortunato et al., 2022). Despite this tradeoff, many applications of surrogates require _both_ generalization to unseen domains and efficient processing on large datasets.
Thus, we seek to improve the applicability of GNNs without harming the inductive biases critical to their accuracy by building on scientific computing algorithms. Focusing on MeshGraphNets (MGN) (Pfaff et al., 2020), we enhance MGN learning via domain decomposition and higher-order numerical integration--methods that fall into a framework we call **Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates (SCALES2)**. Our contributions are the following:
1. We train GNN surrogates on CFD meshes with millions of nodes, roughly 100 times larger than the largest meshes used in prior work (Pfaff et al., 2020; Fortunato et al., 2022).
2. We present patch training, a domain decomposition method for training on large meshes, and conditions for its mathematical equivalence to training on a large mesh directly.
3. We show that MGN accuracy and efficiency can be greatly improved by using higher-order numerical integration schemes that can augment surrogate faithfulness.
4. We release C3FD, a large-scale dataset of Carbon Capture CFD simulations on 3D meshes containing 3.1M nodes, to facilitate benchmarking of graph-based surrogates.
## 2 Approach
### MeshGraphNets
MGN uses message-passing (MP) GNNs to learn from mesh-based simulations. In each MP step, MGN updates each edge's features by aggregating the features of its incident nodes, and then updates each node's features by aggregating the features of its incident edges. After \(k\) iterations, each node and edge will be influenced by its \(k\)-hop neighborhood. The final features of each node are then used to predict the node's next state using the following update rule: \(y_{t+h}=y_{t}+\text{MGN}(y_{t})\), where \(y_{t}\) is the system's numerically estimated state at time \(t\) and \(h\) is the timestep size.
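To make the structure above concrete, the following is a minimal sketch of one MGN-style step, assuming only numpy; the aggregation and update operations are simple placeholders standing in for MGN's learned encoder, processor, and decoder MLPs, and all names (`message_passing_step`, `senders`, `receivers`) are ours rather than the MeshGraphNets API.

```python
# Minimal sketch of an MGN-style residual update, assuming numpy only.
# Learned MLPs are replaced by simple placeholder operations.
import numpy as np

def message_passing_step(node_feats, edge_feats, senders, receivers):
    # Edge update: combine each edge's features with those of its incident nodes (placeholder for an MLP).
    edge_feats = edge_feats + node_feats[senders] + node_feats[receivers]
    # Node update: aggregate the features of incident edges via a scatter-add (placeholder for an MLP).
    agg = np.zeros_like(node_feats)
    np.add.at(agg, receivers, edge_feats)
    return node_feats + agg, edge_feats

def mgn_like_surrogate(y, senders, receivers, num_mp_steps=3):
    # "Encode, process, decode": after num_mp_steps rounds, return a predicted state increment.
    node_feats = y.copy()
    edge_feats = np.zeros((len(senders), y.shape[1]))
    for _ in range(num_mp_steps):
        node_feats, edge_feats = message_passing_step(node_feats, edge_feats, senders, receivers)
    return node_feats - y

# Forward-Euler-style update rule y_{t+h} = y_t + MGN(y_t) on a toy 4-node cycle graph.
y = np.random.rand(4, 2)
senders, receivers = np.array([0, 1, 2, 3]), np.array([1, 2, 3, 0])
y_next = y + mgn_like_surrogate(y, senders, receivers)
```

In the trained model each placeholder is a learned network, and the number of rounds is the MP-step count referenced in the experiments below.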
### Domain decomposition for scaling
We develop a domain decomposition framework that allows mesh-based GNN surrogates to be trained on large domains (see Figure 1). Domain decomposition is useful when application of a surrogate to a domain produces out-of-memory errors on the hardware used for inference or learning. Broadly, it divides a simulation problem defined on a domain into a jointly equivalent (or similar) set of problems, each of which is defined on its own subdomain (Tang et al., 2021).
For our datasets, we create evenly-sized subdomains by dividing each dimension of the domain into equal parts. Inspired by overlapping domain decomposition approaches (Tang et al., 2021), we augment each subdomain with its \(k\)-hop neighborhood, which we call the ghost zone with width \(k\). We call the union of the subdomain and its ghost zone a patch. Because each subdomain node can access its \(k\)-hop neighborhood in its patch, an MGN with \(m\) MP steps, where \(m\leq k\), will update each subdomain node as if the entire domain had been available. Thus, training on patches is equivalent to training on the full domain, under three conditions: (1) \(m\leq k\); (2) we only compute the loss on the subdomain nodes (excluding the ghost nodes); and (3) we accumulate the gradients over multiple batches until all patches in the domain have been visited before updating the weights. In practice, we violate the third condition by updating the weights after a random subset of a domain's patches is seen--we call the resulting procedure "patch training".
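A minimal sketch of the patch construction is given below, assuming only the Python standard library and an adjacency-list view of the mesh graph; the function name `build_patch` and the toy chain mesh are illustrative and not part of our released code.

```python
# Sketch: one patch = subdomain nodes plus their k-hop ghost zone.
from collections import deque

def build_patch(adjacency, subdomain_nodes, k):
    """Return (patch_nodes, ghost_nodes), where the patch is the k-hop closure of the subdomain."""
    patch = set(subdomain_nodes)
    frontier = deque((v, 0) for v in subdomain_nodes)
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nbr in adjacency[node]:
            if nbr not in patch:
                patch.add(nbr)
                frontier.append((nbr, dist + 1))
    return patch, patch - set(subdomain_nodes)

# Toy 1D chain mesh 0-1-2-3-4-5; the subdomain {0, 1, 2} with ghost width k = 2 gains ghosts {3, 4}.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
patch, ghost = build_patch(adjacency, {0, 1, 2}, k=2)
assert ghost == {3, 4}
# During training, the loss is evaluated on the subdomain nodes only, excluding the ghost nodes.
```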
### Higher-order integration for enhanced learning
MGN's update rule is based on the forward Euler (FE) method, which introduces first-order global error and second-order local truncation error by ignoring quadratic and higher-order terms in the following Taylor series: \(y(t+h)=y(t)+hf(y(t))+\mathcal{O}(h^{2})\), where \(y(t)\) is the true state of a system at time \(t\). Accordingly, MGN's prediction error is \(|y(t+h)-y_{t+h}|=\big{|}[hf(y(t))-\text{MGN}(y(t))]+\mathcal{O}(h^{2})\big{|}\)
Figure 1: **Scaling MGN via domain decomposition.** We partition graph \(\mathcal{G}_{t}\) into \(P\) subdomains (squares), augment the subdomains with ghost nodes to get overlapping patches (e.g., Patch1 and Patch4 have ghost nodes from each other, shown as pink and yellow nodes), then run MGN on the patches \(\mathcal{G}_{t}^{(p)}\) to obtain \(\tilde{\mathcal{G}}_{t+1}^{(p)}\). For inference, nodes with incorrect states (red)—due to “incomplete” message-passing with a subset of their neighbors—are updated by communicating across patches to obtain \(\mathcal{G}_{t+1}^{(p)}\). When training, ghost node updates are not needed as we predict only the next timestep.
which can be decomposed into a prediction error of the (scaled) time derivative, \([hf(y)-\text{MGN}(y)]\), and a second-order local error, \(\mathcal{O}(h^{2})\).1
Footnote 1: In practice, this analysis is not exact because MGN error uses CFD data that only approximates \(y(t+h)\).
While MGN may learn to offset the \(\mathcal{O}(h^{2})\) term when learning to estimate the time derivative, local error can be reduced by changing how MGN updates \(y_{t}\) to \(y_{t+h}\), increasing the benefit of correctly predicting the time derivative. Specifically, we replace FE with Heun's second- and third-order methods (Heun et al., 1990; J.C.Butcher, 1996), which have smaller local errors (\(\mathcal{O}(h^{3})\) and \(\mathcal{O}(h^{4})\), respectively). In Appendix A, we hypothesize that this can speed up and enhance learning.
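For concreteness, the update rules compared here can be sketched as follows, assuming the surrogate exposes an estimate `f(y)` of the time derivative (in MGN, that role is played by the learned network); the toy ODE at the end only illustrates the ordering of local errors and is not one of our CFD experiments.

```python
# Sketch of the three integrators, with f(y) an estimate of dy/dt.
def forward_euler_step(y, h, f):
    return y + h * f(y)                      # local truncation error O(h^2)

def heun2_step(y, h, f):
    k1 = f(y)
    k2 = f(y + h * k1)
    return y + 0.5 * h * (k1 + k2)           # local truncation error O(h^3)

def heun3_step(y, h, f):
    k1 = f(y)
    k2 = f(y + (h / 3.0) * k1)
    k3 = f(y + (2.0 * h / 3.0) * k2)
    return y + 0.25 * h * (k1 + 3.0 * k3)    # local truncation error O(h^4)

# Toy check on dy/dt = -y with y(0) = 1: the one-step error shrinks with the order of the method.
import math
f = lambda y: -y
h = 0.1
exact = math.exp(-h)
for step in (forward_euler_step, heun2_step, heun3_step):
    print(step.__name__, abs(step(1.0, h, f) - exact))
```

Note that the higher-order steppers call `f` two or three times per step, so in the surrogate setting each update evaluates the network correspondingly more often.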
## 3 Datasets
**CylinderFlow** CylinderFlow contains 2D simulations of inviscid, incompressible flow around a cylinder, generated by numerical solvers with Navier-Stokes equations on an Eulerian mesh (Pfaff et al., 2020). We train surrogates to predict the fluid momenta at each node of the mesh.
**C3FD** Our Carbon Capture CFD (C3FD) dataset contains STAR-CCM+ simulations of a CO\({}_{2}\)-capturing solvent flowing across packing structures in a column. The dataset contains 50 2D and 50 3D simulations, each spanning 500 timesteps. The 2D and 3D simulation meshes have 150K and 3.1M nodes, respectively. Each node contains information about its physical location, type, liquid volume fraction (a quantity related to capture efficiency), momentum per unit volume, and contact angle. We train surrogates to predict liquid volume fraction and momentum at each node. Additional dataset details and visualizations are available in Appendix B. The C3FD dataset is available at [https://doi.org/10.25584/1963908](https://doi.org/10.25584/1963908).
## 4 Experiments
As a sanity check, we first replicate experiments from Pfaff et al. (2020) on the full 2D C3FD domain. In Figures 3 and 5, we show that we can train MGNs to low error and generate 500-timestep rollouts that are similar to the ground truth. Further, we show that MGNs trained on the 2D C3FD domain generalize to a distinct 2D domain (i.e., a new slice of the 3D domain).
These initial experiments on 2D C3FD required roughly 50 GB of accelerator memory to apply the original MGN. Applying MGN to larger datasets like the 3D C3FD dataset, which requires roughly 1 TB of accelerator memory, is not possible with commonly available hardware. Thus, aiming to make mesh-based GNN surrogates more efficient, we test algorithms to scale and enhance their learning. Ultimately, our patch training successfully scales MGN training to learn from the 3D C3FD data. We also show that MGN training can be enhanced by using higher-order integration.
### Patch training is key to scaling surrogates
In Section 2.2, we outlined patch training. However, it remains unclear how mesh surrogates like MGN perform on very large domains. Training on a full domain with domain decomposition may be wasteful when the domain is very large, because each patch in the domain must be visited before the weights are updated. We explore if it may be just as effective to use patch training, which updates the weights after visiting a small sample of a domain's patches.
Unlike training on smaller domains and then generalizing to a large test domain (Figure 6), we find that patch training induces no performance drop relative to training on the large domain directly. This holds for CylinderFlow (Figure 8) and 2D C3FD (Figure 7). Based on this, we use patch training to efficiently train MGNs on a 3D mesh with millions of nodes (3D C3FD) for the first time (Figure 2). To optimize this model for practical deployment, further work (e.g., hyperparameter tuning) remains. We provide additional experiment details and results in Appendices C and D, respectively.
### Higher-order integration enhances learning
Following discussion from Section 2.3, we want to investigate: How significant are the \(\mathcal{O}(h^{2})\) truncation errors from FE? If significant, how effective is MGN in learning to correct these errors?
While it is clear that higher-order integration methods should reduce the truncation errors, these higher-order integrators may not provide benefits if MGN can easily correct for these errors.
Using the 2D C3FD data, we train MGNs using three different integrators: FE, Heun's second-order method (H2), and Heun's third-order method (H3). Comparing FE and H2, we see that test error can be roughly halved with H2 (Figure 3_left_). Using more MP steps generally improves surrogate performance, so we also compare the integrators across a range of MP steps (Figure 3_right_). The H2-surrogate using 7 MP steps achieves roughly the same error as the FE-surrogate using 15 steps. Furthermore, H2 with 10 MP steps reaches even lower error than FE with 15 MP steps. This suggests that higher-order integration is an important tool for improving MGN performance, with the added bonus of requiring fewer MP steps to achieve the same level of accuracy. We provide additional discussion and results in Appendix D and review related work in Appendix E.
## 5 Discussion
We introduced scientific computing algorithms (SCALES2) to enhance and scale training of GNN surrogates on mesh physics. Our approach enabled MGN to train on 3D meshes with millions of nodes. We also showed that higher-order integration can halve the error in our 2D experiments. To stimulate further research on scaling and improving surrogate simulation, we released C3FD, a benchmark dataset with CO\({}_{2}\) capture CFD simulations on both 2D and 3D meshes (with 150K nodes and 3.1M nodes, respectively). As this study focuses on next-step predictions, it remains to be seen the overall improvement that SCALES2 may provide to longer rollouts on large domains; we discuss additional limitations and extensions to our work in Appendix F.
Figure 3: **Higher-order integration enhances learning. Experiments run on 2D C3FD data with 150K nodes per timestep. (_Left_) An MGN using Heun’s second-order method (H2) with 7 message-passing (MP) steps is able to halve the error—relative to forward Euler (FE)—on both test (as shown) and training (see Figure 9) data. (_Right_) H2 with 7 MP steps is comparable to FE with 15 MP steps, despite the benefits associated with more MP steps. H2 with 10 MP steps can outperform FE with 15 MP steps. In Figure 10, we show higher-order integration’s benefits are stable across training runs.**
Figure 2: **Patch training on 3D C3FD. Experiments run on data with 3.1M nodes per timestep.**
#### Acknowledgments
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0024437. This work was conducted as part of the Carbon Capture Simulation for Industry Impact (CCSI2) project. LLNL Release Number (Paper): LLNL-CONF-845156. PNNL Release Number (Dataset): PNNL-SA-182276.
|
2306.16323 | Jacobi Beta Ensemble and $b$-Hurwitz Numbers | We express correlators of the Jacobi $\beta$ ensemble in terms of (a special
case of) $b$-Hurwitz numbers, a deformation of Hurwitz numbers recently
introduced by Chapuy and Dolega. The proof relies on Kadell's generalization of
the Selberg integral. The Laguerre limit is also considered. All the relevant
$b$-Hurwitz numbers are interpreted (following Bonzom, Chapuy, and Dolega) in
terms of colored monotone Hurwitz maps. | Giulio Ruzza | 2023-06-28T15:56:31Z | http://arxiv.org/abs/2306.16323v3 | # Jacobi beta ensemble and b-Hurwitz numbers
###### Abstract
We express correlators of the Jacobi \(\beta\) ensemble in terms of (a special case of) \(b\)-Hurwitz numbers, a deformation of Hurwitz numbers recently introduced by Chapuy and Dolega. The proof relies on a generalized Selberg integral. The Laguerre limit is also considered. All the relevant \(b\)-Hurwitz numbers are interpreted (following Bonzom, Chapuy, and Dolega) in terms of colored monotone Hurwitz maps.
AMS Subject Classification (2020): 15B52, 05E05, 05E16
Keywords: beta ensembles, Jack polynomials, Hurwitz numbers, maps
## 1 Introduction and statement of results
Hurwitz numbers count branched coverings of the sphere by a Riemann surface with prescribed ramification profiles. Hurwitz himself [20] showed that this geometric counting problem boils down, via monodromy representation, to a combinatorial one. The latter is the problem of counting factorizations of the identity in the symmetric group with factors in prescribed conjugacy classes. Today, Hurwitz numbers have been generalized in various directions and are the subject of renewed interest because of their connections to integrable systems [24, 17, 19] and enumerative geometry [8, 25, 6].
There are many matrix models connected with (various versions of) Hurwitz numbers, e.g. the Harish-Chandra-Itzykson-Zuber integral [14] and the Brezin-Gross-Witten model [23], as well as externally coupled Brezin-Hikami type models with a Meijer-G weight [1]. Moreover, it has been shown [12] that correlators1 of a random Hermitian matrix distributed according to the Jacobi unitary ensemble are generating functions for a type of Hurwitz numbers (_triple monotone_ Hurwitz numbers); this result extends the combinatorial interpretation of correlators for the Gaussian [21] and Laguerre [18, 15, 4, 11] unitary ensembles.
Footnote 1: Cf. (1.7) below.
Recently, a deformation of Hurwitz numbers has been constructed in [3] (see also [13]), termed \(b\)-Hurwitz number, which count (non-orientable) generalized branched coverings of the sphere. Generating functions of (non-deformed, \(b=0\)) Hurwitz numbers [24, 17, 19] admit explicit expansion in terms of Schur functions (from which it can be shown that they are tau functions of integrable systems), whereas (roughly speaking) for \(b\)-Hurwitz numbers one replaces Schur functions with Jack symmetric functions.
In a certain sense, this deformation mimics the deformation of unitary-invariant ensembles of random matrices to \(\beta\) ensembles, where we always consider the following relation of parameters:
\[b\;=\;\frac{2}{\beta}\,-\,1\,. \tag{1.1}\]
For instance, in [2] it is shown that an orthogonal (i.e., \(\beta=1\)) version of the Brezin-Gross-Witten integral is a generating function for \(b\)-Hurwitz numbers with \(b=1\). The standard \(\beta=2\) version of this fact is well-known, see e.g. [2, Section 6.1], and relates the Brezin-Gross-Witten integral proper to monotone (usual) Hurwitz numbers.
The first aim of this note is to prove another result in this direction: correlators of the Jacobi \(\beta\) ensemble are generating functions for (a type) of \(b\)-Hurwitz numbers. This recovers the aforementioned result of [12] when \(\beta=2\). Moreover, it implies analogous results for the Laguerre \(\beta\) ensemble, by taking a suitable limit. Further, we investigate a combinatorial interpretation of a general class of \(b\)-Hurwitz numbers (namely, that of multiparametric \(b\)-Hurwitz numbers with rational weight generating function) which covers all the cases relevant for the aforementioned \(\beta\) ensembles.
We proceed to a detailed formulation of the results.
### Jacobi \(\beta\) ensemble
The _Jacobi \(\beta\) ensemble_, \(\mathrm{J}\beta\mathrm{E}\) for short, (of size \(n\)) is the random point process on the unit interval \((0,1)\) with (almost surely) \(n\) particles, the location of which is governed by the joint probability distribution given by
\[w^{\mathrm{J}\beta\mathrm{E}}(\underline{x})=\frac{1}{\mathcal{Z}_{\beta}^{\mathrm{J}\beta\mathrm{E}}}\prod_{1\leq i\leq n}\bigl{(}x_{i}^{\frac{\beta}{2}c-1}\,(1\!-\!x_{i})^{\frac{\beta}{2}d-1}\bigr{)}\prod_{1\leq i<j\leq n}|x_{i}\!-\!x_{j}|^{\beta}\,,\qquad\quad\underline{x}=(x_{1},\ldots,x_{n})\in(0,1)^{n}\,, \tag{1.2}\]
where \(\beta,c,d>0\) and the normalization is explicitly given as
\[\mathcal{Z}_{\beta}^{\mathrm{J}\beta\mathrm{E}}\,=n!\prod_{1\leq i<j\leq n} \frac{\Gamma\bigl{(}\frac{\beta}{2}(j-i+1)\bigr{)}}{\Gamma\bigl{(}\frac{\beta }{2}(j-i)\bigr{)}}\,\prod_{1\leq i\leq n}\frac{\Gamma\bigl{(}\frac{\beta}{2}(c +n-i)\bigr{)}\,\Gamma\bigl{(}\frac{\beta}{2}(d+n-i)\bigr{)}}{\Gamma\bigl{(} \frac{\beta}{2}(c+d+2n-i-1)\bigr{)}}\,, \tag{1.3}\]
cf. Theorem 2.1 below. It arises as a natural deformation of the case \(\beta=2\), the latter being particularly relevant as it describes the eigenvalues of an \(n\times n\) random Hermitian matrix \(M\), positive-definite and bounded above by the identity, distributed according to the following unitary-invariant probability measure with Jacobi weight [10, 5]:
\[\frac{1}{\mathcal{Z}^{\prime}}\,\det(M)^{c-1}\,\det(1-M)^{d-1}\,\mathrm{d}M. \tag{1.4}\]
Here, \(\mathrm{d}M\) is the Lebesgue measure on the space of \(n\times n\) Hermitian matrices, namely
\[\mathrm{d}M\,=\,\prod_{1\leq i<j\leq n}\mathrm{d}X_{ij}\,\mathrm{d}Y_{ij}\, \prod_{1\leq i\leq n}\mathrm{d}X_{ii},\qquad M=X+\mathrm{i}\,Y, \tag{1.5}\]
and the normalization is
\[\mathcal{Z}^{\prime}\,=\,\frac{\pi^{n(n-1)/2}}{1!\ 2!\ \cdots\ n!}\ \mathcal{Z}_{2}^{ \mathrm{J}\beta\mathrm{E}}\,. \tag{1.6}\]
This ensemble of random matrices is known as the Jacobi unitary ensemble; analogous models are well-known for \(\beta=1\) and \(4\) as well, namely the Jacobi orthogonal and symplectic ensembles, respectively [10]; for general \(\beta\), a model of tridiagonal random matrices has been given in [7].
We shall be in particular interested in the \(\mathrm{J}\beta\mathrm{E}\) correlators, defined as the following expectation values, for integers \(k_{1},\ldots,k_{\ell}\):
\[\mathcal{C}_{k_{1},\ldots,k_{\ell}}^{\mathrm{J}\beta\mathrm{E}}(n,\beta,c,d) \,=\,\int_{(0,1)^{n}}\biggl{(}\prod_{1\leq i\leq\ell}(x_{1}^{k_{i}}+\cdots+x_{ n}^{k_{i}})\biggr{)}\,w^{\mathrm{J}\beta\mathrm{E}}(\underline{x})\,\mathrm{d}x_{1} \cdots\mathrm{d}x_{n}\,. \tag{1.7}\]
(In terms of matrix models, this is the expectation of a product of traces of integer powers of the random matrix.) We will only consider the cases where the \(k_{i}\)'s are all positive or all negative.
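As a quick illustration of (1.2), (1.3), and (1.7), the following sketch, which assumes only the Python standard library and is not used in any proof below, checks the normalization and the first correlator by brute-force quadrature in the smallest nontrivial case \(n=2\), \(\beta=2\), \(c=d=2\).

```python
# Brute-force check of the normalization (1.3) and of C_1 for n = 2, beta = 2, c = d = 2.
import math

def Z_formula(n, beta, c, d):
    val = math.factorial(n)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            val *= math.gamma(beta / 2 * (j - i + 1)) / math.gamma(beta / 2 * (j - i))
    for i in range(1, n + 1):
        val *= (math.gamma(beta / 2 * (c + n - i)) * math.gamma(beta / 2 * (d + n - i))
                / math.gamma(beta / 2 * (c + d + 2 * n - i - 1)))
    return val

def unnormalized_weight(x1, x2, beta, c, d):
    return (x1 ** (beta / 2 * c - 1) * (1 - x1) ** (beta / 2 * d - 1)
            * x2 ** (beta / 2 * c - 1) * (1 - x2) ** (beta / 2 * d - 1)
            * abs(x1 - x2) ** beta)

beta, c, d, N = 2.0, 2.0, 2.0, 400                   # midpoint rule on an N x N grid
pts = [(i + 0.5) / N for i in range(N)]
Z_quad = sum(unnormalized_weight(u, v, beta, c, d) for u in pts for v in pts) / N ** 2
C1_quad = sum((u + v) * unnormalized_weight(u, v, beta, c, d) for u in pts for v in pts) / N ** 2 / Z_quad

print(Z_quad, Z_formula(2, beta, c, d))              # both approximately 1/360
print(C1_quad)                                       # approximately 1, since c = d forces E[x_i] = 1/2
```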
**Remark 1.1**.: _It follows from Theorem 2.1 and Corollary 2.3 below that \(\mathcal{C}_{k_{1},\ldots,k_{\ell}}^{\mathrm{J}\beta\mathrm{E}}(n,\beta,c,d)\) are rational functions of \(n,\beta,c,d\)._
### \(b\)-Hurwitz numbers
Let \(\mathscr{P}\) be the set of all _partitions_, i.e. \(\lambda\in\mathscr{P}\) is a weakly decreasing sequence \(\lambda=(\lambda_{1},\lambda_{2},\dots)\) of non-negative integers which stabilizes to \(0\). We recall the following common notations, for a given \(\lambda\in\mathscr{P}\):
\[|\lambda| \;=\;\sum_{i\geq 1}\lambda_{i}\,, \ell(\lambda) \;=\;\min\{i\geq 0:\,\lambda_{i+1}=0\}\,,\] \[\mathfrak{m}_{c}(\lambda) \;=\;|\{i\geq 1:\,\lambda_{i}=c\}|\,, \mathsf{z}_{\lambda} \;=\;\prod_{c\geq 1}\mathfrak{m}_{c}(\lambda)!\,c^{\mathfrak{m}_{c}( \lambda)}\,. \tag{1.8}\]
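For readers who wish to experiment numerically with the formulas below, the statistics in (1.8) are straightforward to implement; the following sketch assumes only the Python standard library, and the function names are ours.

```python
# Partition statistics of (1.8).
from collections import Counter
from math import factorial, prod

def weight(lam):          # |lambda|
    return sum(lam)

def length(lam):          # ell(lambda)
    return sum(1 for part in lam if part > 0)

def z(lam):               # z_lambda = prod_c m_c(lambda)! * c^{m_c(lambda)}
    mult = Counter(part for part in lam if part > 0)
    return prod(factorial(m) * c ** m for c, m in mult.items())

lam = (3, 1, 1)
print(weight(lam), length(lam), z(lam))   # 5, 3, 6  (here m_1 = 2 and m_3 = 1, so z = 2! * 1^2 * 1! * 3^1)
```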
The _diagram_ of \(\lambda\in\mathscr{P}\) is the set
\[\mathsf{D}(\lambda)\;=\;\{(i,j)\in\mathbb{Z}^{2}:\ 1\leq i\leq\ell(\lambda),\ 1 \leq j\leq\lambda_{i}\}. \tag{1.9}\]
Elements of \(\mathsf{D}(\lambda)\) are customarily denoted \(\square\). Under the involution \((i,j)\mapsto(j,i)\) of \(\mathbb{Z}^{2}\), \(\mathsf{D}(\lambda)\) is mapped into the diagram of another partition \(\lambda^{\prime}\), the _conjugate partition_. For any \(\square=(i,j)\in\mathsf{D}(\lambda)\), we set
\[\mathsf{arm}_{\lambda}(\square)=\lambda_{i}-j\,,\qquad\mathsf{leg}_{\lambda} (\square)=\lambda^{\prime}_{j}-i\,. \tag{1.10}\]
The _dominance order_\(\preceq_{\mathsf{d}}\) is the partial order relation on \(\mathscr{P}\) defined by declaring \(\mu\preceq_{\mathsf{d}}\lambda\) if and only if \(\sum_{i=1}^{r}\mu_{i}\leq\sum_{i=1}^{r}\lambda_{i}\) for all \(r\geq 1\).
Partitions provide a convenient set of labels for various bases of the ring \(\Lambda\) of symmetric functions [22, Chapter I]. Concretely, we can think of \(\Lambda\) as the ring \(\mathbb{C}[\mathbf{p}]\) of polynomials in infinitely many indeterminates \(\mathbf{p}=(p_{1},p_{2},p_{3},\dots)\), graded by \(\deg p_{k}=k\). A basis of \(\Lambda\) is given by \(p_{\lambda}:=p_{\lambda_{1}}\cdots p_{\lambda_{(\lambda)}}\), for \(\lambda\in\mathscr{P}\).
More precisely, as explained in [22, Chapter I], \(\Lambda\) is the inverse limit as \(n\to\infty\) of the inverse system formed by the rings \(\Lambda_{n}=\mathbb{C}[x_{1},\dots,x_{n}]^{S_{n}}\) of symmetric polynomials in \(x_{1},\dots,x_{n}\) and by the maps \(\Lambda_{m}\to\Lambda_{n}\) for \(m\geq n\) which send \(x_{i}\mapsto 0\) for \(n<i\leq m\). Then, the \(p_{k}\)'s are the elements of \(\Lambda\) that project to the power sum symmetric polynomials \(x_{1}^{k}+\cdots+x_{n}^{k}\).
We also need the elements \(m_{\lambda}\in\Lambda\) (for any \(\lambda\in\mathscr{P}\)) which project to the monomial symmetric polynomials \((\prod_{c\geq 1}\mathfrak{m}_{c}(\lambda)!)^{-1}\sum_{\sigma\in S_{n}}x_{ \sigma(1)}^{\lambda_{1}}\cdots x_{\sigma(n)}^{\lambda_{n}}\). The \(m_{\lambda}\) also form a basis of \(\Lambda\), called the monomial basis.
Another basis of \(\Lambda\) is given by the Jack functions2\(\mathrm{P}_{\lambda}^{(a)}(\mathbf{p})\) (see [22, Chapter VI] and [26]), which additionally depend on a parameter3\(a>0\). They reduce to Schur functions when \(a=1\) and to Zonal functions when \(a=2\). In general, they are uniquely determined by the following properties.
Footnote 2: We use the P-normalization, rather than the J-normalization adopted in [3, 2]; to compare the two, one has \(\mathrm{J}_{\lambda}^{(a)}=\mathsf{h}_{a}(\lambda)\mathrm{P}_{\lambda}^{(a)}\), with the notation (1.15).
Footnote 3: More precisely, we should say that they form a basis of \(\mathbb{C}(a)[\mathbf{p}]\).
* An _orthogonality_ property: \[\big{\langle}\mathrm{P}_{\lambda}^{(a)},\mathrm{P}_{\mu}^{(a)}\big{\rangle}_{a }\;=\;0\,,\qquad\lambda,\mu\in\mathscr{P}\,,\ \lambda\neq\mu\,,\] (1.11) where \(\langle,\rangle_{a}\) is the deformed Hall scalar product on \(\Lambda\), defined by \[\big{\langle}\,p_{\lambda},\,p_{\mu}\big{\rangle}_{a}\;=\;\delta_{\lambda\mu} \,\mathsf{z}_{\lambda}\,a^{\ell(\lambda)}\,.\] (1.12)
* A _triangularity_ condition with respect to the monomial basis: if we write \[\mathrm{P}_{\lambda}^{(a)}\;=\;\sum_{\mu\in\mathscr{P},\ |\mu|=|\lambda|}v_{\lambda\mu}^{(a)}\,m_{\mu}\,,\] (1.13) we have \(v_{\lambda\mu}^{(a)}=0\) unless \(\mu\preceq_{\mathsf{d}}\lambda\).
* A _normalization_ condition: \(v_{\lambda\lambda}^{(a)}=1\,.\)
Further, we have
\[\left\langle\mathrm{P}_{\lambda}^{(a)},\mathrm{P}_{\lambda}^{(a)}\right\rangle_{a} \,=\,\frac{\mathsf{h}_{a}^{\prime}(\lambda)}{\mathsf{h}_{a}(\lambda)}\,, \tag{1.14}\]
where
\[\mathsf{h}_{a}(\lambda)\,=\prod_{\square\in\mathsf{D}(\lambda)}\bigl{(}a\, \mathsf{arm}(\square)+\mathsf{leg}(\square)+1\bigr{)},\qquad\mathsf{h}_{a}^{ \prime}(\lambda)\,=\prod_{\square\in\mathsf{D}(\lambda)}\bigl{(}a\,\mathsf{arm }(\square)+\mathsf{leg}(\square)+a\bigr{)}\,. \tag{1.15}\]
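The hook products (1.15) are computed directly from the diagram; a minimal sketch, assuming only the Python standard library and with illustrative function names, follows.

```python
# Hook products h_a and h'_a of (1.15), using arm and leg from (1.10).
def conjugate(lam):
    lam = [p for p in lam if p > 0]
    return [sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)] if lam else []

def hook_products(lam, a):
    lam = [p for p in lam if p > 0]
    lamc = conjugate(lam)
    h = hp = 1.0
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            arm, leg = lam[i - 1] - j, lamc[j - 1] - i
            h *= a * arm + leg + 1
            hp *= a * arm + leg + a
    return h, hp

# For lambda = (2,1): h_a = a + 2 and h'_a = a^2 (2a + 1); at a = 1 both reduce to the usual
# product of hook lengths, equal to 3.
print(hook_products((2, 1), 1.0))   # (3.0, 3.0)
```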
Following [3, Section 6], and specializing to the case of present interest, the multiparametric \(b\)-Hurwitz numbers associated with a formal power series \(G(z)=1+\sum_{i\geq 1}g_{i}z^{i}\in\mathbb{C}[\![\bar{z}]\!]\) are defined through their formal generating series
\[\tau_{G}^{b}(\epsilon;\mathbf{p})\,=\,\sum_{\lambda\in\mathscr{P}}\sum_{r\geq 0 }H_{G}^{b}(\lambda;r)\,\epsilon^{r}\,p_{\lambda}, \tag{1.16}\]
by the explicit formula
\[\tau_{G}^{b}(\epsilon;\mathbf{p})\,=\,\sum_{\lambda\in\mathscr{P}}\frac{1}{ \mathsf{h}_{b+1}^{\prime}(\lambda)}\,\mathrm{P}_{\lambda}^{(b+1)}(\mathbf{p}) \prod_{\square\in\mathsf{D}(\lambda)}G\bigl{(}\epsilon\,\mathsf{c}_{b+1}( \square)\bigr{)}, \tag{1.17}\]
where, for \(\square=(i,j)\in\mathsf{D}(\lambda)\),
\[\mathsf{c}_{a}(\square)\,=\,a(j-1)-(i-1)\,. \tag{1.18}\]
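Similarly, the contents (1.18) and the weight factor \(\prod_{\square\in\mathsf{D}(\lambda)}G\bigl(\epsilon\,\mathsf{c}_{b+1}(\square)\bigr)\) entering (1.17) are easy to evaluate for a given partition; the sketch below assumes only the Python standard library and takes \(G\) as a callable.

```python
# Contents (1.18) and the product of G over the diagram, as in (1.17).
def contents(lam, a):
    return [a * (j - 1) - (i - 1)
            for i, row in enumerate([p for p in lam if p > 0], start=1)
            for j in range(1, row + 1)]

def weight_factor(lam, a, eps, G):
    out = 1.0
    for c in contents(lam, a):
        out *= G(eps * c)
    return out

# Example with the rational weight G^J_+ of (1.22) at gamma = delta = 1, for lambda = (2,1):
gamma = delta = 1.0
G = lambda z: (1 + z) * (1 + z / gamma) / (1 + z / (gamma + delta))
print(contents((2, 1), a=2.0))                       # [0.0, 2.0, -1.0]
print(weight_factor((2, 1), a=2.0, eps=0.1, G=G))
```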
The geometric meaning of these deformed Hurwitz numbers has been unveiled in [3]. Below we report a different combinatorial interpretation which closely follows [2] instead, see Theorem 1.12 and Section 3.
**Remark 1.2**.: _The definition in [3] is more general. An additional set of times \(\mathbf{q}=(q_{1},q_{2},\dots)\) is considered and the Jack expansion (1.17) reads_
\[\widetilde{\tau}_{G}^{b}(\epsilon;\mathbf{p},\mathbf{q})\,=\,\sum_{\lambda \in\mathscr{P}}\frac{\mathsf{h}_{b+1}(\lambda)}{\mathsf{h}_{b+1}^{\prime}( \lambda)}\,\mathrm{P}_{\lambda}^{(b+1)}(\mathbf{p})\,\mathrm{P}_{\lambda}^{(b +1)}(\mathbf{q})\prod_{\square\in\mathsf{D}(\lambda)}G\bigl{(}\epsilon\, \mathsf{c}_{b+1}(\square)\bigr{)} \tag{1.19}\]
_which then allows to define more general \(b\)-Hurwitz numbers depending on two partitions \(\lambda,\mu\), by extracting the coefficient in front of \(p_{\lambda}q_{\mu}\). The reduction to (1.17) is performed by setting \(q_{1}=1\) and \(q_{i}=0\) for all \(i\geq 2\), and by using the special value \(\mathrm{P}_{\lambda}^{(a)}(1,0,0,\dots)=1/\mathsf{h}_{a}(\lambda)\), cf. eq. 10.29 in [22, Chapter VI]._
**Remark 1.3** (Connections to integrable systems).: _The case \(\beta=2\), i.e. \(b=0\), falls back to the theory of multiparametric weighted Hurwitz numbers of Guay-Paquet, Harnad, Orlov [17, 19]. It is known in this case that the generating function \(\tau_{G}^{b}\) satisfies the KP hierarchy in the times \(\mathbf{p}\) (in the same case \(b=0\), the more general version in (1.19) satisfies the 2D Toda hierarchy in the times \(\mathbf{p},\mathbf{q}\)), a far-reaching generalization of Okounkov's seminal result [24]. When \(\beta=1\), i.e. \(b=1\), a relation to the BKP hierarchy has been established by Bonzom, Chapuy, and Dolega [2]._
### Jacobi \(\beta\) ensemble and \(b\)-Hurwitz numbers
**Theorem 1.4**.: _Let \(\lambda\in\mathscr{P}\). We have the Laurent expansions as \(n\to+\infty\)_
\[\frac{1}{\mathsf{z}_{\lambda}}\biggl{(}\frac{\beta\,(\gamma+\delta)}{2\,\gamma\,n}\biggr{)}^{|\lambda|}\,\mathcal{C}_{\lambda_{1},\dots,\lambda_{\ell(\lambda)}}^{\mathrm{J}\beta\mathrm{E}}\bigl{(}n,\beta,c=n(\gamma-1)+1,d=n(\delta-1)+1\bigr{)}\,=\,\sum_{r\geq 0}n^{-r}\,H_{G_{+}^{\mathrm{J}}}^{b}(\lambda;r)\,, \tag{1.20}\] \[\frac{1}{\mathsf{z}_{\lambda}}\biggl{(}\frac{\beta\,\gamma}{2\,(\gamma+\delta)\,n}\biggr{)}^{|\lambda|}\,\mathcal{C}_{-\lambda_{1},\dots,-\lambda_{\ell(\lambda)}}^{\mathrm{J}\beta\mathrm{E}}\bigl{(}n,\beta,c=n\gamma+\tfrac{2}{\beta},d=n(\delta-1)+1\bigr{)}\,=\,\sum_{r\geq 0}n^{-r}\,H_{G_{-}^{\mathrm{J}}}^{b}(\lambda;r)\,, \tag{1.21}\]
_where \(b=\frac{2}{\beta}-1\) and_
\[G^{\rm J}_{+}(z)\,=\,\frac{(1+z)(1+\frac{z}{\gamma})}{1+\frac{z}{\gamma+\delta}} \,,\qquad G^{\rm J}_{-}(z)\,=\,\frac{(1+z)(1-\frac{z}{\gamma+\delta})}{1-\frac{ z}{\gamma}}\,. \tag{1.22}\]
The proof is contained in Section 2.
### Laguerre limit
The _Laguerre \(\beta\) ensemble_, \({\rm L}\beta{\rm E}\) for short, (of size \(n\)) is the random point process on the positive half-line \((0,+\infty)\) with (almost surely) \(n\) particles, the location of which is governed by the joint probability distribution \(w^{{\rm L}\beta{\rm E}}(\underline{y})\,{\rm d}y_{1}\cdots{\rm d}y_{n}\) given by
\[w^{{\rm L}\beta{\rm E}}(\underline{y})\,=\,\frac{1}{\mathcal{Z}^{{\rm L}\beta {\rm E}}_{\beta}}\prod_{1\leq i\leq n}\big{(}y_{i}^{\frac{\beta}{2}c-1}\,{\rm e }^{-y_{i}}\big{)}\prod_{1\leq i<j\leq n}|y_{i}-y_{j}|^{\beta}\,,\qquad\quad \underline{y}=(y_{1},\ldots,y_{n})\in(0,+\infty)^{n}\,, \tag{1.23}\]
where \(\beta,c>0\) and the normalization is explicitly given as
\[\mathcal{Z}^{{\rm L}\beta{\rm E}}_{\beta}\,=\,n!\prod_{1\leq i<j\leq n}\frac{\Gamma\big{(}\frac{\beta}{2}(j-i+1)\big{)}}{\Gamma\big{(}\frac{\beta}{2}(j-i)\big{)}}\,\prod_{1\leq i\leq n}\Gamma\big{(}\frac{\beta}{2}(c+n-i)\big{)}\,. \tag{1.24}\]
Again, the cases \(\beta=1,2,4\) correspond to eigenvalue distributions of well-known random matrix ensembles (respectively, orthogonal, unitary, and symplectic Laguerre ensembles) while for general \(\beta\) a model of tridiagonal random matrices has been given in [7].
We analogously define \({\rm L}\beta{\rm E}\) correlators as the following expectation values, for integers \(k_{1},\ldots,k_{\ell}\):
\[\mathcal{C}^{{\rm L}\beta{\rm E}}_{k_{1},\ldots,k_{\ell}}(n,\beta,c)\,=\,\int_{(0,+\infty)^{n}}\!\bigg{(}\prod_{1\leq i\leq\ell}(y_{1}^{k_{i}}+\cdots+y_{n}^{k_{i}})\bigg{)}\,w^{{\rm L}\beta{\rm E}}(\underline{y})\,{\rm d}y_{1}\cdots{\rm d}y_{n}\,. \tag{1.25}\]
The \({\rm L}\beta{\rm E}\) can be regarded as a limit of the \({\rm J}\beta{\rm E}\). Indeed, (1.24) can be deduced from (1.3) by a change of integration variables and the limit \(d\to+\infty\), and more generally we have
\[\lim_{d\to+\infty}\!\!\left(\frac{\beta}{2}d\right)^{k_{1}+\cdots+k_{\ell}} \mathcal{C}^{{\rm J}\beta{\rm E}}_{k_{1},\ldots,k_{\ell}}(n,\beta,c,d)\,=\, \mathcal{C}^{{\rm L}\beta{\rm E}}_{k_{1},\ldots,k_{\ell}}(n,\beta,c) \tag{1.26}\]
for all integers \(k_{1},\ldots,k_{\ell}\). Hence, Theorem 1.4 implies the following expansions for \({\rm L}\beta{\rm E}\) correlators.
**Theorem 1.5**.: _Let \(\lambda\in\mathscr{P}\). We have the Laurent expansions as \(n\to+\infty\)_
\[\frac{1}{{\sf z}_{\lambda}}(\gamma\,n^{2})^{-|\lambda|}\,\mathcal{C}^{{\rm L} \beta{\rm E}}_{\lambda_{1},\ldots,\lambda_{\ell(\lambda)}}\big{(}n,\beta,c=n( \gamma-1)+1\big{)}\,=\,\sum_{r\geq 0}n^{-r}\,H^{b}_{G^{\rm L}_{+}}(\lambda;r)\,, \tag{1.27}\]
\[\frac{1}{{\sf z}_{\lambda}}\!\left(\frac{\beta^{2}}{4}\gamma\right)^{|\lambda |}\mathcal{C}^{{\rm L}\beta{\rm E}}_{-\lambda_{1},\ldots,-\lambda_{\ell( \lambda)}}\big{(}n,\beta,c=n\gamma+\frac{2}{\beta}\big{)}\,=\,\sum_{r\geq 0}n^{-r} \,H^{b}_{G^{\rm L}_{-}}(\lambda;r)\,, \tag{1.28}\]
_where \(b=\frac{2}{\beta}-1\) and_
\[G^{\rm L}_{+}(z)\,=\,(1+z)(1+\frac{z}{\gamma})\,,\qquad G^{\rm L}_{-}(z)\,=\, \frac{1+z}{1-\frac{z}{\gamma}}\,. \tag{1.29}\]
**Remark 1.6**.: _The expansions of Theorem 1.5 are known for \(\beta=2\) in terms of usual Hurwitz numbers [18, 15, 4]. The result for the positive \({\rm L}\beta{\rm E}\) correlators is also mentioned in [2, Appendix A] (in this case, the relevant Hurwitz numbers are given an equivalent combinatorial meaning in terms of bipartite non-orientable maps). It would be interesting to compare with the results of [16] for the Laguerre orthogonal ensemble (i.e. \(\beta=1\)) and the hyperoctahedral group._
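Before moving on, the limit (1.26) can be illustrated numerically in the simplest case \(n=1\), where the single \(\mathrm{J}\beta\mathrm{E}\) particle is Beta-distributed and the single \(\mathrm{L}\beta\mathrm{E}\) particle is Gamma-distributed, so both sides of (1.26) have closed forms; the sketch below assumes only the Python standard library.

```python
# Illustration of the Laguerre limit (1.26) for n = 1.
def jacobi_moment(k, beta, c, d):      # E[x^k] for x ~ Beta(beta*c/2, beta*d/2)
    a1, a2 = beta * c / 2, beta * d / 2
    out = 1.0
    for j in range(k):
        out *= (a1 + j) / (a1 + a2 + j)
    return out

def laguerre_moment(k, beta, c):       # E[y^k] for y ~ Gamma(beta*c/2, rate 1)
    a1 = beta * c / 2
    out = 1.0
    for j in range(k):
        out *= a1 + j
    return out

beta, c, k = 1.5, 2.0, 3
target = laguerre_moment(k, beta, c)
for d in (10.0, 100.0, 1000.0, 10000.0):
    print(d, (beta * d / 2) ** k * jacobi_moment(k, beta, c, d), target)   # converges to target
```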
### Colored monotone Hurwitz maps
We now give a combinatorial model for the above expansions, inspired by [2]. We will consider arbitrary rational weight generating functions \(G\), covering all cases considered above.
**Remark 1.7**.: _In principle, a combinatorial interpretation for \(b\)-Hurwitz numbers for arbitrary \(G\) is implicit from [3, Theorem 6.2] in terms of a weighted count of (non-orientable) generalized branched coverings of the sphere, but we prefer to give a (perhaps) more explicit description when \(G\) is rational following [2]._
The notion of monotone non-orientable Hurwitz map has been proposed in [2, Section 3]. We recall it here for the reader's convenience and refer to loc. cit. for more details.
**Definition 1.8** (Monotone Hurwitz maps [2]).: _A "monotone Hurwitz map" \(\Gamma\) is an embedding of a loopless multigraph on a compact surface4 with the following properties._
Footnote 4: Not necessarily orientable
* _The complement of the multigraph in the surface is homeomorphic to a disjoint union of disks, called "faces"._
* _The_ \(n\) _vertices of_ \(\Gamma\) _are labeled with the numbers in_ \(\{1,\ldots,n\}\) _and the_ \(r\) _edges of_ \(\Gamma\) _are labeled_ \(e_{1},\ldots,e_{r}\)_. For each_ \(e_{i}\) _we denote_ \(a_{i},b_{i}\) _the vertices connected by_ \(e_{i}\) _such that_ \(a_{i}<b_{i}\)_._
* _We have_ \(b_{1}\leq\cdots\leq b_{r}\)_._
* _A neighborhood of each vertex of_ \(\Gamma\) _is equipped with an orientation._
* _Each vertex of_ \(\Gamma\) _is equipped with a distinguished sector between consecutive half-edges, which is termed "active corner"._
* _For each_ \(i\in\{1,\ldots,r\}\)_, let_ \(\Gamma_{i}\) _be obtained from_ \(\Gamma\) _by removing the edges_ \(e_{i+1},\ldots,e_{r}\)5_. In the map_ \(\Gamma_{i}\) _the following conditions must be met:_ Footnote 5: Note that \(\Gamma_{i}\) might be embedded in a different compact surface.
* _the active corner at_ \(b_{i}\) _immediately follows_ \(e_{i}\)_;_
* _the active corner at_ \(a_{i}\) _is opposite (with respect to_ \(e_{i}\)_) to the active corner at_ \(b_{i}\)_;_
* _if the edge_ \(e_{i}\) _is disconnecting in_ \(\Gamma_{i}\)_, the local orientations at_ \(a_{i},b_{i}\) _are compatible, in the sense that they extend to an orientation of a neighborhood of_ \(e_{i}\)_._
_The "degree" of a face is the number of active corners in that face. The "profile" of a monotone Hurwitz map is the partition (of weight \(n\)) whose nonzero parts are the degrees of its faces._
See Figure 1 for an example. Note that the definition allows for maps containing connected components consisting of a single point embedded on the sphere.
**Remark 1.9**.: _This definition allows one to iteratively construct monotone Hurwitz maps by adding an extra vertex of maximum label and edges incident to it with increasing labels. This key property is used in the proof of [2, Proposition 3.2], and we will similarly employ it in the proof of Theorem 1.12 below. Moreover, it can be shown by this inductive construction that any face has at least one active corner. Therefore, a monotone Hurwitz map with \(r\) edges and profile \(\lambda\) has Euler characteristic_
\[\chi\,=\,|\lambda|-r+\ell(\lambda)\,. \tag{1.30}\]
One of the main constructions in [3] is the definition of a weight of monotone Hurwitz maps6 which is monomial in a variable \(b\). It is worth remarking that the \(b\)-weight of a map which is not orientable vanishes when \(b=0\).
Footnote 6: Actually, in loc. cit. the weight is defined for more general combinatorial objects, namely non-orientable constellations. The restriction to this case is done in [2].
Let us first introduce some terminology: an edge is "twisted" if the local orientations of the two vertices which it joins do not extend to an orientation of a neighborhood of the edge, otherwise we say it is "untwisted"; given an edge we can always twist or untwist it (thus obtaining a different map of course).
**Definition 1.10** (\(b\)-weight of monotone Hurwitz maps [2]).: _Let a monotone non-orientable Hurwitz map \(\Gamma\) be given. We iteratively remove either the vertex of maximum label (if it is isolated) or the edge of maximum label (which is necessarily incident to the vertex of maximum label). Doing so we also record a weight \(1\) or \(b\) each time we delete an edge, chosen as follows. Let \(e\) be the edge being deleted and let \(\Gamma^{\prime}\) be the map after deletion of \(e\). The weight is:_
* \(1\) _if_ \(e\) _joins two distinct connected components of_ \(\Gamma^{\prime}\)_;_
* \(1\) _if_ \(e\) _splits a face of_ \(\Gamma^{\prime}\) _into two;_
* \(b\) _if_ \(e\) _is a twisted diagonal in a face of_ \(\Gamma^{\prime}\)_, i.e., if_ \(e\) _is twisted and joins two corners in the same face;_
* \(1\) _if_ \(e\) _joins two distinct faces in the same component of_ \(\Gamma^{\prime}\) _and_ \(\Gamma^{\prime}\cup e\) _is orientable;_
* \(b\) _if_ \(e\) _joins two distinct faces in the same component of_ \(\Gamma^{\prime}\) _and_ \(\Gamma^{\prime}\cup\widetilde{e}\) _is orientable, where_ \(\widetilde{e}\) _is_ \(e\) _with the opposite twist;_
* \(1\) _or_ \(b\) _whenever no previous rules apply, with the caveat that during this procedure of deletions, whenever this situation occurs we always assign different weights to different twists of_ \(e\)_._7__ Footnote 7: The final weight is independent of the choice we make.
_The product of all weights recorded during this procedure (performed until the graph is empty) is the "\(b\)-weight" of \(\Gamma\), which is a power of \(b\) denoted \(b^{\nu(\Gamma)}\). For more details, see [2, 3]._
The last definition we need is the following one.
**Definition 1.11** (Colored monotone Hurwitz maps).: _Let \(L,M\) be non-negative integers. An "\((L|M)\)-coloring" of a monotone Hurwitz map \(\Gamma\) is a mapping \(c:\{1,\dots,r\}\to\{1,\dots,L+M\}\) such that:_
* _if_ \(1\leq i<j\leq r\) _and_ \(1\leq c(i)=c(j)\leq L\)_, then_ \(b_{i}<b_{j}\)_;_
* _if_ \(1\leq i<j\leq r\) _and_ \(b_{i}=b_{j}\)_, then_ \(c(i)\leq c(j)\)_._
_An "\((L|M)\)-colored monotone Hurwitz map" is a monotone Hurwitz map with an \((L|M)\)-coloring._
E.g., a \((0|1)\)-colored monotone Hurwitz map is just a monotone Hurwitz map. See Figure 1 for another example.
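The two conditions of Definition 1.11 are straightforward to test on explicit edge data; the following sketch assumes only the Python standard library, and the edge data at the end is a hypothetical toy example rather than the map of Figure 1.

```python
# Validity check for an (L|M)-coloring: b is the list of larger endpoints b_1 <= ... <= b_r
# (as in Definition 1.8) and colors[i] is c(i+1).
def is_LM_coloring(b, colors, L, M):
    r = len(b)
    assert len(colors) == r and all(1 <= col <= L + M for col in colors)
    for i in range(r):
        for j in range(i + 1, r):
            # first condition: a repeated color <= L forces strictly increasing larger endpoints
            if 1 <= colors[i] <= L and colors[i] == colors[j] and not b[i] < b[j]:
                return False
            # second condition: equal larger endpoints force weakly increasing colors
            if b[i] == b[j] and not colors[i] <= colors[j]:
                return False
    return True

b = [2, 3, 3, 4]                                     # hypothetical larger endpoints
print(is_LM_coloring(b, [1, 1, 2, 1], L=1, M=1))     # True
print(is_LM_coloring(b, [1, 2, 1, 1], L=1, M=1))     # False: b_2 = b_3 but the colors decrease
```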
**Theorem 1.12**.: _For a set of parameters \(u_{1},\dots,u_{L+M}\), let_
\[G(z)\,=\,\frac{\prod_{i=1}^{L}(1+u_{i}z)}{\prod_{i=1}^{M}(1-u_{L+i}z)}\,. \tag{1.31}\]
_Then_
\[H^{b}_{G}(\lambda;r)\,=\,\frac{1}{|\lambda|!}\sum_{(\Gamma,c)}\frac{b^{\nu( \Gamma)}}{(1+b)^{|\pi_{0}(\Gamma)|}}\,u_{c(1)}\cdots u_{c(r)} \tag{1.32}\]
_where \((\Gamma,c)\) runs in the set of \((L|M)\)-colored monotone Hurwitz maps with \(r\) edges and profile \(\lambda\) (hence, \(|\lambda|\) vertices). Moreover, \(b^{\nu(\Gamma)}\) is the \(b\)-weight of \(\Gamma\) (Definition 1.10) and \(\pi_{0}(\Gamma)\) the set of connected components of \(\Gamma\)._
The proof is contained in Section 3 and is a generalization of the argument in [2, Proposition 3.2].
**Remark 1.13**.: _We consider the redundant parameter \(\epsilon\) in (1.17) (instead of absorbing it into \(G\)) to make clear the large \(n\) expansion of correlators (1.20)-(1.21) and (1.27)-(1.28). Because of (1.30), these large \(n\) expansions can be interpreted as topological expansions (over not necessarily orientable geometries) after a global rescaling of correlators by a power of \(n\)._
We consider the reduction to the orientable case (\(b=0\)) in Section 3.1.
## 2 Proof of Theorem 1.4
In this section, we will denote
\[\widetilde{\mathrm{P}}^{(a)}_{\lambda}(x_{1},\dots,x_{n})\,=\,\mathrm{P}^{(a)}_{ \lambda}(\mathbf{p})\big{|}_{p_{k}=\sum_{i=1}^{n}x_{i}^{k}}\,. \tag{2.1}\]
A crucial ingredient in the proof is the following formula, which computes the expectation of Jack polynomials with respect to the Jacobi weight and which can be found in [22, Chapter VI.10].
**Theorem 2.1** (Generalized Selberg integral).: _For all \(\beta,c,d>0\) and all \(\lambda\in\mathscr{P}\), we have_
\[\frac{\mathcal{Z}^{\mathrm{J}\beta\mathrm{E}}_{\beta}}{n!}\int_{(0,1)^{n}}\widetilde{\mathrm{P}}^{(2/\beta)}_{\lambda}(\underline{x})\,w^{\mathrm{J}\beta\mathrm{E}}(\underline{x})\,\mathrm{d}x_{1}\cdots\mathrm{d}x_{n}\\ =\,\prod_{1\leq i<j\leq n}\frac{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i+1)\big{)}}{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i)\big{)}}\,\prod_{1\leq i\leq n}\frac{\Gamma\big{(}\lambda_{i}+\frac{\beta}{2}(c+n-i)\big{)}\,\Gamma\big{(}\frac{\beta}{2}(d+n-i)\big{)}}{\Gamma\big{(}\lambda_{i}+\frac{\beta}{2}(c+d+2n-i-1)\big{)}}\,. \tag{2.2}\]
When \(\lambda\) is the empty partition, i.e. \(\lambda=(0,0,\dots)\), we obtain the value for \(\mathcal{Z}^{\mathrm{J}\beta\mathrm{E}}_{\beta}\) claimed in (1.3).
To consider _negative_ correlators, a further property is needed. Let us introduce the notation
\[\underline{x}^{-1}=(x_{1}^{-1},\dots,x_{n}^{-1})\,. \tag{2.3}\]
**Lemma 2.2**.: _Let \(n\geq 0\) be an integer and let \(\lambda\in\mathscr{P}\) be such that \(\ell(\lambda)\leq n\). Define another partition \(\widehat{\lambda}\in\mathscr{P}\) by \(\widehat{\lambda}_{i}=\lambda_{1}-\lambda_{n-i+1}\). Then_
\[\widetilde{P}^{(a)}_{\lambda}(\underline{x}^{-1})\,=\,\big{(}x_{1}\cdots x_{n} \big{)}^{-\lambda_{1}}\,\widetilde{P}^{(a)}_{\widehat{\lambda}}(\underline{x }). \tag{2.4}\]
Proof.: It is known [26, Theorem 3.1] that \(\widetilde{P}^{(a)}_{\lambda}(\underline{x})\) are uniquely characterized as the eigenvectors of the Calogero-Sutherland operator
\[\mathscr{H}^{(a)}\,=\,\frac{a}{2}\,\sum_{i=1}^{n}D_{x_{i}}^{2}\,+\,\frac{1}{ 2}\,\sum_{1\leq i<j\leq n}\frac{x_{i}+x_{j}}{x_{i}-x_{j}}\,(D_{x_{i}}-D_{x_{j}} )\,,\qquad D_{z}\,=\,z\,\frac{\partial}{\partial z}, \tag{2.5}\]
also satisfying the triangularity and normalization properties mentioned above. Namely, they are uniquely characterized by
\[\mathscr{H}^{(a)}\,\,\widetilde{\mathrm{P}}^{(a)}_{\lambda}( \underline{x})\,=\,\mathcal{E}^{(a)}_{\lambda}\,\widetilde{\mathrm{P}}^{(a)}_{ \lambda}(\underline{x})\,,\qquad\mathcal{E}^{(a)}_{\lambda}\,=\,\sum_{i=1}^{n} \biggl{(}\frac{a}{2}\lambda_{i}^{2}+\frac{n+1-2i}{2}\biggr{)} \tag{2.6}\]
Figure 1: On the left, a monotone Hurwitz map with \(n=3\) vertices and \(r=4\) edges embedded on the Klein bottle. Active corners are indicated by red arrows and local orientation around vertices are indicated by blue arrows. It has weight \(b\) and profile (3). On the right, the same non-orientable labeled Hurwitz map depicted as a “ribbon graph”, which is a small open neighborhood of the graph in the surface. For this example, the mapping \(c(1)=1=c(3)\), \(c(2)=2=c(4)\) is simultaneously a \((2|0)\)-, \((1|1)\)-, or \((0|2)\)-coloring; the mapping \(c(1)=1=c(2)\), \(c(3)=2=c(4)\) is a \((0|2)\)-coloring.
and the fact that \(\widetilde{\mathrm{P}}^{(a)}_{\lambda}(\underline{x})\) is a linear combination of monomial symmetric functions of \(x_{1},\dots,x_{n}\) associated with partitions smaller than \(\lambda\) in the dominance relation \(\preceq_{\mathsf{d}}\).
It is clear that \(\widetilde{\mathrm{P}}^{(a)}_{\lambda}(\underline{x}^{-1})\left(x_{1}\cdots x_{ n}\right)^{\lambda_{1}}\) is a symmetric polynomial of \(x_{1},\dots,x_{n}\) satisfying the same triangularity and normalization conditions as \(\widetilde{P}^{(a)}_{\widehat{\lambda}}(\underline{x})\). Moreover, a direct computation shows that
\[\mathscr{H}^{(a)}\,\left(\widetilde{\mathrm{P}}^{(a)}_{\lambda}(\underline{x }^{-1})\left(x_{1}\cdots x_{n}\right)^{\lambda_{1}}\right)\;=\;\left(\mathcal{E }^{(a)}_{\lambda}+\frac{a}{2}n\lambda_{1}^{2}-a\lambda_{1}|\lambda|\right) \widetilde{\mathrm{P}}^{(a)}_{\lambda}(\underline{x}^{-1})\left(x_{1}\cdots x _{n}\right)^{\lambda_{1}}, \tag{2.7}\]
and it is straightforward to check that \(\mathcal{E}^{(a)}_{\lambda}+\frac{a}{2}n\lambda_{1}^{2}-a\lambda_{1}|\lambda|= \mathcal{E}^{(a)}_{\widehat{\lambda}}\). Hence, we conclude that \(\widetilde{\mathrm{P}}^{(a)}_{\lambda}(\underline{x}^{-1})\left(x_{1}\cdots x _{n}\right)^{\lambda_{1}}=\widetilde{P}^{(a)}_{\widehat{\lambda}}( \underline{x})\).
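Lemma 2.2 is easily checked symbolically in small cases; the sketch below assumes sympy and uses the explicit two-variable expressions \(\widetilde{\mathrm{P}}^{(a)}_{(2,1)}(x_{1},x_{2})=x_{1}x_{2}(x_{1}+x_{2})\) and \(\widetilde{\mathrm{P}}^{(a)}_{(1)}(x_{1},x_{2})=x_{1}+x_{2}\) (the former is independent of \(a\) in two variables, since \(m_{(1,1,1)}\) vanishes there).

```python
# Symbolic check of (2.4) for lambda = (2,1), n = 2, for which hat-lambda = (1).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
P_lam = x1 * x2 * (x1 + x2)                  # P_{(2,1)} restricted to two variables
P_hat = x1 + x2                              # P_{(1)}
lhs = P_lam.subs({x1: 1 / x1, x2: 1 / x2})   # P_lambda at the inverted variables
rhs = (x1 * x2) ** (-2) * P_hat              # (x_1 x_2)^{-lambda_1} P_{hat-lambda}
print(sp.simplify(lhs - rhs))                # 0
```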
**Corollary 2.3**.: _For all \(\beta,d>0\), \(\lambda\in\mathscr{P}\), and \(c>\frac{2}{\beta}\lambda_{1}\), we have_
\[\frac{\mathcal{Z}^{\mathrm{J}\beta\mathrm{E}}_{\beta}}{n!}\int_{(0,1)^{n}}\widetilde{\mathrm{P}}^{(2/\beta)}_{\lambda}(\underline{x}^{-1})\,w^{\mathrm{J}\beta\mathrm{E}}(\underline{x})\,\mathrm{d}x_{1}\cdots\mathrm{d}x_{n}\\ =\;\prod_{1\leq i<j\leq n}\frac{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i+1)\big{)}}{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i)\big{)}}\,\prod_{1\leq i\leq n}\frac{\Gamma\big{(}-\lambda_{i}+\frac{\beta}{2}(c+i-1)\big{)}\,\Gamma\big{(}\frac{\beta}{2}(d+n-i)\big{)}}{\Gamma\big{(}-\lambda_{i}+\frac{\beta}{2}(c+d+n+i-2)\big{)}}\,. \tag{2.8}\]
We consider an infinite sequence of variables \(\mathbf{t}=(t_{1},t_{2},\dots)\) and denote \(t_{\lambda}=t_{\lambda_{1}}\cdots t_{\lambda_{\ell(\lambda)}}\) for all \(\lambda\in\mathscr{P}\). Let us introduce the following formal generating functions of \(\mathrm{J}\beta\mathrm{E}\) correlators
\[\Upsilon^{\pm}_{\beta}(\mathbf{t};n) =\;\sum_{\lambda\in\mathscr{P}}\mathcal{C}^{\mathrm{J}\beta\mathrm{E}}_{\pm\lambda_{1},\dots,\pm\lambda_{\ell(\lambda)}}(n,\beta,c,d)\left(\frac{\beta}{2}\right)^{|\lambda|}\frac{t_{\lambda}}{\mathsf{z}_{\lambda}}\] \[=\;\int_{(0,1)^{n}}\exp\!\left(\frac{\beta}{2}\sum_{k\geq 1}\frac{t_{k}}{k}(x_{1}^{\pm k}+\cdots+x_{n}^{\pm k})\right)w^{\mathrm{J}\beta\mathrm{E}}(\underline{x})\,\mathrm{d}x_{1}\cdots\mathrm{d}x_{n}\,, \tag{2.9}\]
the last equality stemming from the well-known identity (cf. [22, Chapter I.2])
\[\exp\!\left(\sum_{k\geq 1}\frac{t_{k}}{k}(x_{1}^{k}+\cdots+x_{n}^{k})\right)\;=\;\sum_{\lambda\in\mathscr{P}}\frac{t_{\lambda}}{\mathsf{z}_{\lambda}}\!\left(\sum_{i=1}^{n}x_{i}^{\lambda_{1}}\right)\cdots\left(\sum_{i=1}^{n}x_{i}^{\lambda_{\ell(\lambda)}}\right). \tag{2.10}\]
Let us also recall the Cauchy identity for Jack polynomials [26, Formula (6)]
\[\exp\!\left(\frac{1}{a}\,\sum_{k\geq 1}\frac{t_{k}}{k}(x_{1}^{k}+\cdots+x_{n}^{ k})\right)\;=\;\sum_{\lambda\in\mathscr{P},\ \ell(\lambda)\leq n}\frac{\mathsf{h}_{a}(\lambda)}{\mathsf{h}^{\prime}_{a}( \lambda)}\,\mathrm{P}^{(a)}_{\lambda}(\mathbf{t})\,\widetilde{\mathrm{P}}^{(a )}_{\lambda}(x_{1},\dots,x_{n})\,. \tag{2.11}\]
**Lemma 2.4**.: _We have_
\[\Upsilon^{\pm}_{\beta}(\mathbf{t};n)\;=\;\sum_{\lambda\in\mathscr{P},\ \ell(\lambda)\leq n}\frac{1}{\mathsf{h}^{\prime}_{2/\beta}(\lambda)}\,f^{\pm}_ {\lambda}\,\mathrm{P}^{(2/\beta)}_{\lambda}(\mathbf{t})\,, \tag{2.12}\]
_with coefficients_
\[f^{+}_{\lambda} =\;\prod_{\square\in\mathsf{D}(\lambda)}\frac{\big{(}n+\mathsf{c }_{2/\beta}(\square)\big{)}\big{(}c-1+n+\mathsf{c}_{2/\beta}(\square)\big{)}}{c +d-2+2n+\mathsf{c}_{2/\beta}(\square)}\,, \tag{2.13}\] \[f^{-}_{\lambda} =\;\prod_{\square\in\mathsf{D}(\lambda)}\frac{\big{(}n+\mathsf{c }_{2/\beta}(\square)\big{)}\big{(}c+d+n-1-\frac{2}{\beta}-\mathsf{c}_{2/\beta}( \square)\big{)}}{c-\frac{2}{\beta}-\mathsf{c}_{2/\beta}(\square)}\,. \tag{2.14}\]
Proof.: By Theorem 2.1 and (2.11) we infer that (2.12) holds, in the \(+\) case, with
\[f^{+}_{\lambda}\;=\;\frac{n!\,\mathsf{h}_{2/\beta}(\lambda)}{\mathcal{Z}^{\mathrm{J}\beta\mathrm{E}}_{\beta}}\,\prod_{1\leq i<j\leq n}\frac{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i+1)\big{)}}{\Gamma\big{(}\lambda_{i}-\lambda_{j}+\frac{\beta}{2}(j-i)\big{)}}\,\prod_{i=1}^{n}\frac{\Gamma\big{(}\lambda_{i}+\frac{\beta}{2}(c+n-i)\big{)}\,\Gamma\big{(}\frac{\beta}{2}(d+n-i)\big{)}}{\Gamma\big{(}\lambda_{i}+\frac{\beta}{2}(c+d+2n-i-1)\big{)}}\,. \tag{2.15}\]
Recall the Pochhammer symbol
\[(z)_{m}\,=\,\frac{\Gamma(m+z)}{\Gamma(z)}\,=\,\prod_{j=1}^{m}(z+j-1)\,, \tag{2.16}\]
for non-negative integer \(m\). We use (1.3) to rewrite
\[f_{\lambda}^{+}\,=\,\mathsf{h}_{2/\beta}(\lambda)\,\prod_{1\leq i<j\leq n}\frac {\big{(}\frac{\beta}{2}(j-i+1)\big{)}_{\lambda_{i}-\lambda_{j}}}{\big{(}\frac{ \beta}{2}(j-i)\big{)}_{\lambda_{i}-\lambda_{j}}}\,\prod_{1\leq i\leq n}\frac{ \big{(}\frac{\beta}{2}(c+n-i)\big{)}_{\lambda_{i}}}{\big{(}\frac{\beta}{2}(c+d+ 2n-i-1)\big{)}_{\lambda_{i}}}\,. \tag{2.17}\]
We reason separately for the two products. The second one is
\[\prod_{1\leq i\leq n}\frac{\big{(}\frac{\beta}{2}(c+n-i)\big{)}_{ \lambda_{i}}}{\big{(}\frac{\beta}{2}(c+d+2n-i-1)\big{)}_{\lambda_{i}}} \,=\,\prod_{1\leq i\leq(\lambda)}\prod_{1\leq j\leq\lambda_{i}} \frac{\frac{\beta}{2}(c+n-i)+j-1}{\frac{\beta}{2}(c+d+2n-i-1)+j-1}\] \[\,=\,\prod_{\square\in\mathsf{D}(\lambda)}\frac{c-1+n+\mathsf{c}_ {2/\beta}(\square)}{c+d-2+2n+\mathsf{c}_{2/\beta}(\square)}\,. \tag{2.18}\]
About the first one instead, we claim that due to many cancellations we have the identity8,
Footnote 8: Cf. [9, Theorem 5.17.1] for the well-known case \(\beta=2\), where \(\dim\lambda\) is computed in terms of the hook-length formula from the Frobenius determinant formula.
\[\prod_{1\leq i<j\leq n}\frac{\big{(}\frac{\beta}{2}(j-i+1)\big{)}_{\lambda_{i }-\lambda_{j}}}{\big{(}\frac{\beta}{2}(j-i)\big{)}_{\lambda_{i}-\lambda_{j}} }\,=\,\frac{\prod_{\square\in\mathsf{D}(\lambda)}\!\big{(}n+\mathsf{c}_{2/ \beta}(\square)\big{)}}{\mathsf{h}_{2/\beta}(\lambda)}\,, \tag{2.19}\]
which would complete the proof for \(f_{\lambda}^{+}\). In the interest of clarity, this claim is proved below, see Appendix A.
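The claim (2.19) can also be tested numerically on explicit data; the sketch below assumes only the Python standard library and evaluates both sides for a sample choice of \(\lambda\), \(n\), and \(\beta\).

```python
# Numerical check of the cancellation identity (2.19).
def pochhammer(z, m):
    out = 1.0
    for j in range(m):
        out *= z + j
    return out

def lhs_219(lam, n, beta):
    lam = list(lam) + [0] * (n - len(lam))            # pad with zeros up to length n
    out = 1.0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            m = lam[i - 1] - lam[j - 1]
            out *= pochhammer(beta / 2 * (j - i + 1), m) / pochhammer(beta / 2 * (j - i), m)
    return out

def rhs_219(lam, n, beta):
    a = 2 / beta
    lam = [p for p in lam if p > 0]
    lamc = [sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)]
    num = hook = 1.0
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            num *= n + a * (j - 1) - (i - 1)                        # n + c_{2/beta}(square)
            hook *= a * (lam[i - 1] - j) + (lamc[j - 1] - i) + 1    # factor of h_{2/beta}(lambda)
    return num / hook

lam, n, beta = (3, 1), 4, 1.7
print(lhs_219(lam, n, beta), rhs_219(lam, n, beta))   # the two printed values agree
```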
Similarly, thanks to Corollary 2.3 and (2.11) we infer that (2.12) holds, in the \(-\) case, with
\[f_{\lambda}^{-} \,=\,\frac{n!\,\mathsf{h}_{2/\beta}(\lambda)}{\mathcal{Z}_{\beta }^{\,\,\beta\mathrm{JE}}}\prod_{1\leq i<j\leq n}\frac{\Gamma\big{(}\lambda_{i}- \lambda_{j}+\frac{\beta}{2}(j-i+1)\big{)}}{\Gamma\big{(}\lambda_{i}-\lambda_{j }+\frac{\beta}{2}(j-i)\big{)}}\prod_{i=1}^{n}\frac{\Gamma\big{(}-\lambda_{i}+ \frac{\beta}{2}(c+i-1)\big{)}\,\Gamma\big{(}\frac{\beta}{2}(d+n-i)\big{)}}{ \Gamma\big{(}-\lambda_{i}+\frac{\beta}{2}(c+d+n+i-2)\big{)}}\] \[\,=\,\mathsf{h}_{2/\beta}(\lambda)\prod_{1\leq i<j\leq n}\frac{ \big{(}\frac{\beta}{2}(j-i+1)\big{)}_{\lambda_{i}-\lambda_{j}}}{\big{(}\frac{ \beta}{2}(j-i)\big{)}_{\lambda_{i}-\lambda_{j}}}\,\prod_{1\leq i\leq n}\frac{ \big{(}\frac{\beta}{2}(c+d+n+i-2)-\lambda_{i}\big{)}_{\lambda_{i}}}{\big{(} \frac{\beta}{2}(c+i-1)-\lambda_{i}\big{)}_{\lambda_{i}}}\,. \tag{2.20}\]
The first part of this expression is again (2.19), whereas the last part can be rewritten similarly as (2.18):
\[\prod_{1\leq i\leq n}\frac{\big{(}\frac{\beta}{2}(c+d+n+i-2)- \lambda_{i}\big{)}_{\lambda_{i}}}{\big{(}\frac{\beta}{2}(c+i-1)-\lambda_{i} \big{)}_{\lambda_{i}}} \,=\,\prod_{1\leq i\leq\ell(\lambda)}\prod_{1\leq j\leq\lambda_{i}} \frac{\frac{\beta}{2}(c+d+n+i-2)-j}{\frac{\beta}{2}(c+i-1)-j}\] \[\,=\,\prod_{\square\in\mathsf{D}(\lambda)}\frac{c+d+n-1-\frac{2}{ \beta}-\mathsf{c}_{2/\beta}(\square)}{c-\frac{2}{\beta}-\mathsf{c}_{2/\beta}( \square)}\,. \tag{2.21}\]
and the proof is complete.
Proof of Theorem 1.4.: By Lemma 2.4 and (1.17) we obtain the following identities of generating functions:
\[\Upsilon_{\beta}^{+}(\mathbf{t};n)\bigg{|}_{c=n(\gamma-1)+1,\,d=n(\delta-1)+1} \,=\,\tau_{G_{+}^{\mathrm{J}}}^{b}\big{(}\frac{1}{n};\mathbf{p}\big{)}\bigg{|}_{p_{k}=\frac{\gamma n}{\gamma+\delta}t_{k}}\,, \tag{2.22}\] \[\Upsilon_{\beta}^{-}(\mathbf{t};n)\bigg{|}_{c=n\gamma+\frac{2}{\beta},\,d=n(\delta-1)+1} \,=\,\tau_{G_{-}^{\mathrm{J}}}^{b}\big{(}\frac{1}{n};\mathbf{p}\big{)}\bigg{|}_{p_{k}=\frac{(\gamma+\delta)n}{\gamma}t_{k}}\,, \tag{2.23}\]
where \(G_{\pm}^{\mathrm{J}}\) are given in (1.22). (Note that the restriction \(\ell(\lambda)\leq n\) in the sum in (2.12) can be lifted because \(f_{\lambda}^{\pm}\) are automatically zero when \(\ell(\lambda)>n\).) To complete the proof it is enough to compare the expansions (1.16) and (2.9).
## 3 Colored monotone Hurwitz maps
**Proposition 3.1**.: _Let \(g_{1},\ldots,g_{L+M}\) be formal parameters and let_
\[\mathcal{T}(t,\mathbf{p};b)\;=\;\sum_{\lambda\in\mathscr{P}}\,\frac{t^{|\lambda|} }{\mathsf{h}_{b+1}^{\prime}(\lambda)}\,\mathrm{P}_{\lambda}^{(b+1)}(\mathbf{p} )\prod_{\square\in\mathsf{D}(\lambda)}G\big{(}\mathsf{c}_{b+1}(\square)\big{)} \,,\qquad G(z)\;=\;\frac{\prod_{i=1}^{L}(1+g_{i}z)}{\prod_{i=1}^{M}(1-g_{L+i}z )}\,. \tag{3.1}\]
_We have_
\[\mathcal{T}(t,\mathbf{p};b)\;=\;\sum_{\lambda\in\mathscr{P}}\frac{t^{|\lambda| }}{|\lambda|!}\,p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu(\Gamma)}}{(1+b)^{| \pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.2}\]
_where \((\Gamma,c)\) runs in the set of \((L|M)\)-colored monotone Hurwitz maps with profile \(\lambda\)._
Proof.: The argument is similar to the one in the proof of [2, Proposition 3.2], to which we refer for more details. In this proof we write "CMHM" in place of "\((L|M)\)-colored monotone Hurwitz map".
The key property is that a CMHM with \(n\) vertices can always be obtained, in a unique way, from a CMHM with \(n-1\) vertices by first adding an extra isolated vertex of label \(n\), then adding edges incident to \(n\) of the various colors \(\{1,\ldots,L+M\}\); the second property in the definition of coloring is ensured by adding edges with increasing label and (weakly) increasing color, and the first property by adding at most one edge of each color in \(\{1,\ldots,L\}\). We have to track this process at the level of generating functions.
Introduce the generating function, for all \(n\geq 0\),
\[\mathcal{T}^{[n]}(\mathbf{p};b)\;=\;\sum_{\begin{subarray}{c}\lambda\in \mathscr{P}\\ |\lambda|=n\end{subarray}}p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu(\Gamma)}}{ (1+b)^{|\pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.3}\]
where \((\Gamma,c)\) runs in the set of CMHM with profile \(\lambda\) (hence, with \(n\) vertices).
We have
\[\frac{z_{1}}{1+b}\mathcal{T}^{[n-1]}(\mathbf{p};b)\;=\;\sum_{\begin{subarray}{ c}\lambda\in\mathscr{P}\\ |\lambda|=n-1\end{subarray}}z_{1}\,p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu( \Gamma)}}{(1+b)^{|\pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.4}\]
where \((\Gamma,c)\) runs in the set of CMHM with \(n\) vertices, where the \(n\)th vertex is isolated and whose profile is \(\lambda\sqcup\{1\}\), the extra \(1\) being the degree of the face containing the \(n\)th vertex.
We next attach either \(0\) or \(1\) edges with color \(1\) incident to the \(n\)th vertex: the argument in [2, Proposition 3.2] shows that9
Footnote 9: We call \(y_{i}=z_{i+1}\) with respect to the notation of loc. cit.
\[(1+g_{1}\Lambda)\frac{z_{1}}{1+b}\mathcal{T}^{[n-1]}(\mathbf{p};b)\;=\;\sum_{ j\geq 1}\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}\\ |\lambda|=n-j\end{subarray}}z_{j}\,p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu( \Gamma)}}{(1+b)^{|\pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.5}\]
where now \((\Gamma,c)\) runs in the set of CMHM with \(n\) vertices, where edges incident to the \(n\)th vertex only have color \(1\), and whose profile is \(\lambda\sqcup\{j\}\), \(j\) being the degree of the face containing the active corner at \(n\). Indeed, it is proven in loc. cit. that the action of adding an edge incident to the last vertex \(n\) to a Hurwitz maps and counting the degree of the face containing the active corner at \(n\) with variables \(z_{i}\) instead of \(p_{i}\) corresponds to the action of the operator \(\Lambda\) given by
\[\Lambda\;=\;(1+b)\sum_{i,j\geq 1}i\,z_{i+j}\,\frac{\partial^{2}}{\partial p_{i} \,\partial z_{j}}\,+\,\sum_{i,j\geq 1}z_{i}\,p_{j}\,\frac{\partial}{ \partial z_{i+j}}\,+\,b\sum_{i\geq 2}(i-1)\,z_{i}\,\frac{\partial}{ \partial z_{i}}\,. \tag{3.6}\]
By iterating this argument \(L\) times, we obtain
\[(1+g_{L}\Lambda)\cdots(1+g_{1}\Lambda)\frac{z_{1}}{1+b}\mathcal{T}^{[n-1]}( \mathbf{p};b)\;=\;\sum_{j\geq 1}\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}\\ |\lambda|=n-j\end{subarray}}z_{j}\,p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu( \Gamma)}}{(1+b)^{|\pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.7}\]
where now \((\Gamma,c)\) runs in the set of CMHM with \(n\) vertices, where edges incident to the \(n\)th vertex have colors \(\leq L\) only, and whose profile is \(\lambda\sqcup\{j\}\), \(j\) being the degree of the face containing the active corner at \(n\).
Next, we attach edges incident to the \(n\)th vertex of color \(L+1\); by the same argument as before, taking into account that we can now add as many such edges as we wish, we get
\[\sum_{p=0}^{+\infty}(g_{L+1}\Lambda)^{p}\,(1+g_{L}\Lambda)\,\cdots\,(1+g_{1}\Lambda)\,\frac{z_{1}}{1+b}\,\mathcal{T}^{[n-1]}(\mathbf{p};b)\] \[\;=\;\frac{1}{1-g_{L+1}\Lambda}\biggl{(}\prod_{i=1}^{L}(1+g_{i}\Lambda)\biggr{)}\,\frac{z_{1}}{1+b}\mathcal{T}^{[n-1]}(\mathbf{p};b)\;=\;\sum_{j\geq 1}\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}\\ |\lambda|=n-j\end{subarray}}z_{j}\,p_{\lambda}\sum_{(\Gamma,c)}\frac{b^{\nu(\Gamma)}}{(1+b)^{|\pi_{0}(\Gamma)|}}\,\prod_{i=1}^{L+M}g_{i}^{|c^{-1}(i)|} \tag{3.8}\]
where now \((\Gamma,c)\) runs in the set of CMHM with \(n\) vertices, where edges incident to the \(n\)th vertex only have colors \(\leq L+1\), and whose profile is \(\lambda\sqcup\{j\}\), \(j\) being the degree of the face containing the active corner at \(n\).
Finally, iterating this step \(M\)-times (and restoring the variables \(p_{i}\) for the degree of the face containing active corner at the \(n\)th vertex) we get
\[\mathcal{T}^{[n]}(\mathbf{p};b)\;=\;G(\Lambda)\,\frac{z_{1}}{1+b}\,\mathcal{T }^{[n-1]}(\mathbf{p};b)\,\bigg{|}_{z_{i}=p_{i}}\,. \tag{3.9}\]
On the other hand, we know from identity [3, eq. (62)] that
\[\frac{\partial}{\partial t}\mathcal{T}(t,\mathbf{p};b)\;=\;G(\Lambda)\,\frac{ z_{1}}{1+b}\,\mathcal{T}(t,\mathbf{p};b)\,\bigg{|}_{z_{i}=p_{i}}\,, \tag{3.10}\]
and thus we can show (inductively in the powers of \(t\)) that \(\mathcal{T}(t,\mathbf{p};b)=\sum_{n\geq 0}\frac{t^{n}}{n!}\mathcal{T}^{[n]}( \mathbf{p};b)\).
Proof of Theorem 1.12.: Note that, by the definition (1.16), when \(G(z)=\frac{\prod_{i=1}^{L}(1+u_{i}z)}{\prod_{i=1}^{M}(1-u_{L+i}z)}\), \(H_{G}^{b}(\lambda;r)\) is the coefficient of \(p_{\lambda}\,\epsilon^{r}\) in
\[\mathcal{T}(t,\mathbf{p};b)\,\bigg{|}_{t=1,\,g_{i}=eu_{i}}\,, \tag{3.11}\]
where \(\mathcal{T}(t,\mathbf{p};b)\) is given in (3.1). The proof then follows from Proposition 3.1.
### Comparison with the orientable case
Let \(S_{n}\) be the group of permutations of \(\{1,\ldots,n\}\) and let, for all \(\lambda\in\mathscr{P}\) with \(|\lambda|=n\), \(\operatorname{Cyc}(\lambda)\subseteq S_{n}\) be the conjugacy class of permutations whose disjoint cycles have lengths equal to the nonzero parts of \(\lambda\). Let \(\mathbb{C}[S_{n}]\) be the group algebra of the symmetric group \(S_{n}\), and \(\mathcal{J}_{i}\in\mathbb{C}[S_{n}]\) for \(i=1,\ldots,n\), be the Young-Jucys-Murphy elements, namely
\[\mathcal{J}_{1}=0,\;\mathcal{J}_{2}=(1,2),\;\ldots,\;\mathcal{J}_{n}=(1,n)+( 2,n)+\cdots+(n-1,n)\,. \tag{3.12}\]
(We denote with \((a,b)\in S_{n}\) the transposition switching \(a,b\).) Let us recall that, although the \(\mathcal{J}_{i}\)'s are not central elements of \(\mathbb{C}[S_{n}]\), they commute among themselves. Moreover, any symmetric polynomial in \(n\) variables evaluated at \(\mathcal{J}_{1},\ldots,\mathcal{J}_{n}\) is central, hence it can be expanded (uniquely) as a linear combination of
\[\mathcal{C}_{\lambda}\,=\sum_{\sigma\in\operatorname{Cyc}(\lambda)}\sigma \tag{3.13}\]
for \(\lambda\in\mathscr{P}\) with \(|\lambda|=n\).
It is well-known (e.g., cf. [12, Sections 2.1-2.2]) that \(H_{G}^{b=0}(\lambda;r)\) is the coefficient of \(\epsilon^{r}\mathcal{C}_{\lambda}\) in
\[\frac{1}{z_{\lambda}}\prod_{i=1}^{n}G(\epsilon\mathcal{J}_{i})\,, \tag{3.14}\]
provided that this expression is expanded as a linear combination of the \(\mathcal{C}_{\lambda}\)'s (times powers of \(\epsilon\)).
**Definition 3.2** (Colored monotone factorizations).: _Let \(\lambda\in\mathscr{P}\), with weight \(|\lambda|=n\). A "monotone factorization of length \(r\) and type \(\lambda\)" is an ordered collection of transpositions \((\pi_{1},\dots,\pi_{r})\) in \(S_{n}\) such that \(\pi_{1}\cdots\pi_{r}\in\operatorname{Cyc}(\lambda)\) and that, writing \(\pi_{i}=(a_{i},b_{i})\) with \(a_{i}<b_{i}\), we have \(b_{i}\leq b_{i+1}\) for \(i=1,\dots,r-1\)._
_Let \(L,M\) be non-negative integers. An "\((L|M)\)-coloring" of a monotone factorization of length \(r\) and type \(\lambda\) is a mapping \(c:\{1,\dots,r\}\to\{1,\dots,L+M\}\) such that_
* _if_ \(1\leq i<j\leq r\) _and_ \(1\leq c(i)=c(j)\leq L\)_, then_ \(b_{i}<b_{j}\)_;_
* _if_ \(1\leq i<j\leq r\) _and_ \(b_{i}=b_{j}\)_, then_ \(c(i)\leq c(j)\)_._
_An "\((L|M)\)-colored monotone factorization of length \(r\) and type \(\lambda\)" is a monotone factorization of length \(r\) and type \(\lambda\) with an \((L|M)\)-coloring._
**Proposition 3.3**.: _Let \(G(z)=\frac{\prod_{i=1}^{L}(1+u_{i}z)}{\prod_{i=1}^{M}(1-u_{L+i}z)}\), for a set of parameters \(u_{1},\dots,u_{L+M}\). For all \(\lambda\in\mathscr{P}\) we have_
\[H_{G}^{b=0}(\lambda;r)\;=\;\frac{1}{|\lambda|!}\sum_{(\Pi,c)}u_{c(1)}\cdots u_ {c(r)} \tag{3.15}\]
_where the sum on \((\Pi,c)\) runs over the set of \((L|M)\)-colored monotone factorizations of length \(r\) and type \(\lambda\)._
Proof.: Let us expand the product in (3.14) as follows:
\[\prod_{i=1}^{n}\left((1+\epsilon u_{1}\mathcal{J}_{i})\cdots(1+\epsilon u_{L}\mathcal{J}_{i})\biggl{(}\sum_{p\geq 0}(\epsilon u_{L+1}\mathcal{J}_{i})^{p}\biggr{)}\cdots\biggl{(}\sum_{p\geq 0}(\epsilon u_{L+M}\mathcal{J}_{i})^{p}\biggr{)}\right). \tag{3.16}\]
The coefficient of \(\epsilon^{r}\) in this expression is
\[\sum_{s_{1}\leq\cdots\leq s_{r}}\mathcal{J}_{s_{1}}\cdots\mathcal{J}_{s_{r}} \sum_{c}u_{c(1)}\cdots u_{c(r)} \tag{3.17}\]
where the inner sum runs over mappings \(c:\{1,\dots,r\}\to\{1,\dots,L+M\}\) satisfying the following constraints:
* if \(1\leq i<j\leq r\) and \(1\leq c(i)=c(j)\leq L\), then \(s_{i}<s_{j}\);
* if \(1\leq i<j\leq r\) and \(s_{i}=s_{j}\), then \(c(i)\leq c(j)\).
The proof is now immediate by (3.12).
**Remark 3.4**.: _This combinatorial model is different from the one given, e.g., in [12, Section 2.5]._
It can be shown (cf. [2, Remark 3]) that an _orientable_ monotone Hurwitz map with profile \(\lambda\) is uniquely characterized by a monotone factorization of type \(\lambda\), and the notions of coloring are exactly parallel in the two settings (Hurwitz maps or factorizations).
In particular, Theorems 1.4 and 1.5 reduce, when \(\beta=2\), to the known combinatorial interpretation of the large \(n\) expansions of (positive and negative) correlators of the Jacobi and Laguerre unitary ensembles in terms of monotone factorizations in the symmetric group [4, 12] (although in this slightly different formulation in terms of colored monotone factorizations).
## Acknowledgments
I am grateful to Dan Betea, Massimo Gisonni, and Tamara Grava for valuable conversations. This work is supported by the FCT grant 2022.07810.CEECIND.
## Appendix A Proof of (2.19)
We wish to show that for any \(\lambda\in\mathscr{P}\) with \(\ell(\lambda)\leq n\), we have
\[\prod_{i=1}^{n-1}\prod_{j=i+1}^{n}\frac{\big{(}\tfrac{\beta}{2}(j-i+1)\big{)}_{ \lambda_{i}-\lambda_{j}}}{\big{(}\tfrac{\beta}{2}(j-i)\big{)}_{\lambda_{i}- \lambda_{j}}}\,=\,\prod_{\square\in\mathsf{D}(\lambda)}\frac{n+\mathsf{c}_{2/ \beta}(\square)}{\tfrac{2}{\beta}\mathsf{arm}(\square)+\mathsf{leg}(\square)+1}\,.\] (A.1)
We first consider the part of the product in the left-hand side of (A.1) with \(i=1\), which is
\[\prod_{j=2}^{n}\frac{\big{(}\tfrac{\beta}{2}j\big{)}_{\lambda_{1}-\lambda_{j}} }{\big{(}\tfrac{\beta}{2}(j-1)\big{)}_{\lambda_{1}-\lambda_{j}}}\,=\,\frac{ \big{(}\tfrac{\beta}{2}n\big{)}_{\lambda_{1}-\lambda_{n}}}{\big{(}\tfrac{\beta }{2}\big{)}_{\lambda_{1}-\lambda_{2}}}\,\prod_{j=2}^{n-1}\frac{\big{(}\tfrac{ \beta}{2}j\big{)}_{\lambda_{1}-\lambda_{j}}}{\big{(}\tfrac{\beta}{2}j\big{)}_ {\lambda_{1}-\lambda_{j+1}}}\,=\,\frac{\big{(}\tfrac{\beta}{2}n\big{)}_{ \lambda_{1}-\lambda_{n}}}{\prod_{j=1}^{n-1}\big{(}\tfrac{\beta}{2}j+\lambda_{ 1}-\lambda_{j}\big{)}_{\lambda_{j}-\lambda_{j+1}}}\,.\] (A.2)
Let \(\mathsf{D}_{1}(\lambda)=\{(1,1),(1,2),\dots,(1,\lambda_{1})\}\) be the first _row_ of the diagram of \(\lambda\). The numerator of the last expression is equal to
\[\frac{\prod_{\square\in\mathsf{D}_{1}(\lambda)}\big{(}\tfrac{\beta}{2}n+\tfrac{\beta}{2}\mathsf{c}_{2/\beta}(\square)\big{)}}{\prod_{f=\lambda_{1}-\lambda_{n}+1}^{\lambda_{1}}\big{(}\tfrac{\beta}{2}n+f-1\big{)}}\,,\] (A.3)
(note that \(\mathsf{c}_{2/\beta}(\square)=\tfrac{2}{\beta}(j-1)\) for \(\square=(1,j)\in\mathsf{D}_{1}(\lambda)\)). On the other hand, we expand the denominator
\[\prod_{j=1}^{n-1}\big{(}\tfrac{\beta}{2}j\!+\!\lambda_{1}\!-\!\lambda_{j}\big{)}_{\lambda_{j}-\lambda_{j+1}}=\bigg{(}\!\!\prod_{k=0}^{\lambda_{1}-\lambda_{2}-1}\!\big{(}\tfrac{\beta}{2}\!+\!k\big{)}\bigg{)}\!\bigg{(}\!\!\prod_{k=\lambda_{1}-\lambda_{2}}^{\lambda_{1}-\lambda_{3}-1}\!\big{(}\beta\!+\!k\big{)}\bigg{)}\cdots\bigg{(}\!\!\prod_{k=\lambda_{1}-\lambda_{n-1}}^{\lambda_{1}-\lambda_{n}-1}\big{(}\tfrac{\beta}{2}(n\!-\!1)\!+\!k\big{)}\bigg{)}\] (A.4)
and we quickly realize that the factors in the first product (on the right-hand side) correspond to boxes \(\square\in\mathsf{D}_{1}(\lambda)\) with \(\mathsf{leg}(\square)=0\), those in the second one to boxes \(\square\in\mathsf{D}_{1}(\lambda)\) with \(\mathsf{leg}(\square)=1\), and so on until the last group of factors, corresponding to boxes \(\square\in\mathsf{D}_{1}(\lambda)\) with \(\mathsf{leg}(\square)=n-2\), such that this product equals
\[\frac{\prod_{\square\in\mathsf{D}_{1}(\lambda)}\big{(}\tfrac{\beta}{2}(\mathsf{ leg}(\square)+1)+\mathsf{arm}(\square)\big{)}}{\prod_{\square\in\mathsf{D}_{1}( \lambda),\,\,\mathsf{leg}(\square)=n-1}\big{(}\tfrac{\beta}{2}(\mathsf{leg}( \square)+1)+\mathsf{arm}(\square)\big{)}}=\frac{\prod_{\square\in\mathsf{D}_{1}( \lambda)}\big{(}\tfrac{\beta}{2}(\mathsf{leg}(\square)+1)+\mathsf{arm}(\square) \big{)}}{\prod_{f=\lambda_{1}-\lambda_{n}+1}^{\lambda_{1}}\big{(}\tfrac{\beta }{2}n+f-1\big{)}}\,.\] (A.5)
Taking the ratio of (A.3) by (A.5) we get
\[\prod_{j=2}^{n}\frac{\big{(}\tfrac{\beta}{2}j\big{)}_{\lambda_{1}-\lambda_{j}} }{\big{(}\tfrac{\beta}{2}(j-1)\big{)}_{\lambda_{1}-\lambda_{j}}}\,=\,\prod_{ \square\in\mathsf{D}_{1}(\lambda)}\frac{n+\mathsf{c}_{2/\beta}(\square)}{ \tfrac{2}{\beta}\mathsf{arm}(\square)+\mathsf{leg}(\square)+1}\,.\] (A.6)
Thus, (A.1) follows by induction on the length of the partition, as (A.6) implies that the statement for a partition \((\lambda_{1},\lambda_{2},\dots)\) can be reduced to the same statement for the partition \((\lambda_{2},\lambda_{3},\dots)\). \(\blacksquare\)
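As a numerical sanity check of (A.1), the following short Python sketch (ours; the random test cases are arbitrary) compares both sides for a few random partitions with \(\ell(\lambda)\leq n\) and random \(\beta>0\), under the standard arm, leg, and \(\alpha\)-content conventions, which are consistent with the first-row contents noted above.

```python
import random
from math import prod

def pochhammer(x, k):
    return prod(x + j for j in range(k))

def arm(lam, i, j):                  # box (i, j), 1-indexed: i = row, j = column
    return lam[i - 1] - j

def leg(lam, i, j):                  # boxes strictly below (i, j) in column j
    return sum(1 for r in range(i, len(lam)) if lam[r] >= j)

def content(lam, i, j, alpha):       # alpha-content: c_alpha(box) = alpha*(j-1) - (i-1)
    return alpha * (j - 1) - (i - 1)

def lhs(lam, n, beta):
    parts = list(lam) + [0] * (n - len(lam))
    val = 1.0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            d = parts[i - 1] - parts[j - 1]
            val *= pochhammer(beta / 2 * (j - i + 1), d) / pochhammer(beta / 2 * (j - i), d)
    return val

def rhs(lam, n, beta):
    alpha = 2 / beta
    val = 1.0
    for i in range(1, len(lam) + 1):
        for j in range(1, lam[i - 1] + 1):
            val *= (n + content(lam, i, j, alpha)) / (alpha * arm(lam, i, j) + leg(lam, i, j) + 1)
    return val

random.seed(0)
for _ in range(5):
    n = random.randint(2, 5)
    lam = [p for p in sorted((random.randint(0, 4) for _ in range(n)), reverse=True) if p > 0]
    beta = random.uniform(0.5, 3.0)
    print(lam, n, round(lhs(lam, n, beta), 8), round(rhs(lam, n, beta), 8))
```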
|
2303.13485 | Timely Multi-Process Estimation Over Erasure Channels With and Without
Feedback: Signal-Independent Policies | We consider a multi-process remote estimation system observing $K$
independent Ornstein-Uhlenbeck processes. In this system, a shared sensor
samples the $K$ processes in such a way that the long-term average sum mean
square error (MSE) is minimized using signal-independent sampling policies, in
which sampling instances are chosen independently from the processes' values.
The sensor operates under a total sampling frequency constraint $f_{\max}$. The
samples from all processes consume random processing delays in a shared queue
and then are transmitted over an erasure channel with probability $\epsilon$.
We study two variants of the problem: first, when the samples are scheduled
according to a Maximum-Age-First (MAF) policy, and the receiver provides an
erasure status feedback; and second, when samples are scheduled according to a
Round-Robin (RR) policy, when there is no erasure status feedback from the
receiver. Aided by optimal structural results, we show that the optimal
sampling policy for both settings, under some conditions, is a \emph{threshold
policy}. We characterize the optimal threshold and the corresponding optimal
long-term average sum MSE as a function of $K$, $f_{\max}$, $\epsilon$, and the
statistical properties of the observed processes. Our results show that, with
an exponentially distributed service rate, the optimal threshold $\tau^*$
increases as the number of processes $K$ increases, for both settings.
Additionally, we show that the optimal threshold is an \emph{increasing}
function of $\epsilon$ in the case of \emph{available} erasure status feedback,
while it exhibits the \emph{opposite behavior}, i.e., $\tau^*$ is a
\emph{decreasing} function of $\epsilon$, in the case of \emph{absent} erasure
status feedback. | Karim Banawan, Ahmed Arafa, Karim G. Seddik | 2023-03-23T17:51:46Z | http://arxiv.org/abs/2303.13485v2 | # Timely Multi-Process Estimation Over Erasure Channels With and Without Feedback+
###### Abstract
We consider a multi-process remote estimation system observing \(K\) independent Ornstein-Uhlenbeck processes. In this system, a shared sensor samples the \(K\) processes in such a way that the long-term average sum mean square error (MSE) is minimized. The sensor operates under a total sampling frequency constraint \(f_{\max}\). The samples from all processes consume random processing delays in a shared queue and then are transmitted over an erasure channel with probability \(\epsilon\). We study two variants of the problem: first, when the samples are scheduled according to a Maximum-Age-First (MAF) policy, and the receiver provides an erasure status feedback; and second, when samples are scheduled according to a Round-Robin (RR) policy, when there is no erasure status feedback from the receiver. Aided by optimal structural results, we show that the optimal sampling policy for both settings, under some conditions, is a _threshold policy_. We characterize the optimal threshold and the corresponding optimal long-term average sum MSE as a function of \(K\), \(f_{\max}\), \(\epsilon\), and the statistical properties of the observed processes. Our results show that, with an exponentially distributed service rate, the optimal threshold \(\tau^{*}\) increases as the number of processes \(K\) increases, for both settings. Additionally, we show that the optimal threshold is an _increasing_ function of \(\epsilon\) in the case of _available_ erasure status feedback, while it exhibits the _opposite behavior_, i.e., \(\tau^{*}\) is a _decreasing_ function of \(\epsilon\), in the case of _absent_ erasure status feedback.
## 1 Introduction
We study the problem of timely tracking of multiple random processes using shared resources. This setting arises in many practical situations of remote estimation and IoT applications. Recent works have drawn connections between the quality of the estimates at the destination, measured through mean square error (MSE), and the _age of information_ (AoI) metric that assesses timeliness and freshness of the received data, see, e.g., the survey in [1, Section VI]. We extend these results to _multi-process_ estimation settings in this work.
AoI is defined as the time elapsed since the latest received message was generated at its source. It has been studied extensively in the past few years in various contexts, see, e.g., [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Relevant to this work is the fact that AoI can be closely tied to MSE in random process tracking applications. The works in [28, 29, 30] characterize implicit and explicit relationships between MSE and AoI under different estimation contexts. References [31, 32], however, consider the notion of the value of information (mainly through MSE) and show that optimizing it can be different from optimizing AoI. Lossy source coding and distorted updates for AoI minimization are considered in [33, 34, 35]. The notion of age of incorrect information (AoII) is introduced in [36], adding more context to AoI by capturing erroneous updates. The works in [37, 38] consider sampling of Wiener and Ornstein-Uhlenbeck (OU) processes for the purpose of remote estimation, and draw connections between MSE and AoI. Our recent work in [39] also focuses on characterizing the relationship between MSE and AoI, yet with the additional presence of coding and quantization. Reference [40] shows the optimality of threshold policies for tracking OU processes under rate constraints.
Reference [38] is closely related to our setting, in which optimal sampling methods that minimize the long-term average MSE of an OU process are derived. It is shown that if sampling times are independent of the instantaneous values of the process (signal-independent sampling), the minimum MSE (MMSE) reduces to an increasing function of AoI (age penalty). Then, threshold policies are shown optimal in this case, in which a new sample is acquired only if the expected age-penalty by the time it arrives surpasses a certain value. This paper extends [38] (and the related studies in [39, 40]) to multiple OU processes.
Figure 1: System model.

In this paper, we study a remote sensing problem consisting of a shared controlled sensor, a shared queue, and a receiver (see Fig. 1) to track \(K\) independent, but not necessarily identical, OU processes.1 The sensor transmits the collected samples over an erasure channel with probability \(\epsilon\) after being processed for a random delay with service rate \(\mu\). The sensor generates the samples _at will_, subject to a total sampling frequency constraint \(f_{\max}\). The goal is to minimize the long-term average sum MSE of the \(K\) processes.
Footnote 1: The OU process is the continuous-time analog of the first-order autoregressive process [41, 42]; it is used to model various physical phenomena and has relevant applications in control and finance.
In this work, we investigate two variants of the problem, which differ in the _availability of erasure status updates_ at the transmitter2. In the first setting, the receiver sends an erasure status update to the transmitter. We focus on _maximum-age-first_ (MAF) scheduling, where the transmitter chooses the process with the largest AoI to be sampled. MAF scheduling results in obtaining a _fresh_ sample from the _same_ process until an unerased sample from that process is conveyed to the receiver. The erasure status update enables the use of MAF scheduling as the transmitter can accurately estimate the AoI of each process. In the second setting, the erasure status feedback is absent at the transmitter. In this setting, we focus on _round-robin_ (RR) scheduling, where the transmitter acquires samples from the processes in a _fixed order_ irrespective of the experienced erasure events. In both problem variants, we concentrate on _stationary deterministic policies_, where the sampling policy needs to induce a stationary distribution across transmission epochs. Based on this, the optimal sampling policy reduces to optimizing a _stationary waiting policy_.
Footnote 2: We use the words sensor and transmitter interchangeably.
We show that, for both problem settings, we can _aggregate_ the waiting times without affecting the sum MSE value, with a subtle difference between the two problem variants. Specifically, in the presence of erasure status feedback, the waiting times are aggregated at the beginning of the transmission _epoch_, defined as the inter-delivery time between successful samples, which includes the _successful_ transmission of a sample from each process. In the absence of erasure status feedback, however, the transmitter aggregates the waiting times at the beginning of the transmission _round_, which includes a transmission _trial_ of a sample from each process irrespective of the erasure outcome. We show that, for both problems, the optimal stationary deterministic policy is a _threshold policy_. We characterize the optimal threshold \(\tau^{*}(K,f_{\max},\epsilon,\boldsymbol{\theta},\boldsymbol{\sigma})\) and the corresponding long-term average sum MSE in terms of the processes' statistical properties \((\boldsymbol{\theta},\boldsymbol{\sigma})\), \(\epsilon\), and \(f_{\max}\). In both cases, the threshold is a maximum of two threshold values: one due to a nonbinding sampling frequency constraint scenario, and another due to a binding scenario.
Surprisingly, our results show that the _optimal threshold \(\tau^{*}\) behaves differently based on the availability of erasure status feedback_. Specifically, our numerical results show that 1) the optimal threshold \(\tau^{*}\) is an _increasing_ function of the erasure probability \(\epsilon\) in the presence of erasure status feedback, 2) the optimal threshold \(\tau^{*}\) is a _decreasing_ function of the erasure probability \(\epsilon\) in the _absence_ of erasure status feedback, and 3) the optimal threshold is an increasing function of the number of observed processes \(K\) for both problem variants.
## 2 System Model
We consider a sensing system in which \(K\) independent, but not necessarily identical, OU processes are remotely monitored using a _shared_ sensor that transmits samples from the processes over an erasure channel to a receiver. Denote the \(k\)th process value at time \(s\) by \(X_{s}^{[k]}\). Given \(X_{s}^{[k]}\), the \(k\)th OU process evolves, for \(t\geq s\), as [41, 42]
\[X_{t}^{[k]}=X_{s}^{[k]}e^{-\theta_{k}(t-s)}+\frac{\sigma_{k}}{\sqrt{2\theta_{k }}}e^{-\theta_{k}(t-s)}W_{e^{2\theta_{k}(t-s)}-1}, \tag{1}\]
where \(W_{t}\) denotes a Wiener process, while \(\theta_{k}>0\) and \(\sigma_{k}>0\) are fixed parameters that control how fast the process evolves. We study the system in steady-state, hence, we assume that the processes are initiated as \(X_{0}^{[k]}\sim\mathcal{N}\left(0,\sigma_{k}^{2}/2\theta_{k}\right)\).3
Footnote 3: This way, the variance of \(X_{t}^{[k]}\) is \(\sigma_{k}^{2}/2\theta_{k},\ \forall t\), and the autocorrelation function is \(\frac{\sigma_{k}^{2}}{2\theta_{k}}e^{-\theta_{k}|\tau|},\ \tau\in\mathbb{R}\). Thus, large values of \(\sigma_{k}\) or \(\theta_{k}\) indicate a fast-varying process.
To estimate \(\left\{X_{t}^{[k]}\right\}\) at the receiver, the sensor acquires samples from the \(k\)th OU process at specific time instants \(\left\{S_{i}^{[k]}\right\}\) and sends them to the receiver. Sampling instants are fully-controlled, i.e., samples are _generated-at-will_. We focus on _signal-independent_ sampling policies, in which the optimal sampling instants depend on the statistical measures of the processes and not on exact processes' values4.
Footnote 4: We focus on the case in which the OU processes are non-observable prior to sampling. The case in which the processes are fully observable is to be studied in follow-up works.
The sensor must obey a _total sampling frequency constraint_\(f_{\max}\). Let \(\ell_{i}\) denote the \(i\)th sampling instant _regardless_ of the identity of the process being sampled. For example, if process 1 is sampled two times consecutively and then process 2 is sampled, then we have \(\ell_{1}=S_{1}^{[1]}\), \(\ell_{2}=S_{2}^{[1]}\), and \(\ell_{3}=S_{1}^{[2]}\). Hence, it holds that \(\left\{\ell_{i}\right\}_{i=1}^{\infty}\supseteq\cup_{k}\left\{S_{i}^{[k]} \right\}_{i=1}^{\infty}\). Therefore, the sampling constraint is expressed as follows:
\[\liminf_{n\to\infty}\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\ell_{i+1}-\ell_ {i}\right]\geq\frac{1}{f_{\max}}, \tag{2}\]
i.e., the long-term average inter-sampling time \(\ell_{i+1}-\ell_{i}\) is constrained to be no smaller than \(\frac{1}{f_{\max}}\), which indicates that the sensor shares the sampling budget \(f_{\max}\) among the \(K\) processes. Samples go through a shared processing queue, whose service model follows a Poisson process with service rate \(\mu\), i.e., service times are independent and identically distributed (i.i.d.) \(\sim\exp(\mu)\) across samples. Served samples are prone to erasures with probability \(\epsilon\), with erasures occurring independently across samples.
Samples are time-stamped prior to transmissions, and successfully-received samples from process \(k\) determine the _age-of-information_ (AoI) of that process at the receiver, denoted \(\texttt{AoI}^{[k]}(t)\). AoI is defined as the time elapsed since the latest successfully received sample's time stamp.
In this work, we investigate the effect of the presence/absence of the _erasure status_
_feedback_ on specifying the sampling time instants \(\left\{S_{i}^{[k]}\right\}\). Specifically, we study the following sampling scenarios:
1. _Erasure status feedback is available:_ In this case, an immediate erasure status feedback is available at the sensor. We focus, in this case, on _Maximum-Age-First (MAF) scheduling_, in which the processes are sampled according to their relative AoI's, with priority given to the process with highest AoI. Hence, at time \(t\), process \[\kappa(t)\triangleq\arg\max_{k}\texttt{AoI}^{[k]}(t)\] (3) is sampled. Observe that the value of \(\kappa(t)\) will _not_ change unless a successful transmission occurs. Therefore, in case of erasure events, a _fresh_ sample is generated from the _same_ process being served and transmission is re-attempted.
2. _Erasure status feedback is unavailable:_ In this case, the receiver does not provide any feedback signaling to the sensor about the erasure status. In this case, MAF scheduling is not a viable scheduling policy as the sensor cannot identify the AoI of each process. Alternatively, we employ in this case the _round-robin (RR) scheduling_ with new samples, i.e., the sensor acquires a new sample from process 1, followed by a new sample from process 2, \(\cdots\), etc., irrespective of the erasure events.
Although both MAF and RR policies schedule the processes' samples in the same order \(1,2,\cdots,K\), the two scheduling policies differ in the erasure counter-measure. Specifically, the MAF scheduling keeps re-attempting to sample the _same_ process until being successful. This is in contrast to RR scheduling, which _keeps the order_ of the processes regardless of the erasure events due to the absence of erasure status feedback.
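To illustrate the difference, the following toy Python sketch (ours; the parameter values are made up) prints the sequence of (process, outcome) pairs generated by the two counter-measures: with feedback, MAF keeps re-sampling the same process until delivery, while without feedback, RR walks through the fixed order regardless of erasures.

```python
import numpy as np

rng = np.random.default_rng(3)
K, eps = 3, 0.4              # hypothetical parameters, for illustration only

def maf_with_feedback():
    # feedback available: re-sample the same process until its sample gets through
    log = []
    for k in range(1, K + 1):
        while True:
            erased = rng.random() < eps
            log.append((k, "erased" if erased else "delivered"))
            if not erased:
                break
    return log

def rr_without_feedback(n_rounds):
    # no feedback: keep the fixed order 1,...,K regardless of erasure outcomes
    log = []
    for _ in range(n_rounds):
        for k in range(1, K + 1):
            erased = rng.random() < eps
            log.append((k, "erased" if erased else "delivered"))
    return log

print(maf_with_feedback())
print(rr_without_feedback(2))
```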
Under both MAF and RR scheduling, and since the channel behaves similarly for all processes, each process will eventually be given an equal share of the allowed sampling budget, i.e., each process will be sampled at a rate of \(f_{\max}/K\), and the sampling constraint in (2) becomes
\[\liminf_{n\rightarrow\infty}\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}S_{i+1} ^{[k]}-S_{i}^{[k]}\right]\geq\frac{K}{f_{\max}},\quad\forall k. \tag{4}\]
Let \(\tilde{S}_{i}^{[k]}\) denote the sampling instant of the \(i\)th _successfully_-received (unerased) sample from the \(k\)th process, and (re-)define \(S_{i}^{[k]}(m)\) as the sampling instant of the \(m\)th _attempt_ to convey the \(i\)th sample of the \(k\)th process, \(m=1,\ldots,M_{i}^{[k]}\), with \(M_{i}^{[k]}\) denoting the number of trials. Hence, we have \(S_{i}^{[k]}(m)\leq\tilde{S}_{i}^{[k]},\ \forall m\), with equality at \(m=M_{i}^{[k]}\). Our channel model indicates that \(M_{i}^{[k]}\)'s are i.i.d. \(\sim\) geometric\((1-\epsilon)\). Each sample \(X_{S_{i}^{[k]}(m)}^{[k]}\) incurs a service time of \(Y_{i}^{[k]}(m)\) time units with \(Y_{i}^{[k]}(m)\)'s being i.i.d. \(\sim\exp(\mu)\). The successfully-received sample,
\(X_{\hat{S}_{i}^{[k]}}^{[k]}\), arrives at the receiver at time \(D_{i}^{[k]}\), i.e.,
\[D_{i}^{[k]}=\tilde{S}_{i}^{[k]}+Y_{i}^{[k]}\left(M_{i}^{[k]}\right). \tag{5}\]
Based on this notation, one can characterize the AoI of the \(k\)th OU process as follows:
\[\texttt{AoI}^{[k]}(t)=t-\tilde{S}_{i}^{[k]},\qquad D_{i}^{[k]}\leq t<D_{i+1}^{[ k]}. \tag{6}\]
The receiver collects the unerased samples from all processes and uses them to construct minimum mean square error (MMSE) estimates. Since the processes are independent, and by the strong Markov property of the OU process, the MMSE estimate for the \(k\)th process at time \(t\), denoted \(\hat{X}_{t}^{[k]}\), is based solely on the latest successfully-received sample from that process. Thus, for \(D_{i}^{[k]}\leq t<D_{i+1}^{[k]}\), we have [38, 39]
\[\hat{X}_{t}^{[k]}= \mathbb{E}\left[X_{t}^{[k]}\middle|\tilde{S}_{i}^{[k]},X_{\tilde{S}_{i}^{[k]}}^{[k]}\right]\] \[\overset{(1)}{=} X_{\tilde{S}_{i}^{[k]}}^{[k]}e^{-\theta_{k}(t-\tilde{S}_{i}^{[k]})}. \tag{7}\]
Let \(\pi\) denote the scheduling policy, with \(\pi\equiv\) MAF and \(\pi\equiv\) RR in the presence and absence of erasure status feedback, respectively. Hence, the instantaneous mean square error (MSE) in estimating the \(k\)th process at time \(t\in\left[D_{i}^{[k]},D_{i+1}^{[k]}\right)\) is [38, 39]
\[\texttt{mse}_{\pi}^{[k]}\left(t,\tilde{S}_{i}^{[k]}\right) \triangleq\mathbb{E}\left[\left(X_{t}^{[k]}-\hat{X}_{t}^{[k]} \right)^{2}\right] \tag{8}\] \[=\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-e^{-2\theta_{k}\left( t-\tilde{S}_{i}^{[k]}\right)}\right), \tag{9}\]
which is an increasing function of the AoI in (6) as we have \(\texttt{mse}_{\pi}^{[k]}=\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-e^{-2 \theta_{k}\texttt{AoI}^{[k]}(t)}\right)\). Note that the MSE under MAF scheduling is different from that under RR scheduling, and hence the distinction using the subscript \(\pi\). Next, we define the long-term time average MSE of the \(k\)th process as
\[\overline{\texttt{mse}_{\pi}^{[k]}}\triangleq \limsup_{T\rightarrow\infty}\frac{\sum_{i=1}^{T}\mathbb{E}\left[ \int_{D_{i}^{[k]}}^{D_{i+1}^{[k]}}\texttt{mse}_{\pi}^{[k]}\left(t,\tilde{S}_{i }^{[k]}\right)dt\right]}{\sum_{i=1}^{T}\mathbb{E}\left[D_{i+1}^{[k]}-D_{i}^{[k] }\right]}. \tag{10}\]
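As a quick numerical illustration of (7) and (9) (ours; the parameter values are arbitrary), the following Python snippet simulates the exact OU transition (1) over a gap equal to the AoI and compares the empirical error of the MMSE predictor with the closed form in (9).

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 0.5, 1.0      # hypothetical OU parameters
delta = 0.7                  # elapsed time since the delivered sample (the AoI)
n_trials = 200_000

# stationary start, then the exact OU transition in (1) over a gap of length delta
x_s = rng.normal(0.0, np.sqrt(sigma**2 / (2 * theta)), n_trials)
gap_var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * delta))
x_t = x_s * np.exp(-theta * delta) + rng.normal(0.0, np.sqrt(gap_var), n_trials)

x_hat = x_s * np.exp(-theta * delta)                 # MMSE estimate, cf. (7)
mse_empirical = np.mean((x_t - x_hat) ** 2)
mse_closed_form = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * delta))   # eq. (9)
print(mse_empirical, mse_closed_form)
```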
Our goal is to choose the sampling instants to minimize a penalty function \(g(\cdot)\) of \(\left\{\overline{\texttt{mse}_{\pi}^{[k]}}\right\}\) under the sampling frequency constraint. More specifically, to solve the following problem in the presence/absence of erasure status feedback:
\[\min_{\{S_{i}^{[k]}(m)\}}\;\;g\left(\overline{\texttt{mse}_{\pi}^{[1]}}, \cdots,\overline{\texttt{mse}_{\pi}^{[K]}}\right)\]
\[\text{s.t.}\quad\liminf_{n\to\infty}\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}S_{i+ 1}^{[k]}-S_{i}^{[k]}\right]\geq\frac{K}{f_{\text{max}}},\ \forall k. \tag{11}\]
## 3 Stationary Policies: Problem Re-Formulation
In this section, we re-formulate problem (11) in terms of a _stationary waiting policy_ for each process. In the sequel, we provide the details of such re-formulation for both cases of erasure feedback availability.
### Problem Re-Formulation in the Presence of Erasure Feedback
In this subsection, we focus on the case in which the erasure status feedback is available. We define \(W_{i}^{[k]}(m)\) as the \(m\)th waiting time before the \(m\)th transmission attempt towards conveying the \(i\)th sample from the \(k\)th process, \(1\leq m\leq M_{i}^{[k]}\). Without loss of generality, let the MAF schedule be in the order \(1,2,\ldots,K\). Thus, we have (see Fig. 2)
\[S_{i}^{[k]}(m)=D_{i}^{[k-1]}+\sum_{j=1}^{m-1}Y_{i}^{[k]}(j)+\sum_{j=1}^{m}W_{i }^{[k]}(j), \tag{12}\]
with \(D_{i}^{[0]}\triangleq D_{i-1}^{[K]}\). Problem (11) now reduces to optimizing the waiting times \(\left\{W_{i}^{[k]}(m)\right\}\). We now define the \(i\)th _epoch_ of the \(k\)th process, denoted \(\Gamma_{i}^{[k]}\), as the inter-reception time in between its \(i\)th and \((i+1)\)th _unerased_ samples, i.e.,
\[\Gamma_{i}^{[k]}=D_{i+1}^{[k]}-D_{i}^{[k]}. \tag{13}\]
In this work, we focus on _stationary waiting policies_ in which the waiting policy \(\left\{W_{i}^{[k]}(m)\right\}\) has the same distribution across all processes' epochs. Note that under MAF scheduling, each process epoch entails a successful transmission of every other process. This, together with the fact that service times and erasure events are i.i.d., induces a stationary distribution across all processes' epochs. Therefore, dropping the indices \(i\) and \(k\), we have \(\Gamma_{i}^{[k]}\sim\Gamma,\ \forall i,k\), where
\[\Gamma=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]}(m)+Y^{[k]}(m). \tag{14}\]
Now consider a typical epoch for the \(k\)th process. By stationarity, one can write (10) as
\[\overline{\texttt{mse}_{\text{MAF}}^{[k]}}=\frac{\mathbb{E}\left[\int_{D^{[k ]}}^{D^{[k]}+\Gamma}\texttt{mse}_{\text{MAF}}^{[k]}\left(t,\tilde{S}^{[k]} \right)dt\right]}{\mathbb{E}\left[\Gamma\right]}. \tag{15}\]
where \(D_{i}^{[k]}\sim D^{[k]}\) and \(\tilde{S}_{i}^{[k]}\sim\tilde{S}^{[k]}\), \(\forall i\). In the sequel, _we treat the \(K\)th (last) process's epoch
as the typical epoch._
In the next lemma, we prove an important structural result, which asserts that the positions of the waiting times do not matter. Specifically, we show that one can achieve the same long-term average MSE penalty by _grouping all the waiting times at the beginning of the (typical) epoch_ (see Fig. 3).
**Lemma 1**: _Under signal-independent sampling with MAF scheduling, problem (11) is equivalent to the following optimization problem for stationary waiting policies:_
\[\min_{W\geq 0}\ \ g\left(\overline{\texttt{mse}_{\text{MAF}}^{[1]}},\cdots,\overline{\texttt{mse}_{\text{MAF}}^{[K]}}\right)\] \[\text{s.t.}\quad\mathbb{E}\left[(1-\epsilon)W+\sum_{k=1}^{K}Y^{[k]}\right]\geq\frac{K}{f_{\max}}, \tag{16}\]
_where \(W\triangleq\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]}(m)\) and the waiting is only performed at the beginning of the epoch._
**Proof:** By inspection of the average MSE function in (15), since \(\Gamma=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]}(m)+\sum_{k=1}^{K}\sum_{m=1}^{ M^{[k]}}Y^{[k]}(m)\), the waiting times appear in the numerator and denominator as the sum
\[\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]}(m). \tag{17}\]
Thus, for the optimal waiting times \(\left\{W^{[k]^{*}}(m)\right\}\) that solve the optimization problem in (11), the waiting time \(W^{*}=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]^{*}}(m)\) achieves the same \(\overline{\texttt{mse}_{\text{MAF}}^{[k]}}\). Conversely, starting with \(W^{*}\) in the objective function of (16) and breaking it arbitrarily into any waiting times such that \(W^{*}=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]^{*}}(m)\) gives the same objective function in (11).

Figure 2: Timing diagram of a typical epoch for \(K=3\) processes in the presence of erasure feedback and MAF scheduling. In this example, process 1 (in blue) is successful after one trial, process 2 (in green) is successful after 3 trials, and process 3 (in red) is successful after 2 trials.
As for the sampling constraint, by observing the telescoping sum in (4), we have that for process \(k\),
\[\liminf_{n\to\infty}\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}S_{i+1}^{[k]}-S_ {i}^{[k]}\right]=\liminf_{n\to\infty}\frac{1}{n}\mathbb{E}\left[S_{n+1}^{[k]} \right]. \tag{18}\]
Now define \(e(n)\) to be the index of the epoch corresponding to the \(n\)th successfully-received sample. With \(W=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}W^{[k]^{*}}(m)\), we can write the sampling constraint as
\[\liminf_{n\to\infty}\frac{e(n)}{n}\cdot\frac{\mathbb{E}[S_{n+1}^{ [k]}]}{e(n)}\] \[= \frac{1}{\mathbb{E}[M^{[k]}]}\cdot\liminf_{n\to\infty}\frac{1}{e( n)}\Bigg{(}\sum_{i=1}^{e(n)-1}\mathbb{E}\Bigg{[}\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]} _{i}}W^{[k]}_{i}(m)+Y^{[k]}_{i}(m)\Bigg{]}+o\left(e(n)\right)\Bigg{)} \tag{19}\] \[= \frac{1}{\mathbb{E}[M^{[k]}]}\cdot\liminf_{n\to\infty}\frac{1}{e( n)}\sum_{i=1}^{e(n)-1}\Bigg{(}\mathbb{E}[W]+\mathbb{E}\left[M^{[k]}_{i}\right] \cdot\mathbb{E}\left[\sum_{k=1}^{K}Y^{[k]}_{i}\right]\Bigg{)}\] (20) \[= \mathbb{E}\left[(1-\epsilon)W+\sum_{k=1}^{K}Y^{[k]}\right], \tag{21}\]
where (19) follows from the strong law of large numbers and the fact that the time spent in the \(e(n)\)th epoch, \(\Delta=\sum_{k^{\prime}=1}^{k-1}\sum_{m=1}^{M^{[k^{\prime}]}_{i}}W^{[k^{\prime}]}_{i}(m)+Y^{[k^{\prime}]}_{i}(m)+\sum_{m=1}^{\hat{m}}W^{[k]}_{i}(m)+Y^{[k]}_{i}(m)\), is \(o(e(n))\) and hence \(\liminf_{n\to\infty}\frac{\Delta}{e(n)}=0\); equation (20) follows from Wald's identity. \(\blacksquare\)
We now have the following remark:
**Remark 1**: _Observe that the sampling constraint in problem (16) will not be active if \(f_{\max}>\mu\). This is intuitive since the inter-sampling time, on average, would be larger than the
Figure 3: Illustration of Lemma 1. The waiting times are grouped to the beginning of the epoch without affecting the long-term average MSE penalty.
minimum allowable sampling time, controlled by the maximum allowable sampling frequency, in this case._
_If the sampling constraint is binding, which occurs only if \(f_{\max}<\mu\), the average waiting time would monotonically increase with the erasure probability \(\epsilon\). This is true because no waiting is allowed in between unsuccessful transmissions, whose rate increases with \(\epsilon\). Hence, to account for the expected large number of back-to-back sample transmissions in the epoch, one has to wait for a relatively larger amount of time at its beginning so that the sampling constraint is satisfied._
### Problem Re-Formulation in the Absence of Erasure Feedback
In this subsection, we turn our attention to the other case, in which the erasure status feedback is not available. Throughout our treatment, we highlight the main differences compared to the re-formulation in Section 3.1.
Similar to Section 3.1, we define the \(i\)th epoch of the \(k\)th process \(\Gamma_{i}^{[k]}\) as the inter-reception time in between its \(i\)th and \((i+1)\)th successfully received samples. Despite the identical definition, we note that the epoch \(\Gamma_{i}^{[k]}\), in this case, consists of multiple _transmission rounds_. Each round implies transmitting a _new sample_ from each process in a round-robin fashion, i.e., in the order of \(1,2,3,\cdots,K\) irrespective of the erasure outcome of transmission at the receiver.
The transmitter introduces stationary waiting times \(\{W_{i}^{[k]}(m)\}\) between successive samples in each transmission round, i.e., the transmitter waits for \(W_{i}^{[k]}(m)\) time units after transmitting a sample from the \((k-1)\)th process before taking a new sample from the \(k\)th process at the \(m\)th transmission round (see Fig. 4). With a slight abuse of notation, let \(Y_{i}^{[k]}(m)\) be the service time of the sample from the \(k\)th process in the \(i\)th epoch at the \(m\)th transmission round. We have that \(Y_{i}^{[k]}(m)\)'s are i.i.d. such that \(Y_{i}^{[k]}(m)\sim\exp(\mu)\). Let \(M_{i}^{[k]}\) denote the total number of transmission rounds needed to convey the \(i\)th sample of the \(k\)th process. Consequently, \(M_{i}^{[k]}\sim\mbox{geometric}(1-\epsilon)\). We note that \(M_{i}^{[k]}\) controls the epoch length \(\Gamma_{i}^{[k]}\) irrespective of the other processes' erasure statuses. Without loss of generality, we focus on the \(K\)th process. Under RR scheduling, we can express the sampling instant \(S_{i}^{[K]}(m)\), for \(m=1,\cdots,M_{i}^{[K]}\) as
\[S_{i}^{[K]}(m)=S_{i}^{[K]}(m-1)+Y_{i}^{[K]}(m-1)+\sum_{k=1}^{K-1}Y_{i}^{[k]}(m)+\sum_{k=1}^{K}W_{i}^{[k]}(m). \tag{22}\]
Based on the aforementioned description, we can write the \(i\)th epoch length corresponding to the \(k\)th process as
\[\Gamma_{i}^{[k]}=\sum_{m=1}^{M_{i}^{[k]}}\sum_{k=1}^{K}W_{i}^{[k]}(m)+Y_{i}^{ [k]}(m). \tag{23}\]
By imposing the stationarity restriction of the waiting policy, the i.i.d. statistics of the service times and the erasure events, and the fact that epoch corresponding to the \(k\)th process entails repeating the same cycle of transmissions \(M_{i}^{[k]}\sim\text{geometric}(1-\epsilon)\) times, we can drop the indices \(i\) and \(k\) as we have done in Section 3.1 to have5
Footnote 5: It is worth noting that the epoch definitions in (14) and (24) differ only in the order of summation. This signifies the fact that in the presence of erasure status feedback, the transmitter takes \(M_{i}^{[k]}\) samples from the \(k\)th process until being successful before sampling the \((k+1)\)th process. This is in contrast to passing by all processes _once_ and repeating this cycle \(M\) times to complete the epoch in the absence of erasure status feedback.
\[\Gamma=\sum_{m=1}^{M}\sum_{k=1}^{K}W^{[k]}(m)+Y^{[k]}(m) \tag{24}\]
Consequently, despite the subtle differences and the slight abuse of notation, we can still write the long-term average MSE of the \(k\)th process as that in (15) (after replacing MAF with RR).
Now, similar to Lemma 1, we can aggregate all waiting times in the \(m\)th round at the _beginning of each transmission round_, i.e., the transmitter waits for
\[W_{i}(m)=\sum_{k=1}^{K}W_{i}^{[k]}(m) \tag{25}\]
time units in the \(m\)th round. Specifically, the transmitter waits for \(W_{i}(1)=\sum_{k=1}^{K}W_{i}^{[k]}(1)\) at the first round, after which the transmitter acquires and transmits a sample from process 1, followed by a sample from process 2, \(\cdots\), followed by a sample from process \(K\), without any waiting times in between. Then the transmitter waits for \(W_{i}(2)=\sum_{k=1}^{K}W_{i}^{[k]}(2)\) before completing the second round-robin cycle (second round) of sampling across all processes, \(\cdots\), etc. Similar to Lemma 1, this aggregation does not affect the long-term average MSE and results in a simpler equivalent optimization problem (see Fig. 5).

Figure 4: Timing diagram of a typical epoch for \(K=3\) processes in the absence of erasure feedback and RR scheduling. In this example, we focus on characterizing the epoch of process 3, which is successful after 2 transmission rounds. The erasures experienced by the remaining processes are irrelevant to the epoch of process 3.

This is summarized in the following lemma:
**Lemma 2**: _Under signal-independent sampling with RR scheduling, problem (11) is equivalent to the following optimization problem for stationary waiting policies:_
\[\min_{W\geq 0}\ \ g\left(\overline{\texttt{mse}_{\text{RR}}^{[1]}},\cdots,\overline{\texttt{mse}_{\text{RR}}^{[K]}}\right)\] \[\text{s.t.}\quad\mathbb{E}\left[W(m)+\sum_{k=1}^{K}Y^{[k]}\right]\geq\frac{K}{f_{\max}}, \tag{26}\]
_where \(W(m)\triangleq\sum_{k=1}^{K}W^{[k]}(m)\) and the waiting is only performed at the beginning of the \(m\)th transmission round._
The proof of Lemma 2 follows the exact steps of the proof of Lemma 1 while noting that the waiting time here is at the beginning of _every transmission round_ and not at the beginning of the epoch as in Lemma 1. Hence, the proof is omitted to avoid unnecessary repetition.
Figure 5: Illustration of Lemma 2. The waiting times are grouped to the beginning of each transmission round without affecting the long-term average sum MSE.

**Remark 2**: _We note that, in the absence of erasure status feedback, the transmitter cannot aggregate all waiting times at the beginning of the epoch. This is due to the fact that the epoch beginning time \(D_{i}^{[k]}\) and end time \(D_{i+1}^{[k]}\) are only visible at the receiver side due to the absence of the erasure status feedback._
**Remark 3**: _The sampling constraint in (26) lacks the erasure effect in (16). This is due to the fact that the number of samples at each transmission round is \(K\) samples, which is independent of the erasure events. This is in contrast to the number of samples in an epoch in (16), which is significantly dependent on the erasure probability._
**Remark 4**: _We note that when \(\epsilon=0\), MAF \(\equiv\) RR scheduling, and the optimization problems (11) with and without erasure status feedback are indeed the same (i.e., problems (16) and (26) are the same) as the transmitter acquires one sample only from each process during any epoch._
We discuss optimal solutions of problems (16) and (26) over the next two sections, respectively, for specific MSE penalty functions.
## 4 Optimal Waiting Threshold and Minimum Sum MSE Characterization with Erasure Feedback
In this section, we provide the optimal solution of problem (16) for a _sum MSE_ penalty
\[g\left(\overline{\mathsf{mse}_{\mathrm{MAF}}^{[1]}},\cdots,\overline{ \mathsf{mse}_{\mathrm{MAF}}^{[K]}}\right)=\sum_{k=1}^{K}\overline{\mathsf{mse }_{\mathrm{MAF}}^{[k]}}, \tag{27}\]
together with a _stationary deterministic_ waiting policy, in which the waiting value at the beginning of an epoch is given by a deterministic function \(w(\cdot)\) of the previous epoch's total service time, denoted \(\tilde{Y}\), i.e.,
\[\tilde{Y}\sim\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}Y^{[k]}(m). \tag{28}\]
Note that such choice of waiting policies emerges naturally since the MSE is an increasing function of the AoI, whose value at the start of the epoch is, in turn, an increasing function of \(\tilde{Y}\). Stationary deterministic policies have been used extensively in similar contexts in the literature, see, e.g., [37, 38, 39], and have been shown to perform optimally.
Formally, substituting the above into problem (16), we now aim at solving the following functional optimization problem:
\[\min_{w(\cdot)\geq 0}\ \ \frac{\sum_{k=1}^{K}\mathbb{E}\left[\int_{D^{[k]}}^{D^{[k]}+\Gamma}\mathsf{mse}_{\mathrm{MAF}}^{[k]}\left(t,\tilde{S}^{[k]}\right)dt\right]}{\mathbb{E}\left[\Gamma\right]}\]
\[\text{s.t.}\quad\mathbb{E}\left[w\left(\tilde{Y}\right)\right]\geq\frac{1}{1-\epsilon}\left(\frac{K}{f_{\max}}-\frac{K}{\mu}\right). \tag{29}\]
Theorem 1 below provides the optimal solution of problem (29). We use the compact vector notation \(\mathbf{\theta}\triangleq[\theta_{1}\ \theta_{2}\ \cdots\ \theta_{K}]\) and \(\mathbf{\sigma}\triangleq[\sigma_{1}^{2}\ \sigma_{2}^{2}\ \cdots\ \sigma_{K}^{2}]\).
**Theorem 1**: _The optimal waiting policy \(w^{*}(\cdot)\) that solves problem (29) is given by the threshold policy_
\[w^{*}(z)=\left[\tau^{*}_{\mbox{\footnotesize MAF}}(K,f_{\max}, \epsilon,\mathbf{\theta},\mathbf{\sigma})-z\right]^{+}, \tag{30}\]
_where the optimal threshold \(\tau^{*}_{\mbox{\footnotesize MAF}}(K,f_{\max},\epsilon,\mathbf{\theta },\mathbf{\sigma})\) is given by_
\[\tau^{*}_{\mbox{\footnotesize MAF}}=\max\left\{G^{-1}_{\mathbf{\theta},\mathbf{\sigma}}\left(\beta^{*}\right),H^{-1} \left(\frac{1}{(1-\epsilon)}\left[\frac{K}{f_{\max}}-\frac{K}{\mu}\right]^{+} \right)\right\}, \tag{31}\]
_in which_
\[G_{\mathbf{\theta},\mathbf{\sigma}}(x) \triangleq\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E} \left[e^{-2\theta_{k}Y}\right]e^{-2\theta_{k}x}\right), \tag{32}\]
_and \(\beta^{*}\) corresponds to the optimal long-term average sum MSE in this case, and is given by the unique solution of_
\[p(\beta^{*})=\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}} \left(H(\tau^{*}_{\mbox{\footnotesize MAF}})+\frac{K}{\mu(1-\epsilon)}-\frac {1}{2\theta_{k}}\cdot\frac{\mu}{2\theta_{k}+\mu}(1-F_{k}(\tau^{*}_{\mbox{ \footnotesize MAF}}))\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\beta^{ *}\left(H(\tau^{*}_{\mbox{\footnotesize MAF}})+\frac{K}{\mu(1-\epsilon)} \right)=0, \tag{33}\]
_in which \(H(\cdot)\) and \(F_{k}(\cdot)\) are defined as follows:_
\[H(\tau)=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\left[\tau\gamma(\mu\tau,\rho)-\frac{\rho}{\mu}\gamma(\mu\tau,\rho+1)\right], \tag{34}\]
\[F_{k}(\tau)=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\left[e^{-2\theta_{k}\tau}\gamma(\mu\tau,\rho)+\left(\frac{\mu}{2\theta_{k}+\mu}\right)^{\rho}\left(1-\gamma((2\theta_{k}+\mu)\tau,\rho)\right)\right], \tag{35}\]
_where \(\gamma(x,y)\) is the normalized incomplete Gamma function: \(\gamma(x,y)=\frac{1}{(y-1)!}\int_{0}^{x}t^{y-1}e^{-t}dt\)._
**Proof:** We follow Dinkelbach's approach [43] to solve the optimization problem in (29). We
start by defining an auxiliary function \(p(\beta)\), for \(\beta\geq 0\), such that:
\[p(\beta)=\min_{w(\cdot)\geq 0} \sum_{k=1}^{K}\mathbb{E}\left[\int_{D}^{D+\Gamma(K,\epsilon,w)} \mathsf{mse}_{\mathrm{MAF}}^{[k]}(t,\tilde{S}_{i}^{[k]})dt\right]-\beta\mathbb{ E}\left[\Gamma(K,\epsilon,w)\right]\] \[\mathrm{s.t.} \mathbb{E}\left[(1-\epsilon)w\left(\tilde{Y}\right)+\sum_{k=1}^{ K}Y^{[k]}\right]\geq\frac{K}{f_{\mathrm{max}}} \tag{36}\]
The optimal solution of our original optimization problem (29) is the solution of \(p(\beta^{*})=0\) of the auxiliary problem above [43].
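As a small illustration of this step (ours, not part of the original analysis), the outer Dinkelbach loop can be realized by bisection, since \(p(\beta)\) is non-increasing in \(\beta\). The sketch below assumes a user-supplied routine `p_of_beta` that evaluates the auxiliary objective for a fixed \(\beta\) (e.g., by solving the inner minimization over the waiting policy for that \(\beta\)) and that the search bracket contains the root.

```python
def dinkelbach_root(p_of_beta, beta_lo=0.0, beta_hi=10.0, tol=1e-9):
    """Bisection for the root of the decreasing auxiliary function p(beta).

    p_of_beta: callable returning p(beta); beta_lo/beta_hi must bracket the root.
    """
    assert p_of_beta(beta_lo) >= 0 >= p_of_beta(beta_hi), "bracket must contain the root"
    while beta_hi - beta_lo > tol:
        mid = 0.5 * (beta_lo + beta_hi)
        if p_of_beta(mid) > 0:
            beta_lo = mid       # root lies to the right of mid
        else:
            beta_hi = mid       # root lies to the left of mid
    return 0.5 * (beta_lo + beta_hi)
```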
The Lagrangian corresponding to the auxiliary problem can be written as:
\[\mathcal{L}= \sum_{k=1}^{K}\mathbb{E}\left[\int_{D}^{D+\Gamma(K,\epsilon,w)} \mathsf{mse}_{\mathrm{MAF}}^{[k]}(t,\tilde{S}_{i}^{[k]})dt\right]-\beta\mathbb{ E}\left[\Gamma(K,\epsilon,w)\right]-\int_{0}^{\infty}\eta(y)w(y)dy\] \[-\zeta\left(\mathbb{E}\left[(1-\epsilon)w\left(\tilde{Y}\right) +\sum_{k=1}^{K}Y^{[k]}\right]-\frac{K}{f_{\mathrm{max}}}\right) \tag{37}\]
Now, denoting \(\tilde{Y}=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}Y^{[k]}(m)\), we calculate the expected epoch length as:
\[\mathbb{E}\left[\Gamma(K,\epsilon,w)\right]= \mathbb{E}\left[w(\tilde{Y})+\tilde{Y}\right] \tag{38}\] \[= \mathbb{E}\left[w(\tilde{Y})\right]+\sum_{k=1}^{K}\mathbb{E}\left[M^{[k]}\right]\mathbb{E}\left[Y^{[k]}\right]\] (39) \[= \mathbb{E}\left[w(\tilde{Y})\right]+\frac{K}{\mu(1-\epsilon)} \tag{40}\]
where (39) follows from Wald's identity. The expected total MSE in the epoch can be calculated as:
\[\mathbb{E}\left[\int_{D_{i}}^{D_{i}+\Gamma(K,\epsilon,w)}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-e^{-2\theta_{k}\left(t-\tilde{S}_{i}^{[k]}\right)}\right)dt\right]\] \[=\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\mathbb{E}\left[\Gamma(K,\epsilon,w)\right]-\frac{1}{2\theta_{k}}\mathbb{E}\left[e^{2\theta_{k}\tilde{S}_{i}^{[k]}}\left(e^{-2\theta_{k}D}-e^{-2\theta_{k}(D+\Gamma(K,\epsilon,w))}\right)\right]\right) \tag{41}\] \[=\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\mathbb{E}\left[\Gamma(K,\epsilon,w)\right]-\frac{1}{2\theta_{k}}\mathbb{E}\left[e^{-2\theta_{k}Y_{0}}\left(1-e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right)\right]\right)\] (42) \[=\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\mathbb{E}\left[\Gamma(K,\epsilon,w)\right]-\frac{1}{2\theta_{k}}\mathbb{E}\left[e^{-2\theta_{k}Y}\right]\left(1-\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]\right)\right). \tag{43}\]
We note the following functional derivatives with respect to \(w(\cdot)\) (at realization \(\tilde{Y}=z\)),
\[\frac{\partial}{\partial w(\cdot)}\mathbb{E}[w(\tilde{Y})]=f_{\tilde{Y}}(z), \tag{44}\]
\[\frac{\partial}{\partial w(\cdot)}\mathbb{E}\left[e^{-2\theta_{k} \Gamma(K,\epsilon,w)}\right]=\frac{\partial}{\partial w(\cdot)}\mathbb{E}\left[e^ {-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]=-2\theta_{k}e^{-2\theta_{k}(w(z) +z)}f_{\tilde{Y}}(z), \tag{45}\] \[\frac{\partial}{\partial w(\cdot)}\int_{0}^{\infty}\eta(y)w(y)dy =\eta(z). \tag{46}\]
Substituting in (37) and using the stationarity condition of the Lagrangian with respect to the functional \(w(\cdot)\), i.e., \(\frac{\partial\mathcal{L}}{\partial w(z)}=0\), we get
\[\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(f_{\tilde{Y}}(z)- \mathbb{E}\left[e^{-2\theta_{k}Y}\right]e^{-2\theta_{k}(w^{*}(z)+z)}f_{\tilde{ Y}}(z)\right)-\beta f_{\tilde{Y}}(z)-\eta(z)-\zeta(1-\epsilon)f_{\tilde{Y}}(z)=0, \tag{47}\]
which leads to
\[\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E}\left[e^{-2 \theta_{k}Y}\right]e^{-2\theta_{k}(w^{*}(z)+z)}\right)=\beta+\zeta(1-\epsilon )+\frac{\eta(z)}{f_{\tilde{Y}}(z)}. \tag{48}\]
Now define the function \(G_{\mathbf{\theta},\mathbf{\sigma}^{2}}(\cdot)\) as
\[G_{\mathbf{\theta},\mathbf{\sigma}^{2}}(x)=\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta _{k}}\left(1-\mathbb{E}\left[e^{-2\theta_{k}Y}\right]e^{-2\theta_{k}x}\right). \tag{49}\]
This function is monotonically increasing. Consequently, the equation \(G_{\mathbf{\theta},\mathbf{\sigma}^{2}}(w^{*}(z)+z)=\beta+\zeta(1-\epsilon)+\frac{\eta(z)}{f_{\tilde{Y}}(z)}\) has a unique solution, which is given by

\[w^{*}(z)+z=G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta+\zeta(1-\epsilon)+\frac{\eta(z)}{f_{\tilde{Y}}(z)}\right). \tag{50}\]
Using the complementary slackness for the constraint \(w(\cdot)\geq 0\)[44], we have,
\[w^{*}(z)=\left[G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta+\zeta(1- \epsilon)\right)-z\right]^{+}. \tag{51}\]
This proves the first part of the theorem, which states that the optimal waiting function is in fact a threshold policy with a threshold
\[\tau_{\text{MAF}}^{*}(K,f_{\text{max}},\epsilon,\mathbf{\theta},\mathbf{\sigma}^{2})= G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta+\zeta(1-\epsilon)\right) \tag{52}\]
Now, we focus on characterizing the Lagrange multiplier \(\zeta\). First, if \(f_{\text{max}}\geq\mu\), then \(\mathbb{E}[w(\tilde{Y})]\geq 0\), which is feasible for any \(w(\cdot)\geq 0\). Consequently, in this case the sampling constraint is never active and \(\zeta^{*}=0\), i.e.,
\[\tau_{\text{MAF}}^{*}(K,f_{\text{max}},\epsilon,\mathbf{\theta},\mathbf{\sigma}^{2})= G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta\right),\quad\text{if }f_{\text{max}}\geq\mu. \tag{53}\]
Next, we consider the other case, i.e., when \(f_{\text{max}}<\mu\). In this case, we need to evaluate
the expected value of the waiting function \(\mathbb{E}[w(\tilde{Y})]\) with an arbitrary threshold \(\tau\). We denote this expected value by \(H(\tau)\):
\[H(\tau)=\mathbb{E}\left[w(\tilde{Y})\right]=\mathbb{E}\left[(\tau-\tilde{Y})^{+ }\right]. \tag{54}\]
To evaluate (54), we need to characterize the statistical distribution of \(\tilde{Y}\). By the law of total probability, we can write,
\[f_{\tilde{Y}}(z)=\sum_{\rho=K}^{\infty}f_{\tilde{Y}|\sum_{k=1}^{K}M^{[k]}} \left(z\left|\sum_{k=1}^{K}M^{[k]}=\rho\right)\mathbb{P}\left(\sum_{k=1}^{K}M ^{[k]}=\rho\right). \tag{55}\]
Conditioned on \(\sum_{k=1}^{K}M^{[k]}=\rho\), then \(\tilde{Y}=\sum_{k=1}^{K}\sum_{m=1}^{M^{[k]}}Y^{[k]}(m)\) is a sum of \(\rho\) i.i.d. exponential random variables. Hence, \(\tilde{Y}|\rho\sim\text{Erlang}(\rho,\mu)\), i.e.,
\[f_{\tilde{Y}|\sum_{k=1}^{K}M^{[k]}}\left(z|\rho\right)=\frac{\mu^{\rho}z^{\rho -1}e^{-\mu z}}{(\rho-1)!},\quad z\geq 0. \tag{56}\]
Further, the distribution of \(\sum_{k=1}^{K}M^{[k]}\) is given by
\[\mathbb{P}\left(\sum_{k=1}^{K}M^{[k]}=\rho\right)=\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K},\quad\rho\geq K. \tag{57}\]
Substituting the above in (54) leads to
\[H(\tau) =\int_{0}^{\tau}(\tau-z)f_{\tilde{Y}}(z)dz \tag{58}\] \[=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\frac{\mu^{\rho}}{(\rho-1)!}\left[\tau\int_{0}^{\tau}z^{\rho-1}e^{-\mu z}dz-\int_{0}^{\tau}z^{\rho}e^{-\mu z}dz\right]\] (59) \[=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\frac{\mu^{\rho}}{(\rho-1)!}\left[\tau\frac{(\rho-1)!}{\mu^{\rho}}\gamma(\mu\tau,\rho)-\frac{\rho!}{\mu^{\rho+1}}\gamma(\mu\tau,\rho+1)\right]\] (60) \[=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\left[\tau\gamma(\mu\tau,\rho)-\frac{\rho}{\mu}\gamma(\mu\tau,\rho+1)\right], \tag{61}\]
where interchanging sum and integral in (59) follows from the dominated convergence theorem and \(\gamma(x,y)\) is the normalized incomplete Gamma function:
\[\gamma(x,y)=\frac{1}{(y-1)!}\int_{0}^{x}t^{y-1}e^{-t}dt \tag{62}\]
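The series (58)–(61) is straightforward to evaluate numerically by truncation. The following minimal sketch (not the authors' code; the truncation limit and parameters are illustrative) evaluates \(H(\tau)\) this way and cross-checks it against a direct Monte Carlo estimate of \(\mathbb{E}[(\tau-\tilde{Y})^{+}]\), where the total number of transmission rounds follows the negative-binomial law (57):

```python
# Minimal sketch: H(tau) from the truncated series (58)-(61), cross-checked by Monte Carlo.
# scipy's gammainc(a, x) is the regularized lower incomplete gamma, i.e. the paper's gamma(x, a).
import numpy as np
from scipy.special import gammainc, comb

def H_series(tau, K, mu, eps, rho_max=1000):
    rho = np.arange(K, rho_max)
    w = comb(rho - 1, K - 1) * eps**(rho - K) * (1 - eps)**K      # P(sum_k M^[k] = rho), Eq. (57)
    return np.sum(w * (tau * gammainc(rho, mu * tau) - rho / mu * gammainc(rho + 1, mu * tau)))

rng = np.random.default_rng(0)
K, mu, eps, tau = 2, 1.0, 0.3, 3.0
M = K + rng.negative_binomial(K, 1 - eps, size=500_000)           # total rounds until K successes
Y = rng.gamma(M, 1 / mu)                                          # Ytilde | rounds ~ Erlang(rounds, mu)
print(H_series(tau, K, mu, eps), np.mean(np.maximum(tau - Y, 0.0)))
```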
Now observe that the sampling constraint in (18) can be written as:
\[(1-\epsilon)H(\tau)\geq\frac{K}{f_{\max}}-\frac{K}{\mu}. \tag{63}\]
Thus, if \((1-\epsilon)H\left(G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta\right)\right)> \frac{K}{f_{\max}}-\frac{K}{\mu}\), then \(\zeta^{*}=0\) and the threshold for the unconstrained problem (without the sampling constraint) is optimal.
Otherwise, the constraint is satisfied with equality, i.e., \((1-\epsilon)H(\tau)=\frac{K}{f_{\max}}-\frac{K}{\mu}\), and hence,
\[\tau^{*}=H^{-1}\left(\frac{1}{1-\epsilon}\left(\frac{K}{f_{\max}}-\frac{K}{\mu}\right)\right). \tag{64}\]
All this put together implies that
\[\tau_{\text{MAF}}^{*}(K,f_{\max},\epsilon,\mathbf{\theta},\mathbf{\sigma}^{2})=\max\left\{G_{\mathbf{\theta},\mathbf{\sigma}^{2}}^{-1}\left(\beta\right),H^{-1}\left(\frac{1}{1-\epsilon}\left(\frac{K}{f_{\max}}-\frac{K}{\mu}\right)\right)\right\}. \tag{65}\]
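As an illustration of (65), the following sketch (not the authors' code) computes the two candidate thresholds by numerically inverting \(G_{\mathbf{\theta},\mathbf{\sigma}^{2}}\) and \(H\). It assumes exponential(\(\mu\)) service times, so \(\mathbb{E}[e^{-2\theta_{k}Y}]=\mu/(\mu+2\theta_{k})\), and takes \(\beta\) as a given candidate value of the long-term sum MSE (in the full solution, \(\beta^{*}\) comes from solving (33)):

```python
# Minimal sketch of the threshold formula (65); helper names and parameters are illustrative.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc, comb

def G(x, theta, sigma2, mu):
    theta, sigma2 = np.asarray(theta), np.asarray(sigma2)
    return np.sum(sigma2 / (2 * theta) * (1 - mu / (mu + 2 * theta) * np.exp(-2 * theta * x)))

def H(tau, K, mu, eps, rho_max=1000):                      # series (58)-(61)
    rho = np.arange(K, rho_max)
    w = comb(rho - 1, K - 1) * eps**(rho - K) * (1 - eps)**K
    return np.sum(w * (tau * gammainc(rho, mu * tau) - rho / mu * gammainc(rho + 1, mu * tau)))

def tau_MAF(beta, K, f_max, eps, theta, sigma2, mu, tau_hi=100.0):
    g0 = G(0.0, theta, sigma2, mu)
    # if beta <= G(0), the unconstrained inverse would be negative and [.]^+ clips it at 0
    tau_u = 0.0 if beta <= g0 else brentq(lambda x: G(x, theta, sigma2, mu) - beta, 0.0, tau_hi)
    target = (K / f_max - K / mu) / (1 - eps)              # sampling constraint: H(tau) >= target
    tau_c = 0.0 if target <= 0 else brentq(lambda x: H(x, K, mu, eps) - target, 0.0, tau_hi)
    return max(tau_u, tau_c)

print(tau_MAF(beta=4.0, K=2, f_max=0.95, eps=0.3, theta=[0.1, 0.5], sigma2=[1.0, 4.0], mu=1.0))
```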
Finally, we solve for the optimal long-term MMSE, \(\beta^{*}\). Denote \(\mathbb{E}[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}]\) by \(F_{k}(\tau)\). By observing that
\[F_{k}(\tau) =\mathbb{E}[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}] \tag{66}\] \[=\int_{0}^{\tau}e^{-2\theta_{k}\tau}f_{\tilde{Y}}(z)dz+\int_{\tau}^{\infty}e^{-2\theta_{k}z}f_{\tilde{Y}}(z)dz\] (67) \[=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\frac{\mu^{\rho}}{(\rho-1)!}\left[e^{-2\theta_{k}\tau}\int_{0}^{\tau}z^{\rho-1}e^{-\mu z}dz+\int_{\tau}^{\infty}z^{\rho-1}e^{-(2\theta_{k}+\mu)z}dz\right]\] (68) \[=\sum_{\rho=K}^{\infty}\binom{\rho-1}{K-1}\epsilon^{\rho-K}(1-\epsilon)^{K}\left[e^{-2\theta_{k}\tau}\gamma(\mu\tau,\rho)+\left(\frac{\mu}{2\theta_{k}+\mu}\right)^{\rho}\left(1-\gamma((2\theta_{k}+\mu)\tau,\rho)\right)\right], \tag{69}\]
\(\beta^{*}\) can be obtained by solving the auxiliary problem \(p(\beta^{*})=0\), i.e., solving (33).
This concludes the proof.
**Remark 5**: _We observe that the optimal threshold \(\tau^{*}\) in (31) is increasing with the erasure probability \(\epsilon\). This is indeed the case since both \(G_{\mathbf{\theta},\mathbf{\sigma}}^{-1}(\cdot)\) and \(H^{-1}(\cdot)\) are increasing functions, and both of their arguments are increasing with \(\epsilon\) (the optimal long-term average sum MSE \(\beta^{*}\), for instance, can only increase with higher erasure rates). This can be attributed to the fact that for higher erasure probabilities, the average samples' inter-delivery time increases. This, in turn, causes the transmitter to wait more before sending a new sample to make sure that the new sample is sufficiently **different** from the previously delivered sample; this new sample will be used to estimate the signal over a large period of time because of the large expected inter-delivery time caused by the high erasure probability. In this case, the transmitter will be
blocked from generating new samples while waiting for the previous sample to be delivered. On the other hand, for smaller erasure probabilities, we can see that the waiting time decreases. Again, this can be attributed to the fact that for smaller erasure probabilities, the average samples' delivery time decreases. This means that when the transmitter has a chance to transmit a sample, it is better to do so, as this sample will be delivered quickly and will not block the transmitter from sending new samples._
_Note that the fact that second term in the \(\max\) function, \(H^{-1}\), is increasing with \(\epsilon\) is consistent with the observation in Remark 1; the higher the erasure rate, the larger the waiting time should be to compensate for the high rate of back-to-back transmissions under MAF scheduling that takes a toll on the sampling frequency in this case._
## 5 Optimal Waiting Threshold and Minimum Sum MSE Characterization without Erasure Feedback
In this section, we consider the setting in which the receiver does not provide erasure status feedback to the transmitter. As in Section 4, we focus on the sum MSE penalty together with stationary deterministic waiting policies.
With a slight abuse of notation, let \(\tilde{Y}_{i}(m)=\sum_{k=1}^{K}Y_{i}^{[k]}(m)\) be the sum of the service times during the \(m\)th transmission round. To develop a stationary deterministic waiting policy, we cannot rely on the starting MSE of the epoch (or the starting AoI) since the transmitter does not know of its exact starting time due to the absence of the erasure feedback. Alternatively, we focus our attention on a waiting time that is a function \(w(\cdot)\) of the sum of service times of all processes in the preceding _transmission round_, as opposed to the preceding epoch, i.e., we set
\[W_{i}(m)=w\left(\tilde{Y}_{i}(m-1)\right)=w\left(\sum_{k=1}^{K}Y_{i}^{[k]}(m- 1)\right), \tag{70}\]
with \(\tilde{Y}_{i}(0)\triangleq Y_{i-1}(M_{i-1})\) by definition. This policy is realizable in the absence of erasure feedback since the start and end of transmission rounds are known at the transmitter side. Note that this policy is a stationary deterministic waiting policy since \(\tilde{Y}_{i}(m)\)'s are i.i.d across all transmission rounds.
Similar to Section 4, problem (26) in Lemma 2 can be written in the following functional form:
\[\min_{w(\cdot)\geq 0} \frac{\sum_{k=1}^{K}\mathbb{E}\left[\int_{D^{[k]}}^{D^{[k]}+ \Gamma}\mathsf{mse}_{\text{RR}}^{[k]}\left(t,\tilde{S}^{[k]}\right)dt\right]}{ \mathbb{E}\left[\Gamma\right]}\] s.t. \[\mathbb{E}\left[w\left(\tilde{Y}\right)\right]\geq\frac{K}{f_{ \max}}-\frac{K}{\mu}. \tag{71}\]
Theorem 2 below provides the solution of problem (71).
**Theorem 2**: _The optimal waiting policy \(w^{*}(\cdot)\) that solves problem (71) is given by the threshold policy_
\[w^{*}(z)=\left[\tau_{\text{RR}}^{*}(K,f_{\max},\epsilon,\mathbf{\theta},\mathbf{\sigma}) -z\right]^{+}, \tag{72}\]
_where the optimal threshold \(\tau_{\text{RR}}^{*}(K,f_{\max},\epsilon,\mathbf{\theta},\mathbf{\sigma})\) is given by_
\[\tau_{\text{RR}}^{*}(K,f_{\max},\epsilon,\mathbf{\theta},\mathbf{\sigma}^{2})=\max \left\{\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon}^{-1}(\beta),\tilde{H} ^{-1}\left(\frac{K}{f_{\max}}-\frac{K}{\mu}\right)\right\} \tag{73}\]
_in which_
\[\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon}(x)\triangleq\sum_{k=1}^{K} \frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E}\left[e^{-2\theta_{k}Y} \right]\frac{(1-\epsilon)^{2}e^{-2\theta_{k}x}}{(1-\epsilon L_{k}(x))^{2}} \right), \tag{74}\]
_with \(L_{k}(\cdot)\) being_
\[L_{k}(x)\triangleq e^{-2\theta_{k}x}\gamma(\mu x,K)+\left(\frac{\mu}{\mu+2\theta_{k}}\right)^{K}\left(1-\gamma\left((2\theta_{k}+\mu)x,K\right)\right), \tag{75}\]
_and \(\beta^{*}\) corresponds to the optimal long-term average sum MSE in this case, and is given by the unique solution of_
\[\tilde{p}(\beta)=\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\frac{ \tilde{H}(\tau_{\text{RR}}^{*})+\frac{K}{\mu}}{1-\epsilon}-\frac{1}{2\theta_{k }}\cdot\frac{\mu}{\mu+2\theta_{k}}\cdot\left(1-\tilde{F}_{k}(\tau_{\text{RR}} ^{*})\right)\right)-\beta\left(\frac{\tilde{H}(\tau_{\text{RR}}^{*})+\frac{K}{ \mu}}{1-\epsilon}\right)=0, \tag{76}\]
_in which \(\tilde{H}(\cdot)\) and \(\tilde{F}_{k}(\cdot)\) are defined as follows:_
\[\tilde{H}(\tau)=\tau\gamma\left(\mu\tau,K\right)-\frac{K}{\mu}\gamma\left(\mu \tau,K+1\right), \tag{77}\]
\[\tilde{F}_{k}(\tau)=\frac{(1-\epsilon)L_{k}(\tau)}{1-\epsilon L_{k}(\tau)} \tag{78}\]
_where \(\gamma(x,y)\) is the normalized incomplete Gamma function._
**Proof:** Similar to the proof of Theorem 1, we follow Dinkelbach's approach to solve the optimization problem in (71). We start by defining the auxiliary function \(\tilde{p}(\beta)\) for \(\beta\geq 0\)
such that:
\[\tilde{p}(\beta)=\min_{w(\cdot)\geq 0} \sum_{k=1}^{K}\mathbb{E}\left[\int_{D}^{D+\Gamma(K,\epsilon,w)} \mathfrak{mse}_{\text{RR}}^{[k]}(t,\tilde{S}_{i}^{[k]})dt\right]-\beta\mathbb{E }\left[\Gamma(K,\epsilon,w)\right] \tag{79}\] \[\text{s.t.} \mathbb{E}\left[w\left(\tilde{Y}\right)+\sum_{k=1}^{K}Y^{[k]} \right]\geq\frac{K}{f_{\text{max}}}\]
Hence, the Lagrangian function corresponding to the auxiliary problem above is given by
\[\mathcal{L}= \sum_{k=1}^{K}\mathbb{E}\left[\int_{D}^{D+\Gamma(K,\epsilon,w)} \mathfrak{mse}_{\text{RR}}^{[k]}(t,\tilde{S}_{i}^{[k]})dt\right]-\beta\mathbb{ E}\left[\Gamma(K,\epsilon,w)\right]-\int_{0}^{\infty}\eta(y)w(y)dy\] \[-\zeta\left(\mathbb{E}\left[w\left(\tilde{Y}\right)+\sum_{k=1}^{ K}Y^{[k]}\right]-\frac{K}{f_{\text{max}}}\right). \tag{80}\]
Without loss of generality, we focus on the epoch of the \(K\)th process. In this case, we can write \(\Gamma_{i}^{[K]}\) as
\[\Gamma_{i}^{[K]}=w\left(\tilde{Y}_{i-1}(M_{i-1}^{[K]})\right)+\sum_{m=1}^{M_{i}^{[K]}-1}\left[\tilde{Y}_{i}(m)+w\left(\tilde{Y}_{i}(m)\right)\right]+\tilde{Y}_{i}\left(M_{i}^{[K]}\right), \tag{81}\]
where the summation is empty when \(M_{i}^{[K]}=1\). Note that \(\Gamma_{i}^{[1]}\sim\cdots\sim\Gamma_{i}^{[K]}\sim\Gamma(K,\epsilon,w)\). Moreover, due to the stationarity of the waiting policy, we can drop the indices \(i\) and \(K\), and re-define \(\tilde{Y}_{i-1}(M_{i-1}^{[K]})=\underline{\tilde{Y}}\) and \(\tilde{Y}_{i}(M_{i}^{[K]})=\bar{\tilde{Y}}\). Hence, we have the expected epoch length given by
\[\mathbb{E}\left[\Gamma(K,\epsilon,w)\right] =\mathbb{E}\left[\Gamma_{i}^{[K]}\right] \tag{82}\] \[=\mathbb{E}[w(\underline{\tilde{Y}})]+\mathbb{E}[M-1]\cdot\mathbb{E}[\tilde{Y}+w(\tilde{Y})]+\mathbb{E}[\bar{\tilde{Y}}]\] (83) \[=\mathbb{E}[M]\cdot\left(\mathbb{E}[\tilde{Y}]+\mathbb{E}[w(\tilde{Y})]\right)\] (84) \[=\frac{\frac{K}{\mu}+\mathbb{E}[w(\tilde{Y})]}{1-\epsilon} \tag{85}\]
where (83) follows from Wald's equation, (84) follows from the fact that the aggregate service times \(\underline{\tilde{Y}}\sim\tilde{Y}\sim\bar{\tilde{Y}}\sim\text{Erlang}(K,\mu)\) as the individual service times are i.i.d. \(\sim\exp(\mu)\), and (85) follows from the fact that the number of transmission rounds \(M\sim\text{geometric}(1-\epsilon)\). From (43), we have the expected MSE in the epoch given by
\[\mathbb{E}\left[\int_{D_{i}}^{D_{i}+\Gamma(K,\epsilon,w)}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-e^{-2\theta_{k}\left(t-\tilde{S}_{i}^{[k]}\right)}\right)dt\right]\\ =\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\mathbb{E}\left[\Gamma(K,\epsilon,w)\right]-\frac{1}{2\theta_{k}}\mathbb{E}\left[e^{-2\theta_{k}Y}\right]\left(1-\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]\right)\right). \tag{86}\]
Due to the stationary policy \(w(\cdot)\), and the fact that all service times are i.i.d., we can evaluate the expectation \(\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]\) as follows:
\[\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]= \mathbb{E}\left[\mathbb{E}\left[e^{-2\theta_{k}\sum_{m=0}^{M-1}\left(w(\tilde{Y}(m))+\tilde{Y}(m)\right)}|M\right]\right] \tag{87}\] \[= \sum_{m=1}^{\infty}\left(\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\right)^{m}\epsilon^{m-1}(1-\epsilon)\] (88) \[= \frac{(1-\epsilon)\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]}{1-\epsilon\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]} \tag{89}\]
where (87) follows from iterated expectation over the number of transmission rounds \(M\) with \(\tilde{Y}(0)=\underline{\tilde{Y}}\sim\tilde{Y}\). This leads to the following functional derivative of the expectation \(\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]\) with respect to \(w(\cdot)\) (at realization \(\tilde{Y}=z\)),
\[\frac{\partial}{\partial w(\cdot)}\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K, \epsilon,w)}\right]=\frac{-2\theta_{k}(1-\epsilon)e^{-2\theta_{k}(w(z)+z)}f_{ \tilde{Y}}(z)}{\left(1-\epsilon\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+ \tilde{Y})}\right]\right)^{2}}. \tag{90}\]
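Before imposing the stationarity condition, (85) and (89) can be sanity-checked numerically. The following Monte Carlo sketch (not from the paper; parameters are illustrative) simulates the epoch as described by (87), i.e., \(M\sim\text{geometric}(1-\epsilon)\) rounds, each contributing \(w(\tilde{Y}(m))+\tilde{Y}(m)\) with \(\tilde{Y}(m)\sim\text{Erlang}(K,\mu)\) i.i.d., under a threshold policy \(w(y)=[\tau-y]^{+}\):

```python
# Quick Monte Carlo sanity check (a sketch) of (85) and (89) under an illustrative threshold policy.
import numpy as np

rng = np.random.default_rng(0)
K, mu, eps, tau, theta = 2, 1.0, 0.3, 1.5, 0.5
n = 100_000

M = rng.geometric(1 - eps, size=n)                       # number of transmission rounds per epoch
gammas = np.array([np.sum(np.maximum(tau, rng.gamma(K, 1 / mu, m))) for m in M])  # sum of w(y)+y = max(tau, y)

y = rng.gamma(K, 1 / mu, n)
Ew = np.mean(np.maximum(tau - y, 0.0))                   # E[w(Ytilde)]
L = np.mean(np.exp(-2 * theta * np.maximum(tau, y)))     # E[exp(-2 theta_k (w(Ytilde)+Ytilde))]

print(gammas.mean(), (K / mu + Ew) / (1 - eps))                              # both sides of (85)
print(np.mean(np.exp(-2 * theta * gammas)), (1 - eps) * L / (1 - eps * L))   # both sides of (89)
```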
Applying the stationarity condition of the Lagrangian with respect to the functional \(w(\cdot)\), \(\frac{\partial\mathcal{L}}{\partial w(z)}=0\), we get the following optimality condition:
\[\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E}\left[e^{-2 \theta_{k}Y}\right]\frac{(1-\epsilon)^{2}e^{-2\theta_{k}(w(z)+z)}}{\left(1- \epsilon\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right] \right)^{2}}\right)=\beta+\frac{(1-\epsilon)\eta(z)}{f_{\tilde{Y}}(z)}+(1- \epsilon)\zeta. \tag{91}\]
Define the function \(\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}(\cdot)\) as
\[\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}(x)=\sum_{k=1}^{K}\frac{ \sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E}\left[e^{-2\theta_{k}Y}\right] \frac{(1-\epsilon)^{2}e^{-2\theta_{k}x}}{\left(1-\epsilon\mathbb{E}\left[e^{ -2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\right)^{2}}\right) \tag{92}\]
Observe that the function \(\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}\) is a monotonically increasing function in \(x\) (given that \(\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\) for any specific choice of \(w(\cdot)\) is fixed, irrespective of the realizations). Thus, using the complementary slackness condition, the optimal waiting function is indeed a threshold waiting policy of the following form (see Footnote 6):
Footnote 6: The optimal threshold seems to be _self-dependent_ through the expectation term \(\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\). This does not affect the claim that the optimal waiting policy is a threshold policy, as the function \(\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}(\cdot)\) is a monotone function for any choice of \(w(\cdot)\), which further implies the existence of a unique solution. Nevertheless, finding this threshold numerically would require iterating back and forth between the expectation term \(L_{k}(\tau)\) (see (94)) and the inverse function \(\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}^{-1}(\cdot)\) as we show later on in the proof.
\[w^{*}(z)=\left[\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon,w}^{-1}\left( \beta+(1-\epsilon)\zeta\right)-z\right]^{+}. \tag{93}\]
Now that the threshold behavior has been established, we note that the threshold policy
maintains the monotonicity behavior of \(\tilde{G}(\cdot)\) since \(\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\) would be a monotone decreasing function in the threshold value.
Considering a threshold waiting policy of the form \(w(\tilde{y})=\left[\tau-\tilde{y}\right]^{+}\), and since \(\tilde{Y}\sim\text{Erlang}(K,\mu)\), we can evaluate the expectation \(\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right]\), denoted by \(L_{k}(\tau)\), as follows:
\[L_{k}(\tau)\triangleq\mathbb{E}\left[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}\right] =\int_{0}^{\tau}e^{-2\theta_{k}\tau}\frac{\mu^{K}\tilde{y}^{K-1}e^{-\mu\tilde{y}}}{(K-1)!}d\tilde{y}+\int_{\tau}^{\infty}e^{-2\theta_{k}\tilde{y}}\frac{\mu^{K}\tilde{y}^{K-1}e^{-\mu\tilde{y}}}{(K-1)!}d\tilde{y} \tag{94}\] \[=e^{-2\theta_{k}\tau}\gamma(\mu\tau,K)+\frac{\mu^{K}}{(K-1)!}\int_{\tau}^{\infty}\tilde{y}^{K-1}e^{-(2\theta_{k}+\mu)\tilde{y}}d\tilde{y}\] (95) \[=e^{-2\theta_{k}\tau}\gamma(\mu\tau,K)+\left(\frac{\mu}{\mu+2\theta_{k}}\right)^{K}\left(1-\gamma\left((2\theta_{k}+\mu)\tau,K\right)\right). \tag{96}\]
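The closed form (96) can be evaluated directly with the regularized lower incomplete gamma function; the following minimal sketch (illustrative parameters, not the authors' code) also cross-checks it against a Monte Carlo estimate of \(\mathbb{E}[e^{-2\theta_{k}(w(\tilde{Y})+\tilde{Y})}]\) under the threshold policy:

```python
# Minimal sketch of (96); gammainc(a, x) in SciPy is the regularized lower incomplete gamma,
# i.e. the paper's gamma(x, a).
import numpy as np
from scipy.special import gammainc

def L_k(tau, theta_k, K, mu):
    return (np.exp(-2 * theta_k * tau) * gammainc(K, mu * tau)
            + (mu / (mu + 2 * theta_k))**K * (1 - gammainc(K, (2 * theta_k + mu) * tau)))

# Monte Carlo cross-check: w(y) + y = max(tau, y) under the threshold policy
rng = np.random.default_rng(1)
y = rng.gamma(2, 1.0, 500_000)                           # Ytilde ~ Erlang(K=2, mu=1)
print(L_k(1.5, 0.5, 2, 1.0), np.mean(np.exp(-2 * 0.5 * np.maximum(1.5, y))))
```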
In this case, the function \(\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon}(\cdot)\) (we dropped the \(w(\cdot)\) dependence as we deal with a threshold structure) can be re-written as:
\[\tilde{G}_{\mathbf{\theta},\mathbf{\sigma}^{2},\epsilon}(x)=\sum_{k=1}^{K}\frac{ \sigma_{k}^{2}}{2\theta_{k}}\left(1-\mathbb{E}\left[e^{-2\theta_{k}Y}\right] \frac{(1-\epsilon)^{2}e^{-2\theta_{k}x}}{(1-\epsilon L_{k}(x))^{2}}\right). \tag{97}\]
We also evaluate the expected waiting time function, \(\tilde{H}(\tau)\), as
\[\tilde{H}(\tau)\triangleq\mathbb{E}[w(\tilde{Y})] =\mathbb{E}\left[\left[\tau-\tilde{Y}\right]^{+}\right] \tag{98}\] \[=\int_{0}^{\tau}(\tau-\tilde{y})f(\tilde{y})d\tilde{y}\] (99) \[=\tau\int_{0}^{\tau}\frac{\mu^{K}\tilde{y}^{K-1}e^{-\mu\tilde{y}} }{(K-1)!}d\tilde{y}-\int_{0}^{\tau}\tilde{y}\frac{\mu^{K}\tilde{y}^{K-1}e^{-\mu \tilde{y}}}{(K-1)!}d\tilde{y}\] (100) \[=\tau\gamma\left(\mu\tau,K\right)-\frac{K}{\mu}\gamma\left(\mu \tau,K+1\right). \tag{101}\]
Finally, for compactness, we define
\[\tilde{F}_{k}(\tau)\triangleq\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K, \epsilon,w)}\right]=\frac{(1-\epsilon)L_{k}(\tau)}{1-\epsilon L_{k}(\tau)}. \tag{102}\]
Plugging all these quantities into the long-term average MSE penalty, we get
\[\sum_{k=1}^{K}\overline{\texttt{mse}_{\text{RR}}^{[k]}} =\frac{\sum_{k=1}^{K}\mathbb{E}\left[\int_{D^{[k]}}^{D^{[k]}+\Gamma}\texttt{mse}_{\text{RR}}^{[k]}\left(t,\tilde{S}^{[k]}\right)dt\right]}{\mathbb{E}\left[\Gamma\right]} \tag{103}\] \[=\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\frac{1}{2\theta_{k}}\cdot\frac{\mathbb{E}\left[e^{-2\theta_{k}Y}\right]}{\mathbb{E}[\Gamma(K,\epsilon,w)]}\left(1-\mathbb{E}\left[e^{-2\theta_{k}\Gamma(K,\epsilon,w)}\right]\right)\right) \tag{104}\]
\[=\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(1-\frac{1}{2 \theta_{k}}\cdot\frac{\mu}{\mu+2\theta_{k}}\cdot\frac{1-\epsilon}{\tilde{H}(\tau )+\frac{K}{\mu}}\cdot\left(1-\tilde{F}_{k}(\tau)\right)\right) \tag{105}\]
Hence, the optimal solution of the auxiliary problem, \(\tilde{p}(\beta^{*})=0\), is obtained by solving
\[\sum_{k=1}^{K}\frac{\sigma_{k}^{2}}{2\theta_{k}}\left(\frac{\tilde{H}(\tau)+ \frac{K}{\mu}}{1-\epsilon}-\frac{1}{2\theta_{k}}\cdot\frac{\mu}{\mu+2\theta_{k }}\cdot\left(1-\tilde{F}_{k}(\tau)\right)\right)-\beta^{*}\left(\frac{\tilde{H }(\tau)+\frac{K}{\mu}}{1-\epsilon}\right)=0 \tag{106}\]
as stated in the theorem.
Focusing on the sampling frequency constraint, one can rewrite it as
\[\tilde{H}(\tau)\geq\frac{K}{f_{\max}}-\frac{K}{\mu}. \tag{107}\]
Hence, similar to the proof of Theorem 1, if the sampling constraint is non-binding, i.e., if \(\tilde{H}(\tilde{G}_{\boldsymbol{\theta},\boldsymbol{\sigma}^{2},\epsilon}^ {-1}(\beta))>\frac{K}{f_{\max}}-\frac{K}{\mu}\), the unconstrained solution of the threshold is optimal. Otherwise, we satisfy the constraint with equality, i.e., we set
\[\tau_{\mathrm{RR}}^{*}=\tilde{H}^{-1}\left(\frac{K}{f_{\max}}-\frac{K}{\mu} \right). \tag{108}\]
Combining the above two cases gives (73), and completes the proof.
**Remark 6**: _Different from Theorem 1, the function \(\tilde{G}_{\boldsymbol{\theta},\boldsymbol{\sigma}^{2},\epsilon}\) is not necessarily increasing in \(\epsilon\), since the numerator and the denominator of the fraction in (74) are both monotonically decreasing in \(\epsilon\). Consequently, the optimal threshold in the absence of erasure status feedback need not always be increasing in \(\epsilon\) as in Theorem 1. In fact, our numerical evaluation of the optimal threshold in the absence of erasure status feedback shows that the optimal threshold is indeed decreasing as the erasure probability \(\epsilon\) increases. This is intuitive since the transmitter, in this case, has no knowledge about the current age of the processes at the receiver other than that it is, on average, increasing in \(\epsilon\). Consequently, waiting less is more conservative and leads to a reduction in the overall long-term average of the sum MSE._
In Algorithm 1, we illustrate how to evaluate the optimal policy in Theorem 2 using a _nested bisection_ method. Specifically, we run an outer bisection search over the sum MSE value, \(\beta\), and an inner bisection search over the threshold value, \(\tau\). For the inner bisection, we efficiently evaluate the inverse function \(\tilde{G}^{-1}(\cdot)\) or \(\tilde{H}^{-1}(\cdot)\), while the outer bisection solves for the unique solution of the auxiliary problem \(\tilde{p}(\beta)=0\).
**Remark 7**: _Algorithm 1 can be used to evaluate Theorem 1 as well after replacing \(\tilde{G}(\cdot)\) by \(G(\cdot)\), \(\tilde{H}(\cdot)\) by \(H(\cdot)\), \(\tilde{F}(\cdot)\) by \(F(\cdot)\), \(\tilde{p}(\cdot)\) by \(p(\cdot)\), and the condition for satisfying the sampling frequency constraint from \(\tilde{H}(\tau_{0})<\frac{K}{f_{\max}}-\frac{K}{\mu}\) to \(H(\tau_{0})<\frac{1}{1-\epsilon}\left(\frac{K}{f_{\max}}-\frac{K}{\mu}\right)\)._
```
Require: K, σ, θ, f_max, μ, ε, δ, τ_max
  β_ℓ ← 0,  β_h ← Σ_{k=1}^{K} σ_k²/(2θ_k)
  while β_h − β_ℓ > δ do                          ▷ Outer bisection over the sum MSE value
      β ← (β_h + β_ℓ)/2
      τ_ℓ ← 0,  τ_h ← τ_max
      while τ_h − τ_ℓ > δ do                      ▷ Inner bisection over the unconstrained threshold
          τ ← (τ_h + τ_ℓ)/2
          O_τ ← G̃(τ) − β
          if O_τ > 0 then τ_h ← τ else τ_ℓ ← τ end if
      end while
      τ_0 ← τ                                     ▷ Unconstrained solution of the threshold
      if H̃(τ_0) < K/f_max − K/μ then              ▷ Checking the sampling frequency constraint
          τ_ℓ ← τ_0,  τ_h ← τ_max
          while τ_h − τ_ℓ > δ do                  ▷ Inner bisection over the constrained threshold
              τ ← (τ_h + τ_ℓ)/2
              O_τ ← H̃(τ) − (K/f_max − K/μ)
              if O_τ > 0 then τ_h ← τ else τ_ℓ ← τ end if
          end while
      else
          τ ← τ_0
      end if
      τ* ← τ
      O_β ← p̃(β) = Σ_{k=1}^{K} σ_k²/(2θ_k) [ (H̃(τ)+K/μ)/(1−ε) − (1/(2θ_k))·(μ/(μ+2θ_k))·(1−F̃_k(τ)) ] − β·(H̃(τ)+K/μ)/(1−ε)
      if O_β > 0 then β_ℓ ← β else β_h ← β end if
  end while
  return β* = β, τ*_RR = τ*
```
**Algorithm 1** Nested Bisection Search for Evaluating the Optimal Policy in Theorem 2
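A compact Python sketch of Algorithm 1 is given below (not the authors' implementation; the function names, tolerances, and bracket \(\tau_{\max}\) are illustrative choices, and exponential(\(\mu\)) service is assumed so that \(\mathbb{E}[e^{-2\theta_{k}Y}]=\mu/(\mu+2\theta_{k})\)). The bisections use only function evaluations of \(\tilde{G}\), \(\tilde{H}\), and \(\tilde{p}\), so the runtime is logarithmic in the required accuracy:

```python
# Minimal sketch of Algorithm 1 (nested bisection) for Theorem 2.
import numpy as np
from scipy.special import gammainc          # regularized lower incomplete gamma, gammainc(a, x)

def L(tau, th, K, mu):                      # Eq. (75)/(96)
    return (np.exp(-2 * th * tau) * gammainc(K, mu * tau)
            + (mu / (mu + 2 * th))**K * (1 - gammainc(K, (2 * th + mu) * tau)))

def G_tilde(tau, th, s2, K, mu, eps):       # Eq. (74)/(97)
    return np.sum(s2 / (2 * th) * (1 - mu / (mu + 2 * th)
                  * (1 - eps)**2 * np.exp(-2 * th * tau) / (1 - eps * L(tau, th, K, mu))**2))

def H_tilde(tau, K, mu):                    # Eq. (77)/(101)
    return tau * gammainc(K, mu * tau) - K / mu * gammainc(K + 1, mu * tau)

def F_tilde(tau, th, K, mu, eps):           # Eq. (78)/(102)
    Lk = L(tau, th, K, mu)
    return (1 - eps) * Lk / (1 - eps * Lk)

def p_tilde(beta, tau, th, s2, K, mu, eps): # Eq. (76)/(106)
    EG = (H_tilde(tau, K, mu) + K / mu) / (1 - eps)          # expected epoch length
    mse = np.sum(s2 / (2 * th) * (EG - 1 / (2 * th) * mu / (mu + 2 * th)
                                  * (1 - F_tilde(tau, th, K, mu, eps))))
    return mse - beta * EG

def bisect(f, lo, hi, tol):                 # plain bisection on an increasing f
    # if f(lo) > 0 already, this collapses to lo (threshold clipped at 0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

def algorithm1(K, s2, th, f_max, mu, eps, tol=1e-6, tau_max=50.0):
    th, s2 = np.asarray(th, float), np.asarray(s2, float)
    beta_l, beta_h = 0.0, float(np.sum(s2 / (2 * th)))
    while beta_h - beta_l > tol:
        beta = 0.5 * (beta_l + beta_h)
        tau0 = bisect(lambda t: G_tilde(t, th, s2, K, mu, eps) - beta, 0.0, tau_max, tol)
        if H_tilde(tau0, K, mu) < K / f_max - K / mu:        # sampling constraint binding
            tau = bisect(lambda t: H_tilde(t, K, mu) - (K / f_max - K / mu), tau0, tau_max, tol)
        else:
            tau = tau0
        if p_tilde(beta, tau, th, s2, K, mu, eps) > 0:       # beta still below the optimum
            beta_l = beta
        else:
            beta_h = beta
    return beta, tau

print(algorithm1(K=2, s2=[1.0, 4.0], th=[0.1, 0.5], f_max=1.5, eps=0.3, mu=1.0))
```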
## 6 Numerical Results: Optimal Threshold Behavior
In this section, we present our numerical results concerning Theorem 1 and Theorem 2. We study the effects of the erasure probability, the number of processes, and the speed of the processes on the optimal waiting threshold and the MMSE with and without erasure status feedback.
### Effect of Erasure Probability
We study a 2-process system with \(\mathbf{\theta}=[0.1\quad 0.5]\), and \(\mathbf{\sigma}=[1\quad 2]\). The exponential service rate is \(\mu=1\). In case of the presence/absence of the erasure status feedback, we study how the optimal threshold \(\tau_{\text{MAF}}^{*}/\tau_{\text{RR}}^{*}\) behaves versus the erasure probability \(\epsilon\) for \(f_{\text{max}}=0.5\), \(0.95\), and \(1.5\).
First, we consider the case with erasure status feedback in Fig. 6(a). Our results show that for all sampling frequency constraints, the optimal threshold _increases_ as the erasure probability increases. This is due to the fact that the functions \(G_{\mathbf{\theta},\mathbf{\sigma}}^{-1}(\cdot)\) and \(H^{-1}(\cdot)\) are increasing functions of their arguments, which are, in turn, increasing in \(\epsilon\) (see Remark 5). Nevertheless, we have three different cases. First, when \(f_{\text{max}}=0.5\), the sampling frequency constraint is binding even at \(\epsilon=0\). Hence, the optimal threshold is given by
\[H^{-1}\left(\frac{1}{1-\epsilon}\left[\frac{K}{f_{\text{max}}}-\frac{K}{\mu} \right]\right)=H^{-1}\left(\frac{2}{1-\epsilon}\right). \tag{109}\]
We see that the optimal threshold is higher than the other two cases and has a much steeper curve versus \(\epsilon\). On the other hand, when \(f_{\text{max}}=1.5\), the sampling frequency constraint is inactive since \(f_{\text{max}}>\mu\), and the optimal threshold is given by \(G_{\mathbf{\theta},\mathbf{\sigma}}^{-1}(\beta^{*})\) for all \(\epsilon\). Finally,
Figure 6: The optimal threshold (\(\tau_{\text{MAF}}^{*}\) (right) and \(\tau_{\text{RR}}^{*}\) (left)) versus the erasure probability (\(\epsilon\)) for different sampling frequency constraints (\(f_{\text{max}}\)).
for the case when \(f_{\max}=0.95\), we observe an interesting behavior: when \(\epsilon<\epsilon^{*}=0.7\), the threshold corresponding to \(G_{\mathbf{\theta},\mathbf{\sigma}}^{-1}(\beta^{*})\) is (slightly) higher than the threshold corresponding to \(H^{-1}\left(\frac{1}{1-\epsilon}\left[\frac{K}{f_{\max}}-\frac{K}{\mu}\right]\right)\) (which is shown as a dotted curve in Fig. 6a); while for \(\epsilon>\epsilon^{*}=0.7\), the sampling frequency constraint becomes binding and therefore, the optimal threshold is characterized by \(H^{-1}(\cdot)\) and becomes steeper.
In Fig. 6b, we consider the case without erasure status feedback. Our results show that the optimal threshold exhibits the _opposite behavior_ compared to the case with feedback (see Remark 6). Specifically, when the sampling frequency constraint is never binding, i.e., in the case of \(f_{\max}=1.5>\mu=1\), the optimal threshold \(\tau_{RR}^{*}\) is _decreasing_ as the erasure probability increases. Furthermore, when \(\epsilon>0.6\), the transmitter employs a zero-wait policy and sends its samples immediately after acquiring them, since \(\tau_{RR}^{*}=0\) in this case. Second, we note that the optimal threshold in the case of \(f_{\max}=0.5\) is constant irrespective of the erasure probability. This is due to the fact that the waiting time needed to satisfy the sampling constraint is _independent_ of \(\epsilon\) in the case of absent erasure status feedback as the waiting appears at the beginning of each transmission round regardless of the erasure events. Finally, we see that for \(f_{\max}=0.95\), the optimal threshold is decreasing (following the unconstrained solution of the problem) until it saturates at \(\tau_{RR}^{*}=1\) starting from \(\epsilon=0.2\) to satisfy the sampling frequency constraint.
In Fig. 7, we show the optimal threshold for both cases of erasure status feedback avail
Figure 7: Comparison between the optimal threshold behavior versus erasure probability in both cases of the presence/absence of erasure status feedback with \(f_{\max}=1.5\).
ability at \(f_{\max}=1.5\) (non-binding sampling frequency constraint) on the same figure. Fig. 7 shows that both thresholds begin from the same value at \(\epsilon=0\). This is due to the fact that both settings are equivalent when there are no erasure events. The optimal thresholds then part ways as the erasure probability increases and behave _oppositely_ as stated above.
In Fig. 8, we show the resultant sum MSE versus the erasure probability for both cases of erasure status feedback availability at \(f_{\max}=1.5\) (non-binding sampling frequency constraint) on the same figure. We compare the resultant MSE values with their counterparts if the zero-wait policy is employed. Fig. 8 shows that the sum MSE in the presence of erasure status feedback is smaller than that in the absence of erasure status feedback. Furthermore, the sum MSE with optimal waiting for both cases outperforms the zero-wait policy, as expected. In the case of no erasure status feedback, the sum MSE with optimal waiting converges to its counterpart with zero-waiting policy, as the optimal threshold converges to zero as \(\epsilon\) increases. Surprisingly, Fig. 8 shows that for small erasure probability (up to \(\approx 0.07\)), employing _optimal waiting with RR scheduling_ outperforms _zero-waiting with MAF scheduling_.
Figure 8: Comparison between the sum MSE with optimal waiting (in solid lines) versus the erasure probability in both cases of the presence/absence of erasure status feedback with \(f_{\max}=1.5\). For each case, we compare the resultant sum MSE with its counterpart if the zero-wait policy is employed (in dotted lines).
### Effect of the Number of Processes
We now consider a symmetric system with \(K\) processes, each having \(\sigma_{k}^{2}=1\) and \(\theta_{k}=0.5\), for all \(k\), communicating through a channel with \(\epsilon=0.3\). In Fig. 9(a), we study how the optimal threshold varies with an increasing number of processes \(K\) in the presence of erasure status feedback. We observe that the long-term average sum MMSE increases with \(K\) (as expected, since more processes need to be conveyed through a shared channel). Fig. 9(a) shows that as \(K\) increases, the optimal threshold increases as well. The slope of the curve depends on \(f_{\max}\). When \(f_{\max}=0.5\), the sampling frequency constraint is binding, and \(\tau^{*}\) appears to linearly increase with \(K\) with a steeper slope. When \(f_{\max}=1.5>\mu=1\), i.e., for an unconstrained version of the problem, the optimal threshold is slowly increasing with \(K\). For \(f_{\max}=0.95\), the optimal threshold matches the unconstrained solution for \(K=1,2\). Nevertheless, when \(K>2\), the sampling frequency constraint becomes binding and the linear-like profile of the optimal threshold prevails.
The same general trends hold for the case of the absence of erasure status feedback in Fig. 9(b). We note, however, that the slopes of the curves in Fig. 9(a) are relatively higher than their counterparts in Fig. 9(b). This is consistent with the results of the previous subsection, where larger waiting (at the same erasure probability) is employed if erasure feedback is present.
Finally, we study how the optimal threshold without feedback, \(\tau_{\text{RR}}^{*}\), behaves with \(K\) over a set of different values of \(\epsilon\). Interestingly, despite the general increasing trend of \(\tau_{\text{RR}}^{*}\) versus the number of processes \(K\) for fixed \(\epsilon\), it is at the same time monotonically decreasing in \(\epsilon\) for fixed \(K\). This is illustrated in Fig. 10. This implies that there exists a specific number of processes \(\underline{K}\), after which the transmitter starts to wait before each transmission round.
Figure 9: The optimal threshold (\(\tau_{\text{MAF}}^{*}\) (right) and \(\tau_{\text{RR}}^{*}\) (left)) versus the number of processes (\(K\)) for different sampling frequency constraints (\(f_{\max}\)).
This is a contrasting behavior compared to the case of having erasure status feedback, in which increasing \(\epsilon\) always increases \(\tau_{\rm MAF}^{*}\) for any number of processes.
### Effect of the Variation Speed of the Processes
We consider a 2-process system with \(\sigma_{1}^{2}=2\), \(\sigma_{2}^{2}=1\), and \(\theta_{1}=0.5\). We vary \(\theta_{2}\in[0.1,1]\) and observe its effect on the optimal threshold and the MMSE for the same service rate \(\mu=1\).
In Fig. 11(a), in the case of available erasure status feedback, we observe that when the sampling frequency constraint is binding, e.g., when \(f_{\rm max}=0.5\), the optimal threshold is independent of \(\theta_{2}\) as the argument of \(H^{-1}(\cdot)\) is independent of \(\theta_{2}\). The optimal threshold, however, is a monotonically decreasing function in \(\theta_{2}\) for \(f_{\rm max}=1.5\) as the process becomes faster, and thus the system needs to wait less to track the variations in the process as long as the sampling constraint is inactive. In both cases, the long-term average MMSE is decreasing in \(\theta_{2}\) since the sum of the processes' variances decreases.
Similar observations can be drawn for the case of the absence of erasure status feedback in Fig. 11(b). Different from Fig. 11(a), it appears that the optimal threshold is changing slightly with respect to \(\theta_{2}\) even for the unconstrained problem (under the considered system parameters). Furthermore, as expected, we can observe that the MMSE function (versus \(\theta_{2}\))
Figure 10: The optimal threshold (\(\tau_{\rm RR}^{*}\)) versus the number of processes (\(K\)) for different values of the erasure probability (\(\epsilon\)) in the absence of erasure status feedback.
for the case of no erasure status feedback is higher than its counterpart if the erasure status feedback is available.
## 7 Conclusion
In this paper, we investigated the problem of estimating \(K\) independent OU processes under a total sampling constraint \(f_{\max}\), with the goal of identifying the optimal sampling instants such that the long-term average sum MSE is minimized. The acquired samples experience independent erasure events with an erasure probability \(\epsilon\). We focused on characterizing the optimal sampling policy in two cases: first, when erasure status feedback is available at the transmitter. In this case, we assume that the transmitter acquires the samples according to the MAF scheduling policy. In the second case, the erasure status feedback is non-existent at the transmitter, and the transmitter employs an RR scheduling policy. We re-formulated both problems in terms of optimizing a stationary waiting policy. In the case of available erasure status feedback, we demonstrated that aggregating waiting at the beginning of the epoch does not hurt the long-term average MSE. In the case of absent erasure status feedback, however, this aggregation needs to be done at the beginning of each RR transmission round. We showed that the optimal waiting policy is indeed a threshold policy in both settings. We characterized the optimal threshold in terms of \(K\), \(f_{\max}\), and \(\epsilon\).
Our numerical evaluations and our structural results show an intriguing behavior of the optimal threshold. While the optimal threshold \(\tau^{*}\) at \(\epsilon=0\) is identical for both settings, with increasing \(\epsilon\) the optimal thresholds for the two settings part ways: the threshold _increases_ for the case of _available_ erasure status feedback, and _decreases_ when the erasure status feedback
Figure 11: The optimal threshold (\(\tau^{*}_{\mathrm{MAF}}\) (left) and \(\tau^{*}_{\mathrm{RR}}\) (right)) and the optimal long-term average MMSE versus \(\theta_{2}\) for tracking two processes with different sampling frequency constraints (\(f_{\max}\)); the first process has fixed parameters \(\sigma_{1}^{2}=1\) and \(\theta_{1}=1\).
is _non-existent_. Furthermore, we show that for both settings, the optimal threshold is an increasing function of the number of processes \(K\). Finally, we show the effect of the variation speed of the process on the long-term average sum MSE.
Future directions of this work may include investigating fully-observable processes (OU or otherwise), different age-dependent penalties other than the sum MSE penalty, the behavior of the waiting policy for generalized statistical models for the service queue other than the exponential distribution, exogenous sampling with and without preemption rather than the generate-at-will model, and signal-dependent sampling rather than signal-independent sampling.
|
2307.01576 | Merging and band transition of bound states in the continuum in
leaky-mode photonic lattices | Bound states in the continuum (BICs) theoretically have the ability to
confine electromagnetic waves in limited regions with infinite radiative
quality ($Q$) factors. However, in practical experiments, resonances can only
exhibit finite $Q$ factors due to unwanted scattering losses caused by
fabrication imperfections. Recently, it has been shown that ultrahigh-$Q$
guided-mode resonances (GMRs), which are robust to fabrication imperfections,
can be realized by merging multiple BICs in momentum space. In this study, we
analytically and numerically investigate the merging and band transition of
accidental BICs in planar photonic lattices. Accidental BICs can merge at the
edges of the second stop band, either with or without a symmetry-protected BIC.
We show that as the thickness of the photonic lattice gradually increases, the
merged state of BICs transitions from the upper to the lower band edge. Using
coupled-mode analysis, we present the analytical merging thickness at which
multiple accidental BICs merge at the second-order $\Gamma$ point. Our
coupled-mode analysis could be beneficial for achieving ultrahigh-$Q$ GMRs in
various photonic lattices composed of materials with different dielectric
constants. | Sun-Goo Lee, Seong-Han Kim, Wook-Jae Lee | 2023-07-04T09:09:44Z | http://arxiv.org/abs/2307.01576v1 | # Merging and band transition of bound states in the continuum in leaky-mode photonic lattices
###### Abstract
Bound states in the continuum (BICs) theoretically have the ability to confine electromagnetic waves in limited regions with infinite radiative quality (\(Q\)) factors. However, in practical experiments, resonances can only exhibit finite \(Q\) factors due to unwanted scattering losses caused by fabrication imperfections. Recently, it has been shown that ultrahigh-\(Q\) guided-mode resonances (GMRs), which are robust to fabrication imperfections, can be realized by merging multiple BICs in momentum space. In this study, we analytically and numerically investigate the merging and band transition of accidental BICs in planar photonic lattices. Accidental BICs can merge at the edges of the second stop band, either with or without a symmetry-protected BIC. We show that as the thickness of the photonic lattice gradually increases, the merged state of BICs transitions from the upper to the lower band edge. Using coupled-mode analysis, we present the analytical merging thickness at which multiple accidental BICs merge at the second-order \(\Gamma\) point. Our coupled-mode analysis could be beneficial for achieving ultrahigh-\(Q\) GMRs in various photonic lattices composed of materials with different dielectric constants.
## I Introduction
In photonics, bound states in the continuum (BICs) refer to the special eigensolutions of electromagnetic wave equations in non-Hermitian physical systems [1; 2; 3]. Unusually localized BICs can exhibit theoretically infinite radiative \(Q\) factors, even though they can couple with outgoing radiative waves that can carry electromagnetic energy [4]. Due to their remarkable ability to increase light-matter interactions in confined regions, extensive studies on BICs have been conducted for basic research and practical applications in recent years [5; 6; 7; 8; 9; 10; 11]. In theoretical or numerical studies, BICs with infinite \(Q\) factors can be found in various open photonic systems, including photonic crystals [12; 13], metasurfaces [14; 16], and plasmonic structures [17; 18; 19]. In experimental implementations, however, unwanted scattering losses attributed to fabrication imperfections, such as surface roughness, imperfect boundaries of individual scattering elements, and structural disorder, give rise to the considerable reduction of radiative \(Q\) factors. In practical photonic structures, the theoretically perfect BICs appear as quasi-BICs with finite \(Q\) factors due to out-of-plane coupling with continuous radiating waves.
Subwavelength photonic crystal slabs can exhibit abundant BICs if the lattices possess time-reversal symmetry, up-down mirror symmetry, and proper rotational symmetry [20; 21]. Recently, topologically-protected BICs in photonic crystal slabs have attracted particular interest because they are stable [22; 23; 24; 25]. Furthermore, current nanofabrication technology is sufficient to implement BICs in photonic crystal slabs [26; 27; 28]. Due to their topological nature, accidental BICs can be moved along the band diagram by varying the structural parameters while maintaining the symmetry of the systems [29; 30]. It has also been demonstrated that multiple accidental BICs can be merged at the Brillouin zone center, which is also referred to as the lattice \(\Gamma\) point in a photonic band structure, by adjusting structural parameters [31; 32]. In the merged states of multiple BICs, undesired out-of-plane scattering losses can be significantly suppressed by increasing the radiative \(Q\) factors of nearby eigenstates over a finite range of wavevectors. Merging BICs is important for practical applications because it provides a powerful mechanism to achieve robust ultrahigh-\(Q\) resonances, that enhance light-matter interactions.
The concept of merging multiple BICs has been introduced recently and has been investigated in only a few studies so far. In this paper, we present analytical and numerical results on the merging and band transition of accidental BICs in one-dimensional (1D) and two-dimensional (2D) photonic lattice slabs. Guided modes become BICs when out-of-plane radiation completely disappears. We present an analytical expression, an overlap integral in the slab region, associated with the out-of-plane radiation of the guided modes, and show that accidental BICs can appear at generic \(k\) points in reciprocal space where the value of the overlap integral approaches zero. As the thickness of photonic lattice slabs increases, accidental BICs gradually move downward along the upper dispersion curve and eventually meet at the lattice \(\Gamma\) point. As the thickness is further increased, the merged state of BICs transitions from the upper to lower band edges at the lattice \(\Gamma\) point. The BICs move down and away from each other with further increase in thickness. Merging thicknesses, at which multiple accidental BICs merge at the second-order \(\Gamma\) point, are calculated through the coupled-mode analysis. Our coupled-mode analysis
could be applied to achieve ultrahigh-\(Q\) resonances in diverse 1D and 2D photonic lattices made of materials with different dielectric constants.
## II Merging and band transition of BICs in 1D photonic lattices
Figure 1(a) compares a conventional homogeneous slab waveguide and a representative 1D photonic lattice for studying the merging and band transition of BICs. The waveguide and photonic lattice are surrounded by a medium with a dielectric constant \(\epsilon_{s}\). The waveguide consists of a homogeneous material with a dielectric constant \(\epsilon_{w}\), and the 1D photonic lattice is composed of high (\(\epsilon_{h}\)) and low (\(\epsilon_{l}\)) dielectric constant materials. The period is \(\Lambda\), the width of the high dielectric constant part is \(\rho\Lambda\), and the thickness of the photonic lattice is \(t\). In the homogeneous waveguide with \(\epsilon_{w}>\epsilon_{s}\), guided modes possess purely real eigenfrequencies \(\Omega=\Omega_{\text{Re}}\) and propagate along the waveguide without out-of-plane radiation because they are perfectly protected by total internal reflection (TIR). As illustrated in Fig. 1(b), dispersion curves for waveguide modes should be located in the yellow region below the light line. The 1D lattice can also support transverse electric (TE) guided modes because its average dielectric constant \(\epsilon_{0}=\epsilon_{l}+\rho(\epsilon_{h}-\epsilon_{l})\) is larger than \(\epsilon_{s}\). With \(\Delta\epsilon=\epsilon_{h}-\epsilon_{l}>0\), as illustrated in Fig. 1(c), all Bloch guided modes can be plotted in the irreducible Brillouin zone and photonic band gaps open at the Bragg condition \(k_{z}=jK/2\), where \(k_{z}\) is the Bloch wavevector, \(K=2\pi/\Lambda\) is the magnitude of the grating vector, and \(j\) is an integer. Near the second stop band in the white region, guided modes possess complex eigenfrequencies \(\Omega=\Omega_{\text{Re}}+i\,\Omega_{\text{Im}}\) and exhibit interesting leaky-wave effects such as BICs and guided-mode resonances (GMRs), also known as guided resonances, via zero-order diffraction [33; 34]. Guided modes in the yellow region are not associated with the leaky-wave effects because they do not couple with external waves in the radiation continuum, and Bloch modes in the grey region are less practical because they generate unwanted higher-order diffracted waves as well as the desired zero-order diffraction. In general, numerous leaky guided modes coexist in photonic lattices with a slab geometry, and the various modes possess their own dispersion curves, photonic band gaps, and high-\(Q\) BICs. In this study, we focus our attention on the BICs in the vicinity of the second stop band (\(j=2\)) opened by the fundamental TE\({}_{0}\) mode, as this simplest case brings out the key properties of the accidental BICs. In fact, most of the important studies on BICs are associated with the leaky Bloch modes in the vicinity of the second stop bands. To elucidate the fundamental properties of BICs, we employ a semianalytical dispersion model and rigorous finite-element method (FEM) simulations.
Merging and band transition of BICs are possible because they are topologically protected [29; 30]. Figure 2(a) illustrates the evolution of the dispersion curves of guided modes in the vicinity of the second stop band under variation of the thickness \(t\). No noticeable changes in the dispersion curves are observed with variations in thickness. The simulated spatial electric field (\(E_{y}\)) distributions in the insets show that regardless of \(t\), upper band edge modes with asymmetric field distributions become nonleaky symmetry-protected BICs. Conversely, lower band edge modes with symmetric field distributions can be either leaky or nonleaky depending on the thickness values. The merging and band transition of accidental BICs can be clearly seen in Fig. 2(b), which depicts \(Q\) factors as a function of \(k_{z}\). When \(t=1.13\)\(\Lambda\), two accidental BICs with \(-1\) charge and one symmetry
Figure 1: (a) Schematics of a 1D photonic lattice and homogeneous dielectric waveguide. Leaky modes in the photonic lattice lose their electromagnetic energy over time, while nonleaky guided modes in the homogeneous waveguide are protected by total internal reflection. (b) The dispersion curve for the fundamental TE\({}_{0}\) mode of a homogeneous dielectric waveguide. (c) Conceptual illustration of photonic band structures and transmission curve through a 1D photonic lattice. Leaky modes of the photonic lattice in the lower (upper) dispersion curve at the Bloch wavevector \(k_{z}=|\delta|\) around the second band gap (\(j=2\)) originate from the propagating modes of the homogeneous waveguide at the wavevector \(\beta=K-|\delta|\) (\(\beta=K+|\delta|\)). At the second stop band, one band edge mode generate a GMR via coupling with normally incident plane wave, whereas the other edge mode becomes a symmetry-protected BIC.
protected BIC with \(+1\) charge appear, resulting in three isolated peaks in the \(Q\) factor curve of the upper band. Due to the \(180^{\circ}\) rotational symmetry of the 1D lattice, two accidental BICs appear in pairs at wavevectors \(k_{z}=\pm 0.01995\)\(K\) simultaneously. As the value of \(t\) increases from 1.13 \(\Lambda\), two accidental BICs gradually move downward along the upper dispersion curve and approach to \(\Gamma\) point where a symmetry-protected BIC is located. When \(t=1.14489\)\(\Lambda\), two accidental BICs and one symmetry-protected BIC merge at the lattice \(\Gamma\) point, resulting in a single enhanced peak in the \(Q\) factor curve. Compared to the isolated symmetry-protected or accidental BICs, the merged state at \(t=1.14489\)\(\Lambda\) exhibits significantly improved radiative \(Q\) factors over a wide range of wavevectors. As the value of \(t\) further increases to 1.16282 \(\Lambda\), simultaneous peaks in the \(Q\) factor curves are observed at both the upper and lower band edges. The peak in the upper band is attributed to the symmetry-protected BIC, evident from the asymmetric field distributions shown in the inset of Fig. 2(a). We propose that the interband transition of the merged state of BICs, induced by the variation of \(t\), leads to the enhanced peak at the lower band edge. Upon further increasing \(t\) beyond 1.16282 \(\Lambda\), the merged state at the \(\Gamma\) point splits into two isolated peaks with \(-1\) charge.
## III Coupled-mode analysis
In the planar waveguide structures shown in Fig. 1(a), the dispersion relations of guided modes, including eigenfrequencies and radiative \(Q\) factors, can be obtained by solving the 1D wave equation given by [35]
\[\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial z^{2}} \right)E_{y}(x,z)+\epsilon(x,z)k_{0}^{2}E_{y}(x,z)=0, \tag{1}\]
where \(k_{0}\) denotes the wave number in free space. For the homogeneous waveguide plotted in Fig. 1(b), Eq. (1) can be solved analytically and spatial electric field distribution of the TE\({}_{0}\) mode satisfies \(E_{x}=0\), \(E_{z}=0\), and
\[E_{y}(x,z)=\varphi(x)e^{i\beta z}, \tag{2}\]
where \(\varphi(x)\) characterizes the transverse profile of the TE\({}_{0}\) mode and \(\beta\) denotes the propagation constant along the \(z\) direction [36]. The TE\({}_{0}\) mode can propagate without radiative loss along the waveguide due to TIR.
For the photonic lattices in Fig. 1(a), Eq. (1) can be solved numerically by expanding the electric field \(E_{y}(x,z)\) as a Bloch form and the periodic dielectric function \(\epsilon(x,z)\) in a Fourier series [37]. Using the current state of computational software and hardware, it is possible to obtain exact solutions of Eq. (1) for certain system parameters. However, relying solely on direct numerical simulations may not be sufficient to fully comprehend the
Figure 2: Merging and band transition of accidental BICs in a 1D leaky-mode photonic lattice. (a) FEM-simulated dispersion curves near the second stop band for four different values of \(t\). Insets with blue and red colors show the spatial distribution of the electric field (\(E_{y}\)) for the band edge modes at the \(y=0\) plane. The vertical dotted lines indicate the mirror planes in the computational cells. (b) Simulated radiative \(Q\) factors of the upper and lower dispersion curves. The merged state of BICs transitions from the upper to lower dispersion curve as \(t\) increases, while one symmetry-protected BIC with \(+1\) charge always appears at the upper band edge. The structural parameters used in the FEM simulations were \(\epsilon_{0}=4\), \(\Delta\epsilon=0.5\), \(\epsilon_{s}=1\), and \(\rho=0.35\).
fundamental properties of BICs, even though numerical calculations can provide accurate dispersion relations of leaky guided modes. To gain a deeper understanding of the merging and band transition of BICs, we employ the semianalytical dispersion model introduced by Kazarinov and Henry (KH) [38]. In this model, the spatial dielectric function and electric field are approximated as:
\[\epsilon(x,z) =\gamma_{0}+\sum_{n=1,2}\left(\gamma_{n}e^{inKz}+\gamma_{-n}e^{- inKz}\right), \tag{3}\] \[E_{y}(x,z) =\left(Ae^{iKz}+Be^{-iKz}\right)\varphi(x)+\Delta E(x,z). \tag{4}\]
In Eq. (3), the Fourier coefficient \(\gamma_{n}\) has a zero value outside the periodic layer, and \(\gamma_{0}=\epsilon_{0}\). In Eq. (4), \(A(z)\sim\exp(ik_{z}z)\) and \(B(z)\sim\exp(-ik_{z}z)\) are the slowly varying envelopes of the two counter-propagating waves, and \(\Delta E(x,z)\) represents the diffracted wave radiating away from the periodic structure. For symmetric lattices with \(\epsilon(x,z)=\epsilon^{*}(x,-z)\) and \(\gamma_{-n}=\gamma_{n}\), near the second stop band, the eigenfrequencies of guided modes can be written as
\[\Omega(k_{z})=\Omega_{0}-\frac{ih_{1}\pm\sqrt{k_{z}^{2}+(h_{2}+ih_{1})^{2}}}{ Kh_{0}}, \tag{5}\]
where \(\Omega_{0}\) is the Bragg frequency under vanishing index modulation, and the coupling coefficients \(h_{n=0,1,2}\) are given by
\[h_{0} = \frac{k_{0}}{K}\int_{-\infty}^{\infty}\gamma_{0}(x)\varphi(x) \varphi^{*}(x)dx, \tag{6}\] \[h_{1} = i\frac{k_{0}^{4}\gamma_{1}^{2}}{2K}\int_{-t}^{0}\int_{-t}^{0}G(x,x^{\prime})\varphi(x^{\prime})\varphi^{*}(x)dx^{\prime}dx,\] (7) \[h_{2} = \frac{k_{0}^{2}\gamma_{2}}{2K}\int_{-t}^{0}\varphi(x)\varphi^{*}( x)dx, \tag{8}\]
where \(G(x,x^{\prime})\) denotes the Green's function for diffracted fields [33; 8]. Equation (5) indicates that the leaky stop band with two band edges \(\Omega^{b}=\Omega_{0}+h_{2}/(Kh_{0})\) and \(\Omega^{a}=\Omega_{0}-(h_{2}+i2h_{1})/(Kh_{0})\) opens at \(k_{z}=0\). For symmetric lattices, the coupling coefficient \(h_{1}\) is generally a complex value, while \(h_{0}\) and \(h_{2}\) are real values, irrespective of the lattice parameters. With the time dependence of \(\exp(-i\Omega t)\), Bloch modes near the second stop band tend to lose their electromagnetic energy over time because they have complex frequencies. However, one of the band edge modes with a purely real eigenfrequency \(\Omega^{b}\) can become a nonleaky symmetry-protected BIC at the lattice's \(\Gamma\) point.
By investigating the dispersion relations in Eq. (5) with the associated coupling coefficients \(h_{0}\), \(h_{1}\), and \(h_{2}\), one can notice that \(\text{Im}[\Omega]\) becomes zero when \(\text{Re}[h_{1}]\) approaches zero. With \(\text{Re}[h_{1}]=0\), dispersion relations in Eq. (5) can be rewritten as
\[\Omega(k_{z})=\Omega_{0}+\frac{\text{Im}[h_{1}]\pm\sqrt{k_{z}^{2}+(h_{2}-\text {Im}[h_{1}])^{2}}}{Kh_{0}}. \tag{9}\]
Equation (9) shows that accidental BICs with purely real eigenfrequencies can be observed at any generic \(k\) point, including the lattice \(\Gamma\) point. Due to their topological nature, these accidental BICs do not destroy but rather move along dispersion curves as the lattice parameters change continuously. In 1D photonic lattices possessing \(180^{\circ}\) rotational symmetry, two accidental BICs should appear in pairs at \(k_{z}=\pm|k_{\text{BIC}}|\) simultaneously. It is important to note that the formation of accidental BICs is independent of the existence of symmetry-protected BICs. Thus, at the asymmetric edge, two accidental BICs and one symmetry-protected BIC can merge as illustrated in Fig. 2 when \(\text{Re}[h_{1}]\) approaches \(0\) due to a variation in lattice parameters. On the other hand, at the symmetric edge, two accidental BICs can merge with the appropriate lattice parameters. In previous studies [34; 8], it was shown that the position of a symmetry-protected BIC changes from the upper to lower band edge through the band flip, also known as the topological phase transition. This transition can be achieved by varying the value of the parameter \(\rho\). Before the band flip, as shown in Fig. 2 with \(\rho=0.35\), accidental BICs can merge with a symmetry-protected BIC at the upper band edge. Conversely, after the band flip with appropriate value of \(\rho\), the merged state with a symmetry-protected BIC appears at the lower band edge.
We now present an analytical expression that is associated with the formation and band transition of accidental BICs. By examining the value of \(h_{1}\) with the appropriate Green's function [8], we can demonstrate that \(\text{Re}(h_{1})\) approaches zero when the overlap integral given by
\[\Sigma=\int_{-t}^{0}e^{i\mu x}\varphi(x)dx, \tag{10}\]
approaches zero; see Supplemental Material [39] for details. In Eq. (10), \(\mu=\sqrt{\epsilon_{0}k_{0}^{2}-\delta^{2}}\), and the transverse profile of the \(\text{TE}_{0}\) mode is given by \(\varphi(x)=ce^{+i\alpha x}+c^{*}e^{-i\alpha x}\), where \(\alpha=\sqrt{\epsilon_{0}k_{0}^{2}-\beta^{2}}\), and the complex coefficient \(c\) is set to satisfy \(\int_{-\infty}^{\infty}|\varphi(x)|^{2}dx=1\). As shown in Figs. 1(b) and 1(c), waveguide modes with a positive (negative) value of \(\delta\) at \(\beta=K+\delta\) correspond to the leaky modes in the upper (lower) dispersion curve of photonic lattices at the Bloch wavevector \(k_{z}=|\delta|\). Merging and band transition of accidental BICs illustrated in Fig. 2 can be explained by investigating analytical BIC points \(\delta_{\text{BIC}}\), where the overlap integral \(\Sigma\) becomes zero, as a function of the thickness \(t\). Figure 3 illustrates the calculated \(\delta_{\text{BIC}}\) with \(\Sigma=0\). In the calculations, the value of \(t\) varies from \(\Lambda\) to \(1.3\)\(\Lambda\), and \(\beta\) changes in the small discrete step of \(10^{-5}\)\(K\). \(\delta_{\text{BIC}}(t)\), represented by the blue solid line, is defined as the first value at which the value of \(|\Sigma|\) becomes smaller than \(10^{-5}\) for a given \(t\). Figure 3 indicates that as the thickness gradually increases from \(\Lambda\), \(\delta_{\text{BIC}}\) monotonically moves downward and reaches \(\delta=0\) (\(\beta=K\)), which corresponds to the second-order \(\Gamma\) point, when the thickness is \(t_{0}=1.15464\)\(\Lambda\). As the thickness increases further, \(\delta_{\text{BIC}}\) continues to move downward and
moves away from the \(\Gamma\) point.
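As a concrete illustration of this procedure, the following sketch (not the authors' code) evaluates \(|\Sigma(\delta)|\) numerically for the unmodulated slab, assuming the standard even-mode dispersion relation \(\alpha\tan(\alpha t/2)=\kappa\) for the fundamental TE\({}_{0}\) mode of a symmetric slab; the normalization of \(\varphi\) is omitted since it does not affect the location of the zero, and the scan range and grid sizes are illustrative choices. For the merging thickness \(t_{0}=1.15464\)\(\Lambda\) quoted above, the scan should locate \(\delta_{\text{BIC}}\approx 0\):

```python
# Minimal numerical sketch of the overlap integral Sigma in Eq. (10); lengths in units of Lambda = 1.
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import trapezoid

eps0, eps_s, Lam = 4.0, 1.0, 1.0
Kg = 2 * np.pi / Lam                                   # grating vector magnitude K

def te0(beta, t):
    """Frequency k0 and transverse wavenumber alpha of the fundamental (even) TE mode."""
    def disp(k0):
        a = np.sqrt(eps0 * k0**2 - beta**2)
        return a * np.tan(a * t / 2) - np.sqrt(beta**2 - eps_s * k0**2)
    lo = beta / np.sqrt(eps0) * (1 + 1e-9)
    hi = min(beta / np.sqrt(eps_s), np.sqrt((beta**2 + (np.pi / t)**2) / eps0)) * (1 - 1e-9)
    k0 = brentq(disp, lo, hi)
    return k0, np.sqrt(eps0 * k0**2 - beta**2)

def overlap(delta, t):
    """|Sigma(delta)| of Eq. (10) for the mode at beta = K + delta (normalization omitted)."""
    k0, alpha = te0(Kg + delta, t)
    mu = np.sqrt(eps0 * k0**2 - delta**2)
    x = np.linspace(-t, 0.0, 4001)
    phi = np.cos(alpha * (x + t / 2))                  # even TE0 profile inside the slab
    return abs(trapezoid(np.exp(1j * mu * x) * phi, x))

t = 1.15464 * Lam                                      # the merging thickness t0 quoted in the text
deltas = np.linspace(-0.05, 0.05, 2001) * Kg
sig = np.array([overlap(d, t) for d in deltas])
print("delta_BIC / K =", deltas[np.argmin(sig)] / Kg)  # expected to sit near 0 for t = t0
```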
The analytical BICs in Fig. 3 and the dispersion curve in Fig. 1(b) indicate that the eigenfrequencies of accidental BICs decrease continuously with increasing thickness. However, the effects of dielectric constant modulation, such as band folding and photonic band gap, cannot be observed in the analytical BICs represented by the blue line in Fig. 3, as the overlap integral \(\Sigma\) in Eq. (10) is determined by the transverse mode profile \(\varphi(x)\) and the Green's function for the unmodulated homogeneous waveguide slab of thickness \(t\). To verify whether accidental BICs indeed occur as \(\Sigma\) approaches zero, we conducted additional investigations of the BIC points \(k_{\rm BIC}\) using FEM simulations. The results are represented as magenta circles in Fig. 3. Due to the mirror symmetry of the photonic lattice, FEM-simulated accidental BICs with infinite \(Q\) factors appear in pairs at \(\pm|k_{\rm BIC}|\). When the thickness (\(t<t_{0}\)) is far from \(t_{0}=1.15464\ \Lambda\), the values of \(\delta_{\rm BIC}\) and \(k_{\rm BIC}\) agree well. However, as \(t\) approaches \(t_{0}\), the FEM-simulated values of \(k_{\rm BIC}\) slightly deviate from the analytic values of \(\delta_{\rm BIC}\). The deviation \(|k_{\rm BIC}-\delta_{\rm BIC}|\) increases until \(t\) approaches \(t_{u}=1.14489\ \Lambda\), where \(k_{\rm BIC}\) reaches the \(\Gamma\) point. As \(t\) further increases beyond \(t_{u}\), the BIC point is not found for a while, but when \(t_{l}=1.16282\ \Lambda\), the BIC point appears again at the \(\Gamma\) point. The deviation \(|k_{\rm BIC}-\delta_{\rm BIC}|\) decreases with additional increases in thickness (\(t>t_{l}\)), and \(k_{\rm BIC}\) agrees well with \(\delta_{\rm BIC}\) if \(t\) is far away from \(t_{l}\). In the vicinity of the second-order \(\Gamma\) point, photonic band gaps open due to the coupling between two counterpropagating waves [8]. Since the coupling strength is maximum at \(k_{z}=0\) and decreases rapidly as the Bloch wavevector moves away from the \(\Gamma\) point, the deviation \(|k_{\rm BIC}-\delta_{\rm BIC}|\) could be noticeable only around the \(\Gamma\) point. In the FEM simulations, there exists a thickness range, \(t_{u}<t<t_{l}\), where accidental BICs cannot be found due to the presence of a photonic band gap. At \(t=t_{u}\) and \(t=t_{l}\), two accidental BICs meet at the upper and lower band edges, respectively, resulting in the formation of merged BIC states. A symmetry-protected BIC can be included in the merged state at the upper or lower band depending on the value of the lattice parameter \(\rho\), not \(t\).
## IV Ultrahigh-\(Q\) resonances robust to fabrication imperfections
Our coupled-mode analysis reveals that the value of \(\Sigma\) can be tuned to zero at the second-order \(\Gamma\) point of photonic lattices by adjusting the lattice parameters. This tuning can cause accidental BICs to merge, resulting in interesting ultrahigh-\(Q\) resonances. The overlap integral \(\Sigma\) in Eq. (10) mathematically depends on the lattice parameters \(t\), \(\epsilon_{0}\), and \(\epsilon_{s}\). However, thickness variation is essential to achieve merged states of BICs. While the positions of accidental BICs can be substantially moved along dispersion curves with thickness variation, BICs only move slightly with variations of \(\epsilon_{0}\) and \(\epsilon_{s}\). In conventional experimental configurations at wavelengths around \(\lambda=1550\) nm, the value of \(\epsilon_{s}\) typically ranges from 1 to 2.25, and \(\epsilon_{0}\) varies from 3 to 12. For photonic lattices with these typical values of \(\epsilon_{0}\) and \(\epsilon_{s}\), it is meaningful to investigate the normalized merging thickness \(t_{0}/\Lambda\), where \(\Sigma=0\). Red lines in Figs. 4(a) and 4(b) display the calculated \(t_{0}/\Lambda\) as a function of \(\epsilon_{0}\) for \(\epsilon_{s}=1\) and \(\epsilon_{s}=2.25\), respectively. FEM-simulated merging thicknesses \(t_{u}/\Lambda\) and \(t_{l}/\Lambda\), where accidental BICs merge at the upper and lower band edges, respectively, are also displayed for comparison. Figure 4 demonstrates that \(t_{0}/\Lambda\) obtained from Eq. (10) could provide a useful reference for achieving merged states of BICs for ultrahigh-\(Q\) resonances. In Fig. 4, the values of \(t_{0}/\Lambda\) change from 1.14588 to 1.16981 when \(\epsilon_{s}=1\), and from 1.08116 to 1.16066 when \(\epsilon_{s}=2.25\). As the value of \(\epsilon_{0}\), representing the averaged dielectric constant in the waveguide region, decreases, the calculated and simulated merging thicknesses decrease. Figure 4 also shows that a higher dielectric constant \(\epsilon_{s}\) in the cladding region results in a decrease in merging thicknesses. Changes in \(\epsilon_{0}\) and \(\epsilon_{s}\) have a limited impact on the analytical and FEM-simulated merging thicknesses. The dependency of merging thicknesses on the dielectric constants \(\epsilon_{0}\) and \(\epsilon_{s}\) may be attributed to the strong confinement of guided modes in the waveguide and the scaling property of Maxwell's equations [40].
Figure 3: The locations of the accidental BICs, \(\delta_{\rm BIC}\), represented by a blue solid line as a function of \(t\), are obtained from the 1D coupled-mode analysis. The overlap integral \(\Sigma\) in Eq. (10) becomes zero at \(\delta_{\rm BIC}\). FEM-simulated BIC points, \(k_{\rm BIC}\), denoted by magenta circles, are also plotted for comparison. In the calculation of \(\delta_{\rm BIC}\), \(\epsilon_{0}\) and \(\epsilon_{s}\) are fixed to 4 and 1, respectively, so that the analytically obtained \(\delta_{\rm BIC}\) explains the evolution of the accidental BICs simulated by FEM in Fig. 1. The parameter \(t\) was used as the vertical axis, such that BIC points move downward as \(t\) increases. Since the eigenfrequencies of guided modes get lower as \(t\) increases, the FEM-simulated magenta circles mimic the dispersion curves represented in Fig. 2(a).
## V Merging and band transition of BICs in 2D photonic lattices
We also investigate the merging and band transition of BICs in 2D photonic crystals with a thin-film geometry. As illustrated in Fig. 5(a), we use a simple 2D periodic slab composed of square arrays of square-shaped high dielectric constant (\(\epsilon_{h}\)) materials in the background medium with a low dielectric constant (\(\epsilon_{l}\)). Here, the averaged dielectric constant is given by \(\epsilon_{0}=\epsilon_{l}+\rho^{2}\Delta\epsilon\). FEM-simulated dispersion curves plotted in Fig. 5(b) show that there are four bands, A, B, C, and D, in the vicinity of the second-order \(\Gamma\) point. Additionally, spatial electric field (\(|\mathbf{E}|\)) distributions of four band edge modes and radiative \(Q\) factors displayed in Figs. 5(c) and 5(d), respectively, show that the band edge modes in A and B are nonleaky symmetry-protected BICs. Conversely, the band edge modes in C and D are degenerate and generally radiative out of the photonic crystal slab. Merging and band transition of accidental BICs can be clearly seen in Fig. 5(e), which depicts the evolution of radiative \(Q\) factors as the slab thickness varies. The radiative \(Q\) factors are simulated along the \(\Gamma\)X and \(\Gamma\)M directions in the Brillouin zone. In this study, we investigate the merging and band transition of BICs in the highest band A and the lowest band D, for convenience. When \(t=1.161\)\(\Lambda\), two accidental BICs with \(\pm 1\) charge and one symmetry-protected BIC with \(-1\) charge appear, resulting in three distinct peaks in the \(Q\) factor curve in band A. Accidental BICs with \(+1\) and \(-1\) charge approach the symmetry-protected BIC along the \(\Gamma\)X and \(\Gamma\)M directions, respectively. Due to the \(C_{4}\) rotational symmetry of the 2D lattice, four sets of accidental BICs with \(+1\) charge and four sets with \(-1\) charge appear simultaneously. As the value of \(t\) increases from \(1.161\)\(\Lambda\), eight accidental BICs gradually move downward along band A and approach the \(\Gamma\) point where a symmetry-protected BIC is located. When \(t=1.176\)\(\Lambda\), eight accidental BICs and one symmetry-protected BIC merge at the lattice \(\Gamma\) point, resulting in a noticeably enhanced peak in the \(Q\) factor curve. When \(t=1.18072\)\(\Lambda\), two peaks are observed in the \(Q\) factor curves at the edges of band A and band D. The peak in band A is attributed to the symmetry-protected BIC, while the enhanced peak in band D arises from the merged state of accidental BICs. As \(t\) is increased beyond \(1.18072\)\(\Lambda\), the merged state at the \(\Gamma\) point splits into isolated peaks with \(+1\) and \(-1\) charges, and these peaks move downward along band D.
In the FEM simulations with 2D photonic crystal slabs, the merging of BICs was achieved when \(t=1.176\)\(\Lambda\) and \(t=1.18072\)\(\Lambda\), which are close to the analytical merging thickness of \(t_{0}/\Lambda=1.167\) from Fig. 4(a). Furthermore, Jin _et al._ have recently demonstrated experimentally that on-chip photonic resonances can be made robust against fabrication imperfections by combining nine BICs in 2D photonic crystal slabs [31]. The merged state of BICs was achieved at a normalized thickness of \(h/a=1.13\), which is also close to \(t_{0}/\Lambda=1.167\). Hence, it is reasonable to conclude that the analytical merging thickness obtained from Eq. (10) could also be beneficial for achieving merged states of BICs in practical 2D photonic crystal slabs. Compared to the merged states in a 1D photonic lattice plotted in Fig. 2(b), the merged states in a 2D lattice exhibit significantly improved radiative \(Q\) factors across a wide range of wavevectors. This improvement is due to the convergence of a larger number of BICs at the center of the Brillouin zone in 2D configurations. Jin _et al._ showed that there was an improvement in the scaling property from \(Q\propto 1/k_{z}^{2}\) to \(Q\propto 1/k_{z}^{6}\) by merging eight accidental BICs and one symmetry-protected BIC in 2D photonic crystal slabs.
## VI Conclusion
In conclusion, we analytically and numerically investigated the formation, merging, and band transition of accidental BICs in 1D and 2D photonic crystal slabs. Using a simple coupled-mode analysis, we derived an analytical
Figure 4: Analytical and FEM-simulated normalized merging thicknesses as a function of \(\epsilon_{0}\) for (a) \(\epsilon_{s}=1\) and (b) \(\epsilon_{s}=2.25\). Analytical results \(t_{0}/\Lambda\) are useful for finding the merging thicknesses \(t_{u}/\Lambda\) and \(t_{l}/\Lambda\). In the FEM simulations, structural parameters \(\rho=0.35\) and \(\Delta\epsilon=0.5\) are kept constant.
expression for the overlap integral associated with the out-of-plane radiation of guided modes. Accidental BICs can emerge at generic \(k\) points, including the lattice \(\Gamma\) point, where the value of the overlap integral approaches zero. Due to the \(180^{\circ}\) (\(90^{\circ}\)) rotational symmetry of 1D (2D) lattices, two (eight) accidental BICs appear in the momentum space simultaneously. As the thickness of the slab increases, the accidental BICs move downward along the dispersion curve, resulting in the merged state at the edges of the second stop band. As the thickness increases further, the merged state of BICs transitions from the upper to the lower band edge. With a further increase in thickness, the merged BICs at the lower band edge split into multiple isolated ones and move away from each other. We presented the theoretical merging thickness \(t_{0}/\Lambda\), which is the ratio between the thickness and period of photonic crystal slabs, where multiple BICs merge at the second-order \(\Gamma\) point. The merging of multiple BICs is important for practical applications because it provides a powerful mechanism to achieve robust ultrahigh-\(Q\) resonances capable of enhancing light-matter interactions. Our coupled-mode analysis and FEM-simulated results are helpful for achieving topologically enabled ultrahigh-\(Q\) resonances that are robust to fabrication imperfections in diverse 1D and 2D photonic lattices made of various dielectric materials.
**Supporting Information**
Supporting Information is available from the Wiley Online Library or from the author.
**Acknowledgements**
This research was supported by the grant from the National Research Foundation of Korea, funded by the Ministry of Science and ICT (No. 2022R1A2C1011091).
**Conflict of Interest**
The authors declare no conflicts of interest.
**Data Availability Statement**
Data underlying the results in this paper may be obtained from the authors upon reasonable request.
**Keywords**
bound states in the continuum, guided-mode resonances, topologically enabled ultrahigh-\(Q\)
|
2304.06991 | WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for
Visualization Retrieval | Retrieving charts from a large corpus is a fundamental task that can benefit
numerous applications such as visualization recommendations. The retrieved
results are expected to conform to both explicit visual attributes (e.g., chart
type, colormap) and implicit user intents (e.g., design style, context
information) that vary upon application scenarios. However, existing
example-based chart retrieval methods are built upon non-decoupled and
low-level visual features that are hard to interpret, while definition-based
ones are constrained to pre-defined attributes that are hard to extend. In this
work, we propose a new framework, namely WYTIWYR
(What-You-Think-Is-What-You-Retrieve), that integrates user intents into the
chart retrieval process. The framework consists of two stages: first, the
Annotation stage disentangles the visual attributes within the bitmap query
chart; and second, the Retrieval stage embeds the user's intent with customized
text prompt as well as query chart, to recall targeted retrieval result. We
develop a prototype WYTIWYR system leveraging a contrastive language-image
pre-training (CLIP) model to achieve zero-shot classification, and test the
prototype on a large corpus with charts crawled from the Internet. Quantitative
experiments, case studies, and qualitative interviews are conducted. The
results demonstrate the usability and effectiveness of our proposed framework. | Shishi Xiao, Yihan Hou, Cheng Jin, Wei Zeng | 2023-04-14T08:25:53Z | http://arxiv.org/abs/2304.06991v1 | # WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval
###### Abstract
_Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendations. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary upon application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled and low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; and second, the Retrieval stage embeds the user's intent with customized text prompt as well as bitmap query chart, to recall targeted retrieval result. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus with charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework._
**CCS Concepts**
\(\bullet\) _Human-centered computing \(\rightarrow\) Visualization; \(\bullet\) Information systems \(\rightarrow\) Query intent; \(\bullet\) Computing methodologies \(\rightarrow\) Artificial intelligence;_
## 1 Introduction
Data visualization can empower users' understanding of data and facilitate communication. As a result, enormous numbers of charts are flourishing on the Internet. Designing an appropriate chart is a time-consuming and labor-intensive process that needs to consider various visual attributes (_e.g._, chart type, color) [18], as well as the design style (_e.g._, context information, 2D/3D effect) [19]. Therefore, designing based on existing examples, rather than starting with sketches, is a preferred approach [10, 17]. Chart retrieval, as an approach that returns ranked examples with respect to the corresponding query, has gained much interest in both industry and academia [1, 18, 19].
However, existing works mainly focus on improving the similarity between query charts and retrieval results while neglecting implicit user intent. As illustrated in Figure 1, existing chart retrieval frameworks can be divided into two categories: example-based (Figure 1 (left-top)) and definition-based (Figure 1 (left-bottom)) approaches. Example-based methods (_e.g._, [20, 17, 21]) characterize the relevance criteria by the visual similarity of charts, whilst definition-based methods (_e.g._, [1, 18, 19]) measure similarity based on attributes extracted from the input charts. However, these methods may suffer from both input and target shift problems. First, many existing works take only specific formats (_e.g._, scalable vector graphics (SVG) [19, 18]) of visualization as inputs, which are often synthesized with monotonous data distributions and limited chart types, incurring overfitting problems for the models embedded in the framework. In contrast, most online charts are generally in bitmap formats,
Figure 1: Comparison of pipelines of existing and our proposed chart retrieval frameworks: example-based retrieval (left-top), definition-based retrieval (left-bottom), and our proposed intent-aware retrieval (right).
and they are highly diverse in terms of chart types and visual styles [1]. In addition, online charts contain noisy and redundant information, putting higher demands on the generalizability and robustness of the retrieval framework. Second, example-based methods would indiscriminately recall pixel-wise visual similarity in the input chart, while definition-based methods consider only a small number of pre-defined and fixed chart attributes. Such rigid similarity measurements would return unwanted target charts. Furthermore, a user's retrieval intent may target a specific chart attribute, a specific combination of chart attributes, or even chart attributes that are not necessarily enclosed by the query chart.
To mitigate such shifts, we propose a novel chart retrieval framework named _WYTIWYR_. It explicitly extracts visual attributes for the query chart, and offers a set of target charts through a customized retrieval procedure that considers flexible combinations of the visual attributes and users' intents. As Figure 1 (right) illustrates, our method tears down the query chart flexibly with customizable attributes and actively injects user intent as auxiliary inputs to seek better chart retrieval outputs.
Our intent-aware framework consists of two essential stages. The first stage is _Annotation_, which highlights explicitly disentangling the visual attributes for query charts. The disentangled attributes give users the flexibility to combine the attributes representing their intent. Because these attributes differ amongst chart visualizations, we performed preliminary research with 18 visualization categories to establish the branching of each attribute and what users would anticipate when retrieving charts. Based on the study, we utilize deep neural networks to train several independent attribute classifiers tailored to four primary visual attributes, namely {_Type_, _Trend_, _Color_, _Layout_}. To remain adaptable to user intent, we leverage the state-of-the-art contrastive language-image pretraining (CLIP) [16] model, known for zero-shot classification. The model allows users to create their own attribute classifiers to identify extended attributes. These attributes are excluded from our preliminary set of classifiers but can still be included in the query process.
The second stage is _Retrieval_, which highlights the role of the human in the loop through intent attributes as prompts. The intent attributes condition the intent-aware filter, narrow the search scope from a huge collection of charts, and finally generate multiple candidate charts. The multi-modal encoder fuses context information from both the query chart and the user intent prompt into a joint representation. After that, similarity modeling is conducted between this joint representation and the candidate charts. Ultimately, we rank the similarity scores in decreasing order and output the top-\(K\) results as the target charts. Our main contributions are summarized as follows:
* **Intent-aware Retrieval.** We propose a novel framework integrating user intent into chart retrieval process. The _Annotation_ stage disentangles attributes and enables a flexible combination of attributes. The _Retrieval_ stage digests both query chart and user intent as multi-modal inputs to get target retrieval results.
* **Prototype Construction.** We implement a prototype system combining CLIP model with visual interface to support intent-aware retrieval. Dataset, code, pretrained model are released at [https://github.com/SerendipityxX/WYTIWYR](https://github.com/SerendipityxX/WYTIWYR).
* **Extensive Evaluation.** We conduct extensive quantitative experiments, case studies as well as expert interviews to validate the effectiveness of our approach.
## 2 Related Works
**Chart Attributes**. Visualization is the process of mapping data to images. The data can be encoded in different ways, yielding various types of chart attributes that can be categorized at several levels. Low-level visual stimuli of charts include color, position, and shape [14], while a high-level taxonomy includes consideration of the object of study, data, design model, and user model [13]. The importance of choosing appropriate chart attributes has been highlighted. Recent studies (_e.g._, [12, 13]) stress the effects of chart attributes on users' comprehension and cognitive loads. Specifically, Li et al. [11] point out that two charts of the same data but with different layouts can cause perceptual imbalance, even though they are informatively equivalent. Despite the importance, the design space of chart attributes is too large for designers to choose from, resulting in various types of charts in designers' own styles [10]. Most of these works follow a coarse-to-fine strategy to first identify chart types and then extract visual marks and channels [15, 16, 17, 18].
Existing methods often mix visual attributes into a global style for similarity estimation [13, 14, 15], in which not all attributes are of interest to users [13]. To fill this gap, the _Annotation_ stage of the proposed WYTIWYR framework allows users to disentangle attributes in a given chart.
**Chart Retrieval**. The core for effective retrieval is to delineate the similarity between the query input and retrieval candidates in the database. Based on the object of similarity measurement in the retrieval process, existing chart retrieval strategies can be mainly categorized into two classes: definition- and example-based approaches. Definition-based approaches (_e.g._, [1,
**Vision-Language Pretraining**. Large-scale pretrained models have shown promising performance in both natural language processing and vision tasks in recent years, such as BERT [13], RoBERTa [14], and the GPT series [2, 1, 18, 20]. Following the pretrain-and-finetune paradigm, many downstream tasks [1, 19] transfer knowledge from the pretrained model without training a new model from scratch. A prompt, a text sequence in natural language, serves as a bridge between the pretrained model and downstream tasks. Strobelt _et al._ [2] develop a visual interface to provide a promising prompt evaluation. Combined with a high-capacity text encoder and visual encoder, contrastive language-image pretraining (CLIP) [10] learns aligned multi-modality representations from 400 million image-text pairs through natural-language semantic supervision. Several studies [2, 1, 18] extend its applicability in a zero-shot manner, which means the model can predict samples whose class is not observed during training.
In this work, we take advantage of vision-language pretraining models to encode user intents as prompts that participate in the retrieval decision process. The CLIP-driven model is leveraged to align user intent with corresponding visual attributes in both the _Annotation_ and _Retrieval_ stages. In this way, we can offer highly customizable attributes for chart annotation and multi-modal representation extraction for chart retrieval.
## 3 WYTIWYR Framework
To build the WYTIWYR framework, we first formulate chart attributes based on a preliminary study (Sec. 3.1), then present an overview of the WYTIWYR framework (Sec. 3.2).
### Attribute Formulation
There are two categories of chart attributes for chart retrieval: 1) _primary attributes_ that are generally considered for chart retrieval, and 2) _extended attributes_ that meet the specific needs of different users. To better understand user intents, we conducted a comprehensive study to categorize chart attributes.
**Study design.** Our preliminary study was conducted online and involved 40 participants whose ages ranged from 20 to 52 (mean = 24.3). Before the study, we first introduced the background and purpose of the experiment to the participants. Then we tested their knowledge of charts through a set of verification questions. Only the participants who could identify typical charts, such as bar and pie charts, were allowed to participate in the study. Fortunately, all participants were familiar with charts, given their popularity on social media. Next, the participants were shown multiple real-world charts and asked to indicate what attributes were of interest. To eliminate any potential subjective bias during the chart selection process for testing, a random sample of five images (three from the Beagle dataset [1], two from other real-world examples) was taken for each of the 18 chart types from the collected data. The testing charts were carefully chosen according to the chart type taxonomy in the study by Borkin _et al._ [2]. Details of the sampled testing charts can be found in the supplementary material. The participants' answers were recorded in text and summarized into different perspectives after the experiments. The perspectives were cross-validated by two of our co-authors to ensure correctness.
**Result.** As shown in Figure 2, we identify from the feedback a total of eight perspectives of attributes of interest, which are also consistent with previous studies [2]. Among them, _Type, Color, Trend_, and _Layout_ are the four attributes most frequently mentioned by the participants and can be applied to most types of charts. Although the _Form_ attribute was also frequently mentioned by participants, it is primarily associated with line charts and less frequently mentioned for the other chart types. Therefore, to ensure generalization and expressivity, we decided not to combine _Form_ with the other four attributes. We dub the _primary attributes_ as \(\mathcal{T}_{p}:=\{\textit{Type, Color, Trend, Layout}\}\). In addition, the participants also brought up some other chart attributes, including _form, background, context_, and _2D/3D_, which we dub the _extended attributes_ \(\mathcal{T}_{c}\). For these chart attributes, the variation between different users is large, making them difficult to enumerate comprehensively.
Moreover, the study also revealed a set of design options for each chart attribute. Specifically, we choose 10 primary categories and 18 subcategories of chart types from the visualization taxonomy [2]. Figure 3 illustrates the options in the attribute type of {_color_, _trend_, _layout_} for several chart types including bar, line, and point charts. Notice that some chart types may not have all
Figure 3: **Illustration of attribute options of primary attributes for several chart types.** Some options may not be available for certain chart types, such as _Layout_ for the point chart.
Figure 2: **Results of our preliminary study.**_Type, color, trend,_ and _layout_ are voted by most participants and identified as _primary_ attributes, whilst the others are categorized as _extended_ attributes.
options mentioned above. For example, the _layout_ attribute is not available for point charts, and maps only have _color_ options.
### Overall Pipeline
The workflow of our proposed WYTIWYR framework is depicted in Figure 4. For the _Annotation_ stage, we employ several classifiers based on robust neural network architectures to disentangle the attributes \(\mathcal{T}=\{\mathcal{T}_{p},\mathcal{T}_{c}\}\) embedded in the query chart \(\mathcal{Q}\). After that, users can select the disentangled attributes \(\mathcal{I}_{A}\) as their intents based on their needs. Also, \(\mathcal{I}_{A}\) conditions the components of the filter in the retrieval stage. For the _Retrieval_ stage, we take a dual-path similarity modeling between the _query feature_ and the _candidate info_ with the guidance of intent attributes to yield the returned charts. For the path of the _query feature_, the bitmap query chart \(\mathcal{Q}\) together with the user intent text prompt \(\mathcal{I}_{P}\) is fed into a multi-modal encoder to obtain the joint representation. For the _candidate info_ path, the feature is produced by charts in the database filtered by the intent-aware filter built from \(\mathcal{I}_{A}\). Throughout the overall pipeline, user intent can be injected to guide the chart retrieval process in the following three ways:
* **Classifier Customization.** Users can tailor the usage of classifiers depending on their needs in order to determine the presence of an attribute in the query chart.
* **Disentangled Attributes Selection.** The attributes adopted in the _Retrieval_ stage can be selected and combined by the users.
* **User Prompt Tuning.** Users can add specific implicit intent as the text prompt to guide the retrieval process, which is independent of the query chart.
## 4 WYTIWYR Prototype System
### Stage1: Annotation
As shown in Figure 5, the task of the _Annotation_ stage is to disentangle and generate the visual attributes \(\mathcal{T}\) attached to the given query chart \(\mathcal{Q}\). For the primary attributes \(\mathcal{T}_{p}\), we adopt four classifiers for separate extraction in a supervised learning manner. For the extended attributes \(\mathcal{T}_{c}\), as they are absent from our training, we conduct the extraction process in a zero-shot learning fashion.
#### 4.1.1 Annotation for Primary Attributes
**Redundant Information Removal.** Several raw charts from practical scenarios are shown in Figure 6 (a); they contain fancy decorations, text information, backgrounds, and legends. This work focuses on the intrinsic attributes of \(\mathcal{Q}\) instead of this redundant information. Hence we remove the redundant information using ISNet [QDH\({}^{+}\)22], a segmentation network with pretrained weights.
Nevertheless, there exists context beyond the segmentation capability of ISNet, such as the emojis in the left-most example in Figure 6 (a) & (b). The redundant emojis in red and yellow colors would degrade the performance of colormap classification. Instead of being categorized as _categorical colormap_, the chart would be mistakenly recognized as _diverging colormap_. To mitigate this issue, we ignore the colors that occupy less than 10% of the pixels in the non-transparent regions of the image after segmentation.
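As a concrete illustration of this filtering step, the sketch below extracts the dominant colors of a segmented chart and drops any color occupying less than 10% of the non-transparent pixels. It assumes the segmentation step saves an RGBA image whose background pixels have zero alpha; the coarse color quantization is our own simplification for illustration, not part of the original pipeline.

```python
import numpy as np
from PIL import Image

def main_colors(rgba_path, min_share=0.10):
    """Return dominant colors of a segmented chart as ((r, g, b), share)
    pairs, dropping colors that cover less than `min_share` of the
    non-transparent pixels."""
    img = np.asarray(Image.open(rgba_path).convert("RGBA"))
    mask = img[..., 3] > 0                       # non-transparent region
    pixels = img[mask][:, :3]
    quantized = (pixels // 32) * 32              # merge near-duplicate shades
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    shares = counts / counts.sum()
    keep = shares >= min_share
    order = np.argsort(-shares[keep])
    return [(tuple(int(v) for v in c), float(s))
            for c, s in zip(colors[keep][order], shares[keep][order])]
```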
**Unified Classifiers.** For annotation of the primary attributes, we employ the ResNet50 [10] architecture as the backbone for all four attribute classifiers. Thanks to the residual connections among the convolution layers, the network is both lightweight and performant. During the training process, the network learns complex features under the supervision of the given attributes. In the inference process, the trained classifier identifies the corresponding attribute in the query chart, as in Figure 5.
The only difference among the four classifiers is the number of output channels in the last fully connected layer. For instance, the number of output channels of the \(Type\) classifier is set to 18 since there are 18 types of charts
Figure 4: **Overall pipeline of attributes-aware retrieval.** The pipeline consists of two stages: Annotation for extracting the attributes and Retrieval for modeling the similarity between query charts and charts in the database.
Figure 5: **The _Annotation_ stage of our pipeline.** To disentangle various attributes in the query chart, four classifiers are built for primary attributes. In addition, one CLIP-based classifier is employed to optionally annotate the extended attribute via user customization.
under consideration. Similarly, based on the specific task, we set this number to 3 for the _Layout_, _Trend_, and _Color_ classifiers with respect to their attribute options; see Figure 3 for details. Note that \(\mathcal{Q}\) may not cover all attributes included in the classifiers. For instance, \(\mathcal{T}_{p}\) of a heatmap chart is \(\{\textit{Type},\textit{Color}\}\), while the \(\{\textit{Trend},\textit{Layout}\}\) attributes are absent.
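A minimal sketch of this setup is shown below: four ResNet50 backbones that differ only in the size of the final fully connected layer. The attribute-to-class-count mapping follows the numbers stated above; training and data loading are omitted, and the code is illustrative rather than the authors' exact implementation.

```python
import torch.nn as nn
from torchvision.models import resnet50

# Number of output channels per primary-attribute classifier.
NUM_CLASSES = {"type": 18, "trend": 3, "layout": 3, "color": 3}

def build_classifier(attribute: str) -> nn.Module:
    """ResNet50 backbone whose final layer matches the number of
    options of the given primary attribute."""
    model = resnet50(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES[attribute])
    return model

classifiers = {attr: build_classifier(attr) for attr in NUM_CLASSES}
```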
**Loss Function.** There are two challenges to overcome when formulating the loss function: 1) imbalanced samples in each class of the dataset (see Table 1), and 2) the existence of noisy samples that are hard to classify. In order to improve the accuracy for \(\mathcal{T}_{p}\) annotation, we introduce Focal Loss [1], which modifies the standard cross entropy loss to overcome the above-mentioned challenges. The Focal Loss is defined as:
\[\mathcal{L}_{\text{FL}}(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\log(p_{t}), \tag{1}\]
where \(p_{t}\in[0,1]\) represents the estimated probability of class \(t\), \(\alpha_{t}\) represents the scaling factor, and \(\gamma\) represents the modulating factor. Among them, \(\alpha_{t}\) is set by inverse class frequency, so that the learned parameters tend to contribute more to classes with fewer samples, and \(\gamma\) up-weights the loss assigned to poorly classified examples, preventing the training process from being dominated by the large number of well-classified samples.
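For reference, a compact PyTorch-style implementation of Eq. (1) for the multi-class case might look as follows; the per-class weights `alpha` are assumed to be precomputed from inverse class frequencies, and `gamma = 2` is an illustrative default rather than the value used in the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha, gamma=2.0):
    """Multi-class focal loss of Eq. (1).

    logits: (N, C) raw classifier outputs; targets: (N,) class ids;
    alpha: (C,) per-class weights from inverse class frequency;
    gamma: modulating factor.
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p_t
    pt = log_pt.exp()                                           # p_t
    at = alpha.to(logits.device)[targets]                       # alpha_t
    loss = -at * (1.0 - pt) ** gamma * log_pt
    return loss.mean()
```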
#### 4.1.2 Annotation for Extended Attributes
Since the primary attributes only consider the general needs in chart retrieval, we offer an optional classifier configured by the users to distinguish \(\mathcal{T}_{c}\) in the query chart, as shown in Figure 5. The \(\mathcal{T}_{c}\) classifier requires the user to provide several labels of the attribute apart from \(\mathcal{Q}\) (_e.g._, style). We denote these labels as \(\{t_{1},t_{2},\ldots,t_{m}\}\). As these labels are outside our dataset, a zero-shot classification task is naturally formed. In the following, we will briefly introduce the mechanism of the CLIP model at a general level with a toy example as in Figure 5, where the user sets the text labels as [_"3D style"_, _"Flat style"_, _"Sketch style"_].
The CLIP model aligns text embeddings with \(\mathcal{Q}\) embeddings in a multi-modality embedding space in a contrastive manner. Specifically, in the embedding space of the example in Figure 5, the distance between \(\mathcal{Q}\) and _"3D style"_ would be smaller than the distances to the other two irrelevant labels. As such, the query results tend to be _"3D style"_ bar charts. The CLIP model balances well between performance and computational cost in the prototype system. Nevertheless, the model in our framework can be feasibly replaced with other advanced language-image models, _e.g._, [1, 2].
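The zero-shot step can be sketched with the public `clip` package as follows; the file name and the three style labels are hypothetical, and any CLIP-compatible language-image model could be substituted.

```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical extended-attribute labels supplied by the user.
labels = ["3D style", "Flat style", "Sketch style"]

image = preprocess(Image.open("query_chart.png")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

predicted = labels[int(probs.argmax())]   # zero-shot extended-attribute label
```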
### Stage2: Retrieval
As the _Annotation_ stage completes the disentanglement of the attributes \(\mathcal{T}\) in the query chart, users in the _Retrieval_ stage can select the intent attributes \(\mathcal{I}_{A}\) based on their intents. As depicted in Figure 7, there are two branches with different modules in our _Retrieval_ stage, namely the multi-modal encoder and the intent-aware filter. The intent-aware filter rules out charts whose attributes are beyond the range of \(\mathcal{I}_{A}\). The multi-modal encoder integrates \(\mathcal{Q}\) and \(\mathcal{I}_{P}\) as a joint representation. Then, similarity modeling is conducted between the results of these two branches, yielding the final retrieval results based on the modeling score.
#### 4.2.1 Multi-Modal Encoder
This branch would form a CLIP-generated _query feature_ with inputs \(\mathcal{Q}\), \(\mathcal{I}_{P}\). As introduced in Sec. 4.1.2, the CLIP embedding space has heterogeneous aligned text and visual features from the multi-modal input. Hence, \(\mathcal{Q}\) and \(\mathcal{I}_{P}\) are encoded into features by their respective CLIP encoders, and fused as a joint multi-modal feature \(f_{\mathcal{M}}\) that can be denoted as:
\[f_{\mathcal{M}}=\frac{F_{\emptyset}(\mathcal{Q})+G_{\emptyset}(\mathcal{I}_{ P})}{2}, \tag{2}\]
where \(F_{\emptyset}\) and \(G_{\emptyset}\) denote the image encoder and text encoder of the CLIP model, respectively.
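A minimal sketch of Eq. (2) with a loaded CLIP model is given below. Eq. (2) averages the raw encoder outputs; the L2 normalization applied before averaging is our own assumption to keep the two modalities on a comparable scale.

```python
import torch

def fuse_query_and_prompt(model, image, text_tokens):
    """Joint multi-modal feature f_M of Eq. (2): the average of the CLIP
    image embedding of the query chart and the CLIP text embedding of the
    user intent prompt. `model` is a loaded CLIP model."""
    with torch.no_grad():
        img_feat = model.encode_image(image)
        txt_feat = model.encode_text(text_tokens)
    # Assumption: normalize each modality before averaging.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return (img_feat + txt_feat) / 2
```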
#### 4.2.2 Intent-Aware Filter
The intent-aware filter branch rules out irrelevant charts in the database and forms the _candidate feature_. Previous works (_e.g._, [1, 1]) on chart filtering primarily rely on fixed settings of the model, which are coarse and neglect the user's intents. To address this shortcoming, we propose an intent-aware filter constructed from \(\mathcal{I}_{A}\), a set of disentangled attributes selected by the user. For example, with \(\mathcal{Q}\) illustrated in Figure 7, a user may select [_Type_, _Layout_, _Trend_] as the target attributes for the retrieval; the filter then retains \(n\) charts containing these target attributes in the
Figure 6: **Redundant information removal and main color extraction.** (a) Raw images. (b) The chart after removing redundant information. (c) The extracted color after filtering redundant colors.
Figure 7: **The _Retrieval_ stage consists of two branches. The intent-aware filter branch (bottom) filters out charts based on intent attributes selected by the user, generating multiple chart candidates. The multi-modal encoder branch (top) produces the query feature based on the multi-modal inputs. Similarity modeling is conducted between results of these two branches.
chart database. Then, the retained charts serve as candidates \(C:=\{c_{1},c_{2},\cdots,c_{n}\}\) that are encoded as _candidate features_ \(f_{C}:=\{f_{c_{1}},f_{c_{2}},\cdots,f_{c_{n}}\}\) by the image encoder of the CLIP model.
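The filtering step can be sketched as a simple attribute match over pre-annotated database records; the record layout shown here (a dictionary holding the chart image and its annotated attributes) is an assumption for illustration, not the system's actual storage format.

```python
def intent_aware_filter(database, intent_attributes):
    """Keep only database charts whose annotated attributes match every
    attribute selected by the user.

    database: iterable of dicts like
        {"image": ..., "attributes": {"Type": "Bar chart", "Layout": "Horizontal"}}
    intent_attributes: dict such as {"Type": "Bar chart", "Trend": "Increase"}
    """
    candidates = []
    for record in database:
        attrs = record["attributes"]
        if all(attrs.get(k) == v for k, v in intent_attributes.items()):
            candidates.append(record)
    return candidates
```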
#### 4.2.3 Similarity Modeling
As the core of retrieval, similarity modeling is comprehensively performed with the triplet \(\{\mathcal{Q},\mathcal{I}_{A},f_{\mathcal{M}}\}\). Among them, \(\mathcal{Q}\) provides the global perception composed of implicit features. The explicit intent attributes \(\mathcal{I}_{A}\) and the implicit query feature \(f_{\mathcal{M}}\), which incorporates the user intent prompt and the query chart, work together to recall user-intended examples throughout the retrieval process. The overall similarity score \(\mathcal{S}\) can be denoted as:
\[\mathcal{S}=S_{\mathcal{Q}}\cdot\mathrm{e}^{\nu S_{\mathcal{I}_{A}}+\mu S_{\mathcal{M}}}, \tag{3}\]
where \(S_{\mathcal{Q}}\) denotes the global perception at the pixel level, and \(S_{\mathcal{I}_{A}}\) and \(S_{\mathcal{M}}\) denote the similarity scores of the user-selected attributes and the multi-modality feature, respectively. The scaling factors \(\nu\) and \(\mu\) are empirically set as 1 and 5, respectively. All of \(S_{\mathcal{Q}}\), \(S_{\mathcal{I}_{A}}\) and \(S_{\mathcal{M}}\) are normalized to the range \([0,1]\). In the following, we will introduce these similarity scores in detail.
**Global Perception Score.** Although salient attributes with respect to user intent are disentangled and selected, a multitude of implicit features are intertwined within the chart image, which forms the concept of global perception. We adopt the classic cosine similarity and compute it at the pixel level between \(\mathcal{Q}\) and the feature of every candidate, \(f_{c_{i}},i=1,2,\ldots,n\), as follows:
\[S_{\mathcal{Q}}=\frac{F_{\emptyset}(\mathcal{Q})\cdot f_{c_{i}}}{\|F_{\emptyset}(\mathcal{Q})\|\|f_{c_{i}}\|}. \tag{4}\]
**Intent Attributes Score.** The extended attributes \(\mathcal{T}_{c}\) contain a user's expected attributes of the retrieved charts apart from the four primary attributes. Among the intent attributes, _Type_ and _Layout_ define core properties of a chart, which are easy to distinguish from \(\mathcal{Q}\). Therefore, we neglect them when crafting the similarity. The other attributes serve as miscellaneous variants, and changes in them can be minuscule. This motivates us to build another score \(S_{\mathcal{I}_{A}}\) to further enhance the retrieval process. Therefore, we mainly consider the following three attributes: _Trend_ \(\mathcal{N}\), _Color_ \(\mathcal{C}\), and _extended attribute_ \(\mathcal{T}_{c}\). \(S_{\mathcal{I}_{A}}\) is formulated as:
\[S_{\mathcal{I}_{A}}=S_{\mathcal{N}}+S_{\mathcal{C}}+S_{\mathcal{T}_{c}}. \tag{5}\]
For \(S_{\mathcal{N}}\), we build an extractor to extract the trend feature from the query chart \(\mathcal{Q}\) and every candidate chart \(C_{i},i=1,2,\ldots,n\). The extractor shares the parameters with the trend classifier in the _Annotation_ stage except the last fully connected layer. Then, \(S_{\mathcal{N}}\) can be estimated by cosine similarity between the extracted features.
For \(S_{\mathcal{C}}\), we follow the scheme in Figure 6 to transform \(\mathcal{Q}\) into a proportional color palette after the steps of background removal and color extraction. Then the proportional color palette is transformed into 128-bin color histograms for the R, G, and B channels, stored separately in three vectors. We then concatenate these vectors to form the ultimate color vector \(V\). Denoting \(V_{\mathcal{Q}}\) and \(V_{C_{i}},i=1,2,\ldots,n\) as the color vectors of \(\mathcal{Q}\) and \(C_{i},i=1,2,\ldots,n\), respectively, we then estimate the cosine similarity between them.
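A sketch of the color-vector construction is shown below; the input is assumed to be the (N, 3) array of foreground RGB pixels obtained after background removal and color extraction, and the final normalization is a convenience for the subsequent cosine-similarity computation.

```python
import numpy as np

def color_vector(pixels, bins=128):
    """Concatenated 128-bin histograms of the R, G, and B channels,
    computed over the foreground pixels (an (N, 3) uint8 array)."""
    hists = []
    for channel in range(3):
        h, _ = np.histogram(pixels[:, channel], bins=bins, range=(0, 256))
        hists.append(h.astype(np.float64))
    v = np.concatenate(hists)
    return v / (np.linalg.norm(v) + 1e-12)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```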
For \(S_{\mathcal{T}_{c}}\), since it is hard to quantify the similarity of intents, we leverage the power of the CLIP model again to give an accurate response. We feed the CLIP model with each candidate chart \(C_{i},i=1,2,\ldots,n\) and the user intent text labels \(t_{j},j=1,2,\ldots,m\). For each chart, we obtain \(m\) outputs \(\{y_{1},y_{2},\ldots,y_{m}\}\). Prior to the computation, the user selects one text label, indexed \(s\), as the attribute that best represents their intent. Then, we denote \(S_{\mathcal{T}_{c}}\) for a candidate chart as: \(S_{\mathcal{T}_{c}}=\mathrm{e}^{y_{s}}/\sum_{j=1}^{m}\mathrm{e}^{y_{j}}\).
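This score can be computed directly from the CLIP logits, as sketched below with a loaded CLIP model; `s_index` is the index of the label the user selected as best representing their intent.

```python
import torch

def extended_attribute_score(model, candidate_image, text_tokens, s_index):
    """S_{T_c} for one candidate chart: the softmax probability that CLIP
    assigns to the user-selected label among the m user-provided labels."""
    with torch.no_grad():
        logits_per_image, _ = model(candidate_image, text_tokens)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)
    return float(probs[s_index])
```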
**Feature Matching Score.** Similarly, we measure the closeness between the multi-modal feature and each candidate feature by the following equation: \(S_{\mathcal{M}}=\frac{f_{\mathcal{M}}\cdot f_{c_{i}}}{\|f_{\mathcal{M}}\|\|f_{c_{i}}\|}\).
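Putting the pieces together, the ranking score of Eq. (3) for a single candidate can be sketched as follows; the component scores are assumed to be precomputed as described above, and the scaling factors follow the reported values \(\nu=1\) and \(\mu=5\).

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def overall_score(q_feat, cand_feat, s_intent, fused_feat, nu=1.0, mu=5.0):
    """Overall similarity S of Eq. (3) for one candidate chart.

    q_feat: CLIP image feature of the query chart; cand_feat: CLIP image
    feature of the candidate; s_intent: S_{I_A} of Eq. (5); fused_feat:
    the multi-modal feature f_M of Eq. (2).
    """
    s_q = cosine(q_feat, cand_feat)        # global perception, Eq. (4)
    s_m = cosine(fused_feat, cand_feat)    # feature matching score
    return s_q * np.exp(nu * s_intent + mu * s_m)

# Candidates can then be ranked by decreasing overall_score and the
# top-K returned as retrieval results.
```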
### System Interface
We design an interface for the prototype system that allows users to retrieve a chart according to their intents.
**Annotation View.** In this view, users can upload a query chart and get disentangled primary attributes, including _Type, Color, Trend_, and _Layout_. As shown in Figure 8 (A), a bar chart is uploaded and the automatically annotated attributes, "_Barchart, Categorical Colormap, Increasing Trend, Horizontal Layout_" are presented.
**Retrieval View.** In this view (Figure 8 (B)), users can view and choose both primary and extended attributes. The first tag in green corresponds to the annotated attributes from the query chart, while other options are also shown, allowing users to select the desired attributes. If users feel that the default primary attributes are insufficient or have other attributes of interest, they can add a new classifier by clicking on the chart in the upper right corner, which reflects the philosophy of the proposed intent-aware design. In the input box below, users can enter their intentions as intent prompts for more customized queries. As shown in Figure 8 (B1), the user selects "Bar chart" for the type attribute and enters "Fancy style with icon" in the Retrieval view.
**Result View.** This view (Figure 8 (C)) shows top five query results, arranged in decreasing order according to their similarity scores.
Figure 8: **Interface of our prototype system. A) the _Annotation_ view decouples the properties of the query chart; B) the _Retrieval_ view allows users to select the primary and extended attributes, and C) the _Result_ view shows the top 5 retrieved charts.**
## 5 Evaluation
### Quantitative Experiment
**Dataset.** Previous works concentrate on the retrieval of limited chart types, _e.g._, retrieval of a single [18] or a few [1, 19] chart types. Moreover, some of the literature coarsely considers the categorization of charts, by broadly grouping several chart categories into one [18]. Many datasets are synthesized with simple composition and monotonous variation, making it difficult to adapt to real-world scenarios that are much more complex. In this work, we utilize the Beagle dataset [2] with charts in bitmap format, which offers visualization collections designed by real-world users through multiple tools, including D3, Plotly, Chartblocks, Fusion Charts, and Graphio. To make our framework more robust, we further add more difficult instances by manually collecting 4k images from Pinterest, which supplements a large number of stylized and irregular charts. We filter out charts whose type is beyond our scope (_e.g._, scientific visualizations) and manually label the remaining charts with \(\mathcal{T}_{p}\). Finally, our dataset consists of 33,260 images in total. The detailed distribution is shown in Table 1.
**Annotation Accuracy.** We evaluate the performance of \(\mathcal{T}_{p}\) annotation using ResNet50 architectures with different losses. As Table 2 lists, the best result is achieved with the designated Focal Loss.
**Retrieval Results.** To evaluate the retrieval performance comprehensively, we examine both precisions and recall via the F1-score. Denoting the classification result with true positive, true negative, false positive, and false negative conditions as \(TP,TN,FP,FN\), F1-score can be formulated as:
\[F1\text{-}score=\frac{2TP}{2TP+FP+FN}. \tag{6}\]
The retrieval performance with \(\mathcal{T}_{p}\) is individually computed using the F1-score in the top-\(K\) fashion, where \(K\in\{3,5,10\}\). We devise two settings, with and without user intent, to fully evaluate our approach. In the setting without user intent, similarity estimation depends only on the global perception score \(S_{\mathcal{Q}}\).
To demonstrate the effectiveness of our approach, we take two other approaches as counterparts: a conventional method (Histogram of Oriented Gradients [18], hereafter denoted as HOG), and a deep-learning-based method (ResNet50, hereafter denoted as CNN). The quantitative results are displayed in Table 3. The results reveal that our method significantly outperforms the others. Furthermore, as \(K\) increases, our method maintains its performance superiority while the other methods degrade dramatically. For the evaluation of user intent retrieval, as there are no ground-truth labels to compute quantitative results, we visually compare the results generated by different methods for several typical retrieval cases. As shown in Figure 9, CNN outputs visually more similar results than HOG. Our method also produces similar results without user intent. Nevertheless, our method allows users to add their intent by selecting the disentangled primary attributes \(\mathcal{T}_{p}\) or adding extra intent, which contributes to retrieving results of interest as shown in the last two rows.
### Case Study
During the process of visualization design, many inspirations may arise, but bringing those inspirations to fruition can be time-consuming. Retrieval is a fast way to validate those inspirations. Inspired by [19], two usage scenarios of chart retrieval are illustrated as a proof-of-concept of the WYTIWYR framework: 1) extending the design space by explicit goals and 2) fuzzy retrieval based on implicit user intent. For each scenario, we compare the results generated by our intent-aware chart retrieval technique with those generated by an intent-free chart retrieval technique.
#### 5.2.1 Design Space Extension by Explicit Goals
With the prototype system, users can customize retrieval inputs based on disentangled attributes of the query chart and the user intent prompt. Moreover, beyond the four primary attribute classifiers, users can further add customized classifiers to extract extended attributes. In this scenario, we divide the disentangled attribute operation into three categories: original attribute change, new attribute addition, and existing attribute deletion. In the end, we show an interesting case of attribute transfer that combines these operations. Figure 10 shows four cases of our study. In general, retrieval without user intent provides results similar to the input but gives minimal design insight. Instead, with the guidance of user intent, our framework retrieves more diverse results that are well accepted. The following paragraphs give more details for each case.
\begin{table}
\begin{tabular}{|l|c||c|c|} \hline Chart Type & \#Count & Chart Type & \#Count \\ \hline \hline Bar chart & 7269 & Heatmap & 352 \\ Stacked Bar Chart & 1159 & Line Graph & 11605 \\ Circular Bar Chart & 608 & Star Plot & 491 \\ Donut Chart & 2459 & Choropleth Map & 640 \\ Pie Chart & 2587 & Scatter Plot & 3000 \\ Sankey Diagram & 266 & Word Cloud & 406 \\ Timeline & 324 & Dendrogram & 298 \\ Box Plot & 571 & Network & 395 \\ Histogram & 695 & Circular packing chart & 139 \\ \hline \end{tabular}
\end{table}
Table 1: Our integrated dataset consists of 18 types of charts from Beagle [2] and manual collection.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Top-K} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{F1-Score} \\ \cline{3-6} & & _Type_ & _Trend_ & _Layout_ & _Color_ \\ \hline \hline \multirow{3}{*}{3} & HOG & 0.6199 & 0.6140 & 0.5449 & 0.6800 \\ & CNN & 0.9154 & 0.8043 & 0.7095 & 0.7717 \\ & Ours & **0.9549** & **0.8360** & **0.9280** & **0.8260** \\ \hline \multirow{3}{*}{5} & HOG & 0.5067 & 0.4853 & 0.3857 & 0.6045 \\ & CNN & 0.8872 & 0.7550 & 0.6324 & 0.7076 \\ & Ours & **0.9364** & **0.7944** & **0.9118** & **0.7730** \\ \hline \multirow{3}{*}{10} & HOG & 0.4124 & 0.4261 & 0.2741 & 0.5388 \\ & CNN & 0.8607 & 0.7085 & 0.5527 & 0.6520 \\ \cline{1-1} & Ours & **0.9091** & **0.7546** & **0.8812** & **0.7223** \\ \hline \end{tabular}
\end{table}
Table 3: F1-scores of _Retrieval_ results.
Table 2: Accuracy of _Annotation_ results.
**Original Attribute Change.** The disentangled attributes are independent of each other; thus, users can replace one of the attributes while keeping the others unchanged. In Figure 10 (a), the color of the choropleth map is changed according to the user prompt "pink".
**New Attribute Addition.** Users can add new attributes, together with the disentangled attributes, to better describe their specific needs. In Figure 10 (b), the user adds "with a curve to indicate the distribution", and the results change to combinations of histograms and line charts.
**Existing Attribute Deletion.** As the attributes can be disentangled according to users' needs, users can remove attributes of no interest. In Figure 10 (c), the user dislikes the dark background and can discard this attribute by adding a new classifier with labels of [_"dark background"_, _"white background"_].
**Attribute Transfer.** As in Figure 10 (d), given a circular packing chart as a query chart, the user gets the attribute annotation as {_Type: "Circular Packing Chart", Color: "Sequential Colormap"_}. The user would like to get heatmaps with the same form as the query chart but with higher-contrast colors. S/he can get the desired results by changing the _Type_ and _Color_ attributes to {_Type: "Heatmap", Color: "Diverging Colormap"_} in the retrieval stage.
#### 5.2.2 Fuzzy Retrieval by User Intent
Designing an expressive visualization incurs a steep learning curve for some novices, who may not have a specific design prior and prefer to perform trial and error on a search engine to seek a desirable design. Below, we list three scenarios to illustrate how our method supports such retrieval. Similarly, we compare the results of intent-free and intent-aware retrieval.
**Text Information Seeking.** Keyword-based searching is popular, but existing works approach this goal by storing text information in advance [16, 17] or relying on OCR [20], hindering the generalizability and robustness of these works. Instead, our method can recognize text information based on knowledge transferred from the pretrained CLIP model. As shown in Figure 11 (a), to generate a chart for visualizing _"life expectancy"_, the user keeps the original chart type _Scatter plot_ and the data trend _Increase trend_, and adds the user intent as a text prompt "life expectancy". The intent-aware retrieval results imitate the famous Gapminder design.
**Relevant Topic Finding.** Our method also has the capacity to
Figure 10: **Four cases for extending the design space by explicit attributes.** In each case, the first line presents results without user intent, while the second line presents results with user intent.
Figure 9: **Visual comparison of chart retrieval by different methods.** First row: Retrieval results by the conventional HOG method. Second row: Retrieval results by the deep-learning-based CNN method. Third to fourth rows: Retrieval results of our WYTTWYR framework.
find chart examples related to a specific topic. Previous work used word2vec [3] to model the distances between words [1], limiting the search scope to texts. We enhance generalizability by matching the text prompt against both text information and visual attributes. In Figure 11 (b), given a word cloud with the theme of nature, the user can retrieve examples with the text prompt "about technology topic". Our system returns examples with content about technology. Moreover, with the text prompt "all words form a shape of a tree", our system aligns the semantic information of the text with visual attributes, returning examples with tree shapes instead of examples that only contain the word "tree".
**Abstract Description Searching.** In the process of retrieval, users who do not have a clear search target tend to use ambiguous and abstract descriptions. In Figure 11 (c), the user inputs a basic timeline diagram with "fancy style", and gets several well-designed examples. To constrain the scope to _vertical layout_, the user adds this attribute and performs targeted retrieval.
### Qualitative Evaluation
#### 5.3.1 Study Design
Our study recruited seven participants (three women, four men) aged 22 to 26. To demonstrate that our framework is useful for users of different levels, we enrolled four experts, _i.e._, visualization designers (E1 - E4), and three novices with experience only in using visualizations in reports or courses (N1 - N3).
Before the experiment, we collected basic information from the participants through a questionnaire and introduced them to the use scenarios of chart retrieval. Then we instructed them to familiarize themselves with the system workflow. We also showed some examples of using prompts to help participants understand prompts better and make better choices of prompts. After understanding the background, participants were allowed to freely explore and use our system. They could upload a query chart and try to get a satisfactory design in multiple iterations. If there was no satisfactory design, they could also try combinations of attributes and prompts to see if the retrieved charts had inspiring results. Throughout the process, participants were encouraged to think aloud and give feedback whenever they wanted. After they felt it was sufficient, we conducted interviews about the usability of our system. Three questions were included: 1) Whether our approach is _effective_ in helping them find the chart that meets their intents; 2) Whether our approach is _efficient_ enough to avoid long waits; 3) Whether they are _satisfied_ with our system. The experiment lasted about 45 minutes. The participants were not compensated with money but with beverages worth about 5 US dollars after the study.
#### 5.3.2 Feedback
**Effectiveness.** In general, all users agreed that our system effectively supported retrieving the desired charts based on their intents. Most participants (6 out of 7) thought that our setting of the primary attributes was appropriate and comprehensive. E2 suggested extracting the main hue used in the chart in the annotation stage. Regarding the comprehensiveness and accuracy of the prompts, all participants felt that their search needs were largely met and the results were generally consistent with their intents. N1 spent a long time exploring the map as shown in Figure 10 (a). He found it exciting that when he typed "India", a map of India was retrieved. Our system also effectively recognized his intents when he searched for maps of various colors.
The participants pointed out some limitations as well. When the prompt given by the user had multiple meanings, sometimes only one or two of the search results were exactly what the user expected. E4 tried to add a trend arrow on the bar chart. However, when he only typed "with arrow", some bar charts using arrow icons were also retrieved. He said that sometimes he needed to try several prompts to get the desired result. E3 encountered a similar situation, and it might be due to our dataset only containing limited charts that matched her specifications. Besides, sometimes participants wanted to adjust attributes in the query chart, such as changing the white background to dark (N2), and changing the colorful design to black and white (E2). However, one or two of the returned results still had the features that they wanted to change. "There are times when I want the query to focus more on the prompt I give and less on the original chart," said N2.
**Efficiency and Satisfaction.** All participants agreed that our system was responsive and user-friendly. They were satisfied with our retrieval framework and pointed out some possible improvements. They found it helpful to decouple a chart into attributes and issue a selective query. N2 stated, "It saved me the time of checking a lot of examples and found the desired design directly". N3 appreciated the function of adding a new classifier, since it can be used to explore new types of visualizations. "It is convenient for novices who do not understand the type and content of the chart at first," he explained. E1, a participant with a design background, said our results were inspiring. She was willing to see more results, even if they did not exactly match the search intents. "This could inspire me to come up with new ideas," she added.
Figure 11: **Three cases for showing fuzzy retrieval. Negative examples are highlighted with a red border.**
## 6 Discussion
WYTIWYR is an intent-aware chart query framework that can address various issues related to traditional chart retrieval and expedite the design process. In conventional retrieval, not all chart attributes may be relevant to users' intents, whilst some preferred attributes may even be absent from the chart. Instead, our system disentangles and combines attributes to ensure that the retrieved results encompass all attributes of the user's intent. There are also several limitations in our current methods and avenues for future work.
### Limitations
**User Intent or Query Chart.** The input of our method consists of a query chart and the user intent, and the composition of these two factors affects the retrieval results. As shown in Figure 10 (b), when users have a strong intent for charts that contain a curve, our system tends to return results meeting that intent. However, the perception of the global distribution may be lost. The priority between the user intent expressed as text and the global distribution in the input chart is hard to settle, as it depends on the usage scenario. Furthermore, within the user intent, the weights of the selected attributes and text prompts are hard to set in the similarity modeling. A dynamic adjustment of the priority between the user intent and the query chart is needed in this scenario.
**Prompt Sensibility.** In our work, the text prompt is integrated with the query chart as a joint input for retrieval. However, designing effective prompts is challenging because even slight wording changes can significantly impact performance [2]. In our framework, prompt design is limited by the pretrained CLIP model, which is trained mainly on natural images and therefore has difficulty accurately interpreting expert expressions from the visualization domain. For instance, Figure 12 shows that using a more professional prompt in the third row leads to negative results compared to the second row with identical intent. The CLIP model also struggles to identify rare or novel objects. Fine-tuning the CLIP model [1] or using more advanced language-image models are promising approaches. Obtaining enough chart-description pairs for training is also necessary, but the process is time-consuming and resource-intensive.
**Dataset.** Despite collecting a large number of real-world charts, our database remains limited in meeting vast user needs. In Figure 11 (b), only the first three charts are tree-shaped, since these are the only charts in our database that meet the user's intent. The unbalanced dataset may adversely affect retrieval performance, as the attribute classifier training could become biased towards classes with larger volumes. We alleviate this by using focal loss, which assigns higher weights to the minority classes and to misclassified examples. Due to the lack of more fine-grained categories in our dataset, some inaccurate results occur in the retrieval. In Figure 12, two types of heatmaps are visible in the retrieval results.
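As a concrete illustration of the re-weighting idea, the snippet below is a minimal sketch of a multi-class focal loss on predicted class probabilities; it is not our actual training code, and the hyperparameters `gamma` and `alpha` are illustrative defaults.

```python
import numpy as np

def focal_loss(probs, y, gamma=2.0, alpha=None):
    """Minimal multi-class focal loss on predicted probabilities.

    probs: (n, C) predicted class probabilities, y: (n,) integer labels,
    alpha: optional (C,) per-class weights for rare classes.
    Well-classified examples (large p_t) are down-weighted by (1 - p_t)**gamma,
    so the loss concentrates on minority and misclassified examples.
    """
    p_t = np.clip(probs[np.arange(len(y)), y], 1e-12, 1.0)
    weight = (1.0 - p_t) ** gamma
    if alpha is not None:
        weight = weight * np.asarray(alpha)[y]
    return float(np.mean(-weight * np.log(p_t)))
```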
**Chart Attributes.** In the preliminary study, we employed several chart types to determine the chart attributes that users tend to concentrate on. We randomly chose samples from both synthesized datasets and real-world collections to diversify the chart attributes. Nevertheless, the attributes may not fully cover user intents. A more comprehensive survey will be conducted to address the issue.
### Future Work
**Trade-off Control.** To balance user intent and the query chart, we plan to add a slider with dynamic weight control in the near future, letting users manage this trade-off themselves.
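A minimal sketch of how such a slider could act on the retrieval score is given below; it assumes the query chart, the text prompt, and the candidate charts have already been embedded into a shared space (e.g., by a CLIP-style encoder), and all function names and the weight parameter `w` are illustrative rather than part of the current system.

```python
import numpy as np

def blended_scores(chart_emb, text_emb, candidate_embs, w=0.5):
    """Rank candidates by a weighted mix of chart similarity and text similarity.

    w = 0 ignores the text prompt, w = 1 ignores the query chart; a UI slider
    could map directly onto this parameter.
    """
    def cosine(query, candidates):
        query = query / np.linalg.norm(query)
        candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
        return candidates @ query

    chart_sim = cosine(chart_emb, candidate_embs)  # similarity to the query chart
    text_sim = cosine(text_emb, candidate_embs)    # similarity to the text prompt
    return (1 - w) * chart_sim + w * text_sim

# e.g., indices of the top-5 candidates for a text-heavy query:
# top5 = np.argsort(-blended_scores(chart_emb, text_emb, candidate_embs, w=0.8))[:5]
```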
**Text Prompt Auto-completion.** Previous works [1, 2] utilize auto-completion as a hint to assist users in the application process, which would also help our system alleviate the interpretability gap between the user side and the model side. Specifically, we aim to build a mapping table that stores the relationship between appropriate prompts and common user text expressions. Then, when a user completes the input, its keywords will be extracted with a named entity recognition technique [15]. Finally, the corresponding optimal prompt is completed automatically by looking it up in the mapping table.
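The sketch below illustrates the envisioned mapping-table lookup; the table entries are made up for illustration, and a simple substring match stands in for the named entity recognition step.

```python
# Hypothetical table mapping user keywords to prompts that work well with CLIP.
PROMPT_TABLE = {
    "arrow": "a bar chart annotated with a trend arrow",
    "dark": "a chart with a dark background",
    "tree": "a tree-shaped diagram",
}

def autocomplete_prompt(user_text: str) -> str:
    """Replace recognized keywords with stored optimal prompts.

    In the envisioned system the keywords would come from an NER model; here a
    lowercased substring match against the table keys stands in for it.
    """
    keywords = [k for k in PROMPT_TABLE if k in user_text.lower()]
    if not keywords:
        return user_text  # nothing recognized, keep the raw input
    return "; ".join(PROMPT_TABLE[k] for k in keywords)

# autocomplete_prompt("bar chart with arrow")
# -> "a bar chart annotated with a trend arrow"
```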
**Text-only Retrieval.** Our system currently supports chart-only retrieval and chart-text retrieval. To offer a more generalized retrieval framework, we plan to add a text-only query, allowing users to obtain their retrieval results by providing only text input. Text-only retrieval is beneficial to users who do not have any reference chart. To this end, we aim to prepare several basic charts with primary attributes in advance to serve as temporary query charts, which can then be replaced by a more suitable chart from the returned results. The process can be regarded as an iterative refinement of desirable results, with an increasingly clear search goal and a narrowed search scope.
## 7 Conclusion
In this paper, we propose a user intent-aware chart retrieval framework, which leverages multi-modal input to fuse explicit visual attributes and implicit user intent into the retrieval process. This pipeline consists of two core stages, namely _Annotation_ and _Retrieval_. The _Annotation_ stage disentangles visual attributes in the query chart to ensure a flexible combination of user intent attributes used in retrieval. The _Retrieval_ stage allows users to integrate the text prompt with the query chart to achieve more customized retrieval. Quantitative experiments prove the superior performance of our method compared with previous methods. Furthermore, we conduct two case studies covering two common retrieval strategies, along with interviews, to demonstrate the effectiveness and usability of the system. As an initial step towards fusing user prompts into chart retrieval, we hope to enhance the prompt capacity to better meet users' growing needs and various usage scenarios. The dataset, code, and pretrained model have been released to promote future research in this direction.
Figure 12: **A case for revealing the limitations. Negative examples are highlighted with a red border.**
## 8 Acknowledgments
The authors wish to thank anonymous reviewers for their constructive comments. The work was supported in part by National Natural Science Foundation of China (62172398), and the Red Bird Program at the Hong Kong University of Science and Technology (Guangzhou).
|
2306.00541 | Decomposing Global Feature Effects Based on Feature Interactions | Global feature effect methods, such as partial dependence plots, provide an
intelligible visualization of the expected marginal feature effect. However,
such global feature effect methods can be misleading, as they do not represent
local feature effects of single observations well when feature interactions are
present. We formally introduce generalized additive decomposition of global
effects (GADGET), which is a new framework based on recursive partitioning to
find interpretable regions in the feature space such that the
interaction-related heterogeneity of local feature effects is minimized. We
provide a mathematical foundation of the framework and show that it is
applicable to the most popular methods to visualize marginal feature effects,
namely partial dependence, accumulated local effects, and Shapley additive
explanations (SHAP) dependence. Furthermore, we introduce and validate a new
permutation-based interaction test to detect significant feature interactions
that is applicable to any feature effect method that fits into our proposed
framework. We empirically evaluate the theoretical characteristics of the
proposed methods based on various feature effect methods in different
experimental settings. Moreover, we apply our introduced methodology to three
real-world examples to showcase their usefulness. | Julia Herbinger, Marvin N. Wright, Thomas Nagler, Bernd Bischl, Giuseppe Casalicchio | 2023-06-01T10:51:12Z | http://arxiv.org/abs/2306.00541v2 | # Decomposing Global Feature Effects Based on Feature Interactions
###### Abstract
Global feature effect methods, such as partial dependence plots, provide an intelligible visualization of the expected marginal feature effect. However, such global feature effect methods can be misleading, as they do not represent local feature effects of single observations well when feature interactions are present. We formally introduce _generalized additive decomposition of global effects_ (GADGET), which is a new framework based on recursive partitioning to find interpretable regions in the feature space such that the interaction-related heterogeneity of local feature effects is minimized. We provide a mathematical foundation of the framework and show that it is applicable to the most popular methods to visualize marginal feature effects, namely partial dependence, accumulated local effects, and Shapley additive explanations (SHAP) dependence. Furthermore, we introduce a new permutation-based interaction test to detect significant feature interactions that is applicable to any feature effect method that fits into our proposed framework. We empirically evaluate the theoretical characteristics of the proposed methods based on various feature effect methods in different experimental settings. Moreover, we apply our introduced methodology to two real-world examples to showcase their usefulness.
interpretable machine learning, feature interactions, partial dependence, accumulated local effect, SHAP dependence
## 1 Introduction
Machine learning (ML) models are increasingly used in various application fields--such as medicine (Shipp et al., 2002) or social sciences (Stachl et al., 2020)--due to their better predictive performance compared to simpler, inherently interpretable models. The superior performance often comes from complex non-linear relationships or feature interactions in the data which can be modeled more accurately by more flexible and complex ML models. However, the more complex and flexible a model, the harder it becomes to explain its inner workings. A lack of explainability might hurt trust or might even be a deal-breaker for high-stakes decisions (Lipton, 2018). Hence, ongoing research on model-agnostic interpretation methods to explain any ML model has grown quickly in recent years.
One promising type of explanation is produced by feature effect methods which explain how features influence the model predictions (similarly to the coefficients in a linear model) (Molnar, 2022). We distinguish between local and global feature effect methods. Local
feature effect methods--such as individual conditional expectation (ICE) curves (Goldstein et al., 2015) or Shapley values / Shapley additive explanations (SHAP) (Strumbelj and Kononenko, 2014; Lundberg and Lee, 2017)--explain how each feature influences the prediction of a single observation. In contrast, global feature effect methods explain the general model behavior based on the given data. Since global feature effects of ML models are often non-linear, it is easier to visualize them as done by partial dependence (PD) plots (Friedman, 2001), accumulated local effects (ALE) plots (Apley and Zhu, 2020), or SHAP dependence (SD) plots (Lundberg et al., 2019).
Aggregating individual explanations to a global explanation (e.g., ICE to PD curves) has the advantage that the global feature effects can be presented in an easy-to-understand way. However, the aggregation step might cause information loss due to heterogeneity in local effects (e.g., see the different shapes of ICE curves in Figure 1). This so-called aggregation bias is usually induced by feature interactions leading to a global feature effect that is not representative for many individuals in the data (Herbinger et al., 2022; Mehrabi et al., 2021). We term this heterogeneity _interaction-related heterogeneity_. Therefore, global explanations might be misleading or not give a complete picture when feature interactions are present, as illustrated in the bikesharing example in Figure 1 (see also Section 7).
This is particularly relevant when ML models are trained on biased data (Mehrabi et al., 2021). The ML model might learn this bias, which might not be visible in global explanations due to aggregation (e.g., see COMPAS example in Section 7).
To bridge the gap between local and global effect explanations, so-called subgroup approaches that partition the feature space to obtain meaningful regional explanations within each partition have recently been introduced (e.g., Hu et al., 2020; Molnar et al., 2023; Scholbeck et al., 2022; Herbinger et al., 2022). Herbinger et al. (2022) introduced a recursive partitioning algorithm that finds interpretable subgroups in the data where the feature
Figure 1: Left: ICE and global PD curves of feature \(hr\) (hour of the day) of the bikesharing data set (James et al., 2022). Right: ICE and regional PD curves of \(hr\) depending on feature _workingday_. The feature effect of \(hr\) on predicted bike rentals is different on working days compared to non-working days, which is due to aggregation not visible in the global feature effect plot (left).
interactions between a specific feature of interest and other features are minimized based on ICE curves. Thus, the resulting regional PD plots of the feature of interest are more representative for the individuals within the respective subgroup. However, the method is limited to PD plots with one feature of interest and, hence, leads to different partitions if multiple features of interest are considered. The mathematical foundation of their method relies on the functional ANOVA decomposition (Stone, 1994; Hooker, 2007) of the prediction function. Being able to decompose the predictions into main and higher-order effects is very appealing to better understand how features individually and jointly influence the predictions. However, the decomposition might not be unique and the respective estimation complex in the presence of feature interactions (Hooker, 2007; Lengerich et al., 2020).
**Contributions.** We introduce the framework GADGET, which partitions the feature space into interpretable subspaces by minimizing feature interactions based on some feature effect method. We prove that the objective of GADGET minimizes feature interactions of any feature subset and for any feature effect method that satisfies the _local decomposability_ axiom (Sections 4.1 and 4.2). We show that the most popular feature effect methods (PD, ALE, and SD) satisfy the _local decomposability_ axiom. For each method, we introduce an estimation and visualization technique for the regional feature effect and the interaction-related heterogeneity (Sections 4.3-4.5). Moreover, we propose several measures to quantify feature interactions based on GADGET, which provide more insights into the learned effects and remaining interaction-related heterogeneity (Section 4.6). To the best of our knowledge, we are the first to introduce such a flexible framework for more insights into regional feature effects and feature interactions. Additionally, we introduce the permutation interaction test (PINT) algorithm to detect significant feature interactions based on the underlying feature effect method (Section 5). Finally, we empirically evaluate the theoretical characteristics of the different methods based on several simulation settings (Section 6).
**Open Science and Reproducibility.** The implementation of the proposed methods as well as reproducible scripts for the experiments are provided in Online Appendix 1.
## 2 Background and Related Work
In this section, we introduce relevant notation and summarize related work as well as the required methodological background for this paper.
### General Notation
We consider a feature space \(\mathcal{X}\subseteq\mathbb{R}^{p}\) and a target space \(\mathcal{Y}\) which, for instance, in the case of regression is \(\mathcal{Y}=\mathbb{R}\). The random variables for the features are denoted by \(X=(X_{1},\ldots,X_{p})\) and \(Y\) for the target variable. The realizations of these random variables are sampled i.i.d. from the joint probability distribution \(\mathds{P}_{X,Y}\) (which is unknown) and are denoted by \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)})\}_{i=1}^{n}\). The \(i\)-th observation of \(\mathcal{D}\) is denoted by \(\mathbf{x}^{(i)}=\left(x_{1}^{(i)},\ldots,x_{p}^{(i)}\right)^{T}\) and the \(j\)-th feature by \(\mathbf{x}_{j}=\left(x_{j}^{(1)},\ldots,x_{j}^{(n)}\right)^{T}\). A true function \(f\) maps the feature space to the target space \(f:\mathcal{X}\rightarrow\mathcal{Y}\). In ML, we strive to approximate this true relationship by a
prediction model \(\hat{f}\), which is learned on \(\mathcal{D}\). Furthermore, we denote \(-j=\{1,\ldots,p\}\setminus j\) to be the set of all features besides feature \(j\).
### Functional ANOVA Decomposition
The functional ANOVA decomposition has already been studied by Stone (1994) to explain prediction models. Hooker (2004, 2007), Rahman (2014), and Li and Rabitz (2012) (amongst others) generalized the approach, suggested further estimation algorithms, and provided several analyses with regard to decomposing an ML prediction function into so-called main and higher-order (interaction) effects. If the influence of the feature \(\mathbf{x}_{j}\) on the prediction function cannot solely be described by the main effect of \(\mathbf{x}_{j}\) because the prediction changes depending on another feature \(\mathbf{x}_{k}\), then the prediction function exhibits an interaction between the features \(\mathbf{x}_{j}\) and \(\mathbf{x}_{k}\). More formally, Friedman and Popescu (2008) defined the presence of an interaction between two features \(\mathbf{x}_{j}\) and \(\mathbf{x}_{k}\) by \(\mathbb{E}\left[\frac{\delta^{2}\hat{f}(X)}{\delta X_{j}\delta X_{k}}\right] ^{2}>0\). If the feature \(\mathbf{x}_{j}\) does not interact with any other feature in \(\mathbf{x}_{-j}\), the prediction function \(\hat{f}(\mathbf{x})\) can be additively decomposed into a function \(h_{j}(\mathbf{x}_{j})\) that only depends on feature \(\mathbf{x}_{j}\) and another function \(h_{-j}(\mathbf{x}_{-j})\) that only depends on all other features \(\mathbf{x}_{-j}\), i.e., \(\hat{f}(\mathbf{x})=h_{j}(\mathbf{x}_{j})+h_{-j}(\mathbf{x}_{-j})\).
The prediction function \(\hat{f}(\mathbf{x})\) (if it is square-integrable) can be decomposed by using the functional ANOVA decomposition as follows:
\[\hat{f}(\mathbf{x})=g_{0}+\sum_{j=1}^{p}g_{j}(\mathbf{x}_{j})+\sum_{j\neq k}g_ {jk}(\mathbf{x}_{j},\mathbf{x}_{k})+\ldots+g_{12\ldots p}(\mathbf{x})=\sum_{k =1}^{p}\sum_{\begin{subarray}{c}W\subseteq\{1,\ldots,p\},\\ |W|=k\end{subarray}}g_{W}(\mathbf{x}_{W}), \tag{1}\]
where \(g_{0}\) represents a constant, \(g_{j}(\mathbf{x}_{j})\) denotes the main effect of the \(j\)-th feature, \(g_{jk}(\mathbf{x}_{j},\mathbf{x}_{k})\) is the pure two-way interaction effect between features \(\mathbf{x}_{j}\) and \(\mathbf{x}_{k}\), and so forth. The last component \(g_{12\ldots p}(\mathbf{x})\) contains the residual term, which always allows for an exact decomposition.
We will now introduce the standard functional ANOVA decomposition and a generalization of it that is more suitable in the context of correlated features.
**Standard Functional ANOVA Decomposition.** The standard functional ANOVA decomposition (Hooker, 2004; Rahman, 2014) assumes that all features are independent. Hence, the joint probability density function \(w(\mathbf{x})\) can be written as a product-type probability measure \(w(\mathbf{x})=\prod_{j=1}^{p}w_{j}(\mathbf{x}_{j})\), with \(w_{j}:\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\) being the marginal probability density function of the \(j\)-th feature. In this case, the component functions \(g_{W}(\mathbf{x}_{W})\) can be optimally and uniquely defined for a fixed prediction function \(\hat{f}(\mathbf{x})\) if the _vanishing condition_ is satisfied. The _vanishing condition_ is given by \(\int g_{W}(\mathbf{x}_{W})w_{j}(\mathbf{x}_{j})d\mathbf{x}_{j}=0\) \(\forall j\in W\neq\emptyset\), with \(w_{j}(\mathbf{x}_{j})\geq 0\) and \(\int_{\mathbb{R}}w_{j}(\mathbf{x}_{j})d\mathbf{x}_{j}=1\) (see, e.g., Li and Rabitz, 2012; Rahman, 2014, for a detailed definition1). Following from that, the component functions can be determined sequentially by
Footnote 1: In Rahman (2014) this is called the strong annihilating condition.
\[g_{W}(\mathbf{x}_{W})=\int_{\mathbf{x}_{-W}}\left(\hat{f}(\mathbf{x})w( \mathbf{x})-\sum_{V\subset W}g_{V}(\mathbf{x}_{V})\right)d\mathbf{x}_{-W}. \tag{2}\]
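To make the sequential construction in Eq. (2) concrete, the following sketch estimates the components for a toy prediction function with two independent uniform features, replacing the integrals by sample means; the function and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.uniform(-1, 1, size=(n, 2))        # two independent features

def f(x1, x2):                             # toy "prediction function"
    return x1 + x2**2 + x1 * x2

g0 = f(X[:, 0], X[:, 1]).mean()            # constant component, ~1/3 here

def g1(x1):                                # main effect of x1: integrate x2 out
    return f(x1, X[:, 1]).mean() - g0

def g2(x2):                                # main effect of x2
    return f(X[:, 0], x2).mean() - g0

def g12(x1, x2):                           # pure interaction: remove lower-order terms
    return f(x1, x2) - g1(x1) - g2(x2) - g0

# For this toy function the decomposition recovers
# g1(x1) = x1, g2(x2) = x2^2 - 1/3, g12(x1, x2) = x1 * x2 (up to Monte-Carlo error).
print(round(g0, 3), round(g1(0.5), 3), round(g12(0.5, -0.5), 3))
```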
**Generalized Functional ANOVA Decomposition.** When features are not independent of each other, the _vanishing condition_ does not hold. Therefore, Hooker (2007) introduces a relaxed version of the _vanishing condition2_ that leads to a unique decomposition of \(\hat{f}(\mathbf{x})\), which they call the weighted functional ANOVA. The functional components cannot be determined sequentially (as in Eq. 2) and must be determined by solving a computationally more expensive optimization problem.
Footnote 2: The relaxed vanishing condition is given by \(\int_{\mathbb{R}}g_{W}(\mathbf{x}_{W})w_{W}(\mathbf{x}_{W})d\mathbf{x}_{j}=0\) for \(j\in W\neq\emptyset\), with \(w(\mathbf{x})\) being a general probability density.
Thus, it is possible to decompose the prediction function in many different (valid) ways, with differing interpretations in the presence of feature interactions. Functional ANOVA decomposition is one possibility, which always leads to a unique decomposition. Hence, recent research has focused either on proposing models that directly include the respective conditions in the optimization process by constraints (e.g., Sun et al., 2022) or on purifying the resulting decomposition for specific models within the modelling process (e.g., Lengerich et al., 2020; Hu et al., 2022). While these approaches are first steps for computing the generalized functional ANOVA decomposition more efficiently, estimating the true underlying data distribution remains an open challenge that also influences the decomposition itself (Lengerich et al., 2020). First attempts in this direction have been proposed, for example by Sun et al. (2022) using an adaptive kernel method. Note that the estimation of the generalized functional ANOVA decomposition is only complex in the presence of feature interactions, otherwise the prediction function can easily and uniquely be decomposed into the main effects of each feature--e.g., estimated by a generalized additive model (GAM).
### Visualizing Feature Effects
The visualization of feature effects provides a better understanding of how features individually or jointly influence the predicted outcome of an ML model. In this section, we introduce some of the most important global feature effect methods and relate them to the functional ANOVA decomposition.
**Partial Dependence.** The _PD plot_ introduced by Friedman (2001) visualizes the marginal effect of a set of features \(W\subset\{1,\ldots,p\}\) by integrating over the joint distribution over all other features in \(-W=\{1,\ldots,p\}\setminus W\), which we denote by \(\mathbb{P}\left(\mathbf{x}_{-W}\right)\). Therefore, the PD function is defined by
\[f_{W}^{PD}(\mathbf{x}_{W})=E_{X_{-W}}[\hat{f}(\mathbf{x}_{W},X_{-W})]=\int\hat {f}(\mathbf{x}_{W},\mathbf{x}_{-W})d\mathbb{P}\left(\mathbf{x}_{-W}\right). \tag{3}\]
As the joint distribution \(\mathbb{P}\left(\mathbf{x}_{-W}\right)\) is usually unknown, the PD function is estimated using Monte-Carlo integration by \(\hat{f}_{W}^{PD}(\mathbf{x}_{W})=\frac{1}{n}\sum_{i=1}^{n}\hat{f}(\mathbf{x}_{W},\mathbf{x}_{-W}^{(i)})\). Since the PD function is usually estimated for visualization purposes, the number of features in \(W\) is chosen to be one or two, and is visualized by the pairs \(\{(\mathbf{x}_{W}^{(k)},\hat{f}_{W}^{PD}(\mathbf{x}_{W}^{(k)}))\}_{k=1}^{m}\) for \(m\) grid points3.
Footnote 3: Instead of using all feature values \(\mathbf{x}_{W}\), an equidistant grid or a grid based on randomly selected feature values or quantiles of \(\mathbf{x}_{W}\) are commonly chosen (Molnar et al., 2022).
The PD curve for \(|W|=1\) averages over heterogeneous effects that are induced by feature interactions between the feature \(\mathbf{x}_{W}\) and other features. These heterogeneous effects can be visualized using _ICE plots_ (Goldstein et al., 2015), which measure
how the prediction of each observation changes when the value of feature \(\mathbf{x}_{W}\) changes. Thus, ICE plots visualize each individual curve \(\{(\mathbf{x}_{W}^{(k)},\hat{f}(\mathbf{x}_{W}^{(k)},\mathbf{x}_{-W}^{(i)}))\}_{k=1}^{m}\) for all \(i\in\{1,\ldots,n\}\), with the PD curve being the average over all ICE curves (see Figure 2).
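As a minimal sketch, ICE curves and the PD curve of Eq. (3) can be estimated for any fitted model exposing a `predict` method as follows; the grid choice (here, quantiles of \(\mathbf{x}_{j}\)) is one common option and not prescribed by the method.

```python
import numpy as np

def ice_and_pd(model, X, j, n_grid=20):
    """Monte-Carlo estimates of ICE curves and the PD curve for feature j.

    Returns the grid, an (n, m) matrix of ICE values (one row per observation,
    one column per grid point), and the PD curve as the column-wise mean.
    """
    grid = np.quantile(X[:, j], np.linspace(0, 1, n_grid))
    ice = np.empty((X.shape[0], len(grid)))
    for k, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, j] = v                    # set x_j to the grid value for all rows
        ice[:, k] = model.predict(X_mod)   # f(x_j = v, x_{-j}^{(i)}) for every i
    pd_curve = ice.mean(axis=0)            # average over observations, cf. Eq. (3)
    return grid, ice, pd_curve
```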
While ICE plots can help to spot interactions between the feature of interest and other features by visual inspection, they do not reveal with which other features \(\mathbf{x}_{W}\) interacts and in which way these interactions influence the marginal effect of the feature of interest. To that end, Inglis et al. (2022) suggest different visualization techniques--such as Zenplots--that only show relevant 2-dimensional PD plots in a user-friendly layout. Other works focus on grouping ICE curves with similar shapes to find regions within the feature space where the regional PD plot can be interpreted reliably. For example, Britton (2019) suggests using an unsupervised clustering approach to group ICE curves based on their partial derivatives. A similar approach has been introduced by Zhang et al. (2021). However, Herbinger et al. (2022) showed that this approach can produce misleading interpretations and suggest a supervised approach to find interpretable regions where feature interactions are minimized and resulting regional PD estimates are more representative for the underlying observations.
Friedman (2001) argues that using PD functions as additive components in a functional decomposition can recover the prediction function up to a constant. Thus, the prediction function can be decomposed into an intercept \(g_{0}\) and the sum of mean-centered PD functions with the same sequential procedure, as done for the standard functional ANOVA decomposition where lower-order effects are subtracted:
\[\hat{f}(\mathbf{x})=g_{0}+\sum_{k=1}^{p}\sum_{\begin{subarray}{c}W\subseteq\{ 1,\ldots,p\},\\ |W|=k\end{subarray}}\left(\hat{f}_{W}^{PD,c}(\mathbf{x}_{W})-\sum_{V\subset W} \hat{f}_{V}^{PD,c}(\mathbf{x}_{V})\right), \tag{4}\]
with \(\hat{f}_{W}^{PD,c}(\mathbf{x}_{W})=\hat{f}_{W}^{PD}(\mathbf{x}_{W})-\frac{1}{ m}\sum_{k=1}^{m}\hat{f}_{W}^{PD}(\mathbf{x}_{W}^{(k)})\). Tan et al. (2018) furthermore note that if the prediction function can be written as a sum of main effects, it can be exactly decomposed by an intercept plus the sum of all mean-centered 1-dimensional PD functions.
Similar to standard functional ANOVA, Hooker (2007) illustrate that the decomposition via PD functions is misleading when features are highly correlated due to placing too much weight in sparse regions, which causes extrapolation.
**Extrapolation Problem and ALE.** Since the marginal distribution is used in PD functions, we _extrapolate_ in empty or sparse regions of the feature space that might even be unrealistic (e.g., predicting a disease status for a pregnant man) when features are highly correlated. This can lead to inaccurate PD estimates, especially in the case of non-parametric models such as neural networks (Apley and Zhu, 2020).
Rather than integrating over marginal distributions (see Eq. 3), one possible solution to this challenge is to integrate over conditional distributions, which is known as a marginal (M) plot (Friedman, 2001). However, the M plot of feature \(\mathbf{x}_{j}\) not only represents the feature effect of the feature itself, but also includes the partial effects of features correlated with the feature of interest (see Appendix A for an example). Therefore, we cannot additively decompose the prediction function into individual M plot components, as done for PD in Eq. (4)4. As we focus on methods that allow an additive decomposition of the prediction
function and the isolated interpretation of individual feature effects, we will not cover M plots (i.e., conditional PD plots) in greater detail here.
ALE plots are based on conditional expectations and thus avoid extrapolation. However, they solely reflect the influence of the feature of interest on the predictions (Apley and Zhu, 2020). The uncentered ALE \(f_{W}^{ALE}(x)\) for \(|W|=1\) at feature value \(x\sim\mathds{P}(\mathbf{x}_{W})\) and with \(z_{0}=\min(\mathbf{x}_{W})\) is defined by
\[f_{W}^{ALE}(x)=\int\limits_{z_{0}}^{x}\mathds{E}\bigg{[}\frac{\partial\hat{f} (X)}{\partial X_{W}}\bigg{|}X_{W}=z_{W}\bigg{]}dz_{W}=\int\limits_{z_{0}}^{x} \int\frac{\partial\hat{f}(z_{W},\mathbf{x}_{-W})}{\partial z_{W}}d\mathds{P}( \mathbf{x}_{-W}|z_{W})dz_{W}. \tag{5}\]
ALE first calculates the local derivatives that are weighted by the conditional distribution \(\mathds{P}(\mathbf{x}_{-W}|z_{W})\) and then accumulates the local effects to generate a global effect curve.
Eq. (5) is usually estimated by splitting the value range of \(\mathbf{x}_{W}\) in intervals and calculating the partial derivatives for all observations within each interval. The partial derivatives are estimated by the differences between the predictions of the upper (\(z_{k,W}\)) and lower (\(z_{k-1,W}\)) bounds of the \(k\)-th interval for each observation. The accumulated effect up to observation \(x\) is then calculated by summing up the average partial derivatives (weighted by the number of observations \(n(k)\) within each interval) over all intervals until the interval that includes observation \(x\), which is denoted by \(k(x)\):
\[\hat{f}_{W}^{ALE}(x)=\sum\limits_{k=1}^{k(x)}\frac{1}{n(k)}\sum\limits_{i: \;\mathbf{x}_{W}^{(i)}\in\;]z_{k-1,W},z_{k,W}]}\left[\hat{f}(z_{k,W},\mathbf{ x}_{-W}^{(i)})-\hat{f}(z_{k-1,W},\mathbf{x}_{-W}^{(i)})\right]. \tag{6}\]
For interpretability reasons, ALE curves are usually centered by the average of the uncentered ALE curve to obtain \(f_{W}^{ALE,c}(x)=f_{W}^{ALE}(x)-\int f_{W}^{ALE}(\mathbf{x}_{W})\,d\mathds{P}( \mathbf{x}_{W})\).
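A simplified estimator of the first-order ALE curve in Eq. (6) is sketched below for a model with a `predict` method; interval borders are taken as empirical quantiles, and the final centering uses interval midpoints as an approximation of the integral over \(\mathds{P}(\mathbf{x}_{W})\).

```python
import numpy as np

def ale_1d(model, X, j, n_intervals=20):
    """Rough first-order ALE estimate following Eq. (6)."""
    z = np.unique(np.quantile(X[:, j], np.linspace(0, 1, n_intervals + 1)))
    ale = np.zeros(len(z))                 # accumulated effects at interval borders
    counts = np.zeros(len(z) - 1)
    for k in range(1, len(z)):
        in_k = X[:, j] <= z[k] if k == 1 else (X[:, j] > z[k - 1]) & (X[:, j] <= z[k])
        counts[k - 1] = in_k.sum()
        if counts[k - 1] == 0:
            ale[k] = ale[k - 1]
            continue
        X_up, X_lo = X[in_k].copy(), X[in_k].copy()
        X_up[:, j], X_lo[:, j] = z[k], z[k - 1]
        # average local effect in interval k, accumulated on top of interval k-1
        ale[k] = ale[k - 1] + (model.predict(X_up) - model.predict(X_lo)).mean()
    # center the curve so that its weighted mean over the data is (roughly) zero
    midpoints = (ale[1:] + ale[:-1]) / 2
    return z, ale - np.average(midpoints, weights=counts)
```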
Another advantage of ALE is that it is computationally less expensive than the PD function. Compared to M plots, ALE also satisfies the additive recovery5. Note that Apley and Zhu (2020) defined the \(W\)-th order ALE function by the pure \(W\)-th order (interaction) effect, similarly to the functional ANOVA decomposition in Eq. (1). Furthermore, Apley and Zhu (2020) show that the ALE decomposition has an orthogonality-like property, which guarantees (similar to the generalized functional ANOVA) that the following decomposition is unique:
Footnote 5: Meaning, the prediction function can be additively decomposed into main and higher-order effects by ALE functions (such as PD functions), as defined in Eq. (7).
\[\hat{f}(\mathbf{x})=g_{0}+\sum_{k=1}^{p}\sum\limits_{\begin{subarray}{c}W\subseteq\{1,\ldots,p\},\\ |W|=k\end{subarray}}\hat{f}_{W}^{ALE,c}(\mathbf{x}_{W}). \tag{7}\]
For details on estimating higher-order ALE functions, we refer to Apley and Zhu (2020).
The choice of a feature effect method will typically depend on the underlying research question. Generally, one distinguishes between understanding the model behavior and understanding the data-generating process. While PD generally answers the first question, ALE is more focused on answering the second question.
**SHAP Dependence.** Another method that quantifies feature effects on a local level is Shapley values, which have been transferred from game theory to an ML context (Shapley, 1953; Strumbelj and Kononenko, 2014). The general idea behind this method is to distribute the payout of a single prediction of an ML model fairly among all features.
The Shapley value of a feature is then defined by the fair contribution of this feature to the prediction. Furthermore, Herren and Hahn (2022) showed that the Shapley value of feature \(\mathbf{x}_{j}\) at feature value \(x_{j}=\mathbf{x}_{j}^{(i)}\) can be decomposed according to the functional ANOVA decomposition into main and interaction effects6:
Footnote 6: Similar decompositions and their estimation have been introduced in Hiabu et al. (2023) and Bordt and von Luxburg (2023).
\[\phi_{j}^{(i)}(x_{j}) = \sum_{k=0}^{p-1}\frac{1}{k+1}\sum_{\begin{subarray}{c}W\subseteq- j:\\ |W|=k\end{subarray}}\left(\mathds{E}[\hat{f}(x_{j},X_{-j})|X_{W}=\mathbf{x}_{W}^{(i )}]-\sum_{V\subset\{W\cup j\}}\mathds{E}[\hat{f}(X)|X_{V}=\mathbf{x}_{V}^{(i)}]\right) \tag{8}\] \[= g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j: |W|=k}g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)}).\]
While Shapley values only quantify the local effect of a feature on the prediction, Lundberg et al. (2019) propose the SHAP dependence (SD) plot as an alternative to PD plots. SD plots show the global behavior by visualizing the local Shapley values for all or a sample of observations of one feature, similarly to an ICE plot (see Figure 5). Thus, feature interactions between the feature of interest and other features also influence the shape of the resulting point cloud. Lundberg et al. (2019) suggest coloring the points according to another (potentially) interacting feature. However, if feature effects are very heterogeneous due to feature interactions, it might not be possible to identify a clear trend of the marginal feature effect (as observed similarly for ICE plots).
As with PD plots, M plots, or ALE plots, the means of estimating the conditional expectation for Shapley values is an ongoing discussion in current research (see, e.g., Sundararajan and Najmi, 2020; Chen et al., 2020). The approach taken might also influence the SD plot, since the estimation of Shapley values can generally be based on either marginal sampling (i.e., an interventional approach, similar to PD plots) or conditional sampling (i.e., an observational approach, similar to M plots). While the interventional approach also bears the problem of extrapolation, the observational approach requires the estimation of the conditional data distribution, which is still a challenging task (Aas et al., 2021). Chen et al. (2020) argue that the interventional approach is not generally wrong and can be useful if we want to derive explanations that are true to the model, while the observational approach should be used when we want to extract interpretations that are true to the data.
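For a low-dimensional model, the interventional Shapley values underlying an SD plot can be computed exactly by enumerating feature subsets, as sketched below; this brute-force version (exponential in \(p\)) only illustrates the definition and is not how SHAP implementations estimate the values in practice.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(model, X, j, background=None):
    """Exact interventional Shapley values of feature j (feasible only for small p).

    v(W) fixes the features in W to the instance's values and averages the
    predictions over a background sample (marginal sampling).
    """
    background = X if background is None else background
    n, p = X.shape
    others = [k for k in range(p) if k != j]

    def v(x, W):
        Xb = background.copy()
        Xb[:, list(W)] = x[list(W)]
        return model.predict(Xb).mean()

    phi = np.zeros(n)
    for i, x in enumerate(X):
        for size in range(p):
            for W in combinations(others, size):
                weight = factorial(size) * factorial(p - size - 1) / factorial(p)
                phi[i] += weight * (v(x, W + (j,)) - v(x, W))
    return phi

# An SD plot is then simply a scatter plot of X[:, j] against these values, e.g.
# plt.scatter(X[:, j], shapley_values(model, X, j))
```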
### Quantification of Interaction Effects
Visualizing feature effects is a powerful technique to obtain a better understanding of how features influence the prediction function. However, useful visualizations are usually limited to a maximum of two features. Hence, to better comprehend which feature effects might depend on other features due to interactions, we are interested in detecting and quantifying the strength of feature interactions.
Based on the definition of feature interactions in Section 2.2 and on the PD function, Friedman and Popescu (2008) introduced the H-Statistic as a global interaction measure between subsets of features. To quantify the interaction strength between a feature of interest \(\mathbf{x}_{j}\) and all other features \(\mathbf{x}_{-j}\), the H-Statistic is calculated by
\[\hat{\mathcal{H}}_{j}^{2}=\frac{\sum_{i=1}^{n}\left(\hat{f}^{c}(\mathbf{x}^{(i) })-\hat{f}_{j}^{PD,c}(\mathbf{x}_{j}^{(i)})-\hat{f}_{-j}^{PD,c}(\mathbf{x}_{- j}^{(i)})\right)^{2}}{\sum_{i=1}^{n}\left(\hat{f}^{c}(\mathbf{x}^{(i)}) \right)^{2}}. \tag{9}\]
Hence, \(\hat{\mathcal{H}}_{j}^{2}\) quantifies how much of the prediction function's variance7 can be attributed to the interaction between feature \(\mathbf{x}_{j}\) and all other features \(\mathbf{x}_{-j}\)8. Eq. (9) can be adjusted to quantify interactions of different order, such as two-way interactions (see Friedman and Popescu, 2008). Other global interaction measures to quantify two-way interactions based on PDs or SHAP interaction values have been suggested by Greenwell et al. (2018) and Herbinger et al. (2022). While these methods focus on detecting and quantifying two-way feature interactions, Hooker (2004) suggests an algorithm based on the idea of functional ANOVA decomposition to detect all important higher-order terms and visualizes the feature relationships in a network graph. However, the feature interactions are not quantified and thus cannot be ranked according to their influence on the prediction.
Footnote 7: The denominator of Eq. (9) represents the prediction function’s variance, where \(\hat{f}^{c}(\mathbf{x}^{(i)})=\hat{f}(\mathbf{x}^{(i)})-\frac{1}{n}\sum_{i=1}^ {n}\hat{f}(\mathbf{x}^{(i)})\).
Footnote 8: If the feature \(\mathbf{x}_{j}\) does not interact with any other feature in \(\mathbf{x}_{-j}\), then the mean-centered joint effect function can be additively decomposed into: \(f^{c}(\mathbf{x})=f_{j}^{PD,c}(\mathbf{x}_{j})+f_{-j}^{PD,c}(\mathbf{x}_{-j})\), leading to \(\hat{\mathcal{H}}_{j}^{2}=0\).
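A direct estimator of Eq. (9) is sketched below; it evaluates all pairwise cross-predictions and therefore needs \(\mathcal{O}(n^{2})\) model calls, so in practice one would run it on a subsample.

```python
import numpy as np

def h_statistic(model, X, j):
    """Estimate Friedman and Popescu's H^2_j (Eq. 9) for feature j on sample X."""
    n, _ = X.shape
    f_hat = model.predict(X)
    pd_j = np.empty(n)    # \hat f_j^{PD}(x_j^{(i)})
    pd_mj = np.empty(n)   # \hat f_{-j}^{PD}(x_{-j}^{(i)})
    for i in range(n):
        X_fix_j = X.copy()
        X_fix_j[:, j] = X[i, j]               # fix x_j at the i-th value
        pd_j[i] = model.predict(X_fix_j).mean()

        X_fix_mj = np.tile(X[i], (n, 1))      # fix x_{-j} at the i-th values
        X_fix_mj[:, j] = X[:, j]
        pd_mj[i] = model.predict(X_fix_mj).mean()

    # mean-center all three quantities, then form the variance ratio of Eq. (9)
    f_c = f_hat - f_hat.mean()
    pd_j_c = pd_j - pd_j.mean()
    pd_mj_c = pd_mj - pd_mj.mean()
    return np.sum((f_c - pd_j_c - pd_mj_c) ** 2) / np.sum(f_c ** 2)
```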
Next to global interaction measures, there exist multiple local interaction measures that quantify feature interactions for a single observation (e.g., Lundberg et al., 2019; Tsai et al., 2023; Blucher et al., 2022; Kumar et al., 2021).
## 3 Motivation: REPID and its Limitations
To obtain a better understanding of how features globally affect the predictions in the presence of feature interactions, Herbinger et al. (2022) propose the REPID method, which decomposes the global PD into regional PDs such that individual effects (ICE curves) are more homogeneous within each region. This section provides a short explanation of the REPID method and illustrates its limitations, which are addressed by our new framework introduced in Section 4.
The REPID method is based on a recursive partitioning algorithm (similar to CART by Breiman et al., 1984) that splits each parent node into two child nodes until a stop criterion is met. Compared to a common decision tree, the inputs are not the observations \(\mathbf{x}\) themselves, but the ICE curves belonging to the observations of one feature of interest \(\mathbf{x}_{j}\). Herbinger et al. (2022) showed that their chosen objective minimizes interaction effects between the feature of interest and all other features. Therefore, the reduction at each split quantifies the interaction strength between the feature of interest and the feature used for splitting. Thus, the method provides more representative PD estimates in interpretable regions for a feature of interest as well as a ranking of considered two-way feature interactions.
For illustration purposes, consider the following simulation example that is often used in a slightly modified form in the context of feature interactions (see e.g., Goldstein et al.,
2015; Herbinger et al., 2022): Let \(X_{1},X_{2},X_{3}\sim\mathbb{U}(-1,1)\) be independently distributed and the true underlying relationship be defined by \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\), with \(\epsilon\sim\mathbb{N}(0,0.09)\). We then draw 500 observations from these random variables and fit a feed-forward neural network (NN) with a single hidden layer of size 10 and weight decay of 0.001.9 The \(R^{2}\) measured on a separately drawn test set of size 10000 following the same distribution is 0.94. The ICE and PD curves of feature \(\mathbf{x}_{1}\) for the training data are shown in the left plot of Figure 2. The ICE curves clearly show that \(\mathbf{x}_{1}\) influences the prediction differently depending on another feature--in this case, \(\mathbf{x}_{3}\). If we would only consider the global PD plot, the nearly-horizontal PD curve of \(\mathbf{x}_{1}\) (grey line) would indicate no influence on the model's predictions and thus is not representative for the underlying local feature effects (ICE curves). Here, REPID can be applied to the ICE curves of \(\mathbf{x}_{1}\) to search for the best split point within the feature subset in \(-j\) (here: \(\mathbf{x}_{2}\) and \(\mathbf{x}_{3}\)) and minimize the feature interactions between features in \(j\) and \(-j\). In this example, the best split point found is \(\mathbf{x}_{3}=0\), which is also optimal according to the data-generating process. The regional PD plots found by REPID reflect the contradicting influence of feature \(\mathbf{x}_{1}\) on the predictions compared to the global PD plot. The respective split reduces the interaction-related heterogeneity almost completely (by 98%). Hence, the resulting regional marginal effects of \(\mathbf{x}_{1}\) can be approximated well by the respective main effects (regional PD), which are therefore more representative for the individual observations within each region.
Footnote 9: The hyperparameters’ size (number of units of the hidden layer) and weight decay for a single-hidden-layer feed-forward NN were tuned via 5-fold cross-validation using grid search and a maximum of 1000 iterations. Considered grid values for weight decay: \((0.5,0.1,0.01,0.001,0.0001,0.00001)\), and for size: \((3,5,10,20,30)\).
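For reference, a rough Python stand-in for this data-generating process and model fit is given below; the original experiments were presumably run with different software, and sklearn's `alpha` (an L2 penalty) is used here as the analogue of weight decay.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n = 500
X = rng.uniform(-1, 1, size=(n, 3))                     # X1, X2, X3 ~ U(-1, 1)
y = (np.where(X[:, 2] > 0, 3 * X[:, 0], -3 * X[:, 0])   # sign of the X1 effect flips with X3
     + X[:, 2]
     + rng.normal(0, 0.3, n))                           # sd 0.3, i.e., variance 0.09

nn = MLPRegressor(hidden_layer_sizes=(10,), alpha=0.001, max_iter=1000).fit(X, y)
```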
Figure 2: ICE curves and PD curves (grey) for the uncorrelated (left) and correlated case (right) of the simulation example of Section 3. ICE curves are colored w.r.t. the first split when REPID is applied. The split feature is in both cases \(\mathbf{x}_{3}\) with split points \(-0.01\) (left) and \(-0.15\) (right). The blue color represents the left and orange the right region according to the split point. The thicker lines represent the respective regional PD curves. The rug plot shows the distribution of \(\mathbf{x}_{1}\) according to the split point. The black points are the underlying observations.
Herbinger et al. (2022) have shown that REPID provides meaningful results if 1) only one feature is of interest, and 2) if global feature effects are visualized by PD plots. These two assumptions limit the general applicability of REPID for two reasons.
First, applying REPID to different features of interest will usually result in different regions, as the method produces feature-specific regional PD plots. Hence, in the above-mentioned simulation example, REPID produces different regions when the feature of interest is \(\mathbf{x}_{3}\) instead of \(\mathbf{x}_{1}\), which complicates interpretations when effects of multiple features are of interest. Moreover, we might be interested to receive regions within the feature space where interactions between multiple features are minimized, and thus the joint effect within each region can be decomposed into the main effects of the regarded features.
Second, there are other feature effect methods that might be more suitable, depending on the underlying data set and research question. Thus, a general framework that allows finding regions within the feature space where feature interactions are minimized w.r.t. an individually chosen feature effect method would be extremely useful. One main disadvantage specifically for ICE and PD plots is the extrapolation problem in the presence of feature interactions and correlations, as described in Section 2.3. To illustrate how this problem affects REPID, we consider the same simulation example as before, but with \(X_{1}=X_{3}+\delta\) and \(\delta\sim\mathbb{N}(0,0.0625)\). We again draw 500 observations and fit an NN with the same specification and receive an \(R^{2}\) of 0.92 on a separately drawn test set of size 10000 following the same distribution. Thus, the high correlation between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\) barely influences the model performance. However, Figure 2 clearly shows that ICE curves are extrapolating in low-density regions. The rug plot on the bottom indicates the distribution of feature \(\mathbf{x}_{1}\) depending on feature \(\mathbf{x}_{3}\). Thus, the model was not trained on observations with small \(\mathbf{x}_{1}\) values and simultaneously large \(\mathbf{x}_{3}\) values, and vice versa. However, since we integrate over the marginal distribution, we also predict in these "out-of-distribution" areas. This leads to the so-called extrapolation problem and, hence, to uncertain predictions in extrapolated regions that must be interpreted with caution. In Section 6.1, we will analyze how severe the extrapolation problem is with regard to finding the correct split feature and split point.
## 4 Generalized Additive Decomposition of Global Effects (GADGET)
Here, we introduce a new framework called generalized additive decomposition of global effects (GADGET), which additively decomposes global feature effects based on minimizing feature interactions. Through this approach, one or multiple features (up to all \(p\) features) can be considered to find interpretable regions where feature effects of all regarded features are more representative for the underlying individual observations. We will first introduce the methodology of GADGET, which is applicable to many different feature effect methods. We formally define the axiom that must be satisfied by the desired feature effect method to be used in GADGET. We then show that the most popular feature effect methods satisfy this axiom and that REPID is a special case of GADGET.
### The GADGET Algorithm
Let \(h(x_{j},\mathbf{x}_{-j}^{(i)})\) be the local feature effect of a feature \(\mathbf{x}_{j}\) of the \(i\)-th observation at some feature value \(x_{j}\in\mathcal{X}_{j}\) measured by a local feature effect function \(h:\mathbb{R}^{p}\rightarrow\mathbb{R}\), and let
\(\mathds{E}[h(x_{j},X_{-j})|\mathcal{A}_{g}]\) be the expected feature effect of \(\mathbf{x}_{j}\) at \(x_{j}\) w.r.t. \(X_{-j}\) conditioned on the subspace \(\mathcal{A}_{g}\subseteq\mathcal{X}\).10 Then, we can define the deviation between the local feature effect and the expected feature effect at a specific feature value \(x_{j}\) for subspace \(\mathcal{A}_{g}\) using a point-wise loss function, e.g., the squared distance11:
Footnote 10: The expected value is always taken w.r.t. the random variables within the expected value. We only use a subscript for the expected value if it cannot uniquely be defined by the above notation.
Footnote 11: Other distance metrics are also possible. However, we chose the squared distance, since the interpretation in terms of variance is most intuitive in this context.
\[\mathcal{L}\left(\mathcal{A}_{g},x_{j}\right) = \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(h(x_{j},\mathbf{ x}^{(i)}_{-j})-\mathds{E}[h(x_{j},X_{-j})|\mathcal{A}_{g}]\right)^{2}. \tag{10}\]
The risk function \(\mathcal{R}\) of the \(j\)-th feature and subspace \(\mathcal{A}_{g}\) is defined by aggregating the point-wise loss of Eq. (10) over a sample of feature values of \(\mathbf{x}_{j}\):
\[\mathcal{R}\left(\mathcal{A}_{g},\mathbf{x}_{j}\right) = \sum_{k:k\in\{1,\ldots,m\}\wedge\mathbf{x}^{(k)}_{j}\in\mathcal{A }_{g}}\mathcal{L}\left(\mathcal{A}_{g},\mathbf{x}^{(k)}_{j}\right). \tag{11}\]
For instance, the \(m\) feature values can be sampled values of \(\mathbf{x}_{j}\)12, constrained to feasible values within the subspace \(\mathcal{A}_{g}\).
Footnote 12: The choice depends on the underlying local feature effect function \(h\) and on the data distribution. Other common choices are quantiles, or an equidistant grid from the feature range of \(\mathbf{x}_{j}\).
The local feature effect function must be defined in such a way that the risk function in Eq. (11) minimizes the interaction-related heterogeneity of features in \(S\), i.e., the feature interactions between \(\mathbf{x}_{j}\in\mathbf{x}_{S}\) with \(S\subseteq\{1,\ldots,p\}\) and features of a previously defined feature set \(Z\subseteq\{1,\ldots,p\}\) (see Section 4.2). The feature subset \(S\) is chosen to be the subset of features for which we want to receive representative regional feature effect plots by minimizing their interaction-related heterogeneity. Features in \(Z\) are considered as split features and thus aim to partition the feature space in such a way that feature interactions with features in \(S\) are minimized. Algorithm 1 defines a single partitioning step of GADGET, which is inspired by the CART algorithm (Breiman et al., 1984) and is recursively repeated until a certain stop criterion is met (see Section 4.7). We greedily search for the best split point within the feature subset \(Z\) such that the interaction-related heterogeneity of feature subset \(S\) is minimized in the two resulting subspaces. The interaction-related heterogeneity is measured by the variance of the local feature effects of \(\mathbf{x}_{j}\) within the new subspace (see Eq. 11). Hence, for each split feature \(z\) and split point \(t\), we sum up the risk of the two resulting subspaces for all features in \(S\) (line 6 in Algorithm 1) and then choose the split point of the split feature \((\hat{t},\hat{z})\) that minimizes the interaction-related heterogeneity of all features \(j\in S\) (line 9 in Algorithm 1).13
Footnote 13: In the case where \(z\in S\), we also include the heterogeneity reduction of the \(z\)-th feature in the objective, since we aim to reduce the overall interaction-related heterogeneity of all features in \(S\).
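The following is a simplified sketch of a single partitioning step in the spirit of Algorithm 1, with Eqs. (10)-(11) written as sums of squared deviations over a matrix of precomputed local effects; it ignores the re-centering that is needed when a feature appears both in \(S\) and as split feature, and all names and defaults are illustrative.

```python
import numpy as np

def risk(effects_j, idx):
    """Eqs. (10)-(11): squared deviation of local effects from their subspace
    mean, summed over observations in the subspace and over all grid points."""
    h = effects_j[idx]                         # (n_subspace, m) local effects
    return ((h - h.mean(axis=0)) ** 2).sum()

def best_split(effects, X, S, Z, min_node_size=10):
    """One greedy GADGET partitioning step (simplified Algorithm 1).

    effects[j] is an (n, m_j) matrix of local feature effects of feature j,
    e.g., mean-centered ICE values; candidate split points are quantiles of
    the split feature.
    """
    best = (np.inf, None, None)
    for z in Z:
        for t in np.unique(np.quantile(X[:, z], np.linspace(0.05, 0.95, 20))):
            left, right = X[:, z] <= t, X[:, z] > t
            if left.sum() < min_node_size or right.sum() < min_node_size:
                continue
            obj = sum(risk(effects[j], left) + risk(effects[j], right) for j in S)
            if obj < best[0]:
                best = (obj, z, t)             # I(t, z), split feature, split point
    return best
```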
### Theoretical Foundation of GADGET
To apply GADGET, a suitable local feature effect function \(h\) must be defined. General properties that GADGET requires from this function are provided by Axiom 1.
**Axiom 1** (Local Decomposability): _A local feature effect function \(h:\mathbb{R}^{p}\rightarrow\mathbb{R}\) satisfies the local decomposability axiom if and only if the decomposition of the \(i\)-th local effect \(h(x_{j},\mathbf{x}_{-j}^{(i)})\) of feature \(\mathbf{x}_{j}\) at \(x_{j}\) solely depends on main and higher-order effects of feature \(\mathbf{x}_{j}\):_
\[h(x_{j},\mathbf{x}_{-j}^{(i)})=g_{j}(x_{j})+\sum_{k=1}^{p-1}\sum_{ \begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)}).\]
The _local decomposability_ axiom must be satisfied by the chosen local feature effect function \(h\). Thus, the \(i\)-th local feature effect \(h(x_{j},\mathbf{x}_{-j}^{(i)})\) for feature \(\mathbf{x}_{j}\) at \(x_{j}\) must be defined such that it only depends on the main effect of feature \(\mathbf{x}_{j}\) as well as the interaction effects between \(\mathbf{x}_{j}\) and all other features in \(-j\). This decomposition is not generally given by every local feature effect method. For example, the decomposition of ICE curves depends not only on effects including the feature of interest \(\mathbf{x}_{j}\) but also on effects of other features, which leads to additive shifts. However, we can usually transform the local feature effects in a meaningful manner to receive the decomposition provided in Axiom 1. ICE curves, for instance, must be mean-centered (see Appendix B.4) to satisfy the _local decomposability_ axiom. If the local feature effect function satisfies Axiom 1, then Theorem 2 guarantees that the loss function defined in Eq. (10) quantifies the interaction-related heterogeneity of local effects (feature interactions) of feature \(\mathbf{x}_{j}\) at \(x_{j}\) within the regarded subspace \(\mathcal{A}_{g}\).
**Theorem 2**: _If the local feature effect function \(h(x_{j},\mathbf{x}_{-j}^{(i)})\) satisfies Axiom 1, then the loss function \(\mathcal{L}\left(\mathcal{A}_{g},x_{j}\right)\) defined in Eq. (10) only depends on feature interactions between the feature \(\mathbf{x}_{j}\) at \(x_{j}\) and features in \(-j\):_
\[\mathcal{L}\left(\mathcal{A}_{g},x_{j}\right)=\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{W})|\mathcal{A}_{g}]\right)^{2}.\]
_The proof can be found in Appendix B.1._
Since we only use features in \(Z\) for splitting and we aggregate the resulting risk over all features \(j\in S\), the objective function in GADGET minimizes the feature interactions between all features in \(S\) and interacting features in \(Z\) (see Theorem 3). Furthermore, if \(Z\) contains all features interacting with any feature in \(S\) (i.e., no feature in \(-Z\) interacts with any feature in \(S\)), then the theoretical minimum of the objective minimized in Algorithm 1 is zero (see Corollary 4). This means that all feature interactions present in feature effects of the features in \(S\) can be reduced such that only main effects of these features remain in each subspace. Thus, the joint feature effect \(f_{S|\mathcal{A}_{g}}\) within each subspace \(\mathcal{A}_{g}\) can be uniquely and additively decomposed into the univariate feature effects \(f_{j|\mathcal{A}_{g}}\):
\[f_{S|\mathcal{A}_{g}}(\mathbf{x}_{S})=\sum_{j\in S}f_{j|\mathcal{A}_{g}}( \mathbf{x}_{j}). \tag{12}\]
However, if \(Z\) is chosen such that the features contained in its complement \(-Z\) interact with features contained in \(S\), then the theoretical minimum of the objective is larger than \(0\). Thus, heterogeneous effects due to feature interactions between features in \(S\) and features in \(-Z\) remain. The approach to choose the subsets \(S\) and \(Z\) is discussed in Section 5.
**Theorem 3**: _If the local feature effect function \(h\) satisfies Axiom 1 and if all features contained in \(Z\) and all features in \(-Z=Z^{\complement}\) are pairwise independent, then the objective function \(\mathcal{I}(t,z)\) of Algorithm 1 based on the loss function in Eq. (10) minimizes feature interactions between features within the subset \(S\) and features in \(Z\), but does not generally minimize feature interactions between features in \(S\) and \(-Z\). Since the partitions found by the GADGET algorithm to minimize the feature interactions of \(S\) only depend on features in \(Z\) and are independent of features in \(-Z\), interactions between each \(j\in S\) and features in \(-Z\) are independent of the partitioning in Algorithm 1:_
\[\left(\sum_{l=1}^{|-Z\setminus j|}\sum_{\begin{subarray}{c}-Z_{l}\subseteq-Z \setminus j,\\ |-Z_{l}|=l\end{subarray}}g_{-Z_{l}\cup j}(x_{j},\mathbf{x}_{-Z_{l}}^{(i)})- \mathds{E}[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]\right) \perp\!\!\!\perp\mathcal{A}_{b}^{t,z}.\]
_The proof can be found in Appendix B.2._
**Corollary 4**: _If the local feature effect function \(h\) satisfies Axiom 1 and if the feature subset \(-Z=Z^{\complement}\) does not contain any features interacting with any \(j\in S\), then the theoretical minimum of the objective function in Algorithm 1 is \(\mathcal{I}(t^{*},z^{*})=0\). The proof can be found in Appendix B.3._
The fulfillment of the _local decomposability_ axiom depends on the definition of the underlying local feature effect function \(h\). In the following sections, we show the validity of this axiom for common feature effect methods and provide estimates as well as visualizations for the resulting regional effect curves and their remaining interaction-related heterogeneity.
### Gadget-Pd
Here, we show the applicability of PD as feature effect method within the GADGET algorithm which we call the GADGET-PD.
**Method.** The PD plot is based on ICE curves (local feature effects) and is one of the most popular global feature effect methods. However, ICE curves of feature \(\mathbf{x}_{j}\) do not satisfy Axiom 1, since the decomposition of the \(i\)-th ICE curve also contains main or interaction effects of the \(i\)-th observation that are independent of feature \(\mathbf{x}_{j}\) (see Appendix B.4). These effects can be cancelled out by centering ICE curves w.r.t. the mean of each curve (i.e., \(\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})]\)). The resulting mean-centered ICE curves satisfy Axiom 1 (see Appendix B.4). Hence, they are chosen as the local feature effect method within GADGET: \(h(x_{j},\mathbf{x}_{-j}^{(i)})=\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)})=\hat{f}(x_{j},\mathbf{x}_{-j}^{(i)})-\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})|\mathcal{A}_{g}]\).
The loss function used within GADGET-PD to minimize the interaction-related heterogeneity is then defined by the variability of mean-centered ICE curves:
\[\mathcal{L}^{PD}\left(\mathcal{A}_{g},x_{j}\right)=\sum_{i:\mathbf{x}^{(i)} \in\mathcal{A}_{g}}\left(\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)})-\mathds{E} [\hat{f}^{c}(x_{j},X_{-j})|\mathcal{A}_{g}]\right)^{2}. \tag{13}\]
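To make this loss concrete, the following minimal Python sketch computes the risk of one feature in one subspace by aggregating the loss in Eq. (13) over a shared grid; the inputs `ice_curves` (one row per observation, evaluated on that grid) and `region_mask` are hypothetical names used only for illustration.

```
import numpy as np

def gadget_pd_risk(ice_curves: np.ndarray, region_mask: np.ndarray) -> float:
    """Interaction-related heterogeneity of one feature within a subspace:
    the loss of Eq. (13), aggregated over all grid points as in Eq. (11).

    ice_curves : (n, m) array; row i is the ICE curve of observation i
                 evaluated on a shared grid of m values of the feature.
    region_mask: boolean array of length n selecting observations in A_g.
    """
    curves = ice_curves[region_mask]
    # mean-center each ICE curve over the grid, removing effects that are
    # independent of the feature of interest
    centered = curves - curves.mean(axis=1, keepdims=True)
    # regional mean-centered PD curve = pointwise average of centered curves
    pd_curve = centered.mean(axis=0)
    # squared deviations from the regional PD curve, summed over
    # observations and grid points
    return float(((centered - pd_curve) ** 2).sum())
```

In this sketch, a risk of zero means that all mean-centered ICE curves in the subspace coincide with the regional PD curve, i.e., no interaction-related heterogeneity of this feature remains.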
By choosing mean-centered ICE curves as local feature effect function \(h\), the loss function in Eq. (13) for GADGET-PD results in the same loss function used within REPID. In Appendix B.4, we show that the REPID method is--for this specific loss function--a special case of GADGET-PD, where we have only one feature of interest (i.e., \(S=j\)) and where we consider all other features to be potential split features (i.e., \(Z=-j\)).
Note that since REPID never splits with regard to the visualized feature of interest \(\mathbf{x}_{j}\), the constant used for mean-centering (i.e., \(\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})|\mathcal{A}_{g}]=\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})]\)) always stays the same. In contrast, for the more general GADGET algorithm, the mean-centering constant depends on how \(S\) and \(Z\) are defined. Thus, the mean-centering constants of features in \(S\) might change if we also use them for splitting. For example, in Figure 3, we use feature \(\mathbf{x}_{3}\) as a feature of interest in \(S\) and as a split feature in \(Z\). In this case, the range of the visualized feature \(\mathbf{x}_{3}\) is also split according to the split point found by the GADGET algorithm. Hence, the expected value conditioned on the new subspace changes (i.e., \(\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})|\mathcal{A}_{g}]\neq\mathds{E}[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})]\)) and thus must be adjusted to avoid additive (non-interaction) effects in the new subspace.
Illustration.Figure 3 visualizes the result when applying GADGET-PD on the uncorrelated simulation example of Section 3 by choosing \(S=Z=\{1,2,3\}\). Hence, we are interested in the feature effect of all available features and consider all features as possible interaction (split) features. GADGET-PD performs one split with regard to \(\mathbf{x}_{3}\approx 0\). Thus, the correct split feature and its corresponding split point are found by GADGET such that the interaction-related heterogeneity of all features in \(S\) is almost completely reduced and only main effects within the subspaces remain.
Decomposition.If \(Z\) contains all features interacting with features in \(S\) and if GADGET is applied such that the theoretical minimum of the objective function is reached, then according to Corollary 4, the joint mean-centered PD function \(f^{PD,c}_{S|\mathcal{A}_{g}}\) within each final subspace \(\mathcal{A}_{g}\) can be decomposed into the respective 1-dimensional mean-centered PD functions:
\[f^{PD,c}_{S|\mathcal{A}_{g}}(\mathbf{x}_{S})=\sum_{j\in S}f^{PD,c}_{j| \mathcal{A}_{g}}(\mathbf{x}_{j}). \tag{14}\]
Eq. (14) is justified by the assumption that no more interactions between features in \(S\) and other features are present in the final regions (Friedman and Popescu, 2008).
Furthermore, if the subset \(-S\) is the subset of features that do not interact with any other features (local feature effects are homogeneous), then according to Eq. (14) and Eq. (4), the prediction function \(\hat{f}_{\mathcal{A}_{g}}\) within each final subspace \(\mathcal{A}_{g}\) can be decomposed into the 1-dimensional mean-centered PD functions of all \(p\) features plus a constant value \(g_{0}\):
\[\hat{f}_{\mathcal{A}_{g}}(\mathbf{x})=g_{0}+\sum_{j=1}^{p}f_{j|\mathcal{A}_{g} }^{PD,c}(\mathbf{x}_{j}).\]
Thus, depending on how we choose the subsets \(S\) and \(Z\) and the extent to which we are able to minimize the interaction-related heterogeneity of feature effects by recursively applying Algorithm 1, we might be able to approximate the prediction function by an additive function of main effects of all features within the final regions.
This can also be shown for the simulation example illustrated in Figure 3, where \(\hat{f}_{2|\mathcal{A}_{g}}^{PD,c}(\mathbf{x}_{2})=0\) (the regional effect of feature \(\mathbf{x}_{2}\) after the split is still 0 and has low interaction-related heterogeneity). Instead, the regional effects of \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\) vary compared to the root node and strongly reduce the interaction-related heterogeneity.
Figure 3: Visualization of applying GADGET with \(S=Z=\{1,2,3\}\) to mean-centered ICE curves of the uncorrelated simulation example of Section 3 with \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) with \(\epsilon\sim\mathbb{N}(0,0.09)\). The upper plots show the mean-centered ICE and PD curves on the entire feature space, while the lower plots represent the respective mean-centered ICE and PD curves after partitioning the feature space w.r.t. \(\mathbf{x}_{3}=-0.003\).
Since the regional effects of \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\) are approximately linear, we can estimate the prediction function within each subspace by
\[\hat{f}_{\mathcal{A}_{l}}(\mathbf{x})=g_{0}+\frac{-3.04}{1.05}\mathbf{x}_{1}+ \frac{0.99}{0.94}\mathbf{x}_{3}=g_{0}-2.9\mathbf{x}_{1}+1.05\mathbf{x}_{3}\]
and
\[\hat{f}_{\mathcal{A}_{r}}(\mathbf{x})=g_{0}+\frac{3.08}{1.05}\mathbf{x}_{1}+ \frac{0.89}{0.83}\mathbf{x}_{3}=g_{0}+2.93\mathbf{x}_{1}+1.07\mathbf{x}_{3},\]
which is a close approximation to the underlying data-generating process and thus provides a better understanding of how the features of interest influence the prediction function compared to only considering the global PD plots.
Note that besides the decomposability property in Eq. (14), each global PD of features in \(S\) is a weighted additive combination of the final regional PDs. Thus, each global PD can be additively decomposed into regional PDs.
Estimates and Visualization.To estimate and visualize the expected regional effect, one can choose between the regional PD curve \(\hat{f}_{j|\mathcal{A}_{g}}^{PD}\) or its mean-centered version \(\hat{f}_{j|\mathcal{A}_{g}}^{PD,c}\) for each feature \(j\in S\), which are calculated by Monte-Carlo integration for each final region \(\mathcal{A}_{g}\). The interaction-related heterogeneity for each feature in \(S\) and final subspace \(\mathcal{A}_{g}\) is measured by the risk function in Eq. (11) with the loss function in Eq. (13). Thus, the interaction-related heterogeneity quantifies the variation of mean-centered ICE curves within each region. For visualization purposes, we calculate the 95% intervals of interaction-related heterogeneity based on this variation. We then suggest a plot for each feature in \(S\) that shows the final regional PD curves and 95% interaction-related heterogeneity intervals (see e.g., Figure 11).
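As a complement to this description, the following sketch (same hypothetical inputs as in the sketch above) estimates the regional mean-centered PD curve by Monte-Carlo averaging and derives pointwise 95% intervals from the variation of the mean-centered ICE curves; the quantile-based band is a simple illustrative choice, not a prescription of the method.

```
import numpy as np

def regional_pd_with_interval(ice_curves, region_mask, level=0.95):
    """Regional mean-centered PD curve plus a pointwise heterogeneity band.

    Returns (pd_curve, lower, upper), each of length m (grid size). The band
    is formed by empirical quantiles of the mean-centered ICE curves within
    the region -- one simple way to display the variation described above.
    """
    curves = ice_curves[region_mask]
    centered = curves - curves.mean(axis=1, keepdims=True)
    pd_curve = centered.mean(axis=0)
    alpha = (1.0 - level) / 2.0
    lower = np.quantile(centered, alpha, axis=0)
    upper = np.quantile(centered, 1.0 - alpha, axis=0)
    return pd_curve, lower, upper
```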
The main issue with local ICE curves and resulting global PD plots is the extrapolation problem when features are correlated, as demonstrated in Section 3. This problem remains for GADGET when mean-centered ICE curves are chosen as local feature effect function \(h\).
### GADGET-ALE
An alternative global feature effect method that allows an additive decomposition of the prediction function and the interpretation of individual feature effects is the ALE plot (Apley and Zhu, 2020), which we summarized in Section 2.3. Here, we show its applicability within the GADGET algorithm, which we term GADGET-ALE.
Method.While ALE curves, in contrast to PD curves, do not suffer from extrapolation, they do not directly entail a local feature effect visualization that shows the heterogeneity induced by feature interactions in the way ICE curves do for PD plots. However, ALE plots are also based on corresponding local feature effects that provide information about the underlying interaction-related heterogeneity and that can be used within the GADGET algorithm to obtain more representative ALE curves in the final regions. These local feature effects are the derivatives of the prediction function w.r.t. the feature of interest \(\mathbf{x}_{j}\) (see Eq. 5). In Appendix B.5, we show that by choosing \(h=\frac{\partial f(x_{j},\mathbf{x}_{-j}^{(i)})}{\partial x_{j}}\), Axiom 1 is met, which leads to the following loss function used within the objective in GADGET-ALE:
\[\mathcal{L}^{ALE}\left(\mathcal{A}_{g},x_{j}\right)=\sum_{\begin{subarray}{c}i :\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\\ \mathbf{x}^{(i)}\in\mathrm{P}(\mathbf{x}_{-j}|x_{j})\end{subarray}}\left( \frac{\partial\hat{f}(x_{j},\mathbf{x}_{-j}^{(i)})}{\partial x_{j}}-\mathrm{ E}\left[\frac{\partial\hat{f}(X_{j},X_{-j})}{\partial x_{j}}\bigg{|}\mathcal{A}_{g} \wedge X_{j}=x_{j}\right]\right)^{2}. \tag{15}\]
The derivatives are calculated by defining \(m\) intervals for feature \(\mathbf{x}_{j}\) and quantifying for each interval the prediction difference between the upper and lower boundary for observations lying in this interval (see Eq. 6). Hence, the loss function in Eq. (15) measures the variance of the derivatives of observations where the feature values of \(\mathbf{x}_{j}\) lie within the boundaries of the regarded interval and, thus, measures the interaction-related heterogeneity of local effects of feature \(\mathbf{x}_{j}\) within this interval. The risk function in Eq. (11) then aggregates this loss over all \(m\) intervals. Note that the conditional expectation in Eq. (15) is the expected conditional derivative (estimated by the average derivative within the regarded interval) and not the ALE curve itself. However, the ALE curve is calculated by integrating the expected conditional derivative up to the regarded value \(x_{j}\) (see Eq. 5).
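A minimal sketch of this interval-based computation is given below; the callable `model_predict` (returning predictions for a feature matrix) and the quantile-based binning are assumptions made for illustration, and the loss of Eq. (15) is aggregated over all intervals as in Eq. (11).

```
import numpy as np

def gadget_ale_risk(model_predict, X, j, n_intervals=10):
    """Interaction-related heterogeneity of feature j for GADGET-ALE:
    within-interval variance of finite-difference derivatives
    (loss of Eq. 15), aggregated over all intervals (Eq. 11)."""
    x = X[:, j]
    # interval boundaries based on quantiles of x_j, as in ALE
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_intervals + 1))
    risk = 0.0
    for k in range(n_intervals):
        lo, hi = edges[k], edges[k + 1]
        in_bin = (x >= lo) & (x <= hi) if k == 0 else (x > lo) & (x <= hi)
        if in_bin.sum() < 2 or hi <= lo:
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, j], X_hi[:, j] = lo, hi
        # finite-difference 'derivative' of each observation in this interval
        deriv = (model_predict(X_hi) - model_predict(X_lo)) / (hi - lo)
        risk += float(((deriv - deriv.mean()) ** 2).sum())
    return risk
```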
Illustration.Figure 4 illustrates the first split for the two simulation examples of Section 3 using GADGET-ALE. The heterogeneity of the local effects (derivatives) before applying GADGET is very high, spanning across negative to positive values (grey boxplots) within each interval. With GADGET, we then partition the feature space w.r.t. one of the features in \(Z\) such that this interaction-related heterogeneity is minimized. When GADGET-ALE is used, for both the uncorrelated and the correlated case, we receive the correct split feature and approximately the correct split point, which clearly shows a high reduction in heterogeneity of these local effects after the first split. While the shapes of the centered ALE curves look very similar to PD plots (see Figure 2) for the uncorrelated case, the ALE curves for the correlated case do not extrapolate and, thus, are more representative for the feature effect with regard to the underlying data distribution.
Decomposition.Analogously to PD plots, ALE plots also contain an additive recovery and, thus, can be decomposed additively into main and interaction effects (see Section 2.3). Furthermore, if \(Z\) is defined such that all features interacting with features in \(S\) are included and if GADGET is applied so that the theoretical minimum of the objective function is reached, then--according to Corollary 4--the joint mean-centered ALE function \(f^{ALE,c}_{S|\mathcal{A}_{g}}\) within each final subspace \(\mathcal{A}_{g}\) can be decomposed into the 1-dimensional mean-centered ALE functions of features in \(S\) (see Eq. 12). Thus, for ALE plots, we might also be able to decompose the prediction function into the regional features' main effects, depending on how we choose the subsets \(S\) and \(Z\) within the GADGET algorithm. More details on the decomposition when using GADGET-ALE and an exemplary illustration of the uncorrelated simulation example of Section 3 can be found in Appendix C.1.
Estimates and Visualization.The regional effect is estimated by the regional centered ALE curve \(\hat{f}^{ALE,c}_{j|\mathcal{A}_{g}}\) for each feature in \(S\) and is calculated as in Eq. (6) for each final subspace \(\mathcal{A}_{g}\). The interaction-related heterogeneity for each feature in \(S\) and final subspace \(\mathcal{A}_{g}\) is measured by the risk function in Eq. (11) with the loss function in Eq. (15). Thus, the interaction-related heterogeneity quantifies the variation of partial derivatives within each subspace. Since the partial derivatives cannot be meaningfully visualized within the ALE plot itself, we suggest visualizing the ALE plot in combination with a plot of the interaction-related heterogeneity, measured by the standard deviation of partial derivatives within each interval, which is inspired by the derivative ICE plots of Goldstein et al. (2015) (see e.g., Figure 12).
### GADGET-SD
While PD and ALE plots visualize the global feature effect for which we define the appropriate local feature effect function (mean-centered ICE curves and derivatives) for the GADGET algorithm, the SD plot (Section 2.3) is comparable to an ICE plot and, thus, does not show the global feature effect itself. Here, we provide an estimate for the global effect and show the applicability of SD within GADGET which we term GADGET-SD.
Method.Herren and Hahn (2022) (amongst others) showed that the Shapley value \(\phi_{j}^{(i)}(x_{j})\) of observation \(i\) for feature value \(x_{j}=\mathbf{x}_{j}^{(i)}\) can be decomposed as defined in Eq. (8) to
\[\phi_{j}^{(i)}(x_{j})=g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W \subseteq-j:|W|=k}g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)}),\]
with \(g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)})=\mathrm{E}[\hat{f}(x_{j},X_{-j})|X_{W}=\mathbf{x}_{W}^{(i)}]-\sum_{V\subset\{W\cup j\}}\mathrm{E}[\hat{f}(X)|X_{V}=\mathbf{x}_{V}^{(i)}]\), which satisfies Axiom 1 (see Appendix B.6).
Figure 4: Visualization of derivatives of \(\hat{f}\) w.r.t. \(\mathbf{x}_{1}\) (top) and respective ALE curves (bottom) for the uncorrelated (left) and correlated case (right) of the simulation example of Section 3, with \(Y=3X_{1}\mathbb{1}_{X_{3}>0}-3X_{1}\mathbb{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) and \(\epsilon\sim\mathbb{N}(0,0.09)\). Derivatives (top) and mean-centered observational values (bottom) are colored w.r.t. the first split when GADGET-ALE with \(S=1\) and \(Z=\{2,3\}\) is applied. The boxplots (top) and lines (bottom) in grey show the variation of derivatives and the global centered ALE curves, respectively. The colored curves show the regional centered ALE curves after the first split with GADGET-ALE.
The global feature effect for the SD plot of feature \(\mathbf{x}_{j}\) at the feature value \(x_{j}=\mathbf{x}_{j}^{(i)}\) can then be defined by
\[f_{j}^{SD}(x_{j})=\mathds{E}_{X_{W}}[\phi_{j}(x_{j})]=g_{j}^{c}(x_{j})+\sum_{k=1 }^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}\mathds{E}[g_{W\cup j}^{c}(x_{j},X _{W})]. \tag{16}\]
Following from that, according to Theorem 2, the respective loss function used in GADGET depends only on interaction effects between \(\mathbf{x}_{j}\) and features in \(-j\) and is given by
\[\mathcal{L}^{SD}(\mathcal{A}_{g},x_{j})=\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\mathbf{x}_{j}^{(i)}=x_{j}}\left(\phi_{j}^{(i)}(x_{j})-\mathds{E}_{X_{W}}[\phi_{j}(x_{j})|\mathcal{A}_{g}]\right)^{2}. \tag{17}\]
For more details, see Appendix B.
While there exist estimators for the global effect in PD and ALE plots, a counterpart for the SD plot has not been introduced yet. Hence, a suitable estimator for the expected value of Shapley values of feature \(\mathbf{x}_{j}\) in Eq. (17) must be chosen. Here, we use univariate GAMs with splines to estimate the expected value.15 Thus, the estimated GAM for feature \(\mathbf{x}_{j}\) represents the regional SD feature effect of feature \(\mathbf{x}_{j}\) within subspace \(\mathcal{A}_{g}\).
Footnote 15: Splines are functions that are defined in a piece-wise manner by polynomials. Splines are often preferred over polynomial regression, since they provide more flexibility with already low-order polynomials.
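To illustrate this estimation step, the following sketch fits a spline smoother (a simple stand-in for the univariate GAM mentioned above) to the Shapley values of one feature within a region; the scikit-learn based implementation and the parameters `n_knots` and `degree` are assumptions made here, not part of the original method.

```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

def fit_regional_sd_effect(x_j, phi_j, n_knots=5, degree=3):
    """Fit a spline smoother to the Shapley values of one feature within a
    region; return the fitted curve and the residual sum of squares, i.e.,
    the loss of Eq. (17) evaluated with this estimate of the expected
    Shapley value."""
    x_j = np.asarray(x_j).reshape(-1, 1)
    phi_j = np.asarray(phi_j)
    model = make_pipeline(SplineTransformer(n_knots=n_knots, degree=degree),
                          LinearRegression())
    model.fit(x_j, phi_j)
    residuals = phi_j - model.predict(x_j)
    return model, float((residuals ** 2).sum())
```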
Illustration.Figure 5 visualizes the SD plot for feature \(\mathbf{x}_{1}\) of the simulation example described in Section 3. Similarly to the least-square estimate in linear regression, we search for the GAM that minimizes the squared distance (\(\Delta^{2}\)) of the Shapley values of \(\mathbf{x}_{1}\). With GADGET, we now split such that the fitted GAMs within the two new subspaces minimize the squared distances between them and the Shapley values within the respective subspace. Since the GAMs are fitted on the Shapley values (local feature effects), in contrast to PDs, they do not extrapolate with regard to \(\mathbf{x}_{1}\) in the correlated scenario. However, as defined in the beginning of this section, Shapley values are based on expected values that must be estimated. If they are calculated using the interventional approach (as we do here), it is still possible that the predictions considered in the Shapley values extrapolate in sparse regions. Hence, the definition of Shapley values via expected values differs from those of ICE curves and derivatives for ALE, which are based on local predictions. Similarly to estimating the mean-centering constants for ICE curves, it follows that the expected values within the Shapley value estimation must consider the regarded subspace \(\mathcal{A}_{g}\) to acknowledge the full heterogeneity reduction due to interactions within each subspace. This means that we must recalculate the Shapley values after each partitioning step for each new subspace to receive regional SD effects in the final subspaces that are representative of the underlying main effects within each subspace. However, without recalculating the conditional expected values, we still minimize the unconditional expected value (i.e., the feature interactions on the entire feature space). The two different approaches lead to a different acknowledgment of interaction effects. In general, it can be said that the faster approach without recalculation will be less likely to detect feature interactions of higher order compared to the exact approach with recalculation. The differences of the two approaches for the simulation example of Section 3 are explained in Appendix C.2 and further discussed on a more complex simulation example in Section 6.2.
Decomposition.The decomposition of Shapley values differs from that of mean-centered ICE curves in the way that feature interactions in Shapley values are fairly distributed between involved features, which leads to decreasing weights the higher the order of the interaction effect (see Eq. 8). In contrast, all interaction terms receive the same weight as the main effects in ICE curves. Hence, interactions of high order lead to less heterogeneity in SD plots compared to ICE plots. However, by using the interventional approach to calculate Shapley values, they can be decomposed by weighted PD functions (see Eq. 8 and Herren and Hahn, 2022). Hence, the same decomposition rules as defined for PD plots apply for the SD feature effect, as defined in Eq. (16). Meaning, if \(Z\) contains all features interacting with features in \(S\), and if GADGET is applied such that the theoretical minimum of the objective function is reached, then according to Corollary 4, the regional joint SD effect of features in \(S\) can be decomposed into 1-dimensional regional SD effect functions, as in Eq. (12). In this special case and if Shapley values are estimated by the interventional approach, it can be shown that \(f^{SD,c}_{j|\mathcal{A}_{g}}=f^{PD,c}_{j|\mathcal{A}_{g}}\) (see Appendix C.2).
Again, we might be able to decompose the prediction function into the regional features' main effects, depending on how we choose the feature subsets \(S\) and \(Z\) within the GADGET algorithm. More details and an exemplary illustration of the uncorrelated simulation example of Section 3 can be found in Appendix C.2.
Estimates and Visualization.The regional SD effect is estimated by a GAM for each feature in \(S\) and final region \(\mathcal{A}_{g}\).
Figure 5: Visualization of Shapley values \(\phi_{1}\) w.r.t. \(\mathbf{x}_{1}\) for the uncorrelated (left) and correlated case (right) of the simulation example of Section 3, with \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) and \(\epsilon\sim\mathbb{N}(0,0.09)\). Shapley values (points) and the respective regional feature effect curves (GAMs) are colored w.r.t. the first split when GADGET-SD with \(S=1\) and \(Z=\{2,3\}\) is applied. The grey curve represents the global feature effect based on the entire feature space. \(\Delta_{0}^{2}\) represents the squared distance between a given Shapley value and the global SD curve (grey), while \(\Delta_{1}^{2}\) measures the squared distance between this Shapley value and the respective regional SD curve. The GAMs are fitted such that these squared distances over all Shapley values in the respective regions are minimized.
The interaction-related heterogeneity for each feature in \(S\) and final subspace \(\mathcal{A}_{g}\) is measured by the risk function in Eq. (11) with the loss function in Eq. (17). The risk function quantifies the variation of Shapley values of feature \(\mathbf{x}_{j}\) within each region. We visualize the interaction-related heterogeneity by the Shapley values that are recalculated conditioned on the subspace \(\mathcal{A}_{g}\) (see, e.g., Figure 18 in Appendix E).
An overview of introduced estimates and visualization techniques for the different effect methods can be found in Appendix C.3. Moreover, an explanation of how categorical features are handled within GADGET for each of the presented feature effect methods is provided in Appendix C.4.
### Quantifying Feature Interactions
Since GADGET minimizes the interaction-related heterogeneity based on the underlying feature effect method, we can quantify feature interactions by measuring the heterogeneity reduction in each partitioning step. We introduce several measures (inspired by Herbinger et al., 2022) to gain more insights into learned feature interactions and the meaningfulness of the final regional effects based on the interaction-related heterogeneity reduction.
Analysis of Single Partitioning Steps.We can quantify the extent to which interaction-related heterogeneity has been reduced in each of the features in \(S\) after splitting according to the chosen split feature \(z\in Z\). Here, we denote \(G\) to be the total number of nodes after applying GADGET. Furthermore, we denote \(P\) to be the index of a parent node and the indices \(l\) and \(r\) to be the respective child nodes. We can quantify the relative interaction-related heterogeneity reduction after the respective partitioning step for feature \(j\in S\) by
\[I(\mathcal{A}_{P},\mathbf{x}_{j})=\frac{\mathcal{R}(\mathcal{A}_{P}, \mathbf{x}_{j})-\mathcal{R}(\mathcal{A}_{l},\mathbf{x}_{j})-\mathcal{R}( \mathcal{A}_{r},\mathbf{x}_{j})}{\mathcal{R}(\mathcal{X},\mathbf{x}_{j})},\]
meaning that we quantify the risk reduction of the regarded split relative to the risk on the entire feature space (root node) for one feature of interest \(j\in S\). For example, in Figure 3, the interaction-related heterogeneity reduction of the first split (\(I(\mathcal{X},\mathbf{x}_{1})\)) for feature \(\mathbf{x}_{1}\) is \(0.986\), which means that almost all of the interaction-related heterogeneity of \(\mathbf{x}_{1}\) is reduced after the first split.
Analysis of Single Split Features.Instead of considering only a single partitioning step, one might be more interested in how much interaction-related heterogeneity reduction a specific split feature \(z\in Z\) is responsible for w.r.t. all performed partitioning steps of an entire tree. To that end, we can quantify (either for each feature of interest in \(S\) or for all features in \(S\) together) the relative amount of heterogeneity reduction due to feature \(z\). Hence, we receive \(I_{z,j}(\mathbf{x}_{j})\) by summing up \(I(\mathcal{A}_{P},\mathbf{x}_{j})\) for the set of parent subspaces that used feature \(z\) as split feature and that we denote by \(\mathcal{B}_{z}\subset\{\mathcal{A}_{1},\ldots,\mathcal{A}_{G}\}\):
\[I_{z,j}(\mathbf{x}_{j})=\sum_{\mathcal{A}_{P}\in\mathcal{B}_{z}}I(\mathcal{A} _{P},\mathbf{x}_{j}).\]
We obtain the overall interaction-related heterogeneity reduction \(I_{z}\) of the \(z\)-th split feature by first aggregating over all features in \(S\):
\[I_{z}=\frac{\sum_{\mathcal{A}_{P}\in\mathcal{B}_{z}}\sum_{j\in S}(\mathcal{R} (\mathcal{A}_{P},\mathbf{x}_{j})-\mathcal{R}(\mathcal{A}_{l},\mathbf{x}_{j}) -\mathcal{R}(\mathcal{A}_{r},\mathbf{x}_{j}))}{\sum_{j\in S}\mathcal{R}( \mathcal{X},\mathbf{x}_{j})}.\]
In our previous example, we obtain \(I_{3,1}(\mathbf{x}_{1})=I(\mathcal{X},\mathbf{x}_{1})=0.986\), since \(z=3\) was only used once for splitting, while the interaction-related heterogeneity reduction of all three features is \(I_{3}=0.99\) for \(z=3\).
Goodness of Fit.A further aggregation level would be to sum up \(I_{z,j}(\mathbf{x}_{j})\) over all \(z\in Z\) and, thus, receive the interaction-related heterogeneity reduction for feature \(\mathbf{x}_{j}\) between the entire feature space and the final subspaces (hence, over the entire tree). This is related to the concept of \(R^{2}\), which is a well-known measure in statistics to quantify the goodness of fit. We apply this concept here to quantify how well the final regional effect curves (in the final subspaces \(\mathcal{B}_{t}\subset\{\mathcal{A}_{1},\ldots,\mathcal{A}_{G}\}\)) fit the underlying local effects compared to the global feature effect curve on the entire feature space. We distinguish between the feature-related \(R^{2}_{j}\), which represents the goodness of fit for the feature effects of feature \(\mathbf{x}_{j}\):
\[R^{2}_{j}=\sum_{z\in Z}I_{z,j}(\mathbf{x}_{j})=1-\frac{\sum_{\mathcal{A}_{t} \in\mathcal{B}_{t}}\mathcal{R}(\mathcal{A}_{t},\mathbf{x}_{j})}{\mathcal{R}( \mathcal{X},\mathbf{x}_{j})},\]
and the \(R^{2}_{Tot}\), which quantifies the goodness of fit for the feature effects of all features in \(S\):
\[R^{2}_{Tot}=\sum_{z\in Z}I_{z}=1-\frac{\sum_{\mathcal{A}_{t}\in\mathcal{B}_{t} }\sum_{j\in S}\mathcal{R}(\mathcal{A}_{t},\mathbf{x}_{j})}{\sum_{j\in S} \mathcal{R}(\mathcal{X},\mathbf{x}_{j})}.\]
Both \(R^{2}\) measures take values between 0 and 1, with values close to 1 signalling that almost all heterogeneity in the final subspaces compared to the entire feature space has been reduced--either for a specific feature of interest (\(R^{2}_{j}\)) or for all features of interest (\(R^{2}_{Tot}\)). In our example, \(R^{2}_{1}\) for feature \(\mathbf{x}_{1}\) is the same as for \(I_{3,1}(\mathbf{x}_{1})\), since GADGET performed only one split. The total interaction-related heterogeneity reduction over all features in \(S\) is \(R^{2}_{Tot}=I_{3}=0.99\). Hence, the interaction-related heterogeneity of all features in \(S\) has been reduced by 99% after the first split.
These measures provide a tool set to better understand how features interact with each other and how well the final regional effect plots represent the underlying local effects.
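The measures above translate directly into a few small helper functions. The sketch below assumes that the risks \(\mathcal{R}(\cdot,\mathbf{x}_{j})\) have already been computed (e.g., with one of the loss functions of Section 4) and only illustrates the bookkeeping; the function and argument names are our own.

```
def split_reduction(risk_parent, risk_left, risk_right, risk_root):
    """I(A_P, x_j): relative heterogeneity reduction of one split for one
    feature of interest, with all risks computed as in Eq. (11)."""
    return (risk_parent - risk_left - risk_right) / risk_root

def r2_feature(risk_root, risks_in_leaves):
    """R^2_j: total heterogeneity reduction of one feature over the tree;
    risks_in_leaves lists that feature's risks in the final subspaces."""
    return 1.0 - sum(risks_in_leaves) / risk_root

def r2_total(risk_root_per_feature, risks_in_leaves_per_feature):
    """R^2_Tot over all features in S; both arguments are dicts keyed by
    feature index (the second maps to a list of leaf risks)."""
    num = sum(sum(v) for v in risks_in_leaves_per_feature.values())
    den = sum(risk_root_per_feature.values())
    return 1.0 - num / den
```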
### Choosing Stop Criteria
The question of how many partitioning steps should be performed depends on the underlying research question. If the user is more interested in reducing the interaction-related heterogeneity as much as possible, they might split rather deeply, depending on the complexity of interactions learned by the model. However, this might lead to many regions that are more challenging to interpret. If the user is more interested in a small number of regions, they might prefer a shallow tree, thus reducing only the heterogeneity of the features that interact the most.
Here, we suggest the following stopping criteria to control the number of partitioning steps in GADGET: First, we could choose common hyperparameters of a decision tree, like the tree depth or the minimum number of observations per leaf node. Another option is to apply an early stop mechanism based on the interaction-related heterogeneity reduction--either in each split or in total. According to our proposed split-wise measure, a further split is only performed if the relative improvement of the split to be performed is at least
\(\gamma\in[0,1]\) times the total relative interaction-related heterogeneity reduction of the previous split: \(\gamma\times\frac{\sum_{j\in S}(\mathcal{R}(\mathcal{A}_{P},\mathbf{x}_{j}))- \mathcal{I}(\hat{t},\hat{z})}{\sum_{j\in S}(\mathcal{R}(\mathcal{X},\mathbf{x}_{ j}))}\). Another possibility is to stop splitting as soon as a pre-defined total reduction of heterogeneity (\(R_{Tot}^{2}\)) is reached. In general, it holds that the higher we choose \(\gamma\) and the lower we choose the threshold for \(R_{Tot}^{2}\), the fewer partitioning steps will be performed, and vice versa.
## 5 Significance Test for Global Feature Interactions
In addition to the hyperparameters for early stopping, there are two more hyperparameters in Algorithm 1 that must be specified--namely, the features of interest contained in \(S\) and the interacting (splitting) feature subset \(Z\). Choosing the features to be contained in \(S\) and \(Z\) strongly depends on the underlying research question. If the user is interested in how a specific set of features (\(S\)) influences the model's predictions depending on another user-defined feature set (\(Z\)), then \(S\) and \(Z\) are chosen based on domain knowledge. However, the user does not know which feature effects and interactions were inherently learned by the ML model. Thus, choosing \(S\) and \(Z\) based on domain knowledge does not guarantee that all interacting features are considered and that we can additively decompose the prediction function into univariate feature effects.
Thus, if our goal is to minimize feature interactions between all features to additively decompose the prediction function into mainly univariate effects, we can define \(S=Z=\{1,\ldots,p\}\). With that choice, we aim to reduce the overall interaction-related heterogeneity in all features (\(S\)), since we also consider all features to be possible interacting features (\(Z\)). However, this choice has two disadvantages. First, we must loop over all features and possible split points in each partitioning step, which may be slow in medium- or high-dimensional settings. This might also lead to less stable results, since only a few features might interact with each other, although many more features are considered for splitting. Second, if features are correlated, we might obtain spurious interactions (which we do not want to consider, since we are only interested in true interaction effects). Hence, to solely split according to feature interactions and only measure the interaction-related heterogeneity reduction, we must define beforehand the subset of features that actually interact. Since features interact with each other and all involved features will usually show heterogeneity in their local effects while being responsible for the heterogeneity of other involved features, we will usually choose \(S=Z\).16 Thus, we will furthermore only use \(S\) as the globally interacting feature subset to be defined.
Footnote 16: \(Z\) might differ in the case of non-symmetrical interactions (see Section 7).
Since the H-Statistic (as defined in Section 2.4) is a global interaction measure, it could be used to define \(S\) by choosing all features with a high H-Statistic value. However, the question remains of which value is considered "high". Furthermore, the H-Statistic is based on PDs. Thus, the interacting features are always chosen depending on the interaction quantification of PDs, which might differ from other feature effect methods (e.g., see the discussion of Shapley values versus ICE curves in Section 4.5). Since it is based on PDs, the H-Statistic might also suffer from detecting spurious interactions (Sorokina et al., 2008).
We introduce a new statistical permutation interaction test (PINT) that is inspired by the permutation importance (PIMP) algorithm of Altmann et al. (2010) to test for
significant feature importance values. The goal of PINT (see Algorithm 2) is to define the feature subset \(S\) that contains all features that significantly interact with each other, w.r.t. a predefined significance level \(\alpha\) and a chosen feature effect method. With the risk function defined in Eq. (11) and based on the chosen local feature effect method, we can quantify the interaction-related heterogeneity of each feature \(\mathbf{x}_{j}\) within the feature space \(\mathcal{X}\). Due to correlations between features, the estimated heterogeneity might also include spurious interactions. Thus, we must define a null distribution to determine which heterogeneity is actually due to feature interactions (i.e., significant w.r.t. the null distribution) and which heterogeneity is due to other reasons, such as correlations or noise. This is achieved by permuting the target variable \(y\) (line 4 in Algorithm 2). With that, we break the association between the features and the target variable, but the underlying data structure remains. We refit the given ML model based on the data set \(\tilde{\mathcal{D}}\) with the permuted target variable and calculate the respective risk \(\tilde{\mathcal{R}}_{j}\) for each feature \(\mathbf{x}_{j}\) based on the chosen feature effect method \(h\) (lines 5-8 in Algorithm 2). This is repeated \(s\) times, producing \(s\) permuted risk values that represent the null distribution for the unpermuted risk value \(\mathcal{R}(\mathcal{X},\mathbf{x}_{j})\) of the \(j\)-th feature. Then, we perform a statistical test based on the null hypothesis \(H_{0}:\mathcal{R}(\mathcal{X},\mathbf{x}_{j})\leq\tilde{\mathcal{R}}_{j}^{(s (1-\alpha))}\). Hence, if the unpermuted risk value is larger than the \((1-\alpha)\)-quantile of the null distribution, then the \(j\)-th feature is significant w.r.t. the defined \(\alpha\)-level and belongs to the interacting feature subset \(S\) (lines 11-18 in Algorithm 2).
```
1:input: data set \(\mathcal{D}\), prediction function \(\hat{f}\), number of permutations \(s\), risk function \(\mathcal{R}\), significance level \(\alpha\)
2:output: feature subset \(S\)
3:for\(k\in\{1,\ldots,s\}\)do
4: permute \(y\) of \(\mathcal{D}\) denoted by \(\tilde{y}^{k}\) and \(\tilde{\mathcal{D}}^{k}=\{\mathbf{x},\tilde{y}^{k}\}\)
5: refit model on \(\tilde{\mathcal{D}}^{k}\) to obtain the prediction function \(\tilde{f}^{k}\)
6:for\(j\in\{1,\ldots,p\}\)do
7: calculate risk \(\tilde{\mathcal{R}}_{j}^{(k)}=\tilde{\mathcal{R}}^{k}(\mathcal{X},\mathbf{x}_{j})\) for the \(j\)-th feature based on \(\tilde{\mathcal{D}}^{k}\) and \(\tilde{f}^{k}\)
8:endfor
9:endfor
10:for\(j\in\{1,\ldots,p\}\)do
11: a) calculate risk \(\mathcal{R}(\mathcal{X},\mathbf{x}_{j})\) for the \(j\)-th feature based on \(\mathcal{D}\) and \(\hat{f}\)
12: b) sort \(\tilde{\mathcal{R}}_{j}\) in increasing order
13: c) determine the \((1-\alpha)\)-quantile of permuted risk values \(z_{j}^{1-\alpha}=\tilde{\mathcal{R}}_{j}^{(s(1-\alpha))}\)
14:if\(\mathcal{R}(\mathcal{X},\mathbf{x}_{j})>z_{j}^{1-\alpha}\)then
15:\(j\in S\)
16:else
17:\(j\notin S\)
18:endif
19:endfor
```
**Algorithm 2** PINT
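A compact Python sketch of Algorithm 2 is given below; `fit_model` and `risk_per_feature` are hypothetical callables standing in for the model refit and for the risk computation of Eq. (11) with the chosen local feature effect method.

```
import numpy as np

def pint(X, y, fit_model, risk_per_feature, n_perm=100, alpha=0.05, seed=0):
    """Permutation interaction test in the spirit of Algorithm 2.

    fit_model(X, y)            -> fitted prediction function
    risk_per_feature(model, X) -> length-p array with R(X, x_j) for every
                                  feature (Eq. 11), based on the chosen
                                  local feature effect method.
    Returns the indices of features whose observed risk exceeds the
    (1 - alpha)-quantile of their permutation null distribution.
    """
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    null_risks = np.empty((n_perm, p))
    for k in range(n_perm):
        y_perm = rng.permutation(y)        # break the feature-target link
        model_perm = fit_model(X, y_perm)  # refit on the permuted target
        null_risks[k] = risk_per_feature(model_perm, X)
    observed = risk_per_feature(fit_model(X, y), X)
    thresholds = np.quantile(null_risks, 1.0 - alpha, axis=0)
    return [j for j in range(p) if observed[j] > thresholds[j]]
```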
We illustrate the performance of PINT compared to the H-Statistic on an example which might be affected by spurious interactions. Therefore, we consider four features with \(X_{1},X_{2},X_{4}\sim\)
\(U(-1,1)\) and \(X_{3}=X_{2}+\epsilon\) with \(\epsilon\sim N(0,0.09)\). We draw \(n=\{300,500\}\) observations of these four random variables and assume the following relationship between \(\mathbf{y}\) and \(\mathbf{x}\): \(\mathbf{y}=\mathbf{x}_{1}+\mathbf{x}_{2}+\mathbf{x}_{3}-2\mathbf{x}_{1}\mathbf{ x}_{2}\). Hence, \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) have a negative linear interaction effect, while \(\mathbf{x}_{2}\) and \(\mathbf{x}_{3}\) are highly linearly positively correlated but do not interact. Thus, there might be a spurious interaction between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\). We calculate PINT and the H-Statistic for all features using a support vector machine (SVM) as the underlying ML model (with specifications defined in Section 6.3). We repeat the experiment 30 times. Figure 8 shows that for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), the PINT test is significant for almost all repetitions and effect methods, while it is never significant for features \(\mathbf{x}_{3}\) and \(\mathbf{x}_{4}\) w.r.t. a significance level of \(\alpha=0.05\). Hence, besides a few exceptions for ALE, the PINT algorithm returns the correct interacting feature subset \(S=\{1,2\}\). The H-Statistic shows that \(\mathbf{x}_{1}\) interacts most with all other features, then \(\mathbf{x}_{2}\), followed by \(\mathbf{x}_{3}\) and \(\mathbf{x}_{4}\) in the rankings. Depending on which threshold is chosen, one would possibly include the non-interacting feature \(\mathbf{x}_{3}\) in \(S\), which shows over all repetitions values ranging from 0.1 to 0.2--possibly due to spurious interactions.
Hence, PINT not only allows to more clearly and (in this example) correctly define the subset \(S\) than the H-Statistic, but PINT also has the advantage that it can be used with any feature effect method that we use for GADGET. Thus, PINT can be applied according to the objective we want to minimize. This is analyzed in more detail in Section 6.
PINT also entails two hyperparameters \(\alpha\) and \(s\) that must be specified. The significance level \(\alpha\) can be chosen depending on the underlying research question. If we are only interested in a small set of very strong interactions, we choose \(\alpha\) to be very small. If we want to find all (also small) interaction effects, we choose \(\alpha\) to be larger. However, a larger significance level might also lead to detecting spurious interactions. The number of permutations \(s\) should be chosen to be as high as possible in order to obtain accurate results. However, since PINT must refit the model within each permutation, the computational burden increases with more permutations. One possible solution to address this trade-off for PIMP was proposed by Altmann et al. (2010), where the authors use a smaller number of permutations (e.g., 100) to approximate the empirical null distribution and then fit a theoretical distribution (e.g., normal, log-normal or gamma distribution) to the empirical distribution. Based on a Kolmogorov-Smirnov test, the theoretical distribution that best fits the empirical distribution is selected to approximate the null distribution. If the Kolmogorov-Smirnov test is significant for all of the theoretical distributions (i.e., none of them provides an adequate fit), then the empirical distribution is used as the null distribution.17 This approach can similarly be applied for PINT. More suggestions to decrease the computational burden of PINT are provided in Appendix D.
Footnote 17: Note that Altmann et al. (2010) do not adjust for multiple testing, which can lead to false positives in higher dimensions (Molnar et al., 2022).
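The following sketch illustrates this PIMP-style speed-up for a single feature; the candidate distributions and the use of scipy's fitting and Kolmogorov-Smirnov routines are our own illustrative choices.

```
from scipy import stats

def approximate_null(perm_risks, candidates=("norm", "lognorm", "gamma")):
    """Fit candidate parametric distributions to the permuted risks of one
    feature and keep the best-fitting one (largest Kolmogorov-Smirnov
    p-value) if it is not rejected; otherwise return None to signal that
    the empirical distribution should be used instead."""
    best_dist, best_p = None, 0.0
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(perm_risks)
        p_value = stats.kstest(perm_risks, name, args=params).pvalue
        if p_value > best_p:
            best_dist, best_p = dist(*params), p_value
    return best_dist if best_p > 0.05 else None
```

The \((1-\alpha)\)-quantile of the selected parametric distribution then replaces the empirical quantile used in Algorithm 2.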
## 6 Simulations
In this section, we analyze different hypotheses to (1) empirically validate that GADGET generally minimizes feature interactions and (2) show how different characteristics of the underlying data--such as correlations between features and different settings of the data-generating process--might influence GADGET, depending on the chosen feature effect method. The structure of the following sections and the concrete definition of the hypotheses are based on (2). However, the simulation examples themselves are designed in such a way
that we know the underlying ground-truth of feature interactions for each example. Thus, we are able to empirically validate that GADGET generally minimizes feature interactions.
### Extrapolation
Hypothesis.For increasing correlation between features, due to extrapolation, we receive results that are less stable for methods using the marginal distribution for integration (especially PD, as shown in Section 3, but also SD) compared to methods using the conditional distribution (such as ALE). Note that in this context, "stability" refers to the ability of the different methods to find the correct split feature and split point in GADGET over various repetitions.
Experimental Setting.To address this hypothesis, we choose the (simple) simulation example of Section 3 and compare GADGET based on PD, ALE, and SD for four different correlation coefficients \(\rho_{13}\) between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\)--\(0,0.4,0.7\), and \(0.9\). The data is generated as follows: Let \(X_{2},X_{3}\sim\mathbb{U}(-1,1)\) be independently distributed and \(X_{1}=c\cdot X_{3}+(1-c)\cdot Z\) with \(Z\sim\mathbb{U}(-1,1)\), where \(c\) takes values between \(0\) and \(0.7\), which correspond to the above-mentioned \(\rho_{13}\) values. The true underlying relationship is defined as before by \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) with \(\epsilon\sim\mathbb{N}(0,0.09)\). We draw \(1000\) observations and fit a GAM with correctly specified main and interaction effects as well as an NN with the previously defined specifications to the data. We repeat the experiment \(30\) times.
We apply GADGET to each setting and model within each repetition using PD, ALE, and SD as feature effect methods. We consider all features as features of interest as well as potential interacting (split) features, meaning \(S=Z=\{1,2,3\}\). As stopping criteria, we choose a maximum depth of \(6\), minimum number of observations per leaf of \(40\), and set the improvement parameter \(\gamma\) to \(0.2\).
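For reference, the data-generating process of this experiment can be reproduced with a few lines of numpy; interpreting the second argument of \(\mathbb{N}(0,0.09)\) as the variance is an assumption of this sketch, and the function name is our own.

```
import numpy as np

def simulate_extrapolation_data(n=1000, c=0.0, seed=0):
    """One draw from the data-generating process of this experiment:
    X2, X3 ~ U(-1, 1), X1 = c*X3 + (1 - c)*Z with Z ~ U(-1, 1), and
    Y = 3*X1*1(X3 > 0) - 3*X1*1(X3 <= 0) + X3 + eps."""
    rng = np.random.default_rng(seed)
    x2 = rng.uniform(-1, 1, n)
    x3 = rng.uniform(-1, 1, n)
    z = rng.uniform(-1, 1, n)
    x1 = c * x3 + (1 - c) * z
    eps = rng.normal(0.0, np.sqrt(0.09), n)  # 0.09 read as the variance
    y = 3 * x1 * (x3 > 0) - 3 * x1 * (x3 <= 0) + x3 + eps
    return np.column_stack([x1, x2, x3]), y
```

Varying `c` between 0 and 0.7 yields the four correlation levels considered in this setting.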
Results.Figure 6 shows that independent of the model or correlation degree, \(\mathbf{x}_{2}\) has (correctly) never been considered as split feature. For correlation strengths \(\rho_{13}\) between \(0\) and \(0.7\), \(\mathbf{x}_{3}\) is always chosen as the only split feature, with \(I_{3}\) taking values between \(0.75\) and \(1\) and, thus, reducing most of the interaction-related heterogeneity of all features with one split. Thus, for low to medium correlations, there are only minor differences between the various feature effect methods and models. However, the observed behavior changes substantially for \(\rho_{13}=0.9\). For the correctly-specified GAM, we still receive quite consistent results, apart from one repetition where \(\mathbf{x}_{1}\) is chosen as the split feature when PD is used. In contrast, there is more variation of \(I_{3}\) when the NN is considered as the underlying ML model, especially when SD is used. For SD, \(\mathbf{x}_{1}\) is chosen once for splitting, and for PD, this is also the case for \(30\%\) of all repetitions (see Table 1). While using ALE, GADGET always correctly performs one split with regard to \(\mathbf{x}_{3}\). Additionally, GADGET performs a second split once when PD is used and \(7\) times when SD is used.
Thus, for a high correlation between features, we already observe in this very simple setting that the extrapolation problem influences the splitting within GADGET for effect methods based on marginal distributions, while methods based on conditional distributions such as ALE are less affected. This is particularly relevant for learners that model very locally (e.g., NNs) and thus, tend to have wiggly prediction functions and oscillate in extrapolating regions.
This is also notable in the split value range, which increases for the NN in the case of high correlations for PD and SD but not for ALE (see Table 1).
### Higher-Order Effects
Hypothesis.While SD puts less weight on interactions with increasing order, all interactions (independent of the order) receive the same weight in PD and ALE.
\begin{table}
\begin{tabular}{|l c|c c c|c c c|c|} \hline & & \multicolumn{3}{c|}{Split feature \(\mathbf{x}_{3}\) in \%} & \multicolumn{3}{c|}{Split value range} & \multicolumn{1}{c|}{MSE} \\ \hline Model & \(\rho_{13}\) & ALE & PD & SD & ALE & PD & SD & mean (sd) \\ \hline GAM & 0 & 1.00 & 1.00 & 1.00 & 0.10 & 0.10 & 0.12 & 0.329 (0.008) \\ GAM & 0.4 & 1.00 & 1.00 & 1.00 & 0.13 & 0.10 & 0.10 & 0.201 (0.003) \\ GAM & 0.7 & 1.00 & 1.00 & 1.00 & 0.09 & 0.10 & 0.09 & 0.137 (0.002) \\ GAM & 0.9 & 1.00 & 0.97 & 1.00 & 0.11 & 0.11 & 0.12 & 0.107 (0.001) \\ \hline NN & 0 & 1.00 & 1.00 & 1.00 & 0.10 & 0.09 & 0.09 & 0.152 (0.018) \\ NN & 0.4 & 1.00 & 1.00 & 1.00 & 0.08 & 0.08 & 0.07 & 0.124 (0.007) \\ NN & 0.7 & 1.00 & 1.00 & 1.00 & 0.11 & 0.10 & 0.10 & 0.111 (0.005) \\ NN & 0.9 & 1.00 & 0.70 & 0.97 & 0.10 & 0.19 & 0.15 & 0.104 (0.003) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of how often \(\mathbf{x}_{3}\) was chosen as the first split feature over all 30 repetitions, including the respective split value range. The last column shows the test performance of the respective model by the mean (standard deviation) of the mean squared error (MSE).
Figure 6: Boxplots showing the interaction-related heterogeneity reduction \(I_{z}\) per split feature over 30 repetitions when PD, ALE, or SD is used in GADGET. The columns show the results depending on the correlation between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\), while the rows show the respective boxplots when GADGET is applied, based on the predictions of a correctly-specified GAM (upper) and of an NN (lower).
Hence, when GADGET-SD is used, we might not be able to detect interactions of high order--especially when the approximation without recalculation after each split is used (see Section 4.5). However, it should be more likely to detect these interactions with PD and ALE.
Experimental Setting.To investigate this hypothesis, we consider \(5\) features with \(X_{1}\sim\mathbb{U}(0,1)\) and \(X_{2},X_{3},X_{4},X_{5}\sim\mathbb{U}(-1,1)\) and draw \(1000\) samples. The data-generating process is defined by a series of interactions between different features: \(y=f(\mathbf{x})+\epsilon\) with \(f(\mathbf{x})=\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{3}\leq 0}\mathbbm{1}_{\mathbf{x}_{4}>0}+4\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{3}\leq 0}\mathbbm{1}_{\mathbf{x}_{4}\leq 0}-\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{3}>0}\mathbbm{1}_{\mathbf{x}_{5}\leq 0}\mathbbm{1}_{\mathbf{x}_{2}\leq 0}-5\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{3}>0}\mathbbm{1}_{\mathbf{x}_{5}>0}\) and \(\epsilon\sim\mathrm{N}(0,0.01\cdot\sigma^{2}(f(\mathbf{x})))\). These interactions can be seen as one hierarchical structure between all features, where the slope of \(\mathbf{x}_{1}\) depends on the subspace defined by the interacting features (see Figure 7).
We fit an xgboost (XGB) model with correctly-specified feature interactions and a random forest (RF) on the data set and apply GADGET using PD, ALE, SD with recalculation after each split, and SD without recalculation on each model. We repeat the experiment \(30\) times, where the XGB showed an average (standard deviation) MSE of \(0.068(0.009)\) and the RF of \(0.121(0.012)\) on a separate test set of the same distribution. For GADGET, we consider one feature of interest \(S=1\) and all other features as potential interacting features \(Z=\{2,3,4,5\}\). As stop criteria, we choose a maximum tree depth of \(7\), minimum number of observations of \(40\), and \(\gamma=0.1\). We can assume that if the underlying model learned the effects of the data-generating process correctly, GADGET must split as shown in Figure 7 to maximally reduce the interaction-related heterogeneity of \(\mathbf{x}_{1}\).
Results.For the first partitioning step, all methods used \(\mathbf{x}_{3}\) as the first split feature in all repetitions. Table 2 shows that a second-level split was performed in only \(10\%\) of the repetitions for XGB when SD without recalculation is used within GADGET, while the second-level split frequency for all other methods is approximately \(90\%\). A similar trend is observable when RF is used as underlying ML model but with higher variation in the relative frequencies, which might be due to different learned effects. If further splits for the second depth of the tree are performed, the correct split features \(\mathbf{x}_{4}\) and \(\mathbf{x}_{5}\) are always chosen by all methods, which shows that GADGET generally minimizes feature interactions (see Figure 7). Table 3 shows the same difference in relative frequencies between SD without recalculation and all other methods for the third-level splits as for the previous splits. This is confirmed by Table 4, which shows that while SD without recalculation demonstrates a high variation of final subspaces, the median number of subspaces is \(2\), indicating that this method stops after the first split (with \(\mathbf{x}_{3}\)). The other methods show a higher number of final subspaces; PD and SD with recalculation typically show the correct final number of subspaces (i.e., \(5\)), while ALE tends to split slightly deeper.
To summarize, when SD without recalculation is used, the two-way interaction with \(\mathbf{x}_{3}\) is primarily detected, while features of a third (\(\mathbf{x}_{4}\) and \(\mathbf{x}_{5}\)) or fourth order (\(\mathbf{x}_{2}\)) interaction are rarely considered for splitting. By contrast, for most repetitions of the other three methods, we find that interactions of a higher order are also detected. This supports our hypothesis regarding higher-order effects and the theoretical differences of the considered feature effect methods.
Note that due to recalculating the Shapley values after each split, the order of interactions is reduced. For example, recalculating Shapley values in the subspace \(\{\mathcal{X}|\mathbf{x}_{3}\leq 0\}\) reduces the three-way interaction \(\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{3}\leq 0}\mathbbm{1}_{\mathbf{x}_{4}>0}\) to the two-way interaction \(\mathbf{x}_{1}\cdot\mathbbm{1}_{\mathbf{x}_{4}>0}\) and,
thus, the weight of the interaction increases for the next split. Consequently, we receive similar results for these settings as we do for PD. However, note that the recalculation can be computationally expensive.
\begin{table}
\begin{tabular}{|l l|r r r|} \hline \multicolumn{2}{|c|}{} & \multicolumn{3}{|c|}{No. of Subspaces} \\ \hline Model & Method & Min & Max & Med \\ \hline XGB & ALE & 3 & 15 & 7 \\ XGB & PD & 2 & 7 & 5 \\ XGB & SD\_not\_rc & 2 & 7 & 2 \\ XGB & SD\_rc & 2 & 8 & 5 \\ RF & ALE & 3 & 16 & 9 \\ RF & PD & 2 & 11 & 5 \\ RF & SD\_not\_rc & 2 & 11 & 2 \\ RF & SD\_rc & 3 & 12 & 5 \\ \hline \end{tabular}
\end{table}
Table 4: Minimum, maximum and median number of final subspaces over 30 repetitions after GADGET is applied with ALE, PD, SD_not_rc and SD_rc.
### PINT vs. H-Statistic: Spurious Interactions
Hypothesis.When using PINT to pre-select \(S=Z\subseteq\{1,\ldots,p\}\), we are more likely to actually split according to feature interactions compared to when choosing all features (\(S=Z=\{1,\ldots,p\}\)) or using the H-Statistic values for pre-selection, especially when potential spurious interactions are present.
Experimental Setting.We use the following simulation example described previously in Section 5: We consider four features with \(X_{1},X_{2},X_{4}\sim U(-1,1)\) and \(X_{3}=X_{2}+\epsilon\) with \(\epsilon\sim N(0,0.09)\). For 30 repetitions, we draw \(n=\{300,500\}\) observations and create the dependent variable, including a potential spurious interaction between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\): \(\mathbf{y}=\mathbf{x}_{1}+\mathbf{x}_{2}+\mathbf{x}_{3}-2\mathbf{x}_{1}\mathbf{x}_{2}\). For each sample size \(n\) and each repetition, we fit an SVM based on a radial basis function (RBF) kernel with cost parameter \(C=1\) and choose the inverse kernel width based on the data. We receive very similar model performance values--measured by the mean (standard deviation) of the MSE on a separate test set of size \(100000\) of the same distribution--of \(0.028(0.010)\) and \(0.027(0.010)\) for \(n=300\) and \(n=500\), respectively.
We calculate PINT using PD, ALE, and SD for each repetition and sample size with \(s=100\) and \(\alpha=0.05\) by approximating the null distribution as described in Section 5 and Altmann et al. (2010). We apply GADGET using PD, ALE, and SD with recalculation, where \(S=Z\) is based on the feature subset chosen by PINT for the respective feature effect method. We compare these results with considering all features as features of interest and potential split features (i.e. \(S=Z=\{1,\ldots,p\}\)). We use a maximum tree depth of 6, minimum number of observations of 40, and \(\gamma=0.15\) as stop criteria.
Results.The two left plots in Figure 8 show that \(\mathbf{x}_{3}\) and \(\mathbf{x}_{4}\) are always correctly identified as insignificant (according to the chosen \(\alpha\) level), while the interacting features \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are always significant and thus considered in \(S\) (apart from a few exceptions for ALE). The sample size does not seem to have a clear influence on PINT in this setting. The right plots in Figure 8 show that even the H-Statistic values of the non-influential and uncorrelated feature \(\mathbf{x}_{4}\) are larger than 0 for both sample sizes. The H-Statistic values of \(\mathbf{x}_{3}\) might support considering \(\mathbf{x}_{3}\) in \(S\), depending on which threshold is chosen. Since this choice is not clear for the H-Statistic, it is not very suitable as a pre-selection method for GADGET.
Figure 9 illustrates that we correctly only consider \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) when PINT is applied upfront, while GADGET also splits w.r.t. \(\mathbf{x}_{3}\) for all settings and (in some cases) even w.r.t. \(\mathbf{x}_{4}\) if PINT is not applied upfront. The influence of these two features seems to be higher for the smaller sample size according to \(I_{z}\). Note that of the various effect methods used in GADGET, PD and SD attribute most of the heterogeneity reduction to \(\mathbf{x}_{1}\) and a small part to \(\mathbf{x}_{2}\), while ALE attributes the heterogeneity more equally between the two split features. A possible explanation might be the correlation between \(\mathbf{x}_{2}\) and \(\mathbf{x}_{3}\), which particularly affects the two methods based on marginal distributions (i.e., PD and SD). Furthermore, Table 5 shows that we tend to obtain shallower trees when we use PINT upfront, while we retain the level of heterogeneity reduction by obtaining similar \(R_{j}^{2}\) values for the two interacting features \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\).
Thus, PINT reduces the number of features to consider in the interacting feature subset for GADGET depending on the regarded feature effect method. This leads to better results in GADGET in the sense that we only split w.r.t. truly interacting features defined by PINT and receive shallower (and thus, more interpretable) trees.
Figure 8: Boxplots showing the distribution of p-values of each feature for different sample sizes and effect methods over all repetitions when PINT is applied (left) and the distribution of feature-wise H-Statistic values for both sample sizes (right).
Figure 9: Boxplots showing the interaction-related heterogeneity reduction \(I_{z}\) per split feature over 30 repetitions when PD, ALE, or SD is used within GADGET. The rows show the results for the two different sample sizes, and the columns indicate if GADGET is used based on all features without using PINT upfront (left) or GADGET is used based on the feature subset resulting from PINT (right).
## 7 Real-World Applications
In this section, we show the usefulness of our introduced methodology on two real-world application examples. These examples demonstrate that we can obtain more insights into the learned effects and interactions of the underlying model and potentially detect bias in the data or model.
COMPAS Data Set.Due to potential subjective judgement and a resulting bias in the decision-making process of the criminal justice system (Blair et al., 2004), ML models have been used to predict recidivism of defendants to provide a more objective guidance for judges. However, if the underlying training data is biased (e.g., different socioeconomic groups have been treated differently for the same crime in the past), the ML model might learn the underlying data bias and, due to its black-box nature, explanations for its decision-making process and a potential recourse are harder to achieve (Fisher et al., 2019).
We want to use GADGET here to obtain more insights into how different characteristics of the defendant and their criminal record influence the risk of recidivism within different subgroups, and whether "inadmissible" characteristics such as ethnicity or gender cause a different risk evaluation. For our analysis, we use a data set for predicting the risk of violent recidivism gathered by ProPublica (Larson et al., 2016) from the Broward County Clerk's Office, the Broward County Sheriff's Office, and the Florida Department of Corrections, based on the commercial black-box model of Northpointe Inc., called the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) data set.
Following Fisher et al. (2019), we choose the three admissible features (1) age of the defendant, (2) number of prior crimes, and (3) whether the crime is considered a felony versus misdemeanor, and the two "inadmissible" features (1) ethnicity and (2) gender of the defendant.
| n | Method | PINT | \(R_{1}^{2}\) mean (sd) | \(R_{2}^{2}\) mean (sd) | Subspaces Min | Subspaces Max | Subspaces Median |
|---|---|---|---|---|---|---|---|
| 300 | ALE | no | 0.83 (0.05) | 0.83 (0.05) | 4 | 10 | 7.50 |
| 300 | ALE | yes | 0.83 (0.07) | 0.82 (0.05) | 1 | 10 | 7.00 |
| 300 | PD | no | 0.94 (0.02) | 0.84 (0.06) | 2 | 7 | 4.00 |
| 300 | PD | yes | 0.92 (0.03) | 0.80 (0.08) | 2 | 5 | 3.00 |
| 300 | SD | no | 0.93 (0.04) | 0.90 (0.05) | 2 | 11 | 5.00 |
| 300 | SD | yes | 0.93 (0.04) | 0.90 (0.05) | 2 | 11 | 4.50 |
| 500 | ALE | no | 0.83 (0.04) | 0.83 (0.06) | 4 | 10 | 7.00 |
| 500 | ALE | yes | 0.86 (0.04) | 0.82 (0.05) | 1 | 11 | 6.50 |
| 500 | PD | no | 0.94 (0.03) | 0.84 (0.06) | 2 | 7 | 4.00 |
| 500 | PD | yes | 0.93 (0.03) | 0.82 (0.08) | 2 | 5 | 4.00 |
| 500 | SD | no | 0.93 (0.04) | 0.90 (0.05) | 2 | 11 | 5.00 |
| 500 | SD | yes | 0.93 (0.04) | 0.90 (0.05) | 2 | 9 | 5.00 |
Table 5: Interaction-related heterogeneity reduction per feature for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) by mean (standard deviation) of \(R_{j}^{2}\) and minimum, maximum and median number of final subspaces after applying GADGET based on different sample sizes, effect methods and with and without using PINT upfront.
the subset of African-American and Caucasian defendants and apply the data pre-processing steps suggested by ProPublica and applied in Fisher et al. (2019), which leaves us with 3373 defendants of the original pool of 4743 defendants. We consider a binary target variable indicating a high (\(=1\)) or low (\(=0\)) recidivism risk, which is based on a categorization of ProPublica. We perform our analysis on the full data set, using a tuned SVM.18
Footnote 18: More details on the model selection process can be found in Appendix E.
Since the features do not show high correlations, we use ICE and PD for our analysis.19 Figure 10 shows that the average predicted risk visibly decreases with age, while the predicted risk first steeply increases with the number of prior crimes until 10 and then slightly decreases for higher values. The PD values for the type of crime do not differ substantially. When considering the two "inadmissible" features, there is on average a slightly higher risk of recidivism for African-American versus Caucasian and female versus male defendants. For all features, we can observe highly differing local effects. In particular, the heterogeneous ICE curves for age and number of prior crimes indicate potential feature interactions.
Footnote 19: We received similar results by using SD instead of PD within GADGET, see Appendix E.
We first apply PINT with PD to define our subset \(S\) for GADGET. To that end, we choose \(s=200\) and \(\alpha=0.05\) and approximate the null distribution using the procedure described in Section 5. All five features are significant w.r.t. the chosen significance level, and thus we choose \(S=Z=\{age,crime,ethnicity,gender,priors\_count\}\) for GADGET. We apply GADGET with a maximum depth of 3 and \(\gamma=0.15\). The effect plots for the four resulting regions are shown in Figure 11. GADGET performed the first split according to age and the splits on the second depth according to the number of prior crimes. The total interaction-related heterogeneity reduction is \(R_{Tot}^{2}=0.86\). The highest heterogeneity reduction is given by age and number of prior crimes, which interact the most (see Figure 11). For defendants around 20 years old, the predicted risk is generally very high. However, for defendants with a small number of prior crimes, the predicted risk decreases very quickly with increasing age, reaching a low risk and remaining so for people older than 32. In contrast, for defendants with more than four prior crimes, the predicted risk decreases only slowly with increasing age. The regional feature effects of the number of prior crimes show that the interaction-related heterogeneity is small for the subgroup of younger people, while
Figure 10: ICE and PD curves of considered features of the COMPAS application example.
some heterogeneity still remains for defendants older than 32. Additionally, the interaction-related heterogeneity of the three binary features (ethnicity, gender, and crime severity) was reduced, indicating an interaction between each of them and age as well as the number of prior crimes. While the effect on the predicted risk only differs slightly between the categories of the three binary features for older defendants with a lower number of prior crimes and for younger defendants with a high number of prior crimes, greater differences were observed for the other two subgroups. The overall difference in predicted risk for the two inadmissible features of ethnicity and gender seems to be especially high for people older than 32 with a higher number of prior crimes as well as for people younger than 32 with a lower number of prior crimes.
Thus, our analysis has uncovered a potentially learned bias regarding the ethnicity and gender of the defendant, which may result in more severe predicted risks of recidivism for some defendants than for others. Note that we applied GADGET to an ML model that is fitted on the COMPAS scores and not to the commercial COMPAS model itself. Consequently, we are not able to draw conclusions about the learned effects of the underlying commercial black-box model. However, GADGET is model-agnostic and can be applied to any accessible black-box model.
Bikesharing Data Set. As a second application example, we choose the bikesharing data set provided in James et al. (2022). The target variable of this regression task is the hourly count of rented bikes for the years 2011 and 2012 in the Capital bikeshare system. The goal is to predict the hourly number of rented bikes based on seasonal as well as weather information. We include ten features in our model for this prediction task: the day and hour the bike was rented, whether the current day is a working day, the season, and the number of casual bikers. The weather-related features we include are normalized temperature, perceived temperature, wind speed, humidity, and the weather situation (categorical, e.g., clear).
We fit an RF on the data set and use the training data for our further analysis.20
Figure 11: Regional PD plots for all features of the COMPAS application after applying GADGET. The grey areas of the numeric features and the error bars of the categorical features indicate the 95% interaction-related heterogeneity interval as defined in Section 4.3 and Appendix C.3.
Again, we first apply PINT (as done in the COMPAS example above) with \(s=200\) and \(\alpha=0.05\) on all features to define the interacting feature subset \(S\) for GADGET. Since some of the features--such as season, temperature, and perceived temperature--are correlated, we use ALE for our analysis. While the features hour and workingday are highly significant, the p-values of all other features are close to 1, indicating that only the heterogeneity of local effects for hour and workingday are caused by interactions. Hence, we define \(S=Z=\{hr,workingday\}\) and apply GADGET with a maximum depth of 3 and \(\gamma=0.15\). GADGET splits once w.r.t. the binary feature workingday, which reduces the interaction-related heterogeneity of the two features by \(R_{Tot}^{2}=0.88\). The middle plots of Figure 12 show the regional ALE plots after applying GADGET. High peaks are prominently visible on working days during rush hours, while there is a drop during noon and afternoon hours. However, on non-working days, the trend is the opposite. This interaction is not visible in the global ALE plot of the feature hour (left plot).
The interaction-related heterogeneity of the feature hour is reduced compared to the global plot, although there is still some variation apparent for working days. From a domain perspective, we might also consider an interaction of the temperature with hour and working day (as done in Hiabu et al., 2023). Thus, we include temperature in feature subsets \(S\) and \(Z\) and apply GADGET again with the same settings as described above. The feature workingday governs the first split. However, for the region of working days, GADGET splits again according to temperature, as shown in the right plot of Figure 12. While the interaction-related heterogeneity of feature temperature was barely reduced within GADGET (\(R_{j}^{2}=0.03\))--which supports the results of PINT--using temperature in the subset of splitting features \(Z\) further reduced the interaction-related heterogeneity of hour by 15%. Thus, feature interactions can be asymmetric, and extending \(Z\) based on domain knowledge might be a valid option in some cases.
Figure 12: Global (left), regional ALE plots after applying GADGET for feature hour when \(S=Z=\{hr,workingday\}\) (middle) and when \(S=Z=\{hr,workingday,temp\}\) (right) of the bikesharing application. The upper plots show the interaction-related heterogeneity as defined in Section 4.4 and Appendix C.3.
## 8 Conclusion and Discussion
We introduced the new framework GADGET, which partitions the feature space into interpretable and distinct regions such that feature interactions of any subset of features are minimized. This allows joint effects to be decomposed into the features' main effects within the identified regions. GADGET can be used with any feature effect method that satisfies the _local decomposability_ axiom (Axiom 1). We showed applicability to the well-known global feature effect methods--namely PD, ALE, and SD--and provided respective estimates and visualizations for regional feature effects and interaction-related heterogeneity. Furthermore, we introduced different measures to quantify and analyze feature interactions by the reduction of interaction-related heterogeneity within GADGET. To define the interacting feature subset, we introduced PINT, a novel permutation-based significance test to detect global feature interactions that is applicable to any feature effect method used within GADGET.
Our experiments showed that PINT is often able to detect the true interacting feature subset in the presence of spurious interactions and that this pre-selection thus leads to more meaningful and interpretable results in GADGET. Moreover, feature effect methods within GADGET that are based on the conditional distribution tend to yield more stable results than methods based on the marginal distribution when features are highly correlated. Furthermore, due to a different weighting scheme in the decomposition of Shapley values compared to the other considered feature effect methods, higher-order terms are less likely to be detected by GADGET, especially if Shapley values are not recalculated after each partitioning step. Such a recalculation might be computationally expensive, which can be seen as a possible limitation. However, recent research has focused on fast approximation techniques of Shapley values (e.g., Lundberg and Lee, 2017; Jethani et al., 2021; Lundberg et al., 2020; Chau et al., 2022) and may offer solutions to overcome this limitation.
In general, our proposed method works well if learned feature interactions are not overly local and if observations can be grouped based on feature interactions such that local feature effects within the groups are homogeneous while showing heterogeneous feature effects between different groups. With that, we can avoid an aggregation bias of global feature effect methods, obtain more insights into the learned effects, and may detect potential biases within different subgroups (as illustrated in Section 7). One of the real-world examples also showed that the frequently-made assumption of symmetric feature interactions (e.g., in Shapley values) does not always hold (see also Masoomi et al., 2023, for research on asymmetrical feature interactions). Thus, including domain knowledge to define the interacting feature subset \(Z\) might sometimes be meaningful.
This work has been partially supported by the Bavarian Ministry of Economic Affairs, Regional Development and Energy as part of the program "Bayerischen Verbundförderprogramms (BayVFP) - Förderlinie Digitalisierung - Förderbereich Informations- und Kommunikationstechnik" under the grant DIK-2106-0007 // DIK0260/02. The authors of this work take full responsibility for its content.
## Appendix A Details on M Plot
The M plot for feature \(\mathbf{x}_{j}\) at feature value \(x_{j}\) is defined as the marginal effect of feature \(\mathbf{x}_{j}\) using the conditional distribution (compared to the marginal distribution used in PD functions):
\[f_{j}^{M}(x_{j})=\mathds{E}[\hat{f}(X_{j},X_{-j})|X_{j}=x_{j}]=\int\hat{f}(x_{j },\mathbf{x}_{-j})d\mathds{P}(\mathbf{x}_{-j}|x_{j}).\]
Hence, similar to PD plots, the local feature effects underlying M plots are also predictions at a specific feature value of \(\mathbf{x}_{j}\), but w.r.t. the conditional distribution \(\mathds{P}(\mathbf{x}_{-j}|x_{j})\). Therefore, M plots can be seen as an average over ICE curves that is restricted according to the given correlation structure. This leads to the inclusion of the feature effects of correlated features, as illustrated in the following example.
We draw 1000 samples of two multivariate normally distributed random variables \(X_{1}\) and \(X_{2}\) with \(\mu_{1}=\mu_{2}=0\), \(\sigma_{1}=\sigma_{2}=1\) and \(\sigma_{12}=0.9\). The true data-generating process is given by \(y=-\mathbf{x}_{1}+2\mathbf{x}_{2}+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,0.2)\). We train a linear model on the given data set. Based on the learned effects of the linear model, we would assume that the feature \(\mathbf{x}_{1}\) has a negative influence on the predictions, as shown by the PD plot in Figure 13 (left). On the other hand, the M Plot accounts for the effect of the correlated feature \(\mathbf{x}_{2}\), which has a positive effect and (in absolute terms) a higher influence on the predictions than \(\mathbf{x}_{1}\), which leads to a positive slope in the right plot of Figure 13.
Figure 13: PD plot (left) and M Plot (right) for feature \(\mathbf{x}_{1}\).
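The following minimal sketch (our illustration, not the code used to produce Figure 13; it assumes NumPy and scikit-learn, and approximates the conditional distribution with a simple window of width 0.25, which is an arbitrary choice) reproduces this example and contrasts the two estimates:

```python
# Sketch of the PD vs. M plot example above: x1, x2 bivariate normal with
# correlation 0.9, y = -x1 + 2*x2 + noise, fitted with a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
X = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=n)
y = -X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.2, size=n)  # noise term
model = LinearRegression().fit(X, y)

grid = np.linspace(-1.5, 1.5, 13)

# PD: average prediction over the *marginal* distribution of x2
pd_curve = [model.predict(np.column_stack((np.full(n, g), X[:, 1]))).mean() for g in grid]

# M plot: average prediction over the *conditional* distribution of x2 given x1 near g,
# approximated by averaging predictions of observations with x1 in a small window around g
def m_effect(g, width=0.25):
    mask = np.abs(X[:, 0] - g) < width
    return model.predict(X[mask]).mean()

m_curve = [m_effect(g) for g in grid]

# The PD slope is close to the coefficient of x1 (about -1), while the M plot slope is
# close to -1 + 0.9 * 2 = 0.8, since it absorbs part of the effect of the correlated x2.
print(np.polyfit(grid, pd_curve, 1)[0], np.polyfit(grid, m_curve, 1)[0])
```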
## Appendix B Theoretical Evidence of GADGET
In this appendix, we provide the proofs for the theorems and the corollary of Section 4.2. Furthermore, we show the applicability of the feature effect methods PD, ALE, and SD within the GADGET algorithm by defining respective local feature effect functions that fulfill Axiom 1.
### Proof of Theorem 2
_Proof Sketch_ If the function \(\hat{f}(\mathbf{x})\) can be decomposed as in Eq. (1) and if Axiom 1 holds for the local feature effect function \(h\), then the main effect of feature \(\mathbf{x}_{j}\) at \(x_{j}\) is cancelled
out within the loss function in Eq. (10). Thus, the loss function measures the interaction-related heterogeneity of feature \(\mathbf{x}_{j}\) at \(x_{j}\), since the variability of local effects in the subspace \(\mathcal{A}_{g}\) is only based on feature interactions between the \(j\)-th feature and features in \(-j\).
**Proof**
\[\mathcal{L}\left(\mathcal{A}_{g},x_{j}\right) = \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(h(x_{j},\mathbf{x}_{-j}^{(i)})-\mathds{E}[h(x_{j},X_{-j})|\mathcal{A}_{g}]\right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(g_{j}(x_{j})+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}\Big[g_{j}(x_{j})+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup j}(x_{j},X_{W})\Big|\mathcal{A}_{g}\Big]\right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{W})|\mathcal{A}_{g}]\right)^{2}\]
### Proof of Theorem 3
_Proof Sketch_ We show in two steps that the objective in Algorithm 1 minimizes interaction-related heterogeneity between features in \(S\) and \(Z\). First, if the function \(\hat{f}(\mathbf{x})\) can be decomposed as in Eq. (1) and if Axiom 1 holds for the local feature effect function \(h\), then we can show (based on Theorem 2) that the objective \(\mathcal{I}(t,z)\) in Algorithm 1 is defined by feature interactions between each feature \(j\in S\) and features in \(-j\). Second, since we only consider features in \(Z\) for splitting and thus minimize the interaction-related heterogeneity of features in \(S\), we can show that feature interactions between features in \(S\) and features in \(-Z\) are independent of the partitioning in Algorithm 1 (if all features contained in \(Z\) and all features in \(-Z\) are pair-wise independent) and thus are not directly minimized in the objective \(\mathcal{I}(t,z)\).
**Proof**
\[\mathcal{I}(t,z) = \sum_{j\in S}\sum_{b\in\{l,r\}}\mathcal{R}(\mathcal{A}_{b}^{t,z},\mathbf{x}_{j})\] \[\stackrel{\text{Eq.}\eqref{eq:2}}{=} \sum_{j\in S}\sum_{b\in\{l,r\}}\sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\mathcal{L}(\mathcal{A}_{b}^{t,z},x_{j}^{(k)})\] \[\stackrel{\text{Eq.}\eqref{eq:2}}{=} \sum_{j\in S}\sum_{b\in\{l,r\}}\sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\left(h(x_{j}^{(k)},\mathbf{x}_{-j}^{(i)})-\mathds{E}[h(x_{j}^{(k)},X_{-j})|\mathcal{A}_{b}^{t,z}]\right)^{2}\] \[\stackrel{\text{Thm.~2}}{=} \sum_{j\in S}\sum_{b\in\{l,r\}}\sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\left(\sum_{l=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=l\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{W})|\mathcal{A}_{b}^{t,z}]\right)^{2}\]
The objective function \(\mathcal{I}(t,z)\) is defined by the interaction-related heterogeneity between all features \(j\in S\subseteq\{1,\ldots,p\}\) and features in \(-j\) for the sum of the left and right subspace \(\mathcal{A}_{t}^{t,z}\) and \(\mathcal{A}_{t}^{t,z}\). Since the aim is to minimize this term, we choose the split feature \(z\in Z\subseteq\{1,\ldots,p\}\) and split point \(t\), which minimizes the sum of the risk values. However, since we only split w.r.t. features in \(Z\) and not w.r.t. feature in \(-Z\), the objective focuses on minimizing the heterogeneity between features in \(S\) and features in \(Z\), while interactions between features in \(S\) and features in \(-Z\) are generally not minimized, since the heterogeneity is independent of the subspace \(\mathcal{A}_{b}^{t,z}\).21 This can be shown by decomposing the risk function for features \(j\in S\) into interaction effects between feature \(\mathbf{x}_{j}\) and features in \(Z\) (first term), interaction effects between feature \(\mathbf{x}_{j}\) and features in \(-j\) with at least one feature of the subset \(Z\) and at least one feature of the subset \(-Z\) (second term), and interaction effects between feature \(\mathbf{x}_{j}\) and features in \(-Z\) (third term):
Footnote 21: This might be different if features in \(Z\) and \(-Z\) are highly correlated.
\[\mathcal{R}(\mathcal{A}_{b}^{t,z},\mathbf{x}_{j}) = \sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\left(\sum_{l=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j\\ |W|=l\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{W})|\mathcal{A}_{b}^{t,z}]\right)^{2}\] \[= \sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\left(\left(\sum_{l=1}^{|Z\setminus j|}\sum_{\begin{subarray}{c}Z_{l}\subseteq Z\setminus j,\\ |Z_{l}|=l\end{subarray}}g_{Z_{l}\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)})-\mathds{E}[g_{Z_{l}\cup j}(x_{j},X_{Z_{l}})|\mathcal{A}_{b}^{t,z}]\right)\right.\] \[+ \left(\sum_{l=2}^{p-1}\sum_{\begin{subarray}{c}W\subset-j\\ \wedge\,\exists Z_{l}\subseteq Z\setminus j:Z_{l}\subset W\\ \wedge\,\exists\,{-Z_{l}}\subseteq-Z\setminus j:{-Z_{l}}\subset W,\\ |W|=l\end{subarray}}g_{W\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)},\mathbf{x}_{-Z_{l}}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{Z_{l}},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]\right)\] \[+ \left.\left(\sum_{l=1}^{|-Z\setminus j|}\sum_{\begin{subarray}{c}-Z_{l}\subseteq-Z\setminus j,\\ |-Z_{l}|=l\end{subarray}}g_{-Z_{l}\cup j}(x_{j},\mathbf{x}_{-Z_{l}}^{(i)})-\mathds{E}[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]\right)\right)^{2}\]
While the first two terms contain feature interactions between \(j\) and features in \(Z\), the last term does not. Furthermore, this term is (at least, in an uncorrelated setting) independent of the regarded subspace \(\mathcal{A}_{b}^{t,z}\) and, thus, is not directly minimized by the given objective:
\[\left(\sum_{l=1}^{|-Z\setminus j|}\sum_{\begin{subarray}{c}-Z_{l}\subset-Z \setminus j,\\ |-Z_{l}|=l\end{subarray}}g_{-Z_{l}\cup j}(x_{j},\mathbf{x}_{-Z_{l}}^{(i)})- \mathds{E}[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]\right) \perp\!\!\!\perp\mathcal{A}_{b}^{t,z}\]
since \(\mathds{E}[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]=\mathds{E }[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})]\).
### Proof of Corollary 4
_Proof Sketch_ Based on Theorem 3, we can show that the theoretical minimum of the objective is \(\mathcal{I}(t^{*},z^{*})=0\) if no feature in \(S\) interacts with any feature in \(-Z\). In the following
proof, we apply the same decomposition of the risk function \(\mathcal{R}(\mathcal{A}_{b}^{t,z},\mathbf{x}_{j})\), as done in the proof of Theorem 3. Since we assume that no feature in \(S\) interacts with any feature in \(-Z\), the second and third term--which contain interactions between these two feature subsets--are zero. Hence, the risk function is only defined by feature interactions between features in \(S\) and \(Z\), which are minimized by the objective in Algorithm 1.
If the feature subset \(Z\) contains all features interacting with features in \(S\), and hence no feature in \(-Z\) interacts with any feature in \(S\), then (w.r.t. the decomposition of the risk function in the proof of Theorem 3) the risk function for feature \(\mathbf{x}_{j}\) within a subspace \(\mathcal{A}_{b}^{t,z}\) reduces to the variance of feature interactions between feature \(\mathbf{x}_{j}\) and features in \(Z\):
\[\mathcal{R}(\mathcal{A}_{b}^{t,z},\mathbf{x}_{j}) = \sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\Bigg(\Bigg(\sum_{l=1}^{|Z\setminus j|}\sum_{\begin{subarray}{c}Z_{l}\subseteq Z\setminus j,\\ |Z_{l}|=l\end{subarray}}g_{Z_{l}\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)})-\mathds{E}[g_{Z_{l}\cup j}(x_{j},X_{Z_{l}})|\mathcal{A}_{b}^{t,z}]\Bigg)\] \[+ \Bigg(\sum_{l=2}^{p-1}\sum_{\begin{subarray}{c}W\subset-j\\ \wedge\,\exists Z_{l}\subseteq Z\setminus j:Z_{l}\subset W\\ \wedge\,\exists\,{-Z_{l}}\subseteq-Z\setminus j:{-Z_{l}}\subset W,\\ |W|=l\end{subarray}}\underbrace{g_{W\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)},\mathbf{x}_{-Z_{l}}^{(i)})-\mathds{E}[g_{W\cup j}(x_{j},X_{Z_{l}},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]}_{=0}\Bigg)\] \[+ \Bigg(\sum_{l=1}^{|-Z\setminus j|}\sum_{\begin{subarray}{c}-Z_{l}\subseteq-Z\setminus j,\\ |-Z_{l}|=l\end{subarray}}\underbrace{g_{-Z_{l}\cup j}(x_{j},\mathbf{x}_{-Z_{l}}^{(i)})-\mathds{E}[g_{-Z_{l}\cup j}(x_{j},X_{-Z_{l}})|\mathcal{A}_{b}^{t,z}]}_{=0}\Bigg)\Bigg)^{2}\] \[= \sum_{\begin{subarray}{c}k:k\in\{1,\ldots,m\}\\ \wedge x_{j}^{(k)}\in\mathcal{A}_{b}^{t,z}\end{subarray}}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{b}^{t,z}}\left(\sum_{l=1}^{|Z\setminus j|}\sum_{\begin{subarray}{c}Z_{l}\subseteq Z\setminus j,\\ |Z_{l}|=l\end{subarray}}g_{Z_{l}\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)})-\mathds{E}[g_{Z_{l}\cup j}(x_{j},X_{Z_{l}})|\mathcal{A}_{b}^{t,z}]\right)^{2}\]
Since the objective is defined such that it minimizes these interactions for all \(j\in S\) by splitting the feature space w.r.t. features in \(Z\), we can split deep enough to achieve \(g_{Z_{l}\cup j}(x_{j},\mathbf{x}_{Z_{l}}^{(i)})=\mathrm{E}[g_{Z_{l}\cup j}(x_{ j},X_{Z_{l}})|\mathcal{A}_{b}^{t,z}]\) for all terms within the sums of the risk function and for all \(j\in S\). In other words, the individual interaction effect is equal to the expected interaction effect within a subspace, and thus the theoretical minimum of the objective is \(\mathcal{I}(t^{*},z^{*})=0\). \(\blacksquare\)
### Applicability of PD within GADGET
Here, we show how \(h\) must be defined to fulfill Axiom 1 defined in Section 4.2 for the feature effect method PD.
_Local Decomposition_: The local feature effects underlying PD are ICE curves. The \(i\)-th ICE curve of feature \(\mathbf{x}_{j}\) can be decomposed as follows:
\[\hat{f}(\mathbf{x}_{j},\mathbf{x}_{-j}^{(i)}) =\underbrace{g_{0}}_{\text{constant term}}+\underbrace{g_{j}(\mathbf{x}_{j})}_{\text{main effect of }\mathbf{x}_{j}}+\underbrace{\sum_{k\in-j}g_{k}(\mathbf{x}_{k}^{(i)})}_{\text{main effects of all other features for observation }i}+\underbrace{\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}(\mathbf{x}_{j},\mathbf{x}_{W}^{(i)})}_{\text{interactions between }\mathbf{x}_{j}\text{ and features in }-j\text{ for observation }i}+\underbrace{\sum_{k=2}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W}\big{(}\mathbf{x}_{W}^{(i)}\big{)}}_{k\text{-order interactions between features in }-j\text{ for observation }i}\]
However, this decomposition of the local feature effect of \(\mathbf{x}_{j}\) contains not only feature effects that depend on \(\mathbf{x}_{j}\), but also other effects (e.g., \(i\)-th main effects of features in \(-j\)), and thus Axiom 1 is not fulfilled by ICE curves. However, by mean-centering ICE curves, constant and feature effects independent of \(\mathbf{x}_{j}\) are cancelled out, and thus \(\hat{f}^{c}(\mathbf{x}_{j},\mathbf{x}_{-j}^{(i)})\) can be decomposed into the mean-centered main effect of \(\mathbf{x}_{j}\) and the \(i\)-th mean-centered interaction effect between \(\mathbf{x}_{j}\) and features in \(-j\). Hence, the mean-centered ICE of feature \(\mathbf{x}_{j}\) at \(x_{j}\) can be decomposed as follows:
\[\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)}) =\hat{f}(x_{j},\mathbf{x}_{-j}^{(i)})-\mathds{E}\left[\hat{f}(X_{j},\mathbf{x}_{-j}^{(i)})\right]\] \[=g_{0}+g_{j}(x_{j})+\sum_{k\in-j}g_{k}(\mathbf{x}_{k}^{(i)})+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}(x_{j},\mathbf{x}_{W}^{(i)})+\sum_{k=2}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W}(\mathbf{x}_{W}^{(i)})\] \[-g_{0}-\mathds{E}\left[g_{j}(X_{j})\right]-\sum_{k\in-j}g_{k}(\mathbf{x}_{k}^{(i)})-\mathds{E}\left[\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}(X_{j},\mathbf{x}_{W}^{(i)})\right]-\sum_{k=2}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W}(\mathbf{x}_{W}^{(i)})\] \[=g_{j}(x_{j})-\mathds{E}\left[g_{j}(X_{j})\right]+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}\left[\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}(X_{j},\mathbf{x}_{W}^{(i)})\right]\] \[=\underbrace{g_{j}^{c}(x_{j})}_{\text{mean-centered main effect of }\mathbf{x}_{j}}+\underbrace{\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}g_{W\cup\{j\}}^{c}(x_{j},\mathbf{x}_{W}^{(i)})}_{\text{mean-centered interaction effects between }\mathbf{x}_{j}\text{ and features in }-j\text{ for observation }i}\]
Thus, Axiom 1 is satisfied by mean-centered ICE curves and can be used as local feature effect \(h\) within GADGET. Following from that, the mean-centered PD for feature \(\mathbf{x}_{j}\) at \(x_{j}\) can be decomposed by:
\[f_{j}^{PD,c}(x_{j})=\mathds{E}[\hat{f}^{c}(x_{j},X_{-j})]=g_{j}^{c}(x_{j})+ \sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subseteq-j,\\ |W|=k\end{subarray}}E\left[g_{W\cup\{j\}}^{c}(x_{j},X_{W})\right],\]
which is the mean-centered main effect of feature \(\mathbf{x}_{j}\) and the expected mean-centered interaction effect with feature \(\mathbf{x}_{j}\) at feature value \(x_{j}\). Based on these decompositions and for
\(h=\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)})\), we can show that the loss function only depends on the feature interaction effect between the \(j\)-th feature and features in \(-j\) (Theorem 2):
\[\mathcal{L}^{PD}\left(\mathcal{A}_{g},x_{j}\right) = \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)})-\mathbb{E}[\hat{f}^{c}(x_{j},X_{-j})|\mathcal{A}_{g}] \right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(g_{j}^{c}(x_{j} )+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j:\\ |W|=k\end{subarray}}g_{W\cup\{j\}}^{c}(x_{j},\mathbf{x}_{W}^{(i)})\right.\] \[\left.-g_{j}^{c}(x_{j})-\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c }W\subset-j:\\ |W|=k\end{subarray}}\mathbb{E}\left[g_{W\cup\{j\}}^{c}(x_{j},X_{W})|\mathcal{ A}_{g}\right]\right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}}\left(\sum_{k=1}^{p-1 }\sum_{\begin{subarray}{c}W\subset-j:\\ |W|=k\end{subarray}}g_{W\cup\{j\}}^{c}(x_{j},\mathbf{x}_{W}^{(i)})-\mathbb{E} \left[g_{W\cup\{j\}}^{c}(x_{j},X_{W})|\mathcal{A}_{g}\right]\right)^{2}\]
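To make the estimation concrete, the following minimal sketch (our illustration, not the authors' implementation; it assumes a generic `predict` function, a NumPy feature matrix for the observations in \(\mathcal{A}_{g}\), and a fixed grid for \(\mathbf{x}_{j}\) whose mean approximates the expectation over \(X_{j}\)) computes mean-centered ICE curves and the loss \(\hat{\mathcal{L}}^{PD}(\mathcal{A}_{g},x_{j})\) at every grid point:

```python
# Sketch: mean-centered ICE curves and the interaction-related heterogeneity
# L^PD(A_g, x_j) on a grid, for observations X_sub lying in a subspace A_g.
import numpy as np

def centered_ice(predict, X_sub, j, grid):
    """Rows: observations in the subspace; columns: grid points of feature j."""
    ice = np.empty((X_sub.shape[0], len(grid)))
    for g_idx, g in enumerate(grid):
        X_tmp = X_sub.copy()
        X_tmp[:, j] = g                                  # replace feature j by the grid value
        ice[:, g_idx] = predict(X_tmp)
    return ice - ice.mean(axis=1, keepdims=True)         # mean-center each ICE curve over the grid

def pd_loss(predict, X_sub, j, grid):
    """L^PD(A_g, x_j) per grid point: squared deviations of the centered ICE
    curves from the mean-centered PD estimate, summed over observations."""
    c_ice = centered_ice(predict, X_sub, j, grid)
    pd_c = c_ice.mean(axis=0)                            # regional mean-centered PD estimate
    return ((c_ice - pd_c) ** 2).sum(axis=0)             # loss at each grid point
```

Summing these losses over all grid points gives the estimated risk \(\hat{\mathcal{R}}(\mathcal{A}_{g},\mathbf{x}_{j})\) that enters the split objective; with \(S=\{j\}\) and \(Z=-j\), this recovers the REPID objective discussed next.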
_REPID as special case of GADGET._ The objective function \(\mathcal{I}(t,z)\) in Algorithm 1 for \(h=\hat{f}^{c}(x_{j},\mathbf{x}_{-j}^{(i)})\) is defined by the above loss function \(\mathcal{L}^{PD}\left(\mathcal{A}_{g},x_{j}\right)\) as follows:
\[\mathcal{I}(t,z)=\sum_{j\in S}\sum_{g\in\{l,r\}}\sum_{k:k\in\{1,\ldots,m\} \wedge\mathbf{x}_{j}^{(k)}\in\mathcal{A}_{g}}\mathcal{L}^{PD}\left(\mathcal{A }_{g},\mathbf{x}_{j}^{(k)}\right)\]
For the special case where we consider one feature of interest that we want to visualize (\(S=j\)) and all other features as possible split features (\(Z=-j\)), the objective function of GADGET reduces to:
\[\mathcal{I}(t,z)=\sum_{g\in\{l,r\}}\sum_{k=1}^{m}\mathcal{L}^{PD}\left( \mathcal{A}_{g},\mathbf{x}_{j}^{(k)}\right),\]
which is the same objective used within REPID. Thus, for the special case where we choose mean-centered ICE curves as local feature effect method and \(S=j\) and \(Z=-j\), GADGET is equivalent to REPID.
### Applicability of ALE Within GADGET
Here, we show the fulfillment of Axiom 1 defined in Section 4.2 for the underlying local feature effect function in ALE.
_Local Decomposition_: The local feature effect method used in ALE is the partial derivative of the prediction function at \(\mathbf{x}_{j}=x_{j}\). Thus, we define the local feature effect function \(h\) by \(h(x_{j},\mathbf{x}_{-j}^{(i)}):=\frac{\partial\hat{f}(x_{j},\mathbf{x}_{-j}^{(i )})}{\partial x_{j}}\). We can decompose \(h\) such that it only depends on main and interaction effects of and with feature \(\mathbf{x}_{j}\):
\[\frac{\partial\hat{f}(x_{j},\mathbf{x}_{-j}^{(i)})}{\partial x_{j}} = \frac{\partial\left(g_{0}+\sum_{j=1}^{p}g_{j}(x_{j})+\sum_{j\neq k} g_{jk}(x_{j},\mathbf{x}_{k}^{(i)})+\ldots+g_{12\ldots p}(\mathbf{x}^{(i)}) \right)}{\partial x_{j}}\] \[= \frac{\partial g_{j}(x_{j})}{\partial x_{j}}+\sum_{k=1}^{p-1} \sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}\frac{\partial g_{W\cup j}(\mathbf{x}_{j},\mathbf{x}_{W}^{( i)})}{\partial x_{j}}\]
Taking the conditional expectation over the local feature effects (partial derivatives) at \(x_{j}\) yields the (conditional) expected (i.e., global) feature effect at \(x_{j}\):
\[\mathds{E}\Bigg{[}\frac{\partial\hat{f}(X_{j},X_{-j})}{\partial x_{j}}\bigg{|} X_{j}=x_{j}\Bigg{]}=\frac{\partial g_{j}(x_{j})}{\partial x_{j}}+\sum_{k=1}^{p-1} \sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}\mathds{E}\left[\frac{\partial g_{W\cup j}(X_{j},X_{W})}{ \partial x_{j}}\bigg{|}X_{j}=x_{j}\right]\]
Based on these decompositions, we can show that the loss function for ALE only depends on the feature interaction effect between the \(j\)-th feature and features in \(-j\) (Theorem 2):
\[\mathcal{L}^{ALE}\left(\mathcal{A}_{g},x_{j}\right) = \sum_{\begin{subarray}{c}i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\\ \mathbf{x}^{(i)}\in\mathcal{P}(\mathbf{x}_{-j}|x_{j})\end{subarray}}\left(\frac{\partial\hat{f}(x_{j},\mathbf{x}_{-j}^{(i)})}{\partial x_{j}}-\mathds{E}\left[\frac{\partial\hat{f}(X_{j},X_{-j})}{\partial x_{j}}\Big{|}\mathcal{A}_{g}\wedge X_{j}=x_{j}\right]\right)^{2}\] \[= \sum_{\begin{subarray}{c}i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\\ \mathbf{x}^{(i)}\in\mathcal{P}(\mathbf{x}_{-j}|x_{j})\end{subarray}}\left(\frac{\partial g_{j}(x_{j})}{\partial x_{j}}+\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}\frac{\partial g_{W\cup j}(\mathbf{x}_{j},\mathbf{x}_{W}^{(i)})}{\partial x_{j}}\right.\] \[- \left.\frac{\partial g_{j}(x_{j})}{\partial x_{j}}-\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}\mathds{E}\left[\frac{\partial g_{W\cup j}(X_{j},X_{W})}{\partial x_{j}}\bigg{|}\mathcal{A}_{g}\wedge X_{j}=x_{j}\right]\right)^{2}\] \[= \sum_{\begin{subarray}{c}i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\\ \mathbf{x}^{(i)}\in\mathcal{P}(\mathbf{x}_{-j}|x_{j})\end{subarray}}\left(\sum_{k=1}^{p-1}\sum_{\begin{subarray}{c}W\subset-j,\\ |W|=k\end{subarray}}\left(\frac{\partial g_{W\cup j}(\mathbf{x}_{j},\mathbf{x}_{W}^{(i)})}{\partial x_{j}}-\mathds{E}\left[\frac{\partial g_{W\cup j}(X_{j},X_{W})}{\partial x_{j}}\bigg{|}\mathcal{A}_{g}\wedge X_{j}=x_{j}\right]\right)\right)^{2}\]
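Since the partial derivatives of a black-box model are typically not available in closed form, a common choice is the interval-based finite-difference approximation used by ALE. The following sketch (our illustration, not the authors' implementation; interval borders are taken as empirical quantiles, and the per-interval loss is only computed for non-empty intervals) estimates these local effects and the corresponding per-interval heterogeneity:

```python
# Sketch: interval-based local effects (prediction differences across interval
# borders) and the per-interval interaction-related heterogeneity for feature j.
import numpy as np

def ale_local_effects(predict, X_sub, j, n_intervals=10):
    xj = X_sub[:, j]
    borders = np.quantile(xj, np.linspace(0, 1, n_intervals + 1))
    local, interval_id = [], []
    for k in range(n_intervals):
        lo, hi = borders[k], borders[k + 1]
        in_k = ((xj > lo) & (xj <= hi)) if k > 0 else ((xj >= lo) & (xj <= hi))
        X_lo, X_hi = X_sub[in_k].copy(), X_sub[in_k].copy()
        X_lo[:, j], X_hi[:, j] = lo, hi
        local.append(predict(X_hi) - predict(X_lo))      # local effect per observation
        interval_id.append(np.full(in_k.sum(), k))
    return np.concatenate(local), np.concatenate(interval_id)

def ale_loss_per_interval(predict, X_sub, j, n_intervals=10):
    effects, ids = ale_local_effects(predict, X_sub, j, n_intervals)
    losses = np.zeros(n_intervals)
    for k in range(n_intervals):
        e_k = effects[ids == k]
        if e_k.size > 0:
            losses[k] = ((e_k - e_k.mean()) ** 2).sum()  # heterogeneity within interval k
    return losses
```

The square root of these per-interval losses corresponds to the standard-deviation-type curve that is plotted alongside the regional ALE estimates.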
### Applicability of SD Within GADGET
Here, we show the fulfillment of Axiom 1 defined in Section 4.2 for Shapley values, which are the underlying local feature effect in SD plots.
_Local Decomposition_: The local feature effect function in the SD plot is the Shapley value. We define \(h(x_{j},\mathbf{x}_{-j}^{(i)}):=\phi_{j}^{(i)}(x_{j})\) to be the Shapley value for the \(i\)-th local feature effect at a fixed value \(x_{j}\), which is typically the \(i\)-th feature value of \(\mathbf{x}_{j}\) (i.e., \(x_{j}=\mathbf{x}_{j}^{(i)}\)). In Eq. (8),
we defined \(\phi_{j}^{(i)}(x_{j})\) according to Herren and Hahn (2022) by the following decomposition:
\[\phi_{j}^{(i)}(x_{j}) = \sum_{k=0}^{p-1}\frac{1}{k+1}\sum_{\begin{subarray}{c}W\subseteq-j :\\ |W|=k\end{subarray}}\left(\mathds{E}[\hat{f}(x_{j},X_{-j})|X_{W}=\mathbf{x}_{W}^ {(i)}]-\sum_{V\subset\{W\cup j\}}\mathds{E}[\hat{f}(X)|X_{V}=\mathbf{x}_{V}^{( i)}]\right)\] \[= g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j :|W|=k}g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)}),\]
with \(g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)})=\mathds{E}[\hat{f}(x_{j},X_{-j})|X _{W}=\mathbf{x}_{W}^{(i)}]-\sum_{V\subset\{W\cup j\}}\mathds{E}[\hat{f}(X)|X_ {V}=\mathbf{x}_{V}^{(i)}]\).
Hence, we can decompose \(h\) such that it only depends on main effects of and interaction effects with feature \(\mathbf{x}_{j}\).
Taking the expectation over the local feature effects \(h=\phi_{j}\) at \(x_{j}\) yields the expected (i.e., global) feature effect of Shapley values at \(\mathbf{x}_{j}=x_{j}\).
\[\mathds{E}_{X_{W}}[\phi_{j}(x_{j})]=g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1} {k+1}\sum_{W\subseteq-j:|W|=k}\mathds{E}[g_{W\cup j}^{c}(x_{j},X_{W})]\]
Based on these decompositions, we can show that the loss function for SD only depends on feature interactions between the \(j\)-th feature and features in \(-j\) (Theorem 2):
\[\mathcal{L}^{SD}(\mathcal{A}_{g},x_{j}) = \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\mathbf{x}_{j}^{(i)}=x_{j}}\left(\phi_{j}^{(i)}(x_{j})-\mathds{E}_{X_{W}}[\phi_{j}(x_{j})|\mathcal{A}_{g}]\right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\mathbf{x}_{j}^{(i)}=x_{j}}\left(g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)})-g_{j}^{c}(x_{j})-\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}\mathds{E}[g_{W\cup j}^{c}(x_{j},X_{W})|\mathcal{A}_{g}]\right)^{2}\] \[= \sum_{i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\wedge\mathbf{x}_{j}^{(i)}=x_{j}}\left(\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}\left(g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)})-\mathds{E}[g_{W\cup j}^{c}(x_{j},X_{W})|\mathcal{A}_{g}]\right)\right)^{2}\]
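As an illustration of the local effect \(h=\phi_{j}^{(i)}\), the following sketch computes an interventional Shapley value exactly by enumerating all coalitions; the value function is estimated by marginal averaging over a background sample, which is an assumption for illustration (exact enumeration is only feasible for a small number of features, and is not necessarily the estimation strategy used by the authors):

```python
# Sketch: exact interventional Shapley value of feature j for one instance x,
# with the value function v(W) estimated over a background sample X_background.
import itertools
from math import factorial
import numpy as np

def shapley_value(predict, x, X_background, j):
    p = x.shape[0]
    others = [k for k in range(p) if k != j]

    def v(W):
        Xb = X_background.copy()
        Xb[:, list(W)] = x[list(W)]        # fix features in W to the instance values
        return predict(Xb).mean()          # marginal (interventional) expectation

    phi = 0.0
    for size in range(len(others) + 1):
        for W in itertools.combinations(others, size):
            weight = factorial(size) * factorial(p - size - 1) / factorial(p)
            phi += weight * (v(W + (j,)) - v(W))
    return phi
```

Fitting a smoother (e.g., a GAM) to the pairs \((\mathbf{x}_{j}^{(i)},\phi_{j}^{(i)})\) yields the SD curve, and the squared deviations of the Shapley values from their expected value at \(x_{j}\) estimate \(\hat{\mathcal{L}}^{SD}(\mathcal{A}_{g},x_{j})\).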
## Appendix C Further Characteristics of Feature Effect Methods
In this section, we cover further characteristics of the different feature effect methods used within GADGET. As illustrated for PD in Section 4.3, we also show here for ALE and SD that the joint feature effect (and possibly the prediction function) within the final regions of GADGET can be approximated by the sum of univariate feature effects. Hence, the joint feature effect can be additively decomposed into the features' main effects within the final regions. Furthermore, we provide an overview on estimates and visualization techniques for the regional feature effects and interaction-related heterogeneity for the different feature effect methods. We also explain how categorical features are handled within GADGET, depending on the underlying feature effect method.
### Decomposability of ALE
Figure 14 shows the effect plots when GADGET is applied to the uncorrelated simulation example of Section 3 when ALE (the underlying derivatives) and \(S=Z=\{1,2,3\}\) are used. The grey curves before the split illustrate the global ALE curves. Since they do not show the interaction-related heterogeneity of the underlying local effects, we added a plot that visualizes this heterogeneity by providing the standard deviation of the derivatives within each interval as a (yellow) curve along the range of \(\mathbf{x}_{j}\), similar to the idea of Goldstein et al. (2015) for derivative ICE curves. This shows us that local effects for feature \(\mathbf{x}_{2}\) are very homogeneous over the entire range of \(\mathbf{x}_{2}\), while the local feature effects of \(\mathbf{x}_{1}\) show a constant heterogeneous behavior and a regional high heterogeneity around \(\mathbf{x}_{3}=0\) is visible for \(\mathbf{x}_{3}\). GADGET chooses \(\mathbf{x}_{3}=-0.003\) as the best split point that reduces the interaction-related heterogeneity of the three features almost completely. Thus, the ALE curves we receive in the subspaces are more representative for the underlying individuals.
Figure 14: Visualization of applying GADGET with \(S=Z=\{1,2,3\}\) to derivatives of ALE for the uncorrelated simulation example of Section 3 with \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,0.09)\). The upper plots show the standard deviation of the derivatives (yellow) and the ALE curves (grey) on the entire feature space, while the lower plots represent the respective standard deviation of the derivatives and regional ALE curves after partitioning the feature space w.r.t. \(\mathbf{x}_{3}=-0.003\).
Equivalently to PD plots, ALE plots also have an additive recovery and can thus be decomposed additively into main and interaction effects (see Section 2.3). Furthermore, if \(Z\) is defined such that all features interacting with features in \(S\) are included and if GADGET is applied such that the theoretical minimum of the objective function is reached, then according to Corollary 4, the joint mean-centered ALE function \(f^{ALE,c}_{S|\mathcal{A}_{g}}\) within each final subspace \(\mathcal{A}_{g}\) can be decomposed into the 1-dimensional mean-centered ALE functions of features in \(S\). Hence, since there are no more interactions between features in \(S\) and other features present in the final regions, \(f^{ALE,c}_{S|\mathcal{A}_{g}}\) can be uniquely decomposed into the mean-centered main effects of features in \(S\)--just as in PD functions (Apley and Zhu, 2020):
\[f^{ALE,c}_{S|\mathcal{A}_{g}}(\mathbf{x}_{S})=\sum_{j\in S}f^{ALE,c}_{j| \mathcal{A}_{g}}(\mathbf{x}_{j}) \tag{18}\]
Moreover, let \(-S\) be the subset of features that do not interact with any other features. Then, according to Eq. (18) and Eq. (4), the prediction function \(\hat{f}_{\mathcal{A}_{g}}\) within the region \(\mathcal{A}_{g}\) can be decomposed into the 1-dimensional mean-centered ALE functions of all \(p\) features, plus some constant value \(g_{0}\):
\[\hat{f}_{\mathcal{A}_{g}}(\mathbf{x})=g_{0}+\sum_{j=1}^{p}f^{ALE,c}_{j| \mathcal{A}_{g}}(\mathbf{x}_{j}).\]
We can again derive this decomposition from Figure 14, where \(\mathbf{x}_{2}\) shows an effect of 0 with low heterogeneity before and after the split. The feature effects of \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\) show high heterogeneity before the split, which is almost completely minimized after the split w.r.t. \(\mathbf{x}_{3}\). Hence, the resulting regional (linear) ALE curves are representative estimates for the underlying local effects. Therefore, we can approximate the prediction function within each subspace by
\[\hat{f}_{\mathcal{A}_{l}}(\mathbf{x})=g_{0}+\frac{-2.85}{0.99}\mathbf{x}_{1}+ \frac{1.3}{1.03}\mathbf{x}_{3}=g_{0}-2.89\mathbf{x}_{1}+1.26\mathbf{x}_{3}\]
and
\[\hat{f}_{\mathcal{A}_{r}}(\mathbf{x})=g_{0}+\frac{2.89}{0.99}\mathbf{x}_{1}+ \frac{0.94}{0.96}\mathbf{x}_{3}=g_{0}+2.92\mathbf{x}_{1}+0.98\mathbf{x}_{3}.\]
Particularities of ALE Estimation. As seen for the continuous feature \(\mathbf{x}_{3}\) in the simulation example presented here, abrupt interactions ("jumps")22 might be difficult to estimate for models that learn smooth effects, such as NNs (used here) or SVMs--especially when compared to models such as decision trees. Hence, depending on the model, this type of feature interaction can lead to very high partial derivatives in a region around the "jump" point instead of a high partial derivative at exactly the one specific "jump" point (here: \(\mathbf{x}_{3}=0\)), thus leading to non-reducible heterogeneity. This is illustrated in the upper right plot of Figure 14. The standard deviation of the derivatives of \(\mathbf{x}_{3}\) is very high in the region around, and not exactly at, \(\mathbf{x}_{3}=0\). This interaction-related heterogeneity should be (almost) completely reduced when splitting w.r.t. \(\mathbf{x}_{3}=0\). However, high values will remain, since the model did not perfectly capture this kind of interaction. To account for this issue in the estimation and partitioning process within GADGET, we use the following procedure for continuous features: In the two new subspaces after a split, if the derivatives of feature values close to the split point vary at least twice as much (measured by the
standard deviation) as the derivatives of the other observations within each subspace, then the derivatives of feature values close to the split point are replaced by values drawn from a normally distributed random variable whose mean and variance are estimated from the derivatives of the remaining observations within each subspace.
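A minimal sketch of this replacement procedure is given below; the factor of two and the use of the remaining observations' mean and standard deviation follow the description above, while the definition of the window "close to the split point" is an assumption for illustration:

```python
# Sketch: after a split at split_point, dampen spuriously large derivatives of a
# continuous feature within a window around the split point in one new subspace.
import numpy as np

def smooth_derivatives_near_split(xj, deriv, split_point, rng, rel_window=0.05):
    deriv = deriv.copy()
    window = rel_window * (xj.max() - xj.min())          # assumed window definition
    near = np.abs(xj - split_point) < window
    if near.sum() > 1 and (~near).sum() > 1:
        if deriv[near].std() > 2 * deriv[~near].std():   # "vary at least twice as much"
            mu, sd = deriv[~near].mean(), deriv[~near].std()
            deriv[near] = rng.normal(mu, sd, size=near.sum())
    return deriv
```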
### Decomposability of SD
#### Recalculation Versus No Recalculation of Shapley Values.
In Section 4.5, we argued that Shapley values must be recalculated after each partitioning step so that the SD effects in the final subspaces are representative of the underlying main effects within each subspace. Without recalculating the conditional expected values, only the heterogeneity around the unconditional expected value (i.e., feature interactions on the entire feature space) is minimized.
The difference in the final feature effects within the subspaces is illustrated when comparing the left plot of Figure 5 (split without recalculation) with the respective plots of feature \(\mathbf{x}_{1}\) of Figure 15. Without recalculation, the effect of feature \(\mathbf{x}_{1}\) is still regarded as an interaction effect between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\), and hence only half of the joint interaction effect is assigned to \(\mathbf{x}_{1}\) (i.e., the respective slope within the regions is \(1.5\) and \(-1.5\) instead of \(3\) and \(-3\)), and the other half of the joint interaction effect is assigned to \(\mathbf{x}_{3}\). When Shapley values are recalculated after the first partitioning step within each subspace, we can see in Figure 15 that no more interactions are present between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{3}\) within each subspace, due to the split w.r.t. \(\mathbf{x}_{3}\). Hence, the effect of \(\mathbf{x}_{1}\) is recognized as the main effect with the slope approximately defined in the data-generating process. Furthermore, due to interactions with \(\mathbf{x}_{1}\), the heterogeneity of feature effects of \(\mathbf{x}_{3}\) is also reduced after the split, owing to recalculation.
#### Note:
If the feature we use for partitioning the feature space (\(z\in Z\)) coincides with the features of interest (\(S\)), then the Shapley values should be recalculated in Algorithm 1 to find the best split point (at least, if we choose the approach with recalculation after each partitioning step). The reason is that if \(z\in S\), we also want to reduce the interaction-related heterogeneity within \(z\) that is not accounted for if we do not recalculate the Shapley values within the new subspace. For example, in Figure 15, we split according to \(\mathbf{x}_{3}\), which is also a feature of interest (\(3\in S\)). If we do not recalculate the Shapley values for \(\mathbf{x}_{3}\) within the splitting process, then the sum of the risk of any two subspaces for \(\mathbf{x}_{3}\) will always be approximately the same as the risk of the parent node, and thus the heterogeneity reduction for \(\mathbf{x}_{3}\) (which is shown in the regional plots of Figure 15) is not considered in the objective of Algorithm 1.
#### Decomposition.
Herren and Hahn (2022) show that Shapley values can be decomposed by weighted PD functions (see Eq. 8). Hence, if the global SD feature effect as defined in Eq. (16) is considered, the same decomposition rules as defined for PD plots apply. In other words, if \(Z\) contains all features that interact with features in \(S\) and if GADGET is applied such that the theoretical minimum of the objective function is reached, then according to Corollary 4, the following decomposition in 1-dimensional global SD effect functions of features in \(S\) holds:
\[f^{SD}_{S|\mathcal{A}_{g}}(\mathbf{x}_{S})=\sum_{j\in S}f^{SD}_{j|\mathcal{A}_ {g}}(\mathbf{x}_{j}). \tag{19}\]
If all features containing heterogeneous effects (feature interactions) are included in the subset \(S\), and the subset \(Z\) consists of all features that interact with features in \(S\), then according to Eq. (19) and Eq. (4), the prediction function \(\hat{f}_{\mathcal{A}_{g}}\) within the region \(\mathcal{A}_{g}\) can be uniquely decomposed into the 1-dimensional global SD effect functions of all \(p\) features, plus some constant value \(g_{0}\):
\[\hat{f}_{\mathcal{A}_{g}}(\mathbf{x})=g_{0}+\sum_{j=1}^{p}f_{j|\mathcal{A}_{g} }^{SD}(\mathbf{x}_{j}).\]
Again, we can derive this decomposition from Figure 15 in the same way we did for PD and ALE plots. Hence, we can approximate the prediction function within each subspace by \(\hat{f}_{\mathcal{A}_{l}}(\mathbf{x})=g_{0}-3.02\mathbf{x}_{1}+1.12\mathbf{x} _{3}\) and \(\hat{f}_{\mathcal{A}_{r}}(\mathbf{x})=g_{0}+2.98\mathbf{x}_{1}+1.03\mathbf{x} _{3}\).
Figure 15: Visualization of applying GADGET with \(S=Z=\{1,2,3\}\) to Shapley values of the uncorrelated simulation example of Section 3 with \(Y=3X_{1}\mathbbm{1}_{X_{3}>0}-3X_{1}\mathbbm{1}_{X_{3}\leq 0}+X_{3}+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,0.09)\). The upper plots show the Shapley values and the global estimated SD curve on the entire feature space, while the lower plots represent the respective Shapley values and regional SD curves after partitioning the feature space w.r.t. \(\mathbf{x}_{3}=0.007\).
#### Equivalence of SD and Mean-Centered PD.
According to Herren and Hahn (2022), the Shapley value \(\phi_{j}^{(i)}(x_{j})\) of the \(i\)-th observation at \(\mathbf{x}_{j}=x_{j}\) can be decomposed as defined in Eq. (8) to
\[\phi_{j}^{(i)}(x_{j}) = \sum_{k=0}^{p-1}\frac{1}{k+1}\sum_{\begin{subarray}{c}W\subseteq-j: \\ |W|=k\end{subarray}}\left(\mathds{E}[\hat{f}(x_{j},X_{-j})|X_{W}=\mathbf{x}_{W}^{ (i)}]-\sum_{V\subset\{W\cup j\}}\mathds{E}[\hat{f}(X)|X_{V}=\mathbf{x}_{V}^{(i )}]\right)\] \[= g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j :|W|=k}g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)}),\]
with \(g_{W\cup j}^{c}(x_{j},\mathbf{x}_{W}^{(i)})=\mathds{E}[\hat{f}(x_{j},X_{-j})|X _{W}=\mathbf{x}_{W}^{(i)}]-\sum_{V\subset\{W\cup j\}}\mathds{E}[\hat{f}(X)|X_{ V}=\mathbf{x}_{V}^{(i)}]\).
As in Eq. (16), the global feature effect (SD) of feature \(\mathbf{x}_{j}\) at \(x_{j}\) is then defined by
\[f_{j}^{SD}(x_{j})=\mathds{E}_{X_{W}}[\phi_{j}]=g_{j}^{c}(x_{j})+\sum_{k=1}^{p -1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}\mathds{E}[g_{W\cup j}^{c}(x_{j},X_{ W})]\]
Hence, if Corollary 4 is satisfied, if the joint global SD effect of features in \(S\) can be decomposed into the univariate SD effects as in Eq. (19), and if the interventional approach for Shapley calculation is used, then all feature interactions are zero, and the global SD effect of feature \(\mathbf{x}_{j}\) at \(x_{j}\) is given by
\[f_{j}^{Shap}(x_{j}) = g_{j}^{c}(x_{j})+\sum_{k=1}^{p-1}\frac{1}{k+1}\sum_{W\subseteq-j:|W|=k}\mathds{E}[g_{W\cup j}^{c}(x_{j},X_{W})]\stackrel{\text{Cor.~4}}{=}g_{j}^{c}(x_{j})=f_{j}^{PD,c}(x_{j}),\]
i.e., under these conditions the global SD effect of feature \(\mathbf{x}_{j}\) coincides with the mean-centered PD function within each final region.
### Estimates and Visualizations of Regional Feature Effects and Interaction-Related Heterogeneity
Regional Effect.The feature effect for a feature \(\mathbf{x}_{j}\) at feature value \(x_{j}\) within a subspace/region \(\mathcal{A}_{g}\) of GADGET is estimated by
* **PD**: mean-centered regional PD \(\hat{f}^{PD,c}_{j|\mathcal{A}_{g}}(x_{j})=\frac{1}{|N_{g}|}\sum_{i\in N_{g}}\hat {f}^{c}(x_{j},\mathbf{x}^{(i)}_{-j})\) with \(N_{g}\) being the index set of all \(i:\mathbf{x}^{(i)}\in\mathcal{A}_{g}\).
* **ALE**: regional ALE \(\hat{f}^{ALE}_{j|\mathcal{A}_{g}}(x_{j})=\sum_{k=1}^{k_{j}(x_{j})}\frac{1}{|N_ {g}(k)|}\sum_{i\in N_{g}(k)}\left[\hat{f}(z_{k,j},\mathbf{x}^{(i)}_{-j})-\hat{ f}(z_{k-1,j},\mathbf{x}^{(i)}_{-j})\right]\) with \(N_{g}(k)\) being the index set of all \(i:\,x_{j}\in\,]z_{k-1,j},z_{k,j}]\wedge\mathbf{x}^{(i)}\in\mathcal{A}_{g}\).
* **SD**: regional SD \(\hat{f}^{SD}_{j|\mathcal{A}_{g}}(x_{j})\) is estimated by fitting a GAM on \(\{\mathbf{x}^{(i)}_{j},\hat{\phi}^{(i)}_{j}\}_{i:\mathbf{x}^{(i)}\in\mathcal{ A}_{g}}\).
The regional effect for feature \(\mathbf{x}_{j}\) is visualized for all \(x_{j}\in\mathcal{A}_{g}\). Therefore, the respective GAM curve is plotted in the case of SD, while we linearly interpolate between the grid-wise/interval-wise estimates of PD/ALE to receive the regional effect curves.
Interaction-Related Heterogeneity.The interaction-related heterogeneity for a feature \(\mathbf{x}_{j}\) at feature value \(x_{j}\) within a subspace/region \(\mathcal{A}_{g}\) of GADGET is estimated by the loss function in Eq. (10), which quantifies the variance of local effects at \(x_{j}\) and is visualized by
* **PD**: 95% interval around (mean-centered) regional PD estimate \(\left[\hat{f}^{PD,c}_{j|\mathcal{A}_{g}}(x_{j})\pm 1.96\cdot\sqrt{\hat{ \mathcal{L}}^{PD}(\mathcal{A}_{g},x_{j})}\right]\).
* **ALE**: standard deviation of local effects \(\sqrt{\hat{\mathcal{L}}^{ALE}(\mathcal{A}_{g},x_{j})}\).
* **SD**: Shapley values are recalculated within each region \(\mathcal{A}_{g}\) and plotted with the fitted GAM for the regional SD effect to visualize the variation of local feature effects aka interaction-related heterogeneity.
For each feature \(j\in S\), we generate one figure showing the regional effect curves of all final regions we obtain after applying GADGET. For PD, the regional effect curves are accompanied by intervals showing how much interaction-related heterogeneity remains in the underlying local effects (see, e.g., Figure 11). For ALE, a separate plot visualizes the interaction-related heterogeneity via the standard deviation of local effects, which is inspired by the derivative ICE plots of Goldstein et al. (2015) (see, e.g., Figure 12). For SD, the Shapley values that were recalculated conditioned on each subspace \(\mathcal{A}_{g}\) are visualized with the regional effect curve (see, e.g., Figure 18).
Note that we can also visualize the non-centered PD (\(\hat{f}^{PD}_{j|\mathcal{A}_{g}}\)) instead of the mean-centered PD, which might provide more insights regarding interpretation. However, the interaction-related heterogeneity must be estimated by the mean-centered ICE curves to only represent heterogeneity induced by feature interactions (see Appendix B.4).
### Handling of Categorical Features
In this section, we will summarize the particularities of categorical features. Compared to numeric features, we have a limited number of \(K\) values (categories). Hence, compared to numeric features, we find split points by dividing the \(K\) categories into the two new
subspaces. Since GADGET is based on the general concept of a CART (decision tree) algorithm (Breiman et al., 1984) that can handle categorical features, the splitting itself follows the same approach as in a common CART algorithm. If there are categorical features in \(S\), this does influence the calculation of our objective function, and the handling depends on the underlying feature effect method. We briefly discuss the specifics for each of the three feature effect methods used in this paper. All of them are able to handle categorical features, which is a general requirement for applying GADGET to mixed data sets.
PD Plot.Compared to numeric features, the grid points for categorical features are limited to the number of categories. Otherwise, the calculation of the loss and the risk (see Eq. 13 and Eq. 11) works exactly the same as for numeric features.23
Footnote 23: Since we calculate the loss point-wise at each grid point and sum it up over all grid points, the order of the category does not make a difference for the objective of GADGET.
ALE Plot.ALE builds intervals based on quantiles for numeric features to calculate prediction differences between neighboring interval borders for all observations falling within each interval. For binary features, the authors solve this as follows: for all observations falling in each of the categories, the prediction difference when changing it to the other category is calculated. For more categories, they suggest a sorting algorithm.24 Hence, we still obtain the derivatives needed for each category to calculate the loss and risk function of GADGET.
SD Plot.Compared to numeric features, the x-axis of the SD plot is a grid of size \(K\). For each of these grid points (categories), the Shapley values for the observations belonging to the specific category are calculated. Hence, instead of a spline to quantify the expected value in Eq. (17), we use the arithmetic mean within each category (similar to PD plots) and, thus, sum up the variance of Shapley values for each category over all categories (within the respective subspace).
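As a small illustration of this aggregation, the following Python sketch (illustrative names; the exact weighting and normalization follow Eq. (17) and are not reproduced here) sums the within-category variance of Shapley values over all categories of a feature within one subspace:

```python
import numpy as np

def sd_loss_categorical(categories, shapley_values):
    """`categories`, `shapley_values`: 1-d arrays over the observations in one subspace."""
    loss = 0.0
    for cat in np.unique(categories):
        phi_cat = shapley_values[categories == cat]  # Shapley values of one category
        if phi_cat.size > 1:
            loss += phi_cat.var(ddof=1)              # within-category heterogeneity
    return loss                                      # summed over all categories
```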
In general, if we apply GADGET and split w.r.t. a categorical feature such that only one category is present within a subspace (e.g., we split the feature sex such that all individuals are male in one resulting subspace), then the interaction-related heterogeneity vanishes to zero, since only an additive shift for the feature sex is left in this subspace.
Note.If a categorical feature \(\mathbf{x}_{j}\) is not only considered for splitting (\(j\in Z\)) but is also a feature of interest (\(j\in S\)), the different splitting possibilities of the categories of \(\mathbf{x}_{j}\) require a recalculation for ALE, since derivatives are only calculated for pre-sorted neighboring categories. In our implementations, we only split w.r.t. the pre-sorted categories and consider them as integer values to reduce the computational burden of the calculations.
## Appendix D Additions to PINT
Although the approximation of the null distribution via theoretical distributions reduces the computational burden when applying PINT (as described in Section 5), it remains high for high-dimensional settings. Therefore, we suggest randomly selecting a smaller set of observations on which to apply PINT in the case of a high number of observations. In the case of many features, we recommend filtering features beforehand and applying PINT only once to those with the highest possibility of containing feature interactions. For example, this can be done by excluding features that show only small variation of the local feature effects used in GADGET, since homogeneous feature effects are a strong indicator that the feature does not interact with any other feature. For instance, the mean-centered ICE curves of \(\mathbf{x}_{4}\) in Figure 16 show only small variation, which indicates that \(\mathbf{x}_{4}\) does not interact with any other feature, and thus there is no need to consider it for PINT. Hence, we can apply PINT only on the remaining three features to identify which features interact with each other and must be considered within GADGET.
## Appendix E Further Details on Real-World Applications
COMPAS Data Set.Our analysis in Section 7 for the COMPAS data set is based on a tuned SVM. We chose this model based on the following selection process: For the binary classification task25, we selected the best model out of a logistic regression, a random forest, and a tuned SVM with RBF kernel. We also compared these models to a featureless model. The model selection was performed by a 5-fold cross-validation, where the hyperparameters cost \(C\) and inverse kernel width \(\sigma\) of the SVM were tuned via 3-fold cross-validation on the inner folds of the nested resampling approach.26 We evaluate the learners' test performance on the outer folds based on the F1 score and Matthews correlation coefficient (MCC) (Chicco and Jurman, 2020). Since the tuned SVM performs best w.r.t. both evaluation metrics, we chose this model for our further analysis. Note that the performance differences between the different learners (besides the featureless baseline) are very small. Thus, one might consider using the most interpretable learner (here, logistic regression) for further analysis. However, since the purpose of our analysis is to detect feature
Figure 16: Mean-centered ICE curves (black) and mean-centered PD curves (grey) for all four features of one repetition of the simulation example described in Section 5.
interactions and reduce interaction-related heterogeneity by partitioning the feature space, here we choose the best-performing model that also potentially learned feature interactions.
In addition to the results of GADGET based on PD presented in Section 7, we also applied GADGET with the same settings based on SD.
The effect plots for the four resulting regions are shown in Figure 18. GADGET based on SD performs the same first split as for PD. The second split is also executed according to the number of prior crimes, but the split value is lower than for PD (at 2.5 instead of 4.5). The total interaction-related heterogeneity reduction (\(R_{Tot}^{2}=0.87\)) is also similar to that when PD is used. Note that Shapley values explain the difference between the actual and the average prediction. Hence, SD plots are centered, while Figure 11 shows the uncentered regional PD plots.
Bikesharing Data Set.The results in Section 7 for the Bikesharing data set are based on a random forest. We chose this model based on the following selection process: For the underlying regression task, we selected the best-performing model out of a linear model, a random forest, and a tuned SVM with RBF kernel. As a baseline comparison, we also report the performance of a featureless model. The model selection was performed by a 5-fold cross-validation, where the hyperparameters cost \(C\) and inverse kernel width \(\sigma\) of the SVM were tuned via 3-fold cross-validation on the inner folds of the nested resampling
\begin{table}
\begin{tabular}{|l|c c|} \hline Learner & MCC & F1 Score \\ \hline Featureless & 0.0000 & 0.7573 \\ Logistic Regression & 0.4675 & 0.8057 \\ Random Forest & 0.4710 & 0.8125 \\ Support Vector Machine (tuned) & 0.4752 & 0.8126 \\ \hline \end{tabular}
\end{table}
Table 6: Average learner test performance on 5-fold cross-validation for COMPAS data set.
Figure 17: Global SD curves and Shapley values of considered features of the COMPAS application example.
approach.27 We evaluate the learners' test performance on the outer folds based on the MSE and \(R^{2}\). The random forest performed best and, hence, was chosen for further analyses.
Footnote 27: For tuning, we used random search with 30 iterations on a search space of \(2^{-12}\) to \(2^{12}\) for each of the two hyperparameters.
|
2308.07782 | $2$-knots with the same knot group but different knot quandles | We give a first example of 2-knots with the same knot group but different
knot quandles by analyzing the knot quandles of twist spins. As a byproduct of
the analysis, we also give a classification of all twist spins with finite knot
quandles. | Kokoro Tanaka, Yuta Taniguchi | 2023-08-15T14:03:32Z | http://arxiv.org/abs/2308.07782v1 | # \(2\)-knots with the same knot group but different knot quandles
###### Abstract.
We give a first example of \(2\)-knots with the same knot group but different knot quandles by analyzing the knot quandles of twist spins. As a byproduct of the analysis, we also give a classification of all twist spins with finite knot quandles.
2020 Mathematics Subject Classification: Primary 57K12, 57K10; Secondary 57K45
## 1. Introduction
A \(1\)-knot is a circle embedded in the \(3\)-sphere \(S^{3}\), a \(2\)-knot is a \(2\)-sphere embedded in the \(4\)-sphere \(S^{4}\), and they are collectively referred to as knots in this paper. There are two related topological invariants of an oriented knot \(\mathcal{K}\) defined by using information of the knot complement with a chosen fixed base point. One is the _knot group_\(G(\mathcal{K})\), the fundamental group of the complement, defined as the set of homotopy classes of loops, and the other is the _knot quandle_\(Q(\mathcal{K})\)[13, 17] defined as the set of homotopy classes of paths with a suitable binary operation. Note that the definition of \(G(\mathcal{K})\) does not require the orientation of \(\mathcal{K}\), but the definition of \(Q(\mathcal{K})\) does. In this paper, we investigate difference between these two invariants.
To make the problem concrete, we consider the following two conditions for oriented knots \(\mathcal{K},\mathcal{K}^{\prime}\):
1. The knot groups \(G(\mathcal{K})\) and \(G(\mathcal{K}^{\prime})\) are isomorphic to each other.
2. The knot quandles \(Q(\mathcal{K})\) and \(Q(\mathcal{K}^{\prime})\) are isomorphic to each other.
Since the associated group of the knot quandle is group isomorphic to the knot group, it is known that the condition (b) implies the condition (a). The converse does not hold in general for \(1\)-knots. For example, the square knot and the granny knot satisfy the condition (a) but do not satisfy the condition (b), that is, they are \(1\)-knots with the same knot group but different knot quandles. On the other hand, there are no known examples of \(2\)-knots with the same knot group but different knot quandles as in Remark 1.1. The purpose of this paper is to give a first example of such \(2\)-knots. More precisely, we prove in Theorem 3.1 that there exist infinitely many triples of \(2\)-knots such that any two of each triple satisfy the condition (a) but do not satisfy the condition (b). The main ingredient of the proof is Theorem 3.3, which reveals an algebraic property, called the _type_, of the knot quandle of a twist-spin.
**Remark 1.1**.: Researchers in this field may have believed that there were \(2\)-knots with the same knot group but different knot quandles, but they would not have known of any specific examples of such \(2\)-knots or proofs of their existence. We point out that based on the example of such \(1\)-knots we cannot construct an example of such \(2\)-knots in a straightforward way. Zeeman's twist-spinning construction [29] is a typical way to obtain a \(2\)-knot \(\tau^{n}(k)\) from a \(1\)-knot \(k\) (and an integer \(n\)). For the square knot \(k\) and the granny knot \(k^{\prime}\), it is known that \(\tau^{0}(k)\) and \(\tau^{0}(k^{\prime})\) are equivalent [8, 24], and hence they satisfy the condition (b). Moreover it follows that \(\tau^{n}(k)\) and \(\tau^{n}(k^{\prime})\) satisfy the condition (b) for any integer \(n\), even if we do not know whether they are equivalent or not.1
Footnote 1: We can check that \(\tau^{n}(k)\) and \(\tau^{n}(k^{\prime})\) are not equivalent for \(n=2,3\) by using quandle cocycle invariants [3].
**Remark 1.2**.: Based on the example of such \(1\)-knots, we can easily obtain an example of such surface knots of positive genus in a straightforward way, where the surface knot is a closed surface embedded in \(S^{4}\). Note that a surface knot of genus zero is nothing but a \(2\)-knot. For a classical knot \(k\), the spun \(k\) torus, which is a knotted torus in \(S^{4}\) defined by Livingston [16], has the knot quandle isomorphic to \(Q(k)\) and also has the knot group isomorphic to \(G(k)\). Then, using the above example of \(1\)-knots, the granny knot and the square knot, we obtain two surface knots of genus one with the same knot group but different knot quandles. We can also obtain such surface knots of genus greater than one by stabilizing them.
This paper is organized as follows. In Section 2, we review the basics of quandles including the type of a quandle, the knot quandle of an oriented knot and a generalized Alexander quandle. We compute the type of the knot quandle for a twist spin (Theorem 3.3) and prove our main result (Theorem 3.1) in Section 3. As an application of Theorem 3.3, we give a classification of all twist spins with finite knot quandles in Section 4. Appendix A is devoted to proving Proposition 3.2, which is a key proposition for the proof of Theorem 3.3. Finally, in Appendix B, we compute the types of the knot quandles for a certain class of \(2\)-knots called branched twist spins including all twist spins.
## 2. Preliminaries
A _quandle_\(X\)[13, 17] is a non-empty set with a binary operation \(*\) that satisfies the following conditions:
* For any \(x\in X\), we have \(x*x=x\).
* For any \(y\in X\), the map \(S_{y}:X\to X;x\mapsto x*y\) is bijective.
* For any \(x,y,z\in X\), we have \((x*y)*z=(x*z)*(y*z)\).
Let \(X\) be a quandle. For any \(x,y\in X\) and \(n\in\mathbb{Z}\), we denote \(S_{y}^{n}(x)\) by \(x*^{n}y\). We set \(\operatorname{type}(X):=\min\{n\in\mathbb{Z}_{>0}\mid x*^{n}y=x\text{ (for any $x,y\in X$)}\}\), where \(\min\,\emptyset=\infty\). We call \(\operatorname{type}(X)\) the _type_ of the quandle \(X\).
The _associated group_ of \(X\) is the group generated by the all elements of \(X\) subject to the relations \(x*y=y^{-1}xy\) for all \(x,y\in X\). We denote the associated group of \(X\) by \(\operatorname{As}(X)\). The associated group of \(X\) acts on \(X\) from the right by \(x\cdot y:=x*y\) for any \(x,y\in X\). A quandle \(X\) is _connected_ if the action of \(\operatorname{As}(X)\) on \(X\) is transitive.
A map \(f:X\to Y\) between quandles is a (_quandle_) _isomorphism_ if \(f(x*y)=f(x)*f(y)\) for any \(x,y\in X\) and \(f\) is a bijection. When there is a quandle isomorphism \(f:X\to Y\), we say that \(X\) and \(Y\) are (_quandle_) _isomorphic_.
Let \(\mathcal{K}\) be an oriented \(n\)-knot for \(n=1,2\). Let \(N(\mathcal{K})\) be a tubular neighborhood of \(\mathcal{K}\) and \(E(\mathcal{K})=S^{n+2}\backslash\text{int}N(\mathcal{K})\) an exterior of \(\mathcal{K}\). We fix a point \(p\in E(\mathcal{K})\). Let \(Q(\mathcal{K},p)\) be the set of homotopy classes of all paths in \(E(\mathcal{K})\) from a point in \(\partial E(\mathcal{K})\) to \(p\). The set \(Q(\mathcal{K},p)\) is a quandle with an operation defined by \([\alpha]*[\beta]:=[\alpha\cdot\beta^{-1}\cdot m_{\beta(0)}\cdot\beta]\), where \(m_{\beta(0)}\) is a meridian loop starting from \(\beta(0)\) and going along in the positive direction. We call \(Q(\mathcal{K},p)\) the _knot quandle_ of \(\mathcal{K}\), which is known to be connected. The isomorphism class of the knot quandle does not depend on the base point \(p\). Thus, we denote the knot quandle simply by \(Q(\mathcal{K})\). We note that for an oriented knot \(\mathcal{K}\), the associated group \(\operatorname{As}(Q(\mathcal{K}))\) is group isomorphic to the knot group \(G(\mathcal{K}):=\pi_{1}(E(\mathcal{K}))\).
Let \(G\) be a group and \(f:G\to G\) a group automorphism. We define the operation \(*\) on \(G\) by \(x*y:=f(xy^{-1})y\). Then, \(\operatorname{GAlex}(G,f)=(G,*)\) is a quandle, which is called the _generalized Alexander quandle_. Although it is difficult to compute the type of a quandle in general, the type of a generalized Alexander quandle can be computed as follows, which will be used in what follows.
**Proposition 2.1**.: _We have \(\operatorname{type}(\operatorname{GAlex}(G,f))=\operatorname{order}(f)\)._
Proof.: By the definition of \(\operatorname{GAlex}(G,f)\), we have
\[(f^{i}(xy^{-1})y)*y=f((f^{i}(xy^{-1})y)y^{-1})y=f(f^{i}(xy^{-1}))y=f^{i+1}(xy^ {-1})y\]
for any \(x,y\in G\) and \(i\in\mathbb{Z}_{\geq 0}\). Thus, we see that \(x*^{i}y=f^{i}(xy^{-1})y\) for any \(x,y\in G\) and \(i\in\mathbb{Z}_{\geq 0}\).
Let \(j\) be a positive integer. For any \(x\in G\), \(x*^{j}e\) is equal to \(f^{j}(x)\), where \(e\) is the identity element of \(G\). Thus, if \(j<\operatorname{order}(f)\), there exists an element \(y\) such that \(y*^{j}e\) is not equal to \(y\). This implies that \(\operatorname{type}(\operatorname{GAlex}(G,f))\geq\operatorname{order}(f)\). In particular, if \(\operatorname{order}(f)=\infty\), then we have \(\operatorname{type}(\operatorname{GAlex}(G,f))=\infty\). Hence, we assume that the order of \(f\) is finite.
For any \(x,y\in G\), we have
\[x*^{\operatorname{order}(f)}y=f^{\operatorname{order}(f)}(xy^{-1})y=xy^{-1}y=x,\]
which implies that \(\operatorname{type}(\operatorname{GAlex}(G,f))\leq\operatorname{order}(f)\). By the above discussion, we see that \(\operatorname{order}(f)=\operatorname{type}(\operatorname{GAlex}(G,f))\)
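As a simple illustration of Proposition 2.1 (this example is not used in the sequel), take \(G=\mathbb{Z}/3\mathbb{Z}\) and let \(f\) be the inversion \(f(x)=-x\), which has order \(2\). Writing the group additively, the operation becomes \(x*y=f(x-y)+y=2y-x\), so \(\operatorname{GAlex}(G,f)\) is the dihedral quandle \(R_{3}\). Indeed \(x*^{2}y=2y-(2y-x)=x\) for all \(x,y\), while \(x*y\neq x\) in general, so \(\operatorname{type}(\operatorname{GAlex}(G,f))=2=\operatorname{order}(f)\).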
## 3. Knot quandles of twist spins
In this section, we discuss the knot quandle of twist spins. The purpose of this section is to show the following theorem:
**Theorem 3.1**.: _There exists a triple \(\{F_{1},F_{2},F_{3}\}\) of 2-knots such that_
1. _the groups_ \(G(F_{1}),G(F_{2})\) _and_ \(G(F_{3})\) _are mutually isomorphic, but_
2. _no two of the quandles_ \(Q(F_{1}),Q(F_{2})\) _and_ \(Q(F_{3})\) _are isomorphic._
_Moreover, there exist infinitely many such triples._
Let \(k\) be an oriented \(1\)-knot in \(S^{3}\). For an integer \(n\), Zeeman [29] defined the \(2\)-knot \(\tau^{n}(k)\), which is called the _\(n\)-twist spun_\(k\). It was shown that the \(n\)-twist spun \(k\) is a fibered \(2\)-knot whose fiber is the once punctured \(M_{k}^{n}\) for \(n>0\), where \(M_{k}^{n}\) is the \(n\)-fold cyclic branched covering space of \(S^{3}\) branched along \(k\). In particular, the \(1\)-twist spun \(k\) is trivial for any oriented \(1\)-knot \(k\), and the \(n\)-twist spin of the trivial \(1\)-knot is also trivial for any \(n>0\). Litherland [15] showed that, for any oriented \(1\)-knot \(k\) and any integer \(n\), the \((-n)\)-twist spun knot \(\tau^{-n}(k)\) is equivalent to \(\tau^{n}(-k!)\), where \(-k\) (resp. \(k!\)) is the oriented \(1\)-knot obtained from \(k\) by inverting the orientation (resp. taking the mirror image) of \(k\). Hence, we consider the case where \(n>1\) and \(k\) is non-trivial, since we are interested in fibered \(2\)-knots.
Let \(\varphi\) be the group automorphism of \(\pi_{1}(M_{k}^{n})\) induced by the monodromy of \(S^{4}\backslash\tau^{n}(k)\). We note that the monodromy of \(S^{4}\backslash\tau^{n}(k)\) coincides with the canonical generator of the transformation group of \(M_{k}^{n}\). Then we have the following proposition, whose proof will be given in Appendix A:
**Proposition 3.2**.: _Let \(\varphi\) be the group automorphism of \(\pi_{1}(M_{k}^{n})\) induced by the monodromy of \(S^{4}\backslash\tau^{n}(k)\). Then the order of \(\varphi\) is equal to \(n\)._
Inoue [11, Theorem 3.1] showed that the knot quandle \(Q(\tau^{n}(k))\) of \(\tau^{n}(k)\) is quandle isomorphic to \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\). Then, we have:
**Theorem 3.3**.: _The type of \(Q(\tau^{n}(k))\) is equal to \(n\)._
Proof.: It follows from Proposition 2.1 and Proposition 3.2 that we have \(\operatorname{type}(Q(\tau^{n}(k)))=\operatorname{type}\left(\operatorname{ GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\right)=\operatorname{order}(\varphi)=n\).
**Corollary 3.4**.: _Let \(k,k^{\prime}\) be non-trivial oriented \(1\)-knots and \(n,n^{\prime}\) integers greater than \(1\). If \(Q(\tau^{n}(k))\) is quandle isomorphic to \(Q(\tau^{n^{\prime}}(k^{\prime}))\), then we have \(n=n^{\prime}\)._
Proof.: Since the type of quandles is an invariant of quandles, it holds that
\[n=\operatorname{type}(Q(\tau^{n}(k)))=\operatorname{type}(Q(\tau^{n^{\prime} }(k^{\prime})))=n^{\prime}\]
by Theorem 3.3 and the assumption.
Proof of Theorem 3.1.: Let \(p,q\) and \(r\) be coprime integers. Gordon [7] showed that \(G(\tau^{r}(t_{p,q}))\) is isomorphic to \(\pi_{1}(M_{t_{p,q}}^{r})\times\mathbb{Z}\), where \(t_{p,q}\) is the \((p,q)\)-torus knot. It is known that \(M_{t_{q,r}}^{p},M_{t_{r,p}}^{q}\) and \(M_{t_{p,q}}^{r}\) are homeomorphic ([19]), which
implies that \(G(\tau^{p}(t_{q,r})),G(\tau^{q}(t_{r,p}))\) and \(G(\tau^{r}(t_{p,q}))\) are mutually group isomorphic. Thus, putting \(F_{1}:=\tau^{p}(t_{q,r}),F_{2}:=\tau^{q}(t_{r,p})\) and \(F_{3}:=\tau^{r}(t_{p,q})\), we see that the \(2\)-knots \(F_{1},F_{2}\) and \(F_{3}\) satisfy the condition (1). By Theorem 3.3, we have \(\operatorname{type}(Q(\tau^{p}(t_{q,r})))=p,\,\operatorname{type}(Q(\tau^{q}(t_{r,p})))=q\) and \(\operatorname{type}(Q(\tau^{r}(t_{p,q})))=r\). This implies that the \(2\)-knots \(F_{1},F_{2}\) and \(F_{3}\) satisfy the condition (2). Varying the triple of coprime integers, we obtain infinitely many such triples of \(2\)-knots.
## 4. Classification of twist spins whose knot quandles are finite
In this section, we give a classification of twist spins whose knot quandles are finite. Let \(k\) be an oriented \(1\)-knot and \(n\) an integer greater than \(1\). Since the knot quandle \(Q(\tau^{n}(k))\) is isomorphic to the generalized Alexander quandle \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\), the knot quandle \(Q(\tau^{n}(k))\) is finite if and only if the fundamental group \(\pi_{1}(M_{k}^{n})\) is finite. Thus, if the knot quandle \(Q(\tau^{n}(k))\) is finite, \(\tau^{n}(k)\) is an element of one of the following six sets (see [11]):
* \(S_{1}=\{\tau^{n}(k)\mid n=2,k\text{: a non-trivial $2$-bridge knot}\}\).
* \(S_{2}=\{\tau^{n}(k)\mid n=2,k\text{: a Montesinos knot }M((2,\beta_{1}),(3,\beta_{2}),(3,\beta_{3}))\}\).
* \(S_{3}=\{\tau^{n}(k)\mid n=2,k\text{: a Montesinos knot }M((2,\beta_{1}),(3,\beta_{2}),(5,\beta_{3}))\}\).
* \(S_{4}=\{\tau^{n}(k)\mid n=3,k\text{: the torus knot }t_{2,3}\text{ or the torus knot }t_{2,5}\}\).
* \(S_{5}=\{\tau^{n}(k)\mid n=4,k\text{: the torus knot }t_{2,3}\}\).
* \(S_{6}=\{\tau^{n}(k)\mid n=5,k\text{: the torus knot }t_{2,3}\}\).
### Classification as unoriented \(2\)-knots
We classify twist spins whose knot quandles are finite as unoriented \(2\)-knots.
**Theorem 4.1**.: _Let \(k\), \(k^{\prime}\) be non-trivial oriented \(1\)-knots and \(n\), \(n^{\prime}\) integers greater than \(1\). Suppose that \(Q(\tau^{n}(k))\) and \(Q(\tau^{n^{\prime}}(k^{\prime}))\) are finite. Then \(\tau^{n}(k)\) is equivalent to \(\tau^{n^{\prime}}(k^{\prime})\) as unoriented \(2\)-knots if and only if \(n=n^{\prime}\) and \(k\) is equivalent to \(k^{\prime}\) as unoriented \(1\)-knots._
Proof.: It is obvious that \(\tau^{n}(k)\) and \(\tau^{n^{\prime}}(k^{\prime})\) are equivalent if \(n=n^{\prime}\) and \(k\) and \(k^{\prime}\) are equivalent. We discuss the converse. Suppose that \(\tau^{n}(k)\) and \(\tau^{n^{\prime}}(k^{\prime})\), whose knot quandles are finite, are equivalent.
Case 1. Consider the case where \(\tau^{n}(k)\) is an element of \(S_{1}\cup S_{2}\cup S_{3}\). By Corollary 3.4, we have \(n^{\prime}=n=2\). Since \(Q(\tau^{n^{\prime}}(k^{\prime}))\) is finite, \(\tau^{n^{\prime}}(k^{\prime})\) is also an element of \(S_{1}\cup S_{2}\cup S_{3}\). By [12], we see that \(k\) and \(k^{\prime}\) are equivalent.
Case 2. Consider the case where \(\tau^{n}(k)\) is an element of \(S_{4}\). By Corollary 3.4, we have \(n^{\prime}=n=3\). Since \(Q(\tau^{n^{\prime}}(k^{\prime}))\) is finite, \(\tau^{n^{\prime}}(k^{\prime})\) is also an element of \(S_{4}\), and hence \(k\) and \(k^{\prime}\) are equivalent to \(t_{2,3}\) or \(t_{2,5}\). It is known that
\[|Q(\tau^{3}(t_{2,3}))|=|\pi_{1}(M_{t_{2,3}}^{3})|=8\ \ \text{and}\ \ |Q(\tau^{3}(t_{2,5}))|=|\pi_{1}(M_{t_{2,5}}^{3})|=120,\]
which implies that \(\tau^{3}(t_{2,3})\) and \(\tau^{3}(t_{2,5})\) are not equivalent. Thus, we see that \(k\) and \(k^{\prime}\) are equivalent.
Case 3. Consider the case where \(\tau^{n}(k)\) is an element of \(S_{5}\cup S_{6}\). By Corollary 3.4, we have \(n=n^{\prime}\in\{4,5\}\). Since \(Q(\tau^{n^{\prime}}(k^{\prime}))\) is finite, \(\tau^{n^{\prime}}(k^{\prime})\) is also an element of \(S_{5}\cup S_{6}\), and hence \(k\) and \(k^{\prime}\) are equivalent to \(t_{2,3}\).
**Remark 4.2**.: Based on the classification of \(\tau^{2}(t_{3,5})\), \(\tau^{3}(t_{5,2})\) and \(\tau^{5}(t_{2,3})\) by Gordon [7, Proof of Theorem 1.1] and that of the set \(S_{1}\), which consists of all 2-twist spun 2-bridge knots, by Plotnick [23], our Theorem 4.1 has almost been proved in a series of the works by Kataoka [14], Jang-Kataoka-Miyakoshi [12] and Miyakoshi [20]. Details are as follows.
Kataoka [14] classified the set \(S_{1}\cup S_{2}\cup S_{3}\), which consists of all 2-twist spun spherical Montesinos knots, by computing the orders of the knot quandles, and using dihedral group representations of the knot groups (and the classification by Plotnick [23] mentioned above). Then Jang-Kataoka-Miyakoshi [12] simplified its proof by using 3-colorings rather than dihedral group representations. With these in mind, Miyakoshi [20] almost proved our Theorem 4.1 by further computing the orders of the knot quandles for the set \(S_{4}\cup S_{5}\cup S_{6}\), and using 3-colorings (and the classification by Gordon [7] mentioned above). The only remaining part was whether \(\tau^{2}(t_{3,4})\in S_{2}\) is equivalent to \(\tau^{4}(t_{2,3})\in S_{5}\) or not, where the torus knot \(t_{3,4}\) is nothing but the Montesinos knot \(M((2,-1),(3,1),(3,1))\). Note that, for both 2-knots, the order of the knot quandle is 24 and the number of 3-colorings is 9.
### Classification as oriented \(2\)-knots
We classify twist spins whose knot quandles are finite as oriented 2-knots. For an oriented knot \(\mathcal{K}\), we denote by \(-\mathcal{K}\) (or \(\mathcal{K}!\), respectively) the oriented knot obtained from an oriented knot \(\mathcal{K}\) by inverting the orientation (or taking the mirror image) of \(\mathcal{K}\). An oriented knot \(\mathcal{K}\) is said to be _invertible_ (resp. \((+)\)_-amphicheiral_) if \(\mathcal{K}\) is equivalent to \(-\mathcal{K}\) (resp. \(\mathcal{K}!\)). Litherland [15] showed that, for an oriented 1-knot \(k\) and an integer \(n\), \(\tau^{n}(-k)\) is equivalent to \((\tau^{n}(k))!\) and \(\tau^{n}(k!)\) is equivalent to \(-(\tau^{n}(k))\) as oriented 2-knots. Hence, if the oriented 1-knot \(k\) is invertible (resp. \((+)\)-amphicheiral), then the oriented 2-knot \(\tau^{n}(k)\) is \((+)\)-amphicheiral (resp. invertible). Although it is not known whether or not the converse holds in general, we can say the following for twist spins whose knot quandles are finite:
**Proposition 4.3**.: _Suppose that \(Q(\tau^{n}(k))\) is finite for a non-trivial oriented \(1\)-knot \(k\) and an integer \(n\) greater than \(1\)._
1. \(k\) _is invertible and_ \(\tau^{n}(k)\) _is_ \((+)\)_-amphicheiral._
2. \(k\) _is_ \((+)\)_-amphicheiral if and only if_ \(\tau^{n}(k)\) _is invertible._
Proof.: We prove the case (1). Since \(Q(\tau^{n}(k))\) is finite, it follows from Subsection 4.1 that the 1-knot \(k\) is a 2-bridge knot or a Montesinos knot of type \(((2,\beta_{1}),(3,\beta_{2}),(\alpha_{3},\beta_{3}))\), where \(\alpha_{3}=3\) or 5, which implies that 1-knot \(k\) is invertible (see [2]). Hence, by [15], the 2-knot \(\tau^{n}(k)\) is \((+)\)-amphicheiral.
We prove the case (2). Since \(Q(\tau^{n}(k))\) is finite, the 2-knot \(\tau^{n}(k)\) is an element of \(S_{1}\cup\cdots\cup S_{6}\). Gordon [10] showed that, when the 2-knot \(\tau^{n}(k)\) is an element of \(S_{1}\), the 1-knot \(k\) is \((+)\)-amphicheiral if and only if the 2-knot
\(\tau^{n}(k)\) is invertible. Gordon [10] also showed that the \(n\)-twist spin of any torus knot is not invertible for \(n>1\). Since it is known that any torus knot is not \((+)\)-amphicheiral, when the \(2\)-knot \(\tau^{n}(k)\) is an element of \(S_{4}\cup S_{5}\cup S_{6}\), the \(1\)-knot \(k\) is not \((+)\)-amphicheiral and the \(2\)-knot \(\tau^{n}(k)\) is not invertible. Hence, it is sufficient to consider the case where the \(2\)-knot \(\tau^{n}(k)\) is an element in \(S_{2}\cup S_{3}\). In this case, we have that \(n=2\) and the \(1\)-knot \(k\) is a Montesinos knot of type \(((2,\beta_{1}),(3,\beta_{2}),(\alpha_{3},\beta_{3}))\), where \(\alpha_{3}=3\) or \(5\). Since it is known that the \(1\)-knot \(k\) is not \((+)\)-amphicheiral, all we need is to prove that the \(2\)-knot \(\tau^{2}(k)\) is not invertible. To prove it, we recall the following proposition due to Gordon [10]:
**Proposition 4.4**.: _Let \(M\) be a closed, connected, orientable 3-manifold and \(F\) a fibered 2-knot whose fiber is the once punctured \(M\). If \(F\) is invertible, then there exists an orientation reversing self homotopy equivalence of \(M\)._
The fiber \(M_{k}^{2}\) of the \(2\)-knot \(\tau^{2}(k)\) is known to have a finite non-abelian fundamental group. It follows from the proof of Theorem 8.2 of [21] that the fiber \(M_{k}^{2}\) does not admit an orientation reversing self homotopy equivalence. Thus, by Proposition 4.4, we see that the \(2\)-knot \(\tau^{2}(k)\) is not invertible.
Combining Theorem 4.1 with Proposition 4.3, we have the following:
**Theorem 4.5**.: _Let \(k\), \(k^{\prime}\) be non-trivial oriented \(1\)-knots and \(n\), \(n^{\prime}\) integers greater than \(1\). Suppose that \(Q(\tau^{n}(k))\) and \(Q(\tau^{n^{\prime}}(k^{\prime}))\) are finite. Then \(\tau^{n}(k)\) is equivalent to \(\tau^{n^{\prime}}(k^{\prime})\) as oriented \(2\)-knots if and only if \(n=n^{\prime}\) and \(k\) is equivalent to \(k^{\prime}\) as oriented \(1\)-knots._
## Appendix A Proof of Proposition 3.2
In this appendix, we will show Proposition 3.2. For a non-trivial oriented \(1\)-knot \(k\) in \(S^{3}\) and an integer \(n\) greater than \(1\), let \(M_{k}^{n}\) be the \(n\)-fold cyclic branched covering space of \(S^{3}\) branched along \(k\), and \(\varphi\) the group automorphism of \(\pi_{1}(M_{k}^{n})\) induced by the monodromy of the complement \(S^{4}\backslash\tau^{n}(k)\) of the fibered \(2\)-knot \(\tau^{n}(k)\). Then, by [4, Proposition A.11.] and [23, Lemma 2.3], we have the following:
**Lemma A.1**.: _If the universal covering space of \(M_{k}^{n}\) is homeomorphic to \(\mathbb{R}^{3}\), then the order of \(\varphi\) is equal to \(n\)._
Using the lemma above, we give a proof of Proposition 3.2, in which we call a \(1\)-knot in \(S^{3}\) as a knot for simplicity.
Proof of Proposition 3.2.: If \(\operatorname{order}(\varphi)=1\), the quandle \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\) is trivial, that is, it holds that \(x*y=x\) for any \(x,y\in\pi_{1}(M_{k}^{n})\). Since \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\) is isomorphic to the knot quandle \(Q(\tau^{n}(k))\), the quandle \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\) is connected. This implies that the cardinality of \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi)\) is \(1\), that is, \(\pi_{1}(M_{k}^{n})\) is trivial. By the Smith conjecture, the knot \(k\) must be the unknot. Hence, we have \(\operatorname{order}(\varphi)\neq 1\).
Since \(\varphi\) is induced by the canonical generator of the transformation group of \(M_{k}^{n}\), we see that \(\varphi^{n}\) is the identity map, and hence \(n\) is divisible by \(\operatorname{order}(\varphi)\). Then, if \(n\) is a prime number, it follows from \(\operatorname{order}(\varphi)\neq 1\) that \(\operatorname{order}(\varphi)=n\). Hence, we assume that \(n\) is a composite number.
If \(k\) is the composite knot of \(k_{1}\) and \(k_{2}\), then the fundamental group \(\pi_{1}(M_{k}^{n})\) is the free product of \(\pi_{1}(M_{k_{1}}^{n})\) and \(\pi_{1}(M_{k_{2}}^{n})\) (see [25, Theorem 2]). For \(i=1,2\), let \(\varphi_{i}\) be the group automorphism induced by the canonical generator of the transformation group of \(M_{k_{i}}^{n}\), and \(\eta_{i}\) the injective group homomorphism from \(\pi_{1}(M_{k_{i}}^{n})\) to \(\pi_{1}(M_{k}^{n})\). It holds that the restriction \(\varphi|_{\pi_{1}(M_{k_{i}}^{n})}\) coincides with the group homomorphism \(\eta_{i}\circ\varphi_{i}:\pi_{1}(M_{k_{i}}^{n})\to\pi_{1}(M_{k}^{n})\) for \(i\in\{1,2\}\). Thus, if \(\operatorname{order}(\varphi_{1})=\operatorname{order}(\varphi_{2})=n\), we see that \(\operatorname{order}(\varphi)=n\). Hence it is sufficient to consider the case where \(k\) is a prime knot, that is, \(k\) is a torus knot, a hyperbolic knot or a satellite knot. We note that, by the equivariant sphere theorem [18], the branched covering space \(M_{k}^{n}\) of the prime knot \(k\) is irreducible (see also [25, Theorem 2]), which will be used in Case 1 and Case 3 below.
Case 1. Consider the case where \(k\) is a torus knot. Since \(S^{3}\backslash k\) is a Seifert fibered space, \(M_{k}^{n}\) is also a Seifert fibered space. Thus, the universal covering space of \(M_{k}^{n}\) is either \(S^{3},S^{2}\times\mathbb{R}\) or \(\mathbb{R}^{3}\) (see [27, Lemma 3.1]).
* Suppose that the universal covering space of \(M_{k}^{n}\) is \(S^{2}\times\mathbb{R}\). Then \(\pi_{2}(M_{k}^{n})\) is non-trivial. By the sphere theorem [22], it holds that \(M_{k}^{n}\) is not irreducible. This is a contradiction.
* Suppose that the universal covering space of \(M_{k}^{n}\) is \(S^{3}\). Since \(n\) is a composite number, we see that \(k=t_{2,3}\) and \(n=4\). Then we have \(\operatorname{order}(\varphi)\mid 4\). Moreover, it follows from \(\operatorname{order}(\varphi)\neq 1\) that \(\operatorname{order}(\varphi)=2\) or \(4\). It is known that \(Q(\tau^{4}(t_{2,3}))\) has the following presentation (cf. [26]): \[\langle a,b\mid(a*b)*a=b,a*^{4}b=a\rangle.\] If \(\operatorname{order}(\varphi)=2\), then \(\operatorname{type}(Q(\tau^{4}(t_{2,3})))=2\) by Proposition 2.1. Hence, the following is also a presentation of \(Q(\tau^{4}(t_{2,3}))\): \[\langle a,b\mid(a*b)*a=b,a*^{4}b=a,a*^{2}b=a,b*^{2}a=b\rangle.\] Using Tietze's moves, we see that \(Q(\tau^{4}(t_{2,3}))\) has the following presentation: \[\langle a,b\mid(a*b)*a=b,a*^{2}b=a\rangle.\] This implies that \(Q(\tau^{4}(t_{2,3}))\) is quandle isomorphic to \(Q(\tau^{2}(t_{2,3}))\). On the other hand, by Inoue's result, we have \[|Q(\tau^{4}(t_{2,3}))|=|\pi_{1}(M_{t_{2,3}}^{4})|=24\ \ \text{and}\ \ |Q(\tau^{2}(t_{2,3}))|=|\pi_{1}(M_{t_{2,3}}^{2})|=3.\] This is a contradiction. Hence we have \(\operatorname{order}(\varphi)=4\).
* Suppose that the universal covering space of \(M_{k}^{n}\) is \(\mathbb{R}^{3}\). By Lemma A.1, we have \(\operatorname{order}(\varphi)=n\).
Case 2. Consider the case where \(k\) is a hyperbolic knot. Since \(n\) is a composite number, \(n\) is greater than \(3\). As a consequence of the orbifold
theorem, it holds that \(M_{k}^{n}\) is a hyperbolic manifold (see [1, 5]). Thus, the universal covering space of \(M_{k}^{n}\) is \(\mathbb{R}^{3}\). By Lemma A.1, we have \(\operatorname{order}(\varphi)=n\).
Case 3. Consider the case where \(k\) is a satellite knot. In this case, the complement \(E(k)\) of \(k\) contains an incompressible torus. Then, by [9], we have either that \(M_{k}^{n}\) is sufficiently large, or that \(S^{3}\) and \(M_{k}^{n}\) contain a non-separating 2-sphere. Since \(S^{3}\) does not contain a non-separating 2-sphere, we see that \(M_{k}^{n}\) is sufficiently large. Recall that \(M_{k}^{n}\) is also irreducible as mentioned above. Since it is shown in [28] that the universal covering space of the interior of a sufficiently large irreducible 3-manifold is \(\mathbb{R}^{3}\), the universal covering space of \(M_{k}^{n}\) is \(\mathbb{R}^{3}\). By Lemma A.1, we have \(\operatorname{order}(\varphi)=n\).
## Appendix B Branched twist spin
In this appendix, we compute the types of the knot quandles for a certain class of 2-knots called branched twist spins including all twist spins. We also give an alternative proof of [6, Theorem 1.1] quoted below as Theorem B.2, which followed from [6, Theorem 4.1] on knot group representations for branched twist spins.
A 2-knot that is invariant under a circle action on the 4-sphere \(S^{4}\) is called a _branched twist spin_. Let \(\tau^{n,s}(k)\) be the branched twist spin obtained from an oriented 1-knot \(k\) and a pair of coprime positive integers \(n\) and \(s\) with \(n>1\). Since the branched twist spin \(\tau^{n,s}(k)\) is the branch set of the \(s\)-fold cyclic branched covering space of \(S^{4}\) branched along the twist spin \(\tau^{n}(k)\), it is a fibered 2-knot whose fiber is the once punctured \(n\)-fold cyclic branched covering space of \(S^{3}\) branched along \(k\) and whose monodromy is the \(s\)-times composite of the canonical generator of the transformation group of \(M_{k}^{n}\). Note that \(\tau^{n,1}(k)\) is nothing but \(\tau^{n}(k)\). By [11, Theorem 3.1], the knot quandle \(Q(\tau^{n,s}(k))\) is quandle isomorphic to the generalized Alexander quandle \(\operatorname{GAlex}(\pi_{1}(M_{k}^{n}),\varphi^{s})\), where \(\varphi\) is the group automorphism of \(\pi_{1}(M_{k}^{n})\) as before. Then, for a non-trivial oriented 1-knot \(k\), we have the following:
**Theorem B.1**.: _The type of \(Q(\tau^{n,s}(k))\) is equal to \(n\)._
Proof.: It follows from Proposition 3.2 and the coprimeness of \(n\) and \(s\) that \(\operatorname{order}(\varphi^{s})=\operatorname{order}(\varphi)=n\). Hence, by Proposition 2.1, the type of \(Q(\tau^{n,s}(k))\) is equal to \(n\).
Fukuda [6] studied dihedral group representations of the knot group and showed the following as its application:
**Theorem B.2** ([6, Theorem 1.1]).: _Let \(k_{1}\) and \(k_{2}\) be non-trivial oriented \(1\)-knots, and \(\tau^{n_{1},s_{1}}(k_{1})\) and \(\tau^{n_{2},s_{2}}(k_{2})\) be branched twist spins. If \(n_{1}\) and \(n_{2}\) are different, then \(\tau^{n_{1},s_{1}}(k_{1})\) and \(\tau^{n_{2},s_{2}}(k_{2})\) are not equivalent._
Here we give an alternative proof of his theorem by using the knot quandle rather than the knot group.
Proof of Theorem B.2.: We prove the contraposition. If two branched twist spins \(\tau^{n_{1},s_{1}}(k_{1})\) and \(\tau^{n_{2},s_{2}}(k_{2})\) are equivalent, then the two knot quandles \(Q(\tau^{n_{1},s_{1}}(k_{1}))\) and \(Q(\tau^{n_{2},s_{2}}(k_{2}))\) are quandle isomorphic, and hence we have
\[n_{1}=\operatorname{type}(Q(\tau^{n_{1},s_{1}}(k_{1})))=\operatorname{type}(Q( \tau^{n_{2},s_{2}}(k_{2})))=n_{2}\]
by Theorem B.1.
## Acknowledgement
The authors would like to thank Seiichi Kamada and Makoto Sakuma for their helpful comments. The first author was supported by JSPS KAKENHI Grant Number 17K05242 and 21K03220. The second author was supported by JSPS KAKENHI Grant Number 21J21482.
|
2303.02588 | SatIn: Hardware for Boolean Satisfiability Inference | This paper describes SatIn, a hardware accelerator for determining boolean
satisfiability (SAT) -- an important problem in many domains including
verification, security analysis, and planning.
SatIn is based on a distributed associative array which performs short,
atomic operations that can be composed into high level operations.
To overcome scaling limitations imposed by wire delay, we extended the
algorithms used in software solvers to function efficiently on a distributed
set of nodes communicating with message passing.
A cycle-level simulation on real benchmarks shows that SatIn achieves an
average 72x speedup against Glucose, the winner of 2016 SAT competition, with
the potential to achieve a 113x speedup using two contexts.
To quantify SatIn's physical requirements, we placed and routed a single
clause using the Synopsys 32nm educational development kit.
We were able to meet a 1ns cycle constraint with our target clause fitting in
4867um^2 and consuming 63.8uW of dynamic power; with a network, this
corresponds to 100k clauses consuming 8.3W of dynamic power (not including
leakage or global clock power) in a 500mm^2 32nm chip. | Chenzhuo Zhu, Alexander C. Rucker, Yawen Wang, William J. Dally | 2023-03-05T06:30:32Z | http://arxiv.org/abs/2303.02588v1 | # SatIn: Hardware for Boolean Satisfiability Inference
###### Abstract
This paper describes SatIn, a hardware accelerator for determining boolean satisfiability (SAT)--an important problem in many domains including verification, security analysis, and planning. SatIn is based on a distributed associative array which performs short, atomic operations that can be composed into high level operations. To overcome scaling limitations imposed by wire delay, we extended the algorithms used in software solvers to function efficiently on a distributed set of nodes communicating with message passing. A cycle-level simulation on real benchmarks shows that SatIn achieves an average 72x speedup against Glucose [1], the winner of 2016 SAT competition, with the potential to achieve a 113x speedup using two contexts. To quantify SatIn's physical requirements, we placed and routed a single clause using the Synopsys \(32\,\mathrm{nm}\) educational development kit. We were able to meet a \(1\,\mathrm{ns}\) cycle constraint with our target clause fitting in \(4867\,\mathrm{\mu m^{2}}\) and consuming \(63.8\,\mathrm{\mu W}\) of dynamic power; with a network, this corresponds to 100k clauses consuming \(8.3\,\mathrm{W}\) of dynamic power (not including leakage or global clock power) in a \(500\,\mathrm{mm^{2}}\)\(32\,\mathrm{nm}\) chip.
## I Introduction
Boolean satisfiability (SAT) is an NP-complete problem that forms the core of many practical problems, including planning and formal design verification [2, 3, 4]. SAT attempts to find an assignment of boolean values to variables that results in a given formula being true; any problem in NP can be reduced to SAT in polynomial time [5]. Solvers are under continuous development, with an annual SAT competition highlighting the latest advances [6]. Because SAT solvers are heavily optimized, many practical problems are translated into SAT instead of being solved directly [7, 8].
Prior work has been done to accelerate SAT using custom hardware [9, 10]. However, both the overall speedup against state of the art software solvers and the maximum problem size are very limited. This is due to two reasons: traditional architectures store data (clauses) separately from processing elements, making it expensive to traverse all clauses once SAT problems get large enough, and most proposed hardware accelerators are only able to accelerate a single part of modern SAT solving algorithms. Because the remaining part still takes 10% of the total runtime, the overall speedup is limited to 10x.
To address these problems, we build custom hardware for clause storage and basic clause operations to eliminate clause data movement, even for large SAT problems. Instead of moving clause data, we deliver variable assignments to each clause through an interconnection network (Figure 1). The latency of the interconnection network is hidden by allowing asynchronous implications. Furthermore, our clause units are reused for multiple computationally intensive parts of modern SAT algorithms, minimizing CPU load while leaving flexibility for heuristic tuning. With more than 99% of the baseline SAT algorithm accelerated by our hardware, we achieve an average 72x speedup with a single context against the 2016 SAT competition winner [1]. Our design is easy to expand for very large SAT problems, either with a larger chip or by using multiple chips.
To make our accelerator distributed, and therefore scalable, we introduce several modifications to single threaded SAT algorithms. These allow key steps of the algorithm, such as constraint propagation and clause learning, to run in parallel while being correct. To evaluate the speedup from our accelerator, we use a custom cycle-approximate simulator to run several problems from the 2016 SAT competition [6]. We compare the speedup against Glucose, using the same decision heuristics and learned clauses to ensure a fair comparison [1]. We extrapolate the results from this speedup to identify an ideal speedup for adding a second, independent thread. We then evaluate the cycle time, area, and power of our accelerator by using the Synopsys \(32\,\mathrm{nm}\) educational development kit to place and route a clause unit and router and analytically model the network links. This shows that, in a \(500\,\mathrm{mm^{2}}\)\(32\,\mathrm{nm}\) die, we can fit up to 100,000 clauses with a total power consumption of \(8.25\,\mathrm{W}\). Scaling this to a modern 16nm process should allow upwards of 400,000 clauses on a single die of comparable size.
This paper makes the following contributions:
1. We analyze the important characteristics of modern SAT problems and use these characteristics to motivate our hardware design,
2. We develop a distributed SAT algorithm, optimized for minimal synchronization overhead that permits the use of existing heuristics from state-of-the art software SAT solvers,
3. We develop the architecture of SatIn, a SAT accelerator that uses a distributed architecture of many simple clause units connected by a NoC to accelerate the bulk of the SAT problem, and
4. We evaluate SatIn on problems from the 2016 SAT competition. Our evaluation includes a performance comparison against Glucose, the software program that won the 2016 competition. We also include area and power numbers from implementing SatIn in a 32nm technology through place and route.
We start by describing modern SAT algorithms and our modifications in Section II. System architecture and clause architecture of the accelerator are described in detail in Section III and Section IV. Section V describes our evaluation methodology and is followed by experimental results in Section VI. We discuss the related work in Section VII and conclude in Section IX.
## II SAT Algorithms
For this work, we focus on SAT problems that are expressed in Conjunctive Normal Form (CNF). This is a product of sums representation: a list of _clauses_, where each clause is a list of _literals_. Each literal is an assignment of a _variable_ to a _polarity_, either negated or not. For the problem to be satisfied, every clause must be satisfied; for a clause to be satisfied, at least one literal in the clause must be true. CNF allows the solver to reason about the formula more efficiently, while still being a reasonable target for translating practical problems. In this section, we will first describe common algorithms used in modern software SAT solvers and then propose our modification to each algorithm for distributed execution.
### _Software Algorithms_
Most deterministic SAT algorithms today are based on the original algorithm by Davis, Putnam, Logemann, and Loveland (DPLL), which uses a tree search to explore the set of potential solutions [11]. Each node in a tree represents the decision of a single variable to be either true or false, and the algorithm explores the tree, turning back as soon as it finds that a particular decision precludes a satisfying assignment. Over the past decades, extensions to DPLL and smart heuristics have been proposed to allow faster SAT solvers.
#### II-A1 Boolean Constraint Propagation
The most important step in a DPLL-based SAT solver is Boolean Constraint Propagation (BCP). When a CNF clause has all but one literal decided, and no literals are decided as true, it is referred to as a _unit_ clause. Unit clauses must have their remaining literal decided as true to have the problem satisfied, so the literal is decided to be true and then applied to all other clauses. For example, consider a problem with clause \(x_{0}\lor x_{1}^{\prime}\lor x_{2}^{\prime}\) and the existing decisions \(x_{0}^{\prime}\quad(x_{0}\to 0)\) and \(x_{1}\quad(x_{1}\to 1)\). For this clause to be satisfied, \(x_{2}\) must be 0, and this logical _implication_ is broadcast to the remaining clauses. However, the implication \(x_{2}^{\prime}\) depends on \(x_{0}^{\prime},x_{1}\). Therefore, if either \(x_{0}^{\prime}\) or \(x_{1}\) is undone, then \(x_{2}^{\prime}\) must also be undone. Software solvers are able to efficiently implement this algorithm using the two-watched literal scheme introduced by Chaff [12]. Instead of checking all literals in all clauses, these solvers check only two, which is sufficient to ensure that a clause has not become unit. Once a watched literal is decided, the clause is satisfied, unit (if all unwatched literals are decided), or another literal is swapped in to be watched. This is increasingly efficient for longer clauses, because most implications do not need to be handled at all clauses that they impact.
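For reference, the software baseline that SatIn departs from can be written compactly. The following is a minimal Python sketch of two-watched-literal propagation (the names and data layout are ours, illustrative only, and not Chaff's or Glucose's code); it assumes each clause is a list of signed integer literals whose first two entries are watched, that `watches[lit]` lists the clauses watching `lit`, and that the triggering decision has already been recorded in `assign` and pushed onto `trail`:

```python
def value(lit, assign):
    """Truth value of a signed literal under a partial assignment (None = unassigned)."""
    v = assign.get(abs(lit))
    return None if v is None else (v if lit > 0 else not v)

def propagate(watches, assign, trail):
    """Propagate all pending assignments on `trail`; return a conflicting clause or None."""
    head = 0
    while head < len(trail):
        lit = trail[head]; head += 1
        false_lit = -lit                               # clauses watching -lit may become unit
        for clause in list(watches[false_lit]):
            if clause[0] == false_lit:                 # keep the falsified watch at position 1
                clause[0], clause[1] = clause[1], clause[0]
            if value(clause[0], assign) is True:
                continue                               # other watch is true: clause satisfied
            for i in range(2, len(clause)):            # try to find a replacement watch
                if value(clause[i], assign) is not False:
                    clause[1], clause[i] = clause[i], clause[1]
                    watches[false_lit].remove(clause)
                    watches[clause[1]].append(clause)
                    break
            else:
                if value(clause[0], assign) is False:
                    return clause                      # every literal is false: conflict
                assign[abs(clause[0])] = clause[0] > 0  # unit clause: imply the last literal
                trail.append(clause[0])
    return None
```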
However, it is possible that the logical results obtained from BCP may produce conflicting implications. Consider the following clauses:
\[x_{0}\lor x_{1}^{\prime}\lor x_{2}^{\prime}\]
\[x_{0}\lor x_{1}^{\prime}\lor x_{2}.\]
If \(x_{0}^{\prime}\) and \(x_{1}\) are decided, then the two clauses will attempt to propagate \(x_{2}\) and \(x_{2}^{\prime}\). Because a variable cannot logically be in two states, this results in a _conflict_, and either \(x_{0}\) or \(x_{1}^{\prime}\) must be true for a satisfying assignment. The problem is not satisfiable if there are no remaining branches of the search space to explore after a conflict.
#### II-A2 Decision Heuristics
A literal's decision may trigger a series of implications. If none of these implications result in conflicts and all clauses are not yet satisfied, another literal needs to be decided to continue the search. To associate implications to their triggering decision, a _decision level_ is kept globally and incremented when a literal is decided. SAT solvers rely on decision heuristics like the Variable State Independent Decaying Sum (VSIDS) heuristic, first introduced by the Chaff solver [12]. VSIDS keeps a periodically decaying sum for each literal, which is incremented whenever that literal is part of a learned clause. When the solver must make a decision, it chooses the undecided literal with the highest count. In essence, VSIDS facilitates search progress by attempting to satisfy recent conflict clauses first.
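The bookkeeping behind this heuristic is small; the sketch below is illustrative Python (constants and names are typical MiniSAT-style choices, not SatIn's, and the score is kept per variable rather than per literal as in Chaff's original formulation). Decay is implemented, as usual, by growing the increment instead of touching every counter:

```python
class VSIDS:
    """Illustrative activity bookkeeping for variable selection (sketch only)."""
    def __init__(self, num_vars, decay=0.95):
        self.activity = {v: 0.0 for v in range(1, num_vars + 1)}
        self.inc = 1.0
        self.decay = decay

    def bump(self, var):                      # called for each variable in a learned clause
        self.activity[var] += self.inc
        if self.activity[var] > 1e100:        # rescale everything to avoid overflow
            for v in self.activity:
                self.activity[v] *= 1e-100
            self.inc *= 1e-100

    def on_conflict(self):                    # decaying all scores == growing the increment
        self.inc /= self.decay

    def pick(self, assign):                   # branch on the highest-activity free variable
        free = [v for v in self.activity if assign.get(v) is None]
        return max(free, key=self.activity.get) if free else None
```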
Fig. 1: Top level architecture of SatIn. Central unit and clause banks are connected through an interconnection network.
#### II-A3 Conflict-Driven Clause Learning
The GRASP solver improves upon the original DPLL procedure by learning new clauses whenever conflicts occur [13]. The new clauses are added to the database, and are used to backtrack farther up the search tree after resolving a conflict. These learned clauses are used later to avoid reaching the same conflict in a different branch of the search space.
A conflict is detected when both literals for a variable are implied as true. The last literal assignment that led to both implications is defined as the first Unique Implication Point (1-UIP). The new clause is the negated 1-UIP literal and the negation of any literals from earlier decision levels that led, directly or indirectly, to the conflicting implication. Instead of backtracking to the previous decision level, the solver returns to the latest decision level where the learned clause is unit, generating an implication that sets the 1-UIP literal to true. BCP is then performed as usual.
For example, consider the three clauses
\[x_{0}\lor x_{1}\lor x_{2},\]
\[x_{1}\lor x_{3}^{\prime},\]
\[x_{2}^{\prime}\lor x_{3}.\]
Deciding \(x_{0}^{\prime}\) (decision level 1) triggers no implications. Deciding \(x_{1}^{\prime}\) (decision level 2) triggers the implication of \(x_{3}^{\prime}\). \(x_{0}^{\prime}\) and \(x_{1}^{\prime}\) together imply \(x_{2}\), which then triggers a conflicting implication of \(x_{3}\). In this example, the 1-UIP literal of the conflict pair is \(x_{1}^{\prime}\), and \(x_{0}^{\prime}\) is the only assignment from earlier decision levels contributing to this conflict. \(x_{1}\lor x_{0}\) is added to the database, and the solver backtracks to the point immediately after the decision of \(x_{0}^{\prime}\), which then generates the implication of \(x_{1}\).
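The 1-UIP computation itself is a short resolution walk backwards over the implication trail. The sketch below (Python, illustrative names, following the usual MiniSAT-style formulation rather than SatIn's distributed variant) assumes the propagation routine records `reason[v]`, the clause that implied variable `v`, together with `level[v]` and the ordered `trail` of true literals; applied to the three clauses above it returns the clause \(x_{1}\lor x_{0}\).

```python
def analyze(conflict, reason, level, trail, cur_level):
    """First-UIP conflict analysis; returns the learned clause as a list of literals."""
    learned, seen = [], set()
    todo = 0                                   # current-level literals left to resolve
    lit, clause = None, conflict
    idx = len(trail) - 1
    while True:
        for q in clause:
            if q == lit:                       # skip the literal this clause implied
                continue
            v = abs(q)
            if v in seen or level[v] == 0:     # already counted or top-level fact
                continue
            seen.add(v)
            if level[v] == cur_level:
                todo += 1
            else:
                learned.append(q)              # earlier-level literal goes into the clause
        while abs(trail[idx]) not in seen:     # walk back to the next marked literal
            idx -= 1
        lit, idx = trail[idx], idx - 1
        seen.discard(abs(lit))
        todo -= 1
        if todo == 0:                          # only one current-level literal left: the 1-UIP
            break
        clause = reason[abs(lit)]
    learned.append(-lit)                       # asserting literal: negation of the 1-UIP
    return learned
```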
#### II-A4 Clause Strengthening
Shorter learned clauses are desirable because they require less space and become unit more quickly. MiniSAT introduced a clause strengthening procedure to shorten learned clauses by recursively applying a logical simplification [14]. Strengthening identifies literals in the learned clause that are directly or indirectly implied by other literals in the learned clause and removes them. In addition, minimization with binary resolution is applied on the asserting literal in the learned clause to further reduce learned clause length [15].
#### II-A5 Clause Deletion and Restarts
Another important heuristic used by modern SAT solvers is the clause deletion heuristic: although the solver learns many clauses during its execution, not all of them prove to be equally useful. An effective deletion heuristic allows a solver to occasionally prune the database of learned clauses [1, 12]. In Glucose, each learned clause is associated with a Literal Blocks Distance (LBD) score which is the number of different decision levels of literals contained in that clause. Clauses with lower LBD are more desirable because their literals can be assigned with fewer decisions. Glucose aggressively cleans out its database by removing the half of the learned clauses with the highest LBD scores after a certain number of conflicts.
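In code, the LBD score and the halving step amount to only a few lines (Python sketch with illustrative names; Glucose additionally caches the LBD when the clause is learned, i.e. while all of its literals are still assigned, and protects low-LBD "glue" clauses, which we omit here):

```python
def lbd(clause, level):
    """Literal Blocks Distance: number of distinct decision levels among the clause's literals."""
    return len({level[abs(lit)] for lit in clause})

def reduce_db(learned_clauses, level):
    """Keep the half of the learned clauses with the lowest LBD (sketch only)."""
    learned_clauses.sort(key=lambda c: lbd(c, level))
    del learned_clauses[len(learned_clauses) // 2:]
```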
Additionally, solvers such as Glucose periodically restart while keeping their learned clauses and heuristics. This is important to avoid remaining "stuck" in an unproductive branch of the search tree, which can happen if an early decision precludes a satisfying assignment [12, 13].
#### II-A6 Parallelism
Several methods of parallelism have been proposed and implemented for software SAT solvers. The simplest case is that the problem is divided into sub-problems by assigning some combination of fixed variables to processes: for example, work could be distributed among eight processes by assigning \(x_{0},x_{1},x_{2}\) to the first process, \(x_{0},x_{1},x_{2}^{\prime}\) to the second, and so forth. However, this ignores the fact that a satisfying solution, if one exists, may only be found in a subset of branches and others might be trivially unsatisfiable. Therefore, this static work distribution may result in some processes finishing significantly earlier than others, and these solvers must partition the problem space into potentially millions of sub-problems and dynamically share the work [16]. Another possibility involves using multiple processes, each attempting to solve the same problem; this is the approach that Glucose and many other parallel solvers use [17]. The processes are initialized with different random number generator states, and potentially different heuristics, before racing to completion. While the processes are running, they periodically select and share learned clauses. Sharing learned clauses allows one process to learn from conflicts another process has encountered, and the use of different heuristics means that at least one thread of the solver is likely to have heuristics that are well-adapted to the problem.
### _Algorithm Modifications_
Modern software solvers are designed to be efficient running on high performance CPUs. However, they are not suitable for direct hardware implementation. In order to make an accelerator run efficiently, we analyzed major parts of modern SAT solvers and modified them to use the parallelism present in hardware.
#### II-B1 Distributed Boolean Constraint Propagation
With dedicated hardware support for all clauses in a SAT problem, one could perform a propagation on each clause simultaneously in a single cycle. However, it takes much longer for the propagated literal to arrive due to wire delay. Instead of handling propagations serially, we propose to run multiple propagations asynchronously and synchronize when there are no more propagations or there is a conflict. Propagation latency can thus be hidden when there are enough ongoing implications to keep all clauses busy. An illustration of distributed BCP is shown in Figure 2.
With distributed BCP, clauses detect conflicts locally and broadcast a global stop signal, eliminating a round trip through the central unit. This typically happens when a clause receives a propagation message with a variable marked as decided
to the opposite polarity from a variable already locally decided. When variable polarities differ after a conflict, the resulting implications are logically meaningless. Therefore, for effective clause learning, the accelerator must identify the earliest conflict pair. To address this problem, we introduce a logical _implication level_ as an indicator of implication dependences and determine the conflict pair used for clause learning at a stand-alone central unit; this is similar to the use of logical clocks for providing a happens-before relationship in distributed systems [18].
The implication level is a number associated with each BCP message. Each propagation variable comes with an implication level \(l_{p}\), and we maintain a shared implication level \(l_{i}\) for each clause bank. After each propagation, the shared implication level is refreshed as \(\max(l_{p},l_{i})\), and outgoing propagations are sent with level \(l_{i}+1\). The implication level of an implied variable \(x\) is therefore greater than that of all variables \(x\) depends on, and the central unit can then easily maintain a single implication trail. The correct conflict pair for learning is then the pair of conflicting implications with the lowest implication level, which is guaranteed not to be the result of a prior conflict.
Another function of implication levels is to provide unique reason clauses in distributed BCP. Software solvers like Glucose recursively track reason clauses for both clause learning and clause strengthening, and have exactly one clause as the reason for each implied variable. Uniqueness of reason clauses is guaranteed when variables are propagated sequentially because each variable can only be propagated once. For distributed BCP, two clauses at different corners of the chip can trigger the same implication for one variable. In this case, we depend on the central unit to specify a unique reason clause for the implication and mark all other clauses as not reason clauses. This is done by discarding all implications except the one with the lowest level; logically, this is safe because the lowest-level implication already establishes \(x\), so the higher-level implications are redundant.
#### Ii-B2 Distributed Clause Learning
In MiniSAT and Glucose, clause learning is performed by recursively tracking reason clauses starting from the conflict pair. Literals implied in previous decision levels are negated and added to the learned clause, and literals from the current decision level are reasoned further. The tracking stops when a single unreasoned literal is found as the cut between the last decision and the conflict, or Unique Implication Point (UIP). The solver then jumps back to the latest level that will leave one literal in the learned clause unset, and sets it to satisfy the learned clause and start a new round of implications [13].
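For orientation, here is a compact software-style sketch of that first-UIP loop (data structures and names are ours, not MiniSAT's or Glucose's internals; literals are signed integers and the asserting literal is appended last):

```cpp
#include <cstdlib>
#include <set>
#include <vector>

// Simplified first-UIP conflict analysis (illustrative only).
// 'trail' lists literals in the order they became true; 'reason[v]' is the
// clause that implied variable v; 'level[v]' is v's decision level.
std::vector<int> firstUIP(std::vector<int> clause,            // conflicting clause
                          const std::vector<int>& trail,
                          const std::vector<std::vector<int>>& reason,
                          const std::vector<int>& level,
                          int currentLevel) {
    std::vector<int> learnt;   // literals from earlier decision levels
    std::set<int> seen;        // variables already accounted for
    int pathCount = 0;         // current-level variables still to be resolved
    int idx = static_cast<int>(trail.size()) - 1;
    int p = 0;                 // trail literal being resolved (0 = none yet)

    while (true) {
        for (int lit : clause) {
            if (lit == p) continue;                      // skip the implied literal itself
            int v = std::abs(lit);
            if (seen.count(v) || level[v] == 0) continue;
            seen.insert(v);
            if (level[v] == currentLevel) ++pathCount;   // will be reasoned further
            else learnt.push_back(lit);                  // goes straight into the learned clause
        }
        while (!seen.count(std::abs(trail[idx]))) --idx; // next marked trail literal
        p = trail[idx--];
        seen.erase(std::abs(p));
        if (--pathCount == 0) break;                     // p is the first UIP
        clause = reason[std::abs(p)];                    // resolve against its reason clause
    }
    learnt.push_back(-p);      // the asserting literal (negation of the UIP)
    return learnt;
}
```

The backtrack level is then the highest decision level among the non-asserting literals of the learned clause, matching the backjumping behaviour described above.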
As described in Section II-B1, our distributed BCP implementation guarantees unique reason clauses, which are indicated by setting a bit within the clause. With our clause units, clause learning can be performed by our accelerator in the reverse order of BCP: after a conflict pair is determined by the central unit, a message is sent to each reason clause for the conflicting implications, to query about variables that triggered the implications. Each reason clause, upon receiving the query that matches its implication, generates a query message for each of the other variables in the clause implied after the current decision level. All messages are also received by the central unit to build the learned clause. Distributed clause learning reuses most of the hardware resources for BCP and requires only minimal storage in the central unit for the learned clause. For complicated problems, the number of queries on a single conflict easily grows to over a thousand, and the average number of cycles spent on each query is very close to 1.
#### Ii-B3 Backtracking
When a conflict occurs after a decision and a learned clause is generated, the central unit finds a decision level to backtrack to and cancels all variable decisions and implications after the backtrack level. Software solvers keep a global
Fig. 2: Distributed boolean constraint propagation algorithm. The propagation starts from the central unit (a) and is broadcast to all clause units in the network. Implications are triggered on the path of the propagation (d) and (e). Conflicts can be detected locally in the conflict clause (h).
array of variable assignments to make backtracking cheap. However, we keep variable assignments locally in each clause unit, requiring that cancelled variables be broadcast to all clause units for backtracking. To reduce this penalty, we maintain a _current_ bit per literal in all clause units indicating that the variable was decided or implied in the latest decision level. When there is a conflict, we can cancel all variables in the current decision level with one message instead of one per variable, but we still need to cancel variables decided earlier than the current decision level sequentially. As most cancelled variables are in the current level, adding the current bit saves around 90% of backtracking cycles.
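The effect of the per-literal current bit can be summarised with a small model (again our own sketch, not the RTL): one broadcast cancels an entire decision level, while older assignments still cost one CancelVar message each.

```cpp
#include <array>

// Illustrative model of backtracking state inside one clause unit.
struct ClauseLit {
    bool assigned = false;   // the literal's variable currently has a value
    bool current  = false;   // that value was set in the latest decision level
};

struct ClauseUnit {
    std::array<ClauseLit, 8> lits;

    // One broadcast cancels every assignment made in the latest decision level.
    void cancelCurrentLevel() {
        for (auto& l : lits)
            if (l.current) { l.assigned = false; l.current = false; }
    }
    // Assignments from older levels are cancelled one CancelVar message at a time.
    void cancelVariable(int slot) {
        lits[slot].assigned = false;
        lits[slot].current = false;
    }
    // At the start of a new decision level (CompleteDL) the current bits are cleared.
    void completeDecisionLevel() {
        for (auto& l : lits) l.current = false;
    }
};
```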
#### Ii-B4 Distributed Clause Strengthening
We implemented both clause strengthening mechanisms in a distributed style. We use an additional bit per literal to represent whether the literal is in the learned clause before strengthening. Once we learn a clause, the central unit sends messages to all clause units to mark all literals in the newly learned clause as learned. If a reason clause has all but the originally-implied literal marked as learned, it will generate a new message indicating its reason literal can be removed from the learned clause. Furthermore, this message will also mark the implication literal as learned in other reason clauses to enable removal of indirectly-implied literals from the learned clause. The accelerator delays clause strengthening and performs it in parallel with implication, an approach introduced by Wiering [19]. Clause strengthening must only be completed before the next backtracking to ensure consistent variable assignments. This is because the backtracking level after the conflict can be calculated without strengthening, and the learned clause will not produce any implications until the solver backtracks beyond this level.
#### Ii-B5 Multiple Contexts
Because any effective hardware SAT solver will periodically need to synchronize and engage software running on a general purpose core, it is likely that there will be idle periods between bursts of propagations. Although these can be kept as short as possible, we still want to ensure that we are making optimal use of the hardware by exploiting multi-threaded parallelism, where logically independent threads explore different parts of the search space while sharing learned clauses. However, while Glucose is only able to share a few clauses due to communication overhead, we are able to share _all_ clauses--by doing so, we avoid duplicating the literals in each clause, and only have to duplicate the status bits (an approximately 25% increase in state).
## III System Architecture
Our system consists of a large number of clauses and a central controller connected together via a network, as shown in Figure 1. To allow the clauses to be efficiently accessed, they are grouped into banks, where each bank shares hardware including the network router and sequencing logic to convert certain longer messages into the simpler commands that clauses are able to execute. The core of the central unit is a general purpose CPU executing the control flow shown in Figure 3, which allows the architecture to be flexible in which heuristics it supports. Because heuristics are critical to obtaining good performance from a SAT solver and are still an active area of research and commercial differentiation, an accelerator must be adaptable to be competitive. The central unit also includes a received implication sorter, to provide a consistent serial view of the parallel implication execution by ordering received BCP messages by implication level.
### _Network Design_
#### Iii-A1 Messages
We designed a set of messages that enables communication among clause banks as well as between clause banks and the central unit, which are combinations of the basic fields shown in Table I, with a complete list of messages in Table II. Unlike interconnection networks that use point-to-point routing, most of our messages are broadcast to all other routers in the network. Therefore, instead of using only network addresses, we use 3 routing option bits to allow:
1. routing back to the source clause bank,
2. broadcast, and
3. routing to the central unit.

Fig. 3: Control logic for central unit.
Combinations of these routing options suffice for most routing, and a network address can still be used when the message is bound for a specific clause bank. With the field lengths specified in Table I, we can address up to \(2^{20}\) clauses and variables. All messages except AddClauseMessage are short enough (\(<64\) bits) to fit into one flit, which minimizes network latency and simplifies router design. AddClauseMessage is divided into several packets and reassembled at the destination clause bank.
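A possible packing of the shared routing fields, as a hedged illustration (only the three routing option bits and the 20-bit address width are taken from the text; everything else about this layout is our assumption):

```cpp
#include <cstdint>

// Hypothetical layout of the routing portion of a message header.
struct RoutingHeader {
    std::uint32_t toSource  : 1;   // route back to the originating clause bank
    std::uint32_t broadcast : 1;   // broadcast to all routers
    std::uint32_t toCentral : 1;   // route to the central unit
    std::uint32_t address   : 20;  // explicit clause/variable address when needed
};
```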
#### Iii-B2 Router Design
We make two major modifications to network routers compared with traditional interconnection networks: a simpler flow control mechanism and broadcast routing. Virtual channels are not necessary because all messages fit in a single flit. A credit-based flow-control mechanism with input buffers is used. Deterministic, dimension-order routing is used to broadcast messages: sending them first to all routers in the same row and then to all routers in each column. To reduce latency, a single-cycle router is employed. Route calculation, traversal of wires to output multiplexers, and output arbitration are all performed in one cycle. Flits begin traversing the output channel on the next cycle. We explored both mesh and flattened butterfly topologies, with the central unit placed at the center of the network to reduce latency.
#### Iii-B3 Idle Detection
Another important function of our network is detecting global idle--the state when no messages are being processed or will be generated in the network for a given context. It is important to notify the central unit as soon as the network becomes idle to reduce synchronization overhead and allow the central unit to broadcast the next decision. We monitor activity in each clause bank and router and then combine these local idle signals in an AND tree. Dedicated idle signal wires have lower overhead than routing idle signals as messages.
\begin{table}
\begin{tabular}{l|c c c c c c c|l} Name & H & N & C & V & P & I & E & Bits & Description \\ \hline AddClause & 1 & 1 & 1 & \(\leq\)8 & \(\leq\)8 & 0 & 0 & \(\leq\)194 & Add a clause to a given clause bank \\ PropLit & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 62 & Propagate one literal with an implication level \\ CancelVar & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 25 & Cancel a previous decision or implication of a variable \\ CompleteDL & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 25 & Mark the beginning of a new decision level \\ Conflict & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 21 & Bring a stop to all clause banks after a conflict \\ NotReason & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 25 & Mark a clause as not a reason clause \\ Reason & 1 & 0 & 0 & 2 & 2 & 0 & 0 & 48 & Query about the literals that triggered an implication \\ Strengthen & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 27 & Mark a literal as removable in the learned clause \\ \end{tabular}
\end{table} TABLE II: The set of messages used in the network.
Fig. 4: Clause bank architecture.
## IV Clause Architecture
Because each individual clause handles far more incoming propagation or backtracking messages than any other type of message, the primary purpose of the clause unit is to provide an extremely fast and efficient associative search mechanism. Clauses also need to be as small as possible: the largest problem that can be solved is limited by the number of clause units. Therefore, we offload as much complexity as possible into the central controller and a per-bank sequencing unit that communicates with the network. Clauses have no internal sequencing, and operate with a small set of inputs and outputs to allow integration into large banks. Because hardware does not provide good support for variable length clauses, we restrict problems to have fixed 8-variable clauses. Variables can be introduced to provide logical relationships between split clauses, so this does not limit the generality of our accelerator. For example, consider the clause \(x_{0}\lor x_{1}\lor x_{2}\lor x_{3}\). By adding a variable \(c_{0}\), the problem can be rewritten as \((x_{0}\lor x_{1}\lor c_{0})\wedge(c_{0}^{\prime}\lor x_{2}\lor x_{3})\).
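The splitting transformation described above is mechanical; as a hedged sketch (our own helper, with `nextVar` supplying fresh connector-variable ids), the following generalises the four-literal example to the 8-wide hardware clauses:

```cpp
#include <cstddef>
#include <vector>

// Split a long clause into width-limited clauses chained by fresh connector
// variables, preserving satisfiability: (a v b v c v d) with width 3 becomes
// (a v b v c0) and (~c0 v c v d), matching the example in the text.
std::vector<std::vector<int>> splitClause(const std::vector<int>& clause,
                                          int& nextVar, int width = 8) {
    if (static_cast<int>(clause.size()) <= width) return {clause};
    std::vector<std::vector<int>> out;
    std::vector<int> cur;
    for (std::size_t i = 0; i < clause.size(); ++i) {
        cur.push_back(clause[i]);
        std::size_t remaining = clause.size() - (i + 1);
        if (static_cast<int>(cur.size()) == width - 1 && remaining >= 2) {
            int c = nextVar++;          // fresh connector variable
            cur.push_back(c);           // ... v c
            out.push_back(cur);
            cur = { -c };               // the next clause starts with c'
        }
    }
    out.push_back(cur);
    return out;
}
```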
### _Command Set_
The _decode_ block in Figure 4 decodes messages arriving over the network into the clause commands listed in Table III. To keep the clauses small and fast, the clause command set is extremely restricted.
To start solving a problem, the clause must first be reset; then, the variables are loaded one at a time using the setvar command. The clause is next marked as valid in a single context using the validate command, which also loads the connector variables and latches to describe which variables are present (necessary for clause strengthening). Propagations are then sent to the clause using provar, which will assert the propagation flag once it has only one unset variable; the getpro command reads that literal and clears the flag. When a context is ready to receive clauses from another context, it can use chkres to do so.
Once the getpro command executes, it sets the reason flag, used by the clause learning and strengthening procedures. Learning starts by using the getreason command to search the global array for the reason clause of a given literal; once it is found, the current decision level bits are read using getlvlbits. This allows the clause bank controller to determine which literals need to be reasoned further and which should be sent directly to the central unit to be included in the learned clause. However, as mentioned previously, there might be duplicate reason clauses for a literal which could lead to suboptimal conflict clauses and incorrect strengthened clauses. Therefore, clearreason allows the central unit to send messages to individual clauses telling them to ignore the fact that they are a reason clause.
To begin clause strengthening, the central unit sends the clause unit a strcopy command. This loads a register with a bit for each variable that is defined; strpro messages are then sent for all the literals in the learned clause. These will cause the corresponding bits to be cleared in the clause unit; once all but one bit are clear _and_ the single remaining bit corresponds to a reason variable, the clause will send a strpro message of its own. It is critical that the clause remember which bit was the reason bit, because this allows it to avoid cycles while strengthening that could result in too many variables being deleted. For example, consider a learned clause \(x_{0}\lor x_{1}\lor x_{2}\), where, on their own, \(x_{0}\) implies \(x_{1}\), \(x_{1}\) implies \(x_{2},\) and \(x_{2}\) implies \(x_{0}\). A naive strengthening approach may incorrectly produce an empty strengthened clause.
### _Connecting Variables_
Another important feature of the clause is its support for connecting variables without requiring messages to be processed by the clause bank controller. This means that each clause has connections to the previous and next clauses for propagation
\begin{table}
\begin{tabular}{l|l|l} Purpose & Name & Description \\ \hline \multirow{3}{*}{Setup} & setvar & Loads a variable into the clause \\ & validate & Marks the clause as valid in a given context, and loads the connector bits \\ & chkres & Marks the clause as valid in a given context if it is valid in any context \\ \hline \multirow{3}{*}{BCP} & provar & Propagates a variable in a given context \\ & getpro & Gets the un-decided variable and polarity in a given context \\ & getvar & Get the variable at a given index \\ & clearvar & Unsets a variable in a given context \\ & completed1 & Clears all recently-decided bits \\ \hline \multirow{3}{*}{Strengthening} & copystr & Copies the variable present bits for strengthening \\ & strprovar & Propagates a literal against the strengthening bits \\ & strgetpro & Gets the propagated variable from strengthening (to delete from the learned clause) \\ \hline \multirow{3}{*}{Learning} & clearreason & Marks the clause as not a reason clause \\ & getreason & If the clause is a reason clause and the literal matches, sets the reason flag \\ & getlvlbits & Gets the “decided this level” bits from the clause \\ \hline \multirow{3}{*}{Learning} & nop & No operation \\ \end{tabular}
\end{table} TABLE III: The command set for a clause.
and strengthening. If a clause becomes unit, it marks its connecting variable as satisfied, and sends a signal to the previous or next clause to mark the corresponding connecting variable as not satisfied. This arrangement leaves open one race condition: if two adjacent clauses become unit at the same time, they may mark the same variable as decided in two different polarities! However, this will be resolved in the next cycle, because the signal from the other clause will un-satisfy the connecting variable, both clauses will assert a conflict flag, and the central unit can use the clause along with the decision trail to reason the conflict.
Because the connecting variables allow a single propagation to take arbitrarily long, an output is added to the clause that indicates whether any of the connecting variables have performed a chained propagation or strengthening. This allows the clause bank to wait the minimum amount of time before sending an idle signal, instead of waiting the worst-case 1000 cycles for a clause propagating through an entire clause bank.
## V Methodology
### _Simulation_
To accurately estimate performance, we built a cycle-approximate simulator to simulate solving real SAT problems on our accelerator. The simulator is written in C++ and models three major parts:
#### V-A1 Clause Unit Behavior
Fixed length clause units are grouped into banks. There is a control unit inside each clause bank that bridges the interconnection network and individual clause units. The control unit decodes messages from the network into commands that are executed in parallel by all active clause units. New implications and conflicts are encoded by the control unit to become messages for the network, and message decoding and command execution are pipelined for better throughput. For our simulation, we use 8-variable clauses in 1024-clause banks, with a 4-stage pipelined clause bank controller.
#### V-A2 Interconnection Network Behavior
We also simulate the behavior of the interconnection network as described in Section III-A2. Input buffering, route computation, and switch allocation are simulated at the register level to provide accurate cycle counts, including stalls from switch resource contention. Wire delay is simulated as proportional to the distance between two routers, with adjacent routers having a single-cycle delay.
#### V-A3 Central-Unit Logic
The central unit is simulated functionally and takes care of executing the heuristics, such as decision making and restarts. Because all the computation-intensive parts of solving a SAT problem are performed by the clause units, the central unit is both free from heavy workloads and flexible enough to be configured with different heuristics. Furthermore, because the heuristics can be executed in parallel with the accelerated computation, they are modeled as incurring no extra overhead. As previously mentioned, using even slightly different heuristics can lead to substantially different runtimes. To focus our comparison on BCP, clause learning and backtracking, we link our simulator with a Glucose solver and stay synchronized to ensure we are using exactly the same heuristics as the software solver. This is important because an implication could result in multiple different conflicts, and the conflict that is used for clause learning is then the result of a race condition. Although selecting any minimal conflict is technically correct, it makes providing a fair comparison against Glucose challenging.
### _Hardware Evaluation_
Because our design is dominated by an extremely large number of clause units, we elected to optimize the design of a single clause unit and use it as a proxy for the overall design. The design is written in parameterized Verilog HDL supporting independently selectable context and variable counts. A series of directed tests were written to verify the core functionality of the design. For synthesis and layout evaluation, we used the Synopsys \(32\,\mathrm{nm}\) educational standard cell library [20]. We targeted a \(1\,\mathrm{ns}\) clock period at a nominal voltage of \(1.05\,\mathrm{V}\). All results are presented using the typical-typical corner at \(25\,^{\circ}\mathrm{C}\), and all timing and power results include post-place-and-route parasitics.
Synthesis was performed using Design Compiler, allowing full ungrouping after an initial synthesis pass and highest-effort optimizations except where otherwise stated. After synthesis, we used IC Compiler to generate a 90% utilization floorplan with a 3:1 aspect ratio, pins constrained for efficient tiling, and routing constrained to metal layers M1-M5. This allows M6-M9 to be used for routing the global clock, network links, and a global power/ground network, even though we do not route those signals in our minimal design. After placing and routing, we used Synopsys PrimeTime to perform a power simulation, with the post-place-and-route netlist and back-annotated capacitances. Power simulation was performed using a .saif file to annotate the input and sequential elements with static probabilities and switching probabilities for a sequence of random propagations, one per cycle. We expect that this is representative of the vast majority of the SAT solver's operation.
## VI Results
### _Problem Characterization_
We started our search by attempting to characterize the set of interesting SAT problems. We chose the application track problems from the 2016 SAT competition as our initial benchmark set, because they are submitted from a wide variety of industrial domains [6]. The results from this study are shown in Figure 5 and only include the clauses present before the solver starts adding learned clauses. Most clauses are extremely short: in the majority of problems, 99.9% of clauses are shorter than 8 variables. The results also show that the median problem has 10% of variables in more than 10 clauses, and
1% in almost 100 clauses. These two insights drive our design to use an associative array composed of a large number of short clauses, with each variable broadcast to the entire array. Any attempt to maintain a mapping from variables to clauses will be hampered by these popular variables.
### _Software Profiling_
After analyzing the SAT problems themselves, we then analyzed the software solvers Glucose and MiniSAT, which share the majority of their code [1, 14]. The results of this analysis are shown in Figure 6. This profiling study highlights a key limitation of previous hardware SAT solvers, which only attempt to speed up the Boolean Constraint Propagation (BCP) step of the algorithm. Even for the problems where this would provide the most speedup, the potential speedup is still limited to 10x via Amdahl's law. For us to achieve a greater speedup, we must also move backtracking, clause learning, and clause strengthening into the hardware. Profiling also highlights the opportunities for hardware over software, with approximately 1000 cycles per implication in software and no efficient scaling for decision levels with more implications.
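To make that bound concrete (assuming, consistent with the 10x figure, that BCP accounts for a fraction \(p\approx 0.9\) of software runtime on those problems), Amdahl's law gives
\[
S=\frac{1}{(1-p)+p/s}\leq\frac{1}{1-p}=\frac{1}{1-0.9}=10,
\]
so accelerating BCP alone can never yield more than a 10x end-to-end speedup, no matter how fast the BCP hardware is.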
### _Accelerator Speedup_
We then randomly picked 10 SAT problems from the 2016 SAT Competition benchmarks to solve with our simulator. The sizes of the problems are listed in Table IV. For small problems from the Agile benchmark, we solve the whole problem on our simulator until the problem is found to be satisfiable or unsatisfiable. For large problems, with more than 300,000 clauses, simulation to completion is impractical. Instead, we terminate the simulation once we have 20,000 conflicts and compare the runtime with Glucose running to the same number of conflicts (this takes about \(10\,\mathrm{s}\) for Glucose). Because our simulator is strictly synchronized with Glucose, the work done by the two solvers is identical. Figure 7 shows the runtime of the simulated accelerator compared to Glucose. The simulator uses a mesh topology with one execution context, a clause length of 8, and 1024 clauses per bank running at \(1\,\mathrm{GHz}\). Glucose runs on an Intel i7-5930K CPU at \(3.5\,\mathrm{GHz}\) using a single thread. Simulation shows that it takes 2
Fig. 5: Analysis of the 2016 SAT competition application track problems without learned clauses. Each box and whisker plot shows the distribution of problems measured at a certain percentile. For example, the 99.9% bar in (a) indicates that over all problems, the 99.9th percentile of variable popularity ranged from 10 clauses to over 10,000 clauses.
Fig. 6: Software profiling results.
to 5 cycles on average for each BCP, while the CPU takes around 1000 cycles. We achieve overall speedups ranging from 48x to 112x, with a geometric mean of 72x on the problems that we simulate.
Our simulation shows that the average clause unit is 37% idle due to network synchronization. Because different contexts synchronize independently, commands from other contexts can make full use of all clause units while one context is waiting for the network to be idle. Ideally, we could gain another 1.58x speedup on average with multiple contexts even though we execute only one command per cycle within each clause unit. The projected speedup with two contexts is also shown in Figure 7.
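The 1.58x figure is simply the reciprocal of the busy fraction: if a second context can fill the idle cycles, the ideal gain is
\[
\frac{1}{1-0.37}\approx 1.58,
\]
so the projected multi-context speedup follows directly from the measured 37% idle time.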
### _Network Results_
We ran further experiments on our simulator to examine how the network affects performance. We simulated two topologies: mesh and flattened butterfly, and also ran the simulation with two parallel networks and two commands per clause per cycle. The results are shown in Figure 8 with run time normalized to the single mesh network. We get an average 12.6% and 8.0% speedup respectively for doubling bandwidth and switching to a flattened butterfly topology, and a 20.4% performance improvement for both. However, the added cost of a second network with doubled clause throughput, or of a flattened butterfly topology, is much greater than the gain. For an \(n\times n\) network, a flattened butterfly router needs \(2n-1\) input channels and \(2n-1\) output channels, while a mesh router needs only five channels in each direction. The cost can grow dramatically as the network gets larger, and almost all extra network bandwidth of flattened butterfly topologies is wasted by sending the same message to different routers. Therefore, we concluded it is better to use a single mesh network.
#### V-D1 Power
We evaluate the power of the interconnection network based on two major parts: routers and wires. We implemented routers in Verilog HDL supporting both point-to-point and broadcast operation, and placed and routed them using the same flow as the clause units. Each router takes \(22\,861\,\mathrm{\mu m}^{2}\), which is only 0.5% of the size of one clause bank, and consumes almost no power. The majority of network energy consumption comes from the interconnecting wires. With \(32\,\mathrm{nm}\) technology, it takes about \(0.2\,\mathrm{pJ}\) per millimeter to send one bit in a densely packed bus on M7, treating the layers above and below as ground planes (the conservative case) but ignoring fringing capacitance. The overall estimate of wire power is \(1.85\,\mathrm{W}\) for a 16 by 16 network at the 34% activity factor given by the simulation.
\begin{table}
\begin{tabular}{c|l l l} Benchmark & Filename & \#Variables & \#Clauses \\ \hline \multirow{6}{*}{Agile} & bench\_3928 & 5824 & 20353 \\ & bench\_13545 & 3293 & 11661 \\ & bench\_8784 & 3223 & 11057 \\ & bench\_1455 & 5823 & 20338 \\ & bench\_16221 & 59960 & 139918 \\ & bench\_2471 & 61078 & 200509 \\ & bench\_8692 & 8107 & 27257 \\ \hline \multirow{6}{*}{Incremental} & snw\_13\_8 & 12521 & 359380 \\ & safe027\_poso & 74047 & 385241 \\ \cline{1-1} & test\_vr5\_c1\_88257 & 89353 & 356708 \\ \end{tabular}
\end{table} TABLE IV: Problem characteristics
Fig. 7: A comparison of Glucose against the simulated design.
### _Clause Layout_
#### Iv-E1 Scaling
Although we selected 8 variables and 2 contexts for laying out the clause unit, we also evaluated the scaling of the unit over a range of parameters (shown in Table V). The clause unit size scales linearly with the number of variables and with the number of contexts, but there is a more significant penalty paid for the additional contexts than might be expected. This is because not all logic can be shared between contexts; notably, the individual enables for each control bit must be duplicated. Intuitively, this means that the clause units should be as small as possible because every clause unit that is too large wastes space. However, there is a practical bound on scaling because shorter clauses need more cycles to handle a propagation through connecting variables.
#### Iv-E2 Area Breakdown
Table VI shows the results of a synthesis flow without ungrouping to estimate the area contribution of each part of the clause architecture. We can see that sequential state makes up only a small portion of the clause, with a much larger portion being the combinational logic. The comparators themselves take around 13% for an 8 variable clause, with an additional 8% from a mux to select the outgoing propagation. Logic to detect the propagation and conflict states is minimal, but there is a significant control overhead. Although this cannot be broken down directly, the majority of this logic is likely in the logic to set and clear individual bits for control, as well as several smaller inferred multiplexers. The large combinational elements are heavily shared, with the output mux being used for reading out arbitrary literals (necessary for clause learning),
\begin{table}
\begin{tabular}{c c|l} Area & Power & Parameter \\ \hline
1.95x & 1.98x & \(8\to 16\) variables \\
1.36x & 1.01x & \(1\to 2\) contexts \\
1.12x & 1.33x & \(20\to 24\) variable bits \\
1.85x & & \(1\to 2\) commands (old) \\ \end{tabular}
\end{table} TABLE V: Clause scaling relative to an 8 variable, 1 context clause. Power reported is dynamic power.
Fig. 8: An evaluation of different network topologies and network/clause bandwidths.
Fig. 9: The placed and routed clause design, with 8 variables and 2 contexts. The design is \(120\,\mu\mathrm{m}\) by \(40\,\mu\mathrm{m}\).
accessing the strengthened literal, and getting propagations. Similarly, the variable finder (8 comparators, producing a one-hot output) is used to handle incoming propagations, clause learning, and clause strengthening. Other minor parts of the design include the command decoder and a shared decoder for variable and context indices (which themselves share an input bus).
There are two types of stateful components within an individual clause. The first is the clause data--information about which variables are present, which polarities they have, etc. This is stored using D latches for space efficiency and makes up approximately 20% of the total area. The other storage is replicated per context and tracks state for propagations, clause learning, and clause strengthening. This state is stored in D flip-flops. We attempted to use D latches for this state as well, but the presence of many logical feedback loops made the implementation unworkable with our timing tools.
#### Vi-A3 Multiple Commands
Our current clause unit supports only a single command (of any type) per cycle. A previous design supported a parameterizable number of commands. The scaling results, also shown in Table V, are post-synthesis only and target a smaller command set than the foregoing results, so only a relative scaling is presented. However, it is clear that increasing the number of commands increases the area linearly. This is because each additional command requires duplication of many combinational resources, especially comparators and the control logic necessary to drive the decided and satisfied bits.
## VII Related Work
There have been a number of previous attempts to accelerate SAT, but none have attained a high real-world speedup for several reasons [21]. In this section, we detail these results and explain our solver's key differences.
### _Brute Force Solver_
Yuan et al. implemented a solver that uses MiniSAT to solve the majority of the SAT problem, then turns the remaining part over to a brute-force hardware solver [22]. They claim that MiniSAT spends the majority of its time solving the last few variables, so it is easier to turn this phase of execution over to dedicated hardware. However, their solution is limited by the exponential nature of their design: they are only able to solve for 13 variables at a time on a DE2-115 FPGA, and every additional variable would require doubling the size of the accelerator. They also do not evaluate the feasibility of their solution for the enormous problems common today.
### _BCP-Only Solvers_
Another set of solvers accelerate just the BCP portion of SAT, and rely on a host CPU to perform all other functions; these have their maximum overall speedup significantly limited by Amdahl's law but are able to take advantage of advances in software heuristics. Davis et al. implemented a BCP accelerator using FPGAs that stores the clauses in block RAMs, and obtained a speedup of 3.7-38.6x for BCP only vs. software [9]. In a large FPGA, they are able to handle 176k clauses but only 64k variables. They compress the clause representation by storing a tree mapping literals to clauses, which is walked each time an implication is processed and thus sets a bound on throughput by requiring multiple memory accesses per incoming implication. This tree representation also limits how often variables can appear, by restricting each variable to occur at most once per group of clauses. A further extension of this work proposes hardware to dynamically add learned clauses, but this additional hardware is not capable of learning the clauses itself [23].
Thong et al. implemented a similar BCP accelerator, but, instead of using a tree data structure to store variables, store them as pointers to other variables [10]. When a variable is implied, the clauses containing it are traversed as a linked list, with each clause pointing to the next clause containing the variable. Although this allows the clauses to be stored in block RAM, it means that these accesses must be serialized, and increases the per-implication latency. Additionally, the need to perform multiple RAM accesses per implication may limit throughput for many problems.
\begin{table}
\begin{tabular}{l|c c} Type & \multicolumn{2}{c}{Area} \\ & (\(\mu m^{2}\)) & (\%) \\ \hline Clause storage & 864.1 & 20.5 \\ Other storage & 741.6 & 17.6 \\ Comparators (x8) & 546.7 & 13.0 \\ Variable output mux & 351.2 & 8.3 \\ Propagation/strengthening detectors & 140.3 & 3.3 \\ Other muxes & 273.5 & 6.5 \\ Decoders & 69.6 & 1.7 \\ Other combinational area & 1230.3 & 29.2 \\ \hline \hline Total cell area & 4217.3 & 100 \\ \end{tabular}
\end{table} TABLE VI: Breakdown of clause area (8 variables, 2 contexts).
### _Entire Problem Solvers_
Gulati et al. designed a SAT accelerator that sought to perform the entire SAT solving process in hardware [24]. While their algorithm is able to perform the decisions on the accelerator (relying on the CPU to load in new sub-problems as existing ones are satisfied), they do not allow the same intelligent heuristics to be used as a software solver, and do not allow for clause learning. They estimated a speedup of 90x over MiniSAT, but only evaluated small problems, which may not make effective use of clause learning. Because more advanced techniques such as clause strengthening are not supported, their solver must search more variable assignments before reaching SAT or UNSAT.
## VIII Future Work
There are many possible future directions for this work. One is implementing a hardware version of the two-watched-literal scheme used by software solvers. Although this may not be efficient for the majority of clauses, it could greatly decrease the expense of keeping extremely long clauses entirely in associative storage at the cost of additional control complexity and memory bandwidth. Another potential direction is to study the performance impact of off-chip interconnection networks, and multi-chip implementations to support problem sizes larger than will fit on a single chip.
### _Efficient Multiple Commands_
Using hardware to filter the commands executed by each clause could allow multiple commands to be executed each cycle without requiring a linear increase in area and power. Further exploring this direction would involve selecting the ideal organization of filters (one per clause vs. one per bank) and sizes and quantifying the impact via additional simulation. Another possibility includes restricting the commands that can be executed on the second command port; for example, not allowing variables to be read.
### _Incomplete Algorithms_
The SAT algorithms we reference in this work are _complete_--given a problem and sufficient time, they are guaranteed to produce either a satisfying assignment or a proof that the problem is not satisfiable. However, there is a class of SAT solvers that are unable to determine that a problem is not satisfiable, but may provide a satisfying assignment of variables if one exists; one such algorithm is Stochastic Local Search (SLS) [25]. SLS produces a random assignment of all variables in the problem, then greedily flips the variable that will satisfy the most clauses. It proceeds until there is either a timeout or a satisfying assignment, and is the best-performing algorithm for certain classes of SAT problems [21].
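A minimal greedy-flip step in the spirit of SLS (our own sketch; practical solvers such as WalkSAT add randomisation and incremental score updates rather than rescoring every clause):

```cpp
#include <cstdlib>
#include <vector>

// Count clauses satisfied by the current assignment.
// Clauses hold literals as signed ints; assign[v] is variable v's truth value.
int countSatisfied(const std::vector<std::vector<int>>& clauses,
                   const std::vector<bool>& assign) {
    int sat = 0;
    for (const auto& c : clauses)
        for (int lit : c)
            if (assign[std::abs(lit)] == (lit > 0)) { ++sat; break; }
    return sat;
}

// One iteration: flip the variable that maximises the number of satisfied clauses.
void greedyFlip(const std::vector<std::vector<int>>& clauses,
                std::vector<bool>& assign) {
    int bestVar = 1, bestScore = -1;
    for (std::size_t v = 1; v < assign.size(); ++v) {
        assign[v] = !assign[v];                       // tentative flip
        int score = countSatisfied(clauses, assign);
        assign[v] = !assign[v];                       // undo
        if (score > bestScore) { bestScore = score; bestVar = static_cast<int>(v); }
    }
    assign[bestVar] = !assign[bestVar];               // commit the best flip
}
```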
### _Satisfiability Modulo Theories_
Perhaps the most exciting prospects for future work lie in the field of Satisfiability Modulo Theories (SMT), an extension of SAT [26]. SMT involves specifying each variable as a logical relationship in some other theory, e.g., integers or uninterpreted functions. Then, for a problem to be satisfiable, all the clauses must be satisfied _and_ all the logical relationships must be satisfied in the theory. The simplest SMT solvers work by translating the problem to SAT, and attempting to solve it. Once the SAT solver produces a satisfying assignment, the theory solver attempts to determine if it is logically true. If not, a clause disallowing that assignment is added to the SAT problem and the search continues; otherwise, the search stops.
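The loop just described is the classic lazy SMT arrangement; a hedged skeleton follows (the SatSolver and TheorySolver interfaces are placeholders we invented for illustration, not any particular library's API):

```cpp
#include <optional>
#include <vector>

// Skeleton of the simple lazy SMT loop described above (types are placeholders).
struct Assignment { std::vector<int> lits; };

template <typename SatSolver, typename TheorySolver>
std::optional<Assignment> solveSMT(SatSolver& sat, TheorySolver& theory) {
    while (true) {
        auto model = sat.solve();                         // SAT-level satisfying assignment
        if (!model) return std::nullopt;                  // UNSAT at the Boolean level
        if (theory.consistent(*model)) return model;      // theory accepts it: done
        sat.addClause(theory.blockingClause(*model));     // forbid this assignment, retry
    }
}
```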
## IX Conclusion
In this work, we introduced SatIn, an accelerator for Boolean satisfiability that uses a distributed control scheme and is partitioned over a network. This is different from many current accelerators, which do not support spatial partitioning for solving arbitrarily large problems. We demonstrated that our individual computing elements (clauses) consume a small amount of area and power and can be tiled into large arrays. Using 8-variable, 2-context clauses we can fit a single clause into \(4867\,\mathrm{\mu m}^{2}\) consuming \(63.8\,\mathrm{\mu W}\) dynamic power in \(32\,\mathrm{nm}\). This is sufficient, along with a reasonable scaling factor, to solve problems composed of hundreds of thousands of clauses on a single chip in a more modern process. We simulated SatIn with a cycle-accurate simulator on real benchmarks and achieved an average 72x speedup against Glucose, the winner of the 2016 SAT competition, using a single context.
|
2304.06749 | A catalogue of cataclysmic variables from 20 years of the Sloan Digital
Sky Survey with new classifications, periods, trends and oddities | We present a catalogue of 507 cataclysmic variables (CVs) observed in SDSS I
to IV including 70 new classifications collated from multiple archival data
sets. This represents the largest sample of CVs with high-quality and
homogeneous optical spectroscopy. We have used this sample to derive unbiased
space densities and period distributions for the major sub-types of CVs. We
also report on some peculiar CVs, period bouncers and also CVs exhibiting large
changes in accretion rates. We report 70 new CVs, 59 new periods, 178
unpublished spectra and 262 new or updated classifications. From the SDSS
spectroscopy, we also identified 18 systems incorrectly identified as CVs in
the literature. We discuss the observed properties of 13 peculiar CVs, and we
identify a small set of eight CVs that defy the standard classification scheme.
We use this sample to investigate the distribution of different CV sub-types,
and we estimate their individual space densities, as well as that of the entire
CV population. The SDSS I to IV sample includes 14 period bounce CVs or
candidates. We discuss the variability of CVs across the Hertzsprung-Russell
diagram, highlighting selection biases of variability-based CV detection.
Finally, we searched for, and found eight tertiary companions to the SDSS CVs.
We anticipate that this catalogue and the extensive material included in the
Supplementary Data will be useful for a range of observational population
studies of CVs. | Keith Inight, Boris Gänsicke, Elmé Breedt, Henry Israel, Stuart Littlefair, Christopher Manser, Thomas Marsh, Timothy Mulvany, Anna Pala, John Thorstensen | 2023-04-13T18:00:03Z | http://arxiv.org/abs/2304.06749v3 | A catalogue of cataclysmic variables from 20 years of the Sloan Digital Sky Survey with new classifications, periods, trends and oddities.
###### Abstract
We present a catalogue of 507 cataclysmic variables (CVs) observed in SDSS I to IV including 70 new classifications collated from multiple archival data sets. This represents the largest sample of CVs with high-quality and homogeneous optical spectroscopy. We have used this sample to derive unbiased space densities and period distributions for the major sub-types of CVs. We also report on some peculiar CVs, period bouncers and also CVs exhibiting large changes in accretion rates. We report 70 new CVs, 59 new periods, 178 unpublished spectra and 262 new or updated classifications. From the SDSS spectroscopy, we also identified 18 systems incorrectly identified as CVs in the literature. We discuss the observed properties of 13 peculiar CVs, and we identify a small set of eight CVs that defy the standard classification scheme. We use this sample to investigate the distribution of different CV sub-types, and we estimate their individual space densities, as well as that of the entire CV population. The SDSS I to IV sample includes 14 period bounce CVs or candidates. We discuss the variability of CVs across the Hertzsprung-Russell diagram, highlighting selection biases of variability-based CV detection. Finally, we searched for, and found eight tertiary companions to the SDSS CVs. We anticipate that this catalogue and the extensive material included in the Supplementary Data will be useful for a range of observational population studies of CVs.
keywords: Hertzsprung-Russell and colour-magnitude diagrams - cataclysmic variables - stars:evolution
## 1 Introduction
Cataclysmic variables (CVs, see Warner, 1995 for an exhaustive overview) are an important class of white dwarf binaries within multiple astrophysical contexts, including accretion disc physics (Gonzalez Martinez-Pais et al., 2014) and potential gravitational wave sources for _LISA_(Danzmann & LISA Study, 1996). These compact binaries will become increasingly important as the inventory becomes more complete and the emerging population statistics can be used to test evolutionary models. Although \(\sim 10\,000\) CV candidates are known (see for example Watson et al., 2017) that sample contains a fair amount of contaminants, including flaring M-dwarfs, young stellar objects, variable AGN and unexplained transients (see discussion in section 2 of Pala et al., 2020). To be useful for evolutionary modelling a sample of CVs needs not only to be reasonably complete over a given volume but also possess astrometric data for each CV and, crucially, an identification spectrum.
The sample of 507 CVs spectroscopically observed by the Sloan Digital Sky Survey (SDSS) (Gunn et al., 1998; York et al., 2000) presented here is a magnitude-limited sample over 25.6 per cent of the sky (section 6.4.4).
The four phases of SDSS (hereafter SDSS I to IV) were executed between 2000 and 2020 (Abazajian et al., 2003; Adelman-McCarthy et al., 2008; Aihara et al., 2011; Aguado et al., 2019) and addressed many science goals of which the search for quasars (Richards et al., 2002) and latterly luminous red galaxies and quasars
(Dawson et al., 2013) in the Baryon Oscillation Spectroscopic Survey (BOSS and eBOSS) are of particular relevance here. Between 2000 and 2009 SDSS obtained photometry, using the five \(ugriz\) filter passbands, of over 14 000 square degrees, predominantly of the northern sky. The imaging camera (Gunn et al., 1998) was installed on a purpose-built 2.5 m telescope at Apache Point, New Mexico. This photometry was used to select targets for subsequent spectroscopic surveys using a multi-object, fibre-fed spectrograph capable of taking 640 (SDSS I and II) and 1000 (SDSS III and IV) simultaneous spectra spanning \(\simeq 3800-9200\) Å (SDSS I and II) and \(\simeq 3600-10\,000\) Å (SDSS III and IV, Smee et al., 2013).
Whereas CVs were targeted within SDSS I via a multi-dimensional colour selection (Szkody et al., 2002), the vast majority of CVs were serendipitously identified from the spectroscopy of quasar candidates. SDSS I (York et al., 2000) and II (Dilday et al., 2005; Yanny et al., 2009) have proven to be a major source of optical spectroscopy of CVs, both of previously known systems and new discoveries (Szkody et al., 2002, 2003, 2004, 2005, 2006, 2007, 2009, 2011).
SDSS III and IV were two subsequent phases spanning from July 2008 until June 2014 (Eisenstein et al., 2011; Dawson et al., 2013) and from July 2014 until August 2020 (Blanton et al., 2017; Dawson et al., 2016) respectively.
SDSS I and II yielded spectra of 285 CVs (Szkody et al., 2011) and have been particularly successful in identifying CVs with no history of outbursts (Gansicke et al., 2009). Specifically they provided a rich source of short period (\(80-86\) min) CVs that provided the crucial observational evidence for the existence of a "period minimum spike". Other groundbreaking achievements include the identification of low accretion rate, X-ray weak, polars (Schmidt et al., 2008 and references therein) and CVs with pulsating white dwarfs (Woudt et al., 2005; Gansicke et al., 2006; Mukadam et al., 2007). Beyond these individual discoveries the systematic detection of all types of CVs has enabled statistical studies of population density and CV evolution (Gansicke, 2005). These achievements from SDSS I and II provided the motivation to identify CVs in SDSS III and IV. Further motivation came from the opportunity to refine the attributes of the known CVs using new astrometric and photometric survey data.
We report here the identification of 70 new CVs found in SDSS I to IV (see list in Appendix A). We have also revisited the remaining 437 CVs observed during SDSS I to IV in the light of _Gaia_ (Gaia Collaboration, 2020) astrometry and photometry from the Catalina Real-Time Transient Survey (CRTS, Djorgovski et al., 2011), Zwicky Transient Facility (ZTF, Masci et al., 2018; Bellm et al., 2019) and the Transiting Exoplanet Survey Satellite (_TESS_, Ricker et al., 2015) which was not previously available. This has resulted in 59 new orbital periods (including some unpublished orbital periods from other sources, details are provided in the supplementary information) and 262 new/revised classifications. SDSS objects are identified by their coordinates, e.g. SDSS J152212.20+080340.9, which we abbreviate throughout the paper as J1522+0803.
The paper is laid out as follows. We discuss CV evolution in section 2 and in section 3 we provide an overview of how the different CV sub-types fit into the overall CV life-cycle, using examples from SDSS I to IV. In section 4 we describe the methods used to identify new CVs and classify existing ones. Our results are described in section 5 and discussed in section 6 with an overall summary in section 7.
## 2 CV Evolution
Whilst we do not seek to replicate the extensive literature on CVs (a comprehensive coverage can be found in Warner, 1995 and Hellier, 2001) we do explain the evolutionary relevance of the different types of CV found by SDSS.
CVs are born out of a common-envelope event (Paczynski, 1976; Ivanova et al., 2013) in which the higher-mass star in a binary system evolves off the main sequence and becomes a giant, engulfing its lower-mass companion. Friction in the common envelope extracts angular momentum and energy, resulting in a dramatic reduction of the orbital separation of the two stars. The result of the common envelope is a close binary consisting of a white dwarf (the core of the former red giant) plus a low-mass companion; this is known as a post common envelope binary (PCEB). Most PCEBs exit the common envelope in a detached configuration (and many are observed in this state, e.g. Rebassa-Mansergas et al., 2012). However, the stellar wind from the companion causes magnetic braking (Verbunt and Zwaan, 1981) and results in a loss of angular momentum. Meanwhile tidal forces on the secondary will eventually result in tidal locking to the orbital period (Fleming et al., 2019). Magnetic braking will eventually shrink the orbital separation to the point where the secondary star fills its Roche lobe, and starts mass transfer onto the white dwarf - initiating the further evolution of the system as a CV.
Magnetic braking continues into the CV phase, so the system continues to lose angular momentum and, as a consequence, the orbital separation and \(P_{\rm orb}\) decrease.
At around \(P_{\rm orb}=3\)h the donor stars in CVs become fully convective, possibly resulting in a re-configuration of their magnetic field (see observational evidence in Reiners and Basri (2009) and also fig. 15 in Kochukhov, 2021). In the standard scenario of CV evolution (Knigge et al., 2011), magnetic braking is strongly reduced, and the donor star, slightly bloated due to prior mass loss, shrinks to its thermal equilibrium radius, detaching from the Roche-lobe and stopping the mass transfer. The observational motivation for this "disrupted magnetic braking scenario" (Rappaport et al., 1983) is a paucity of CVs within the range 2.15 h \(\leq\)\(P_{\rm orb}\)\(\la\) 3.18 h; this is the so-called "period gap" (Whyte and Eggleton, 1980; Howell et al., 2001; Knigge, 2006). Some of this under-representation appears to be due to selection effects (see for example fig. 9 in Inight et al., 2021 and section 6.1). In the case of strongly magnetic white dwarfs, magnetic braking could be reduced, and these systems may not experience a detached phase in their evolution (Webbink and Wickramasinghe, 2002).
For periods \(\la\) 3 h, gravitational radiation (Paczynski, 1967; King, 1988) supersedes magnetic braking as the primary cause of angular momentum loss, which will eventually cause the separation and orbital period to decrease sufficiently for the donor to fill its Roche lobe and the system to resume accretion as a CV.
Throughout the evolution of a CV the mass of the donor, the rate of mass loss from the donor (\(\dot{M}\)), and \(P_{\rm orb}\) all continue to decrease until the donor mass is too low to sustain hydrogen burning and the donor becomes a brown dwarf. This point is the "period minimum" of the system (Paczynski and Sienkiewicz, 1983). Following further mass loss from the donor the orbital separation (and hence period) increases and the system becomes a "period bouncer" (e.g. Pala et al. (2018)).
The previous discussion has assumed that the donor is still on the main sequence when it commences accretion. However the donors in some CVs have undergone substantial nuclear evolution. Because the orbital period at which a donor fills its Roche lobe
is related to its density \(\rho_{\rm d}\) via \(P_{\rm orb}\propto(\rho_{\rm d})^{-0.5}\) (equation 12 in Knigge et al. 2011) it follows that, for the same donor mass, CVs with nuclear evolved donors have longer orbital periods than their un-evolved counterparts (e.g. Rodriguez-Gil et al. 2009). At periods \(\lesssim 5\) h, evolved donors have a higher temperature and spectral type than main sequence stars of the same mass. They can appear in the period gap (Rebassa-Mansergas et al. 2014) and evolve to periods below the period minimum (Thorstensen et al. 2020).
## 3 CV sub-types
"Cataclysmic Variables" were named as such because of the large brightness variations (eruptions and outbursts) observed among classical novae and dwarf novae. The modern definition of a CV dates from the 1960's (see for example Kraft 1963) and consists of a semi-detached binary in which a white dwarf accretes from a Roche-lobe filling donor star - and the current roster of CVs includes many systems that have no recorded outbursts (Gansicke et al. 2009).
CVs are classified into a number of sub-types. Historically they have been categorised based upon their observable characteristics with categories typically being named after a prototype system, frequently the first discovery. The number of sub-types, their definition, and relationship between different sub-types remain in flux, making their use challenging for scientists not regularly working in this research area - and some care has to be taken with the temptation to define new sub-types for each additional observational wrinkle in a particular CV. Nevertheless, dividing CVs into sub-types is meaningful, as their observational characteristics depend on a small number of physical parameters, which link to the evolutionary state and the accretion geometry of an individual system (see Fig. 1).
Here we provide an overview of the main CV categories that have been historically used with examples from SDSS I to IV and detail their state of evolution and mode of accretion. The key physical parameters are the orbital period, \(P_{\rm orb}\), the rate of mass loss from the donor, \(\dot{M}\), and the white dwarf magnetic field strength,\(B\).
The strength of the magnetic field of the white dwarf is of particular importance in determining the accretion geometry (Norton et al. 2008). We therefore discuss the classification and evolution of magnetic and non-magnetic systems separately.
### Non-magnetic CVs
#### 3.1.1 Dwarf Novae
The dwarf nova thermal limit cycle starts with mass accumulating in the disc, which continues to grow until it becomes thermally unstable and a dwarf nova outburst occurs. The outburst amplitude is typically \(\Delta m=3-5\) (Meyer & Meyer-Hofmeister 1983) and the outburst typically lasts for a few days (although superoutbursts such as that of GW Lib in 2007 can reach \(\Delta m\approx 9\) and last several weeks), after which the disc cools and the cycle repeats itself. Four sub-types are discussed below (SU UMa, WZ Sge, ER UMa and Z Cam) with the remaining dwarf nova CVs being classified as U Gem. In quiescence dwarf nova spectra (Fig. 2) exhibit hydrogen and also sometimes helium emission lines whilst in outburst their spectra resemble those of nova-like variables (see below).
#### 3.1.1.1 SU UMa
dwarf novae are characterised by occasional "superoutbursts" (Fig. 3) typically lasting several weeks and recurring with a supercycle of a few hundred days (Osaki 1989; Buat-Menard & Hameury 2002; Patterson et al. 2005). These superoutbursts are significantly brighter than the regular dwarf nova outbursts which occur in between the superoutbursts. Hirose & Osaki (1990) showed that as the separation of CVs decreases tidal forces eventually cause the disc to become eccentric and non-axisymmetric. The material at the outer edge of the disc orbits more rapidly than the binary and will stabilise at a resonant ratio of 3:1. The gradual accumulation of mass in the disc eventually results in a superoutburst; Hirose & Osaki (1990) showed analytically that superoutbursts only occur when \(q=M_{\rm d}/M_{\rm WD}\lesssim 0.3\), with \(M_{\rm WD}\) and \(M_{\rm d}\) the masses of the white dwarf and of the donor star. Assuming a typical value of \(M_{\rm WD}\simeq 0.8\,\rm M_{\sun}\) and using the relation \(M_{\rm d}\approx 0.065P_{\rm orb}^{5/4}\) (Warner 1995 section 2.8), this implies \(P_{\rm orb}\lesssim 2.8\) h. After reaching the peak luminosity, the superoutburst light curves are modulated by periodic oscillations called superhumps. The superhump period is a few per cent longer than \(P_{\rm orb}\) and the relation \(P_{\rm orb}=0.9162\times P_{\rm sh}+5.39\) min can be used to estimate \(P_{\rm orb}\) (Gansicke et al. 2009). In reverse, if \(P_{\rm sh}\) and \(P_{\rm orb}\) are both measured, the superhump period excess \(\epsilon=(P_{\rm sh}-P_{\rm orb})/P_{\rm orb}\) can be used to estimate \(q\) (Patterson et al. 2005; Kato & Osaki 2013) (note that the latter authors distinguish between the slightly different superhump periods observed as the superoutbursts evolve).
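As a worked illustration of the relation quoted above (the numbers are ours, for a hypothetical system, not one of the SDSS CVs): a measured superhump period of \(P_{\rm sh}=90\) min would imply
\[
P_{\rm orb}\simeq 0.9162\times 90\,{\rm min}+5.39\,{\rm min}\approx 87.8\,{\rm min},
\]
corresponding to a superhump period excess of \(\epsilon=(P_{\rm sh}-P_{\rm orb})/P_{\rm orb}\approx 0.025\).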
#### 3.1.1.2 ER UMa
Dwarf novae (Fig. 4) are a sub-type of SU UMa CVs characterised by an unusually short supercycle (\(20-50\) d) so that they spend over a third of their time in superoutburst. The intervening time between superoutbursts is punctuated by normal outbursts every few days (Robertson et al. 1995; Kato et al. 2013). The cause for this behaviour is an \(\dot{M}\) around four times that of a typical SU UMa CV.
#### 3.1.1.3 WZ Sge
dwarf novae are also a sub-type (Patterson et al. 2002; Kato 2015) of SU UMa CVs. Their donors have lost most of their mass and they have low \(\dot{M}\) with very infrequent, albeit bright, superoutbursts (Fig. 5). Most WZ Sge systems do not exhibit normal outbursts in between their superoutbursts. The low \(\dot{M}\), and the consequently low disc luminosity, often result in the white dwarf being visible in the optical spectrum. In many cases this is the best evidence for a WZ Sge classification as commonly used definitions (Kato et al. 2001; Kato 2015) are based upon observation of an outburst, together with the characteristic early superhumps, which may not occur for decades. Note that low \(\dot{M}\) is not sufficient in itself for a classification; there also needs to be evidence of the low mass of the donor.
#### 3.1.1.4 Z Cam
dwarf novae are dwarf novae with the unusual characteristic (Fig. 6) of having "standstills" of luminosity midway between the quiescent level and the elevated level associated with a dwarf nova outburst (Simonsen et al. 2014). This is believed to be due to an \(\dot{M}\) that fluctuates around the critical level; when it is too high the disc remains in the hot state and cannot sustain the thermal limit cycle associated with dwarf novae. The IW And stars (Hameury & Lasota 2014) form a sub-category of Z Cam and show continual outbursts (not unlike dwarf novae) during the "standstill".
#### 3.1.2 Classical Novae
Classical novae (Bode & Evans 2012) were first observed by ancient Chinese astronomers as eruptions of \(8-15\) magnitudes that decayed over a period of a few to several hundred days (Clark et al. 1977). They are caused by an accumulation of hydrogen on the surface
of the white dwarf which eventually reaches a critical mass and pressure thereby initiating a runaway nuclear reaction (Starrfield, 1971) that expels an expanding shell (Cohen and Rosenthal, 1983; Shara et al., 2012) and may also influence the evolution of the CV (Nelemans et al., 2016; Shen and Quataert, 2022) by causing frictional angular momentum loss (Schreiber et al., 2016). Classical novae can therefore occur in all types of CVs including magnetic ones (Stockman et al., 1988). The accumulation of hydrogen is influenced by \(\dot{M}\) and therefore classical novae are more common in high \(\dot{M}\) systems typically with periods \(P_{\rm orb}\gtrsim 3\) h.
Figure 1: Diagram depicting the CV classification scheme used in this paper. The key physical parameters that define the boundaries between different classes are \(\dot{M}\) and \(P_{\rm orb}\) (which, together, establish whether or not the accretion disc is subject to thermal instabilities) and \(B\) (which determines whether or not a disc forms in the first place). Depending on the exact physical setup, some individual CVs may straddle the boundaries between the sub-types defined above. AM CVn systems, with three possible formation channels (Section 3.1.4), follow the same distribution as non-magnetic CVs.
Figure 3: J0032+3145 is an example of an SU UMa CV. Top panel: Smoothed SDSS spectrum in quiescence showing the typical, slightly double-peaked, Balmer emission lines. The helium emission lines are not always present in SU UMa systems. Bottom left panel: ZTF (\(g\)-band) light curve showing outbursts and a superoutburst in red. Bottom right panel: Expanded light curve showing the decay of the superoutburst and characteristic superhumps.
Figure 2: J0813+4528 is an example of a dwarf nova. Top panel: The quiescent SDSS spectrum (smoothed with a five-point box-car filter) shows strong emission lines. The continuum reveals the spectroscopic signatures of the donor star, indicative of a long orbital period (6.94 h); this is a U Gem type dwarf nova above the period gap. Bottom left panel: ZTF (\(g\)-band) light curve showing outbursts. Bottom right panel: Expanded light curve showing the decay of an outburst over \(\simeq 3\) d.
By definition an eruption of a classical nova has been recorded at some time. The spectrum of a classical nova varies considerably depending upon the time since this eruption. The rapidly expanding shell will initially dominate the spectrum, with broad emission lines of hydrogen and helium. As it expands further and becomes more diffuse it resembles a planetary nebula and exhibits forbidden lines such as [S ii] 6717 A. The rapid expansion of the shell also results in spectral lines exhibiting P Cygni profiles. Eventually (years to decades) the spectrum becomes that of the pre-eruption CV. Most classical novae revert to being nova-like variables (Fig. 7), which is unsurprising as the nova recurrence time is inversely related to the accretion rate, and hence the nova population is dominated by nova-like variables (Townsley & Bildsten, 2005).
#### 3.1.3 Nova-like variables
Nova-like variables (Dhillon, 1996) have bright spectra dominated by the hot and stable disc (Fig. 8). The hydrogen in their discs is almost fully ionised and their spectra therefore exhibit absorption lines, often with inset emission lines. They often have strong disc winds which can appear as P Cygni spectral lines (Inight et al., 2022). Nova-likes sometimes exhibit changes between high and low states and observational phenomena in their light curves have given rise to a number of sub-types. The VY Scl sub-type shows relatively deep (\(2-5\) mag) low states which can last a few weeks to about a year, during which short brightenings occur (Bruch, 2019), the nature of which is still discussed (Scaringi et al., 2017). This contrasts with the IW And class (Section 3.1.1.4) in which characteristic outbursts occur during the (higher state) standstills (Kato, 2019).
Figure 4: J0912+5053 (D UMa) is an example of an ER UMa CV. It has a high \(\dot{M}\), which is inconsistent with its short orbital period of 78 min, close to the period minimum (Kato et al., 1996; Fried et al., 1999), resulting in extremely frequent superoutbursts. The supercycle is 25 d whilst normal outbursts occur every four days. Top panel: Smoothed SDSS spectrum in quiescence showing Balmer emission lines. Bottom left panel: ZTF (\(g\)-band) light curve showing frequent superoutbursts. Bottom right panel: Expanded light curve showing the frequency of the superoutbursts.
Figure 5: J1610\(-\)0102 (V386 Ser) is an example of a WZ Sge CV. Top panel: Smoothed SDSS spectrum in quiescence showing double-peaked Balmer emission lines from the accretion disc. The spectrum of the white dwarf is visible at the blue end, with characteristic broad Balmer absorption lines. Bottom panel: ZTF (\(g\)-band) light curve showing a single characteristically large superoutburst. Many WZ Sge systems have no recorded outbursts.
Figure 6: J1028+2148 (MV Leo) is an example of a Z Cam CV. Top panel: Smoothed SDSS spectrum in “standstill” showing Balmer emission lines. Bottom left panel: ZTF (merged and scaled \(r\)- and \(g\)-bands shown in blue) light curve showing standstill periods as well as outbursts and the low, quiescent, level. The outburst is highlighted in red. Bottom right panel: Expanded light curve showing a period of “standstill” followed by an outburst and a decline to quiescence.
#### 3.1.4 AM CVn
AM CVn systems (Fig. 9) are identified by helium lines and an absence of hydrogen lines in the optical spectrum (Ramsay et al., 2018). They are short-period systems (\(P_{\rm orb}\simeq 5-68\) min) but their evolutionary origin is unclear with at least three potential formation channels leading from their progenitors (Solheim, 2010). AM CVn systems are rare, with only \(\simeq 60\) systems known (Ramsay et al., 2018). They are good verification binaries for the space-based _LISA_ gravitational wave detector (Amaro-Seoane et al., 2017) given their short periods and separations (Nelemans et al., 2004; Liu et al., 2021).
### Magnetic CVs
Perhaps a third (Pala et al., 2020) of CVs have a white dwarf with a magnetic field \(B\gtrsim 1\) MG and these are termed magnetic. The origin of this strong field is not well understood; it may be formed during the common-envelope event or sometime later (Belloni and Schreiber, 2020). Schreiber et al. (2021) propose a model whereby the magnetic CV is born without a strong field and the white dwarf is spun up by accretion. The cooling white dwarf in the CV then forms a dynamo as it crystallises. The strengthening magnetic field causes the system to become detached and the only accretion is from the donor wind which follows magnetic field lines. This pre-polar stage (Hakala et al., 2022) continues until after the donor becomes fully convective and mass transfer recommences at which point the CV is a polar or intermediate polar (Wilson et al., 2021). This model is consistent with observational evidence (fig. 9 in Ferrario et al., 2015) as intermediate polars (with \(B\lesssim 10\) MG) are, in general, observed to have orbital periods \(P_{\rm orb}\gtrsim 3\) h whilst polars (with \(B\gtrsim 10\) MG) are observed to have typical orbital periods \(P_{\rm orb}\lesssim 2\) h.
Figure 8: J0758+1043 is an example of a nova-like variable. Top left panel: Smoothed SDSS spectrum showing Balmer H\(\alpha\) and H\(\beta\) emission lines together with H\(\gamma\) and H\(\delta\) absorption lines. Top right panel: Expanded spectrum showing the H\(\beta\) line and absence of He ii lines. The strength of He ii 4686 Å emission lines varies with orbital phase far more than the Balmer emission lines, often exceeding the strength of the H\(\beta\) line (Hoard et al., 2014). Bottom panel: ZTF (\(g\)-band) light curve showing the relatively constant emission from the disc in its hot, steady state.
Figure 7: J1545+1422 (CT Ser) is an example of an old nova that erupted in 1948, and that now appears as a nova-like variable. Top left panel: Smoothed SDSS spectrum showing Balmer and He ii emission lines. Top right panel: Expanded spectrum showing the relative strength of the H\(\beta\) and He ii lines. (Munari et al., 2014) Bottom panel: ZTF (\(g\)-band) light curve illustrating the relatively constant emission from the disc in its high state.
Figure 9: J0926+3624 is an example of an AM CVn CV. Top panel: The smoothed SDSS spectrum shows He i and He ii emission lines but no hydrogen lines. The absence of hydrogen lines is a characteristic of AM CVn CVs. Bottom panel: ZTF (merged and scaled \(r\)- and \(g\)-bands) light curve showing outbursts.
#### 3.2.1 Polar
The magnetic field of the white dwarf in polars (Cropper, 1990; Wu, 2000), typified by the prototype AM Her, is generally greater than 10 MG. In this case the Alfven radius is beyond the circularisation radius, preventing the formation of an accretion disc, and the accretion flow follows field lines from the inner Lagrange point to impact on the white dwarf. These shock-heated impact zones (\(\approx 10^{8}\) K) give rise to X-ray radiation and also characteristic He ii emission lines (Fig. 10). The cyclotron radiation is sometimes observed in spectra as "humps" (see for example Thomas et al., 2000). These humps are harmonics of the cyclotron frequency \(f=(eB)/(2\pi m_{e})\), with \(e\) and \(m_{e}\) the charge and the mass of the electron, due to the magnetic field \(B\) and can appear in the infrared (e.g. AM Her, \(B\approx 15\) MG Bailey et al., 1991), optical (e.g. MR Ser, \(B\simeq 25\) MG; Schwope et al., 1993) or ultraviolet (e.g. AR UMa, \(B\simeq 200\) MG; Gansicke et al., 2001) according to field strength. Cyclotron emission is directional and results in cyclotron beaming which manifests itself as periodic behaviour. Polars can also be identified either by polarimetry, as cyclotron radiation is polarised (Chanmugam & Dulk, 1981; Shahbaz, 2019), or by the Zeeman splitting of Balmer lines from the white dwarf (Ferrario et al., 1995) that is sometimes visible in spectra obtained during low states when accretion nearly ceases (Fig. 11) and can be used to estimate the strength of the magnetic field. A consequence of the strong magnetic field is that the spin period of the white dwarf normally equals \(P_{\rm orb}\). This is thought to be due to the interaction of the strong white dwarf magnetic field with the donor magnetic field generating a torque which over time causes the white dwarf to spin down until it equals \(P_{\rm orb}\); the equilibrium situation (Campbell, 1984). Asynchronous polars (Pirola et al., 1994; Campbell & Schwope, 1999) have a white dwarf spin period slightly different from \(P_{\rm orb}\) which can be detected in periodograms. Asynchronous polars are believed to have been knocked out of exact synchronisation by an event such as a nova (Pavlenko et al., 2018).
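The cyclotron relation quoted above makes it easy to see why the humps move from the infrared to the ultraviolet as the field strength increases. The following sketch (field values are illustrative and not tied to any particular system) converts a field strength into the wavelengths of the low harmonics:

```python
# Sketch: wavelengths of the cyclotron fundamental and its harmonics for a
# magnetic white dwarf, using f = e*B / (2*pi*m_e) as quoted in the text.
import scipy.constants as const

def cyclotron_wavelengths(b_mg, harmonics=range(1, 9)):
    """Return {n: wavelength in Angstrom} for harmonic n of a field of b_mg megagauss."""
    b_tesla = b_mg * 1e6 * 1e-4                      # 1 G = 1e-4 T
    f_cyc = const.e * b_tesla / (2 * const.pi * const.m_e)
    return {n: const.c / (n * f_cyc) * 1e10 for n in harmonics}

# Illustrative field strengths spanning the range discussed in the text.
for b in (15, 32, 200):                              # MG
    waves = cyclotron_wavelengths(b)
    print(f"B = {b} MG: fundamental at {waves[1]:.0f} A, "
          f"4th harmonic at {waves[4]:.0f} A")
```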
#### 3.2.2 Pre-polars
White dwarfs in pre-polars (Reimers et al., 1999; Schwope et al., 2002; Parsons et al., 2021) (also called low-accretion rate polars) accrete via a stellar wind from the donor rather than through Roche-lobe overflow. These detached systems therefore have very low accretion rates with \(\dot{M}\) less than one per cent of that of other CVs (Schmidt et al., 2005). Cyclotron humps are the defining characteristic of pre-polars; typically the second and third harmonics are visible in the optical - although the humps can be masked by the donor spectrum (Fig. 12). Their cyclotron emission does exhibit circular polarisation (Hakala et al., 2022), but they have very low X-ray luminosities (Vogel et al., 2011). In the evolution scenario of Schreiber et al. (2021) pre-polars represent a natural phase in the evolution of CVs once the white dwarf develops a strong magnetic field.
#### 3.2.3 Intermediate polars
Intermediate polars (Patterson, 1994) such as their prototype DQ Her, have weaker white dwarf magnetic fields than polars. The magnetic field is too weak to force synchronisation of the spin period of the white dwarf and \(P_{\rm spin}\sim 0.1P_{\rm orb}\)(Norton et al., 2004) - the defining characteristic of an intermediate polar (Fig. 13). The vast majority of intermediate polars have a truncated accretion disc (Hameury & Lasota, 2017). In common with non-magnetic CVs material from the donor passes through the inner Lagrange point
Figure 11: J1105+2506 (ST LMi) is an example of a polar showing Zeeman splitting. The spectrum is shown in the top panel. The lower panel shows a model of the H\(\beta\) line evolving vertically (grey lines) with increasing magnetic field (Schimeczek & Winner, 2014). The spectrum is over-plotted in blue with flux scaled to roughly match the amplitude of the grey lines. The split lines are identified by vertical dashes; they coincide with the grey lines at \(B\simeq 13\) MG.
Figure 10: J0921+2038 is an example of a polar. Top panel: The smoothed SDSS spectrum (blue) shows Balmer, He i and characteristic He ii emission lines. The four cyclotron humps (brown, magenta, green and red) provide clear evidence that this is a polar. Schmidt et al. (2008) determined that these are the fifth, sixth, seventh and eighth harmonics (shown in grey) due to a magnetic field of \(B=32\) MG. Bottom panel: ZTF (\(g\)-band) light curve showing the variability due to cyclotron beaming. There are no outbursts as polars do not have discs in which to store accreting material and hence they do not undergo a thermal limit cycle.
and pursues a ballistic path until it merges with the disc. The disc is truncated at the Alfven radius and material then follows field lines from the innermost part of the disc to one or both magnetic poles. The presence of the disc can result in outbursts (Hameury & Lasota, 2017). Magnetically funneled accretion onto the magnetic poles of the white dwarf results in pulsations in X-ray light curves (Hellier et al., 1991), which are the definitive observational proof of a CV being an intermediate polar.
## 4 Methods
In compiling this CV catalogue, we followed a two-step process. In the first step we identified a shortlist of potentially new CVs from the 5 789 200 spectra of SDSS III and IV. In the second step we then undertook a detailed classification of these potential CVs, as well as revisited all the CVs previously identified in SDSS I and II.
### Identification
SDSS I to IV focused upon a number of surveys, none of which explicitly targeted CVs. Fortuitously the BOSS and eBOSS (Dawson et al., 2013, 2016) targets included quasars, which share a similar colour parameter space with CVs. Since the publication of the first comprehensive orbital period distribution of SDSS CVs (Gansicke et al., 2009), we progressively built up a list of candidate CVs based upon visual inspection of the flux and wavelength calibrated SDSS spectra (Weaver et al., 2015; Dawson et al., 2016). The spectra were reviewed multiple times by a range of individuals using a number of pre-selections in terms of colour and filtering on the redshifts determined by the SDSS project.
This visual process was augmented by a supervised machine learning process. A data set of known CVs and proven non-CVs was used to train a Random Forest (RF) classifier (Liaw & Wiener, 2001). The RF model used features extracted from the equivalent widths of 12 spectral lines and the average gradient of the spectrum measured over five intervals. The different identification methods resulted in a substantial overlap in the sub-sets of CV candidates that they identified, and whereas we cannot claim to have identified all the CVs that were spectroscopically observed by SDSS, we are confident that our search has picked up the majority of them. CVs that remain overlooked will either have extremely low signal-to-noise ratio spectroscopy, or have colours very close to those of single main-sequence stars, and weak emission lines.
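As an illustration only, and not the authors' actual pipeline, the sketch below shows the kind of Random Forest classifier described above; the feature arrays and labels are placeholders standing in for the measured equivalent widths and continuum gradients:

```python
# Minimal sketch of the supervised identification step described above: a
# Random Forest trained on per-spectrum features (equivalent widths of 12 lines
# plus 5 continuum-gradient measurements = 17 features). The feature extraction
# and the training labels are placeholders, not the catalogue pipeline itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_train = 500
X = rng.normal(size=(n_train, 17))          # stand-in for measured features
y = rng.integers(0, 2, size=n_train)        # 1 = known CV, 0 = proven non-CV

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # sanity check before applying to the survey
clf.fit(X, y)

candidates = rng.normal(size=(10, 17))      # features of unclassified spectra
cv_probability = clf.predict_proba(candidates)[:, 1]
shortlist = np.where(cv_probability > 0.5)[0]
```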
Figure 12: J0049+2226 is an example of a pre-polar. Top panel: The smoothed SDSS spectrum (blue) shows Balmer emission lines. The donor is visible at the red end of the spectrum. The cyclotron hump visible between 4200 Å and 5000 Å (brown) is clear evidence that this is a polar or pre-polar. Following Parsons et al. (2021) we subtracted a model spectrum (Kesseli et al., 2017) of an M3 dwarf (result in grey) to reveal a further cyclotron hump (magenta) at \(\lambda=6840\) Å. Modelling of harmonic frequencies for varying magnetic fields yielded a good fit for a magnetic field strength of \(B\approx 78\) MG with the humps forming the second and third harmonics. The fact that the donor is visible implies that the accretion rate is low and this implies that this is a pre-polar accreting from the donor wind (Schreiber et al., 2021). Bottom panel: ZTF (\(g\)-band) light curve showing the variability attributable to cyclotron beaming which revealed the orbital period (\(P_{\rm orb}=5.92\) h).
Figure 13: J0946+1350 (HY Leo, HS 0943+1404) is an example of an intermediate polar. Top panel: The smoothed SDSS spectrum shows Balmer, He i and He ii emission lines. Middle panel: ZTF (\(g\)-band) light curve showing the variability due to the spin of the white dwarf and the accretion stream. The alternation between high and low states is also evident. Bottom panel: Periodogram taken from archival Jacobus Kapteyn Telescope data showing the orbital frequency (\(\Omega\)), spin period (\(\omega\)) and the beat period together with numerous aliases. For a detailed period analysis, see Rodriguez-Gil et al. (2005).
### Classification
No single method can reliably classify CVs - it is necessary to consider all the available information (SDSS spectroscopy, time-series photometry, _Gaia_ astrometry, colours from broad-band photometry, imaging) and seek a classification of CV sub-type that is consistent with all of the data by making use of the diagnostics outlined in section 3. Where possible, we attempted to classify the SDSS CVs down to the sub-types illustrated in Fig. 1. However, in \(\simeq 13\) per cent of cases only high level classifications such as dwarf nova or magnetic CV are possible due to the limited amount of information that is currently available. In addition, in the \(\simeq 20\) per cent of cases where a classification is very likely but not certain a colon (:) has been appended.
We began the classification of each object with a literature search using Simbad (Wenger et al., 2000) and a search of the AAVSO VSX database (Watson et al., 2006). Where a classification already existed we checked that it is consistent with the SDSS I to IV spectra together with data from _Gaia_, VizieR, ZTF, CRTS and _TESS_. This process resulted in refining 262 classifications. It also revealed 18 systems, previously categorised as CVs, that are now known not to be CVs (see section 5.2). For each CV, the references for the initial discovery, orbital period measurement, and optical spectroscopy are documented in the Supplementary Information.
Detailed consideration of the available data is discussed in the following sections.
#### 4.2.1 Spectral analysis
We first plotted the SDSS spectrum and then used the spectrum to calculate a synthetic _Gaia_\(G\)-band magnitude using the pyphot1 python package. In addition to the SDSS spectrum we also created a broad-band (ultraviolet to infrared) spectral energy distribution (SED) diagram using data downloaded from VizieR (Allen et al., 2014) and over-plotted the spectrum. These plots are available as PNG files in the supplementary material.
Footnote 1: [https://github.com/mfouesneau/pyphot](https://github.com/mfouesneau/pyphot)
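The synthetic-magnitude check can be sketched with a generic synthetic-photometry integral; the bandpass and spectrum below are placeholders (in practice the Gaia \(G\) transmission curve would be taken from a library such as pyphot):

```python
# Sketch of the synthetic-photometry consistency check: integrate a spectrum
# through a filter transmission curve and convert to an AB magnitude, which can
# then be compared with archival photometry. The bandpass here is a crude
# top-hat stand-in, not the real Gaia G curve.
import numpy as np

def _integrate(y, x):
    """Simple trapezoidal integral, kept explicit for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_ab_mag(wave_aa, flux_cgs, filt_wave_aa, filt_trans):
    """AB magnitude of a spectrum (erg s^-1 cm^-2 A^-1 vs Angstrom) through a filter."""
    trans = np.interp(wave_aa, filt_wave_aa, filt_trans, left=0.0, right=0.0)
    # photon-weighted mean flux density and pivot wavelength of the bandpass
    f_lam = _integrate(flux_cgs * trans * wave_aa, wave_aa) / _integrate(trans * wave_aa, wave_aa)
    lam_piv = np.sqrt(_integrate(trans * wave_aa, wave_aa) / _integrate(trans / wave_aa, wave_aa))
    f_nu = f_lam * lam_piv ** 2 / 2.998e18   # c in Angstrom/s
    return -2.5 * np.log10(f_nu) - 48.60

wave = np.linspace(3800.0, 9200.0, 2000)     # placeholder SDSS-like wavelength grid
flux = 3e-16 * (wave / 5500.0) ** -2         # placeholder smooth continuum
gband = (np.array([4000.0, 8000.0]), np.array([1.0, 1.0]))  # crude stand-in bandpass
print(f"synthetic magnitude ~ {synthetic_ab_mag(wave, flux, *gband):.2f}")
```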
We initially compared the SED with the SDSS spectrum. There can be a number of possible explanations if the spectrum is not consistent with the SED. Firstly the flux calibration of all or part of the spectrum may be faulty which we establish by comparing the synthetic magnitude with archive observations. Secondly the spectrum may have been taken at a time when the CV was in a different state to that when the photometry was obtained. Thirdly the SED may have been contaminated by flux from a nearby object - which is why we check the PanSTARRS images (see Section 4.2.4). In some cases, the photometry from VizieR traces distinct SEDs, suggesting that photometry of the system was obtained in different accretion states. In these cases, the epochs of the SED photometry can be compared with the available light curves (section 4.2.2).
We next tried to identify the donor and/or the white dwarf in the SDSS spectrum. At the blue end of the spectrum broad Balmer absorption lines are indicative of the white dwarf. Conversely if the Balmer jump is visible in emission then the disc or accretion stream is much brighter than the white dwarf, which is typical of SU UMa CVs, polars, and intermediate polars. Absorption from the Mg triplet at 5167, 5172 and 5183 A (Fig. 2) is indicative of an early M-type or late K-type donor (suggestive of long, \(\gtrsim 2\) h, orbital periods) whereas a rising continuum in the red part of the spectrum (Fig. 5), often broken by TiO "sawtooth" molecular absorption bands, indicates a mid-to-late type M dwarf (suggestive of shorter orbital periods). In addition, the detection of a blackbody-shaped bump in the ultraviolet or infrared part of the photometric SED can identify a white dwarf or M dwarf, respectively. Care was taken to distinguish the Balmer absorption lines from a white dwarf from those originating in a disc. Disc absorption lines tend to be narrower and flat-bottomed as they are formed by Doppler broadening due to Keplerian velocities in the disc whilst the Balmer absorption lines of the white dwarf are pressure-broadened due to the high surface gravities in the atmosphere. A clear white dwarf signature in the spectrum is indicative of a low level of accretion. If in addition the donor is not visible, it suggests that the system is typically a WZ Sge candidate.
Next, we considered the emission lines detected. In addition to the Balmer and He i lines, we probed for the presence of [S ii] 6717 A which is indicative of contamination by a nebula and is not seen in CV spectra. Broad and strong emission lines at arbitrary wavelengths are associated with highly redshifted quasars. Finally, He ii 4686 A emission is indicative of a polar or intermediate polar particularly if the ratio of its equivalent width to that of H\(\beta\) is greater than 0.4 (Silber, 1992).
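The Silber (1992) criterion lends itself to a simple equivalent-width measurement; the sketch below (our own simplified continuum estimate, not the measurement code used for the catalogue) illustrates the idea:

```python
# Sketch: equivalent widths of He II 4686 and H-beta and the magnetic-CV
# criterion EW(He II)/EW(H-beta) > 0.4 (Silber 1992). The continuum estimate
# (the mean of two side bands) is deliberately simple.
import numpy as np

def equivalent_width(wave, flux, line_centre, half_width=15.0, cont_offset=40.0):
    """EW in Angstrom (negative for emission) over a +/- half_width window."""
    blue = (wave > line_centre - cont_offset - 10) & (wave < line_centre - cont_offset + 10)
    red = (wave > line_centre + cont_offset - 10) & (wave < line_centre + cont_offset + 10)
    cont = 0.5 * (flux[blue].mean() + flux[red].mean())
    band = (wave > line_centre - half_width) & (wave < line_centre + half_width)
    dlam = np.median(np.diff(wave))
    return float(np.sum(1.0 - flux[band] / cont) * dlam)

def looks_magnetic(wave, flux):
    """True if the He II 4686 / H-beta equivalent-width ratio exceeds 0.4."""
    ew_heii = equivalent_width(wave, flux, 4686.0)
    ew_hbeta = equivalent_width(wave, flux, 4861.0)
    return abs(ew_heii) / abs(ew_hbeta) > 0.4
```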
We also considered the shape of emission lines. Balmer emission lines are indicative of an accretion flow, either the disc in a non-magnetic CV, or a stream in polars. In non-magnetic CVs, the width of the Balmer emission from the disc will depend upon the inclination of the system (Horne & Marsh, 1986), with broad lines and the detection of double-peaks indicating a high inclination. Furthermore deep central absorption dips embedded in broad double peaked Balmer lines are a good indication that a system is eclipsing. Asymmetric emission lines are associated with the velocity dispersion of the accretion stream of polars whilst very thin lines are typical of background nebulae and detached systems.
Lastly we also superimposed upon the SDSS spectrum the five SDSS photometric magnitudes obtained in the original observations. These can reveal inconsistencies and in particular can reveal cyclotron humps (due to one observation being taken at an orbital phase when the cyclotron emission peaked).
#### 4.2.2 Light curves
Archival light curves from the CRTS, ZTF and _TESS_ were obtained from online archives; Caltech, IRSA (Rebull et al., 2019) and Lightkurve (Lightkurve Collaboration et al., 2018) respectively. We reviewed these looking for outbursts, to provide confirmation of a dwarf nova classification, and also for eclipses to obtain orbital periods. The long span of this data (over 20 years) also reveals many state changes that are indicative of certain CV sub-types (see Figs. 10 and 13) and of nova-like variables (Fig. 8). The light curves for all CVs are shown in the supplementary material. We generated periodograms in an attempt to determine orbital periods from the light curves. Prior to generating the periodogram we cropped any outbursts from the light curve which was then de-trended typically using a low-order polynomial fit. Two periodograms of the light curve were then produced using a Fourier transform and also an orthogonal multi-harmonic analysis of variance (Schwarzenberg-Czerny, 1996). Potential periods were identified from peaks in the periodogram whose power exceeded the five per cent false alarm probability threshold; the results were then validated by phase-folding the original light curve on the potential period (see Fig. 14). The shape of the phase-folded light curve not only confirms that the period is valid but also helps to identify cases in which the orbital period may be twice that of the strongest signal in the periodogram. The photometric orbital modulation of CVs
can have various potential causes. Eclipses may occur once (of the white dwarf) or (rarely) twice (of the white dwarf and the donor, e.g. Ashley et al., 2020) per orbital cycle whilst ellipsoidal modulation due to the deformed shape of the donor star will yield two peaks per orbital cycle. Cyclotron beaming is another case where the waveform may show multiple peaks per orbital cycle, as does the emission from two poles in magnetic CVs.
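For illustration, a minimal period search of this kind can be written with astropy's Lomb-Scargle periodogram as a stand-in for the Fourier and multi-harmonic analysis-of-variance methods described above; the light curve below is synthetic:

```python
# Sketch of the period search described above. Real ZTF data would first have
# outbursts cropped and a low-order polynomial trend removed.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 400.0, 600))           # irregular sampling in days
true_period = 0.0833                                # 2 h orbital modulation
mag = 19.0 + 0.15 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, t.size)

ls = LombScargle(t, mag)
freq, power = ls.autopower(maximum_frequency=30.0)  # up to 30 cycles per day
best = freq[np.argmax(power)]
fap = ls.false_alarm_probability(power.max())       # keep peaks with FAP < 0.05

phase = (t * best) % 1.0                            # fold on the candidate period
# The folded light curve is then inspected; a double-humped shape can indicate
# that the true orbital period is twice the photometric one (ellipsoidal modulation).
```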
Knowledge of the period assists the classification; SU UMa and ER UMa CVs typically have \(P_{\rm orb}\lesssim 3\) h (Ritter and Kolb, 1998), WZ Sge have even shorter periods whilst U Gem and Z Cam CVs typically have \(P_{\rm orb}\gtrsim 3\) h.
Whilst some periodograms show an unambiguous period, others leave room for doubt due to aliases and harmonics; we have indicated such periods with a colon (:).
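The period thresholds quoted above can be collected into a simple first-pass rule of thumb (a sketch only; real classifications combine all the diagnostics described in this section):

```python
# Sketch of a first-pass dwarf-nova sub-type guess from the orbital period and
# the observed outburst behaviour, following the thresholds quoted in the text.
def dwarf_nova_guess(p_orb_hours, superoutbursts_seen, standstills_seen):
    if standstills_seen:
        return "Z Cam"
    if superoutbursts_seen:
        return "SU UMa (or WZ Sge / ER UMa, depending on duty cycle and M-dot)"
    return "U Gem / Z Cam candidate" if p_orb_hours > 3 else "SU UMa candidate"

print(dwarf_nova_guess(1.6, True, False))
```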
#### 4.2.3 Hertzsprung-Russell and Colour-Colour Diagrams
The position of a CV on the Hertzsprung-Russell (HR) diagram aids classification as many CV types congregate in specific sections of the diagram (see Fig. 15, Abril et al., 2020 and fig. 8 in Inight et al., 2021). Absolute magnitudes require knowledge of the distance of a CV which is normally derived from _Gaia_ parallaxes. Hence this diagnostic is not available for CVs without a _Gaia_ counterpart. Even when _Gaia_ data exists the position of a CV can be misleading if the parallax accuracy is poor or the CV is subject to significant reddening. A further issue is that CVs are intrinsically variable with outbursts, eclipses and orbital variations contributing to the observed spread in their distribution in the HR diagram. Fig. 15 shows a subset of the CVs from SDSS I to IV where systems with inaccurate parallaxes (\(\varpi/\sigma_{\varpi}<10\)) or inaccurate magnitudes (phot_g_mean_flux_over_error \(<10\), phot_bp_mean_flux_over_error \(<10\), phot_rp_mean_flux_over_error \(<10\)) have been removed in line with Lindegren et al. (2018); Lindegren (2018). Inspection of Fig. 15 shows that the location of a CV in the HR diagram can aid its sub-type classification. The HR diagrams, where available, are included as PNG files in the supplementary material.
Where a _Gaia_ counterpart does not exist, or does not have a parallax, a colour-colour diagram based on SDSS photometry is an alternative (see Fig. 16). This is typically the case for faint (\(m\gtrsim 20\)) systems. These approaches are somewhat complementary as SDSS photometry saturates at \(\rm m\simeq 15\) and is not available for bright CVs.
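The quality cuts and absolute magnitudes used for Fig. 15 can be sketched as follows (placeholder data frame with Gaia-archive column names; extinction is ignored):

```python
# Sketch of the Gaia-based HR-diagram placement with the quality cuts quoted
# above (parallax and photometric signal-to-noise > 10). The input table is a
# placeholder with Gaia-archive-style column names.
import numpy as np
import pandas as pd

gaia = pd.DataFrame({
    "parallax": [2.5, 0.8],                       # mas
    "parallax_over_error": [25.0, 4.0],
    "phot_g_mean_mag": [17.2, 19.6],
    "phot_g_mean_flux_over_error": [120.0, 8.0],
    "phot_bp_mean_flux_over_error": [60.0, 5.0],
    "phot_rp_mean_flux_over_error": [70.0, 6.0],
})

good = (
    (gaia["parallax_over_error"] > 10)
    & (gaia["phot_g_mean_flux_over_error"] > 10)
    & (gaia["phot_bp_mean_flux_over_error"] > 10)
    & (gaia["phot_rp_mean_flux_over_error"] > 10)
)

# Absolute magnitude from the parallax (mas -> distance in pc), ignoring extinction.
dist_pc = 1000.0 / gaia["parallax"]
gaia["G_abs"] = gaia["phot_g_mean_mag"] - 5.0 * np.log10(dist_pc) + 5.0
hr_sample = gaia[good]
```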
#### 4.2.4 Imaging
Pan-STARRS (Chambers et al., 2016) cutout images for each CV were obtained from the PS1 Image Cutout Server at STScI
Figure 14: The Fourier transform of the ZTF light curve of J01584-2823, an eclipsing CV, yields a particularly clear periodogram. Top left panel: The aliases around the central frequency are separated by a frequency of once per day, reflecting the nightly sampling. Top right panel: The central peak provides a good estimate of a period which can then be used to seed a sine fit to the data yielding \(f=22.51981(3)\) d\({}^{-1}\). Bottom panel: A plot of the data folded at twice the best-fit period with a sinusoidal fit superimposed. In this case \(P_{\rm orb}=2/f\) due to ellipsoidal modulation although the corresponding frequency (\(11.2599\) d\({}^{-1}\)) is barely visible in the periodogram. This can be seen in the bottom panel with the deep eclipse of the white dwarf by the donor at \(\Phi\simeq 0.5\) and the two peaks 0.25 in phase on either side. In this case there is no doubt which is the correct alias but more generally the period uncertainty has to take account of the possibility of an incorrect alias being selected.
Figure 15: HR diagram showing all the CVs from SDSS I to IV where the astrometric and photometric errors are less than 10 per cent. The grey background consists of those _Gaia_ objects closer than 100 pc with _GALEX_ observations to increase the proportion of white dwarfs (which are UV-bright). Note the concentration of AM CVn and WZ Sge CVs near the white dwarf cooling sequence with SU UMa CVs spreading out towards the main sequence. Nova-like variables and Z Cam CVs are concentrated around \(5<\rm G_{abs}<7\) as their high \(\dot{M}\) results in large optical luminosity.
(Flewelling et al., 2020). We superimposed circles for each _Gaia_ object (with the radius being proportional to the _Gaia_\(G\)-band magnitude) and for the SDSS fibre (2 arcsec diameter) (Fig. 17). The resulting image shows whether there is potential contamination of a spectrum from neighbouring objects, nebulae or background galaxies. In addition we superimposed circles for any _GALEX_ detections. CVs often have _GALEX_ observations but the coordinates of _GALEX_ detections can be off by a few arcseconds and the image helps to assess whether a _GALEX_ observation could be associated with a different object. These images are available as PNG files in the supplementary material.
Figure 16: Colour-colour diagram showing the CVs from SDSS I to IV which have reliable SDSS photometry (where the magnitude of each colour is greater than 15.5 to avoid saturation and the magnitude error is less than 0.05). The grey background shows a collection of spectroscopically observed SDSS stars to guide the eye. The dense lower area forms the main sequence and the upper area is the white dwarf cooling sequence. Note the concentration of SU UMa and WZ Sge CVs on and above the white dwarf cooling sequence.
Figure 17: Pan-STARRS cutout of J0752+3628 showing the potential for the contamination of fibre spectroscopy. J0751+1444 has an apparent _Gaia_ magnitude \(m_{G}=17.7\) whereas its neighbour, at a distance of 3.38 arcsec, has \(m_{G}=14.1\). The red circle denotes the target, the black dotted circle denotes the nominal coverage of an SDSS fibre (2 arcsec) and the blue circle is centred on the _Gaia_ coordinates of the neighbour scaled according to the _Gaia_\(G\) magnitude. The _Gaia_ coordinates are based upon EDR3 with an epoch of 2016 whilst the Pan-STARRS epoch is around 2012 and so a lack of alignment is a prompt to check proper motions to see if the contaminant was closer when the spectrum was observed. The pink square shows a _GALEX_ observation.
## 5 Results
In the interests of brevity the full observational data and analysis of the over 500 SDSS I to IV CVs have been consigned to the supplementary data. Illustrative examples are given here (Table 1 and Fig. 18) together with a brief discussion of a small set of the most interesting systems. Spectra have been smoothed by using a box-car median filter of width five points. A list of the 70 newly discovered CVs is included in Table A1.
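For reference, the five-point box-car median smoothing applied to the plotted spectra amounts to the following (the flux array here is a stand-in for an SDSS spectrum):

```python
# Sketch of the five-point box-car median smoothing applied to the SDSS
# spectra shown in the figures.
import numpy as np
from scipy.signal import medfilt

flux = np.random.default_rng(2).normal(1.0, 0.1, 3000)  # stand-in flux array
smoothed = medfilt(flux, kernel_size=5)
```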
### Unusual systems
The following notes discuss a small selection of the systems in SDSS I to IV which have unusual features and are worthy of further study.
#### 5.1.1 J0028+3118
The spectrum displays single-peaked Balmer emission lines above a flat continuum as well as a very strong He ii 4686 A emission line. The system is located close to the main sequence in the HR diagram. There is a 1 yr state change of \(\approx\)2 magnitudes at MJD \(\simeq\) 58750 together with three shorter and smaller brightening events in the ZTF light curve. The ZTF light curve (Fig. 19) suggests that \(P_{\rm orb}\) = 29.20(1) h and also that the system is eclipsing. J0028+3118 is a high-accretion nova-like system but the very long period is unusual. The extraordinarily strong He ii 4686 A suggests a hot and helium-rich accretion stream, possibly the result of a hydrogen-depleted donor that is still burning (Harrison, 2018).
#### 5.1.2 J0223\(-\)0520
The spectrum shows double-peaked Balmer emission lines with the higher order ones embedded in broad absorption lines from the white dwarf (Fig. 20). The blue slope of the continuum, and the shape of the white dwarf absorption lines suggest a relatively high white dwarf temperature, which is consistent with the SED being dominated by the white dwarf. The system is brighter in the far-ultraviolet (_GALEX_\(m_{\rm FUV}=19.5\)) than in the optical (\(g=19.9\)). This is not consistent with its position at the lower end of the white dwarf cooling sequence in the HR diagram with \(G_{\rm BP}-G_{\rm RP}=0.35\), which would imply a temperature \(T_{\rm eff}\simeq 8000\) K. The ZTF light curve shows variability but does not reveal an orbital period. There is an _XMM_ X-ray detection which is unusual for a low-\(\dot{M}\) system (Webb et al., 2020). This is probably a WZ Sge. It is possible that the raised temperature of the white dwarf is the consequence of a previous eruption.
#### 5.1.3 J0932+4725 and J0947+4524
Double-peaked Balmer, He i, He ii and Ca emission lines are visible in the SDSS spectra of these systems as well as the Balmer jump in emission (Fig. 21). There is deep central absorption in the emission lines suggesting that they are eclipsing - which is confirmed by Homer et al. (2006) for J0932+4725, and by the ZTF light curve for J0947+4524. Both are closer to the white dwarf cooling sequence than the main sequence in the HR diagram. In the CRTS and ZTF light curves J0932+4725 exhibits outbursts and superoutbursts whilst J0947+4524 exhibits two small (\(\simeq 1\) mag) outbursts. J0932+4725 has a period of 1.588 h (Homer et al., 2006) whilst we obtained a period of 1.34963(2) h from the ZTF light curve.
The large ratio of the helium to hydrogen line strengths suggests that the donor is likely He-enriched. An _HST_ observation of J0932+4725 by Pala et al. (2017) shows a large N v/C iv line flux ratio. Gansicke et al. (2003) attribute this type of nitrogen excess
Figure 19: J0028+3118 is an eclipsing CV with an unusually long orbital period. Top panel: ZTF light curve (\(r\)-band) showing a brightening event lasting about one year. Middle panel: The section of the light curve with MJD \(<\) 58600 folded on the 29.2 h period. Bottom panel: SDSS spectrum with a prominent He ii 4686 Å emission line.
Figure 20: J0223\(-\)0520 is a CV with an unusually hot white dwarf. Top panel: ZTF light curve (\(r\)-band) showing variability. Bottom panel: SDSS spectrum showing double-peaked Balmer emission lines, with the higher-order lines embedded in the broad absorption lines of the white dwarf (Section 5.1.2).
to a system having passed through a phase of thermal timescale mass transfer and now accreting CNO-processed material from a donor stripped of its external layers. J0932+4725 is therefore most probably such an evolved system. Whilst there are no _HST_ spectra of J0947+4524 the optical spectrum is very similar to that of J0932+4725 and suggests that this may also have an evolved donor. Systems with evolved donors are interesting as they are potential precursors to AM CVn systems (Podsiadlowski et al., 2003; Goliasch & Nelson, 2015).
The superoutbursts confirm that J0932+4725 is an SU UMa CV. J0947+4524 is a dwarf nova but we cannot classify it more precisely.
#### 5.1.4 J1146+6759
Double-peaked Balmer and He i emission lines are visible in the spectrum as well as the Balmer jump in emission. The donor is not detected. J1146+6759 is located midway between the main sequence and the white dwarf cooling sequence in the HR diagram. The system was observed in a superoutburst on 2011 January 4 and a superhump period of 0.0615(8) d was reported (vsnet-alert 125672), classifying this system as an SU UMa dwarf nova. The ZTF light curve shows very unusual variability (Fig. 22) with a steady decline in brightness of \(\simeq 1.5\) mag over three years. This is followed by a rapid return to the original magnitude and then an outburst. It demonstrates that some dwarf novae can undergo state changes (section 6.6).
#### 5.1.5 J1150+4041
The spectrum shows strong Balmer, He i and He ii asymmetric emission lines, suggesting that this is a magnetic CV. Neither the white dwarf nor the donor are visible. The ZTF light curve shows continuous variability with an amplitude of \(\simeq 1\) mag, but no outbursts. The periodogram computed from the ZTF data shows a clear and strong signal at an unusually long period of 141.65(5) h, or 5.9 d (Fig. 23). If this signal is genuinely the orbital period of the system, it would exceed that of the current record of 5.7 d (V1017 Sgr, Sekiguchi, 1992). However, in contrast to V1017 Sgr, the spectrum of J1150+4041 does not reveal any signature of a donor star, normally detected at such a long orbital period. Moreover, the system is located relatively close to the white dwarf cooling sequence in the _Gaia_ HR diagram, i.e. intrinsically faint. We have currently no consistent explanation for the observed properties of J1150+4041, but speculate that the long period might be the beat between the white dwarf spin and the orbital period in a slightly asynchronous polar. Radial velocity and fast photometry follow-up is required to unravel the true nature of this system.
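The beat hypothesis is easy to quantify: for a slightly asynchronous polar the photometric beat satisfies \(1/P_{\rm beat}=|1/P_{\rm spin}-1/P_{\rm orb}|\), so a 141.65 h beat requires the spin and orbital periods to differ by only a few per cent. A quick sketch with illustrative orbital periods (the relation is standard; the numbers below are not measurements of J1150+4041):

```python
# Sketch: spin periods implied by a 141.65 h spin-orbit beat, assuming the white
# dwarf spins slightly faster than the orbit, for a few illustrative P_orb values.
def spin_from_beat(p_orb_h, p_beat_h):
    """Spin period (hours) for 1/P_beat = 1/P_spin - 1/P_orb with P_spin < P_orb."""
    return 1.0 / (1.0 / p_orb_h + 1.0 / p_beat_h)

p_beat = 141.65                                   # observed photometric period (hours)
for p_orb in (2.0, 3.0, 4.0):                     # plausible polar orbital periods
    p_spin = spin_from_beat(p_orb, p_beat)
    print(f"P_orb = {p_orb:.1f} h -> P_spin = {p_spin:.3f} h "
          f"({100 * (p_orb - p_spin) / p_orb:.1f} per cent asynchronism)")
```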
#### 5.1.6 J1224+1841
The spectrum shows a blue continuum with Balmer emission lines, which are embedded in Balmer absorption lines for H\(\beta\) and the higher Balmer lines. J1224+1841 is located in the area of the HR diagram occupied by nova-like variables. The ZTF light curve shows multiple outbursts and also a drop of \(\simeq 2\) magnitudes to a low state lasting several hundred days (Fig. 24). There appear to be outbursts during the low state although it is possible that they are very short reversions to the high state. The CRTS light curve shows two similar low states. J1224+1841 belongs to the growing family of CVs bordering between nova-like variables and dwarf novae (Kato, 2019; Kato et al., 2021; Inight et al., 2022). Radial velocity follow-up should be carried out to determine the orbital period of this system.
Figure 21: Two eclipsing CVs with unusually strong He i emission lines, which very likely contain stripped nuclear evolved donor stars. Top panel: Folded ZTF light curve of J0947+4524 showing the eclipse. Middle panel: Spectrum of J0932+4725. Bottom panel: Spectrum of J0947+4524.
Figure 22: J1146+6759 has been classified as an SU UMa dwarf nova, but has an unusual ZTF light curve (\(r\)-band) showing a slow decline into a faint state without outbursts.
#### 5.1.7 J1244+5936
This system is also known as Gaia16adi (Wyrzykowski et al., 2012). A double-peaked H\(\alpha\) emission line is visible in the spectrum as well as other Balmer, He i and He ii emission lines (Fig. 25). The deep central absorption features in the higher Balmer lines, dipping below the continuum flux level, suggest that the system is eclipsing, which is confirmed by the ZTF photometry. Absorption lines from the white dwarf are visible together with TiO molecular bands from the donor. It is closer to the white dwarf cooling sequence than the main sequence in the HR diagram. The ZTF and CRTS light curves show \(\simeq 10\) outbursts with at least one low amplitude superoutburst accompanied by superhumps. We obtained a reliable period of 1.74954(1) h from the ZTF light curve and a consistent period from the CRTS light curve. The folded light curve shows that it is eclipsing with a hot spot just before the eclipse. J1244+5936 appears to be an SU UMa albeit with a very low \(\dot{M}\). It may have recently started accreting - possibly after passing through the period gap; although theory (D'Antona et al., 1989) suggests that this transitional period will be short (\(\sim 10^{6}\) yr).
#### 5.1.8 J1522+3913
The spectrum shows Balmer emission lines above a relatively red continuum. Narrow absorption lines reveal a \(\simeq\) G-type donor star. The SED of the system exhibits a clear ultraviolet excess in the _GALEX_ bands, most likely from the accretion disc. The _Gaia_ HR diagram shows that J1522+3913 is just evolving off the main sequence into the giant branch. The ZTF light curve shows at least one outburst. We obtained a reliable period of 10.7686 d (258.4464 h) from the ZTF light curve confirming the period found by Drake et al. (2014) from the CRTS data. The folded light curve shows a clear double hump, consistent with the detection of ellipsoidal modulation. To our knowledge J1522+3913 has the longest known orbital period of any CV, twice that of V1017 Sgr (\(P_{\rm orb}=5.714\) d, Sekiguchi, 1992). We note that a small number of symbiotic stars are located in this region of the HR diagram as well; however, J1522+3913 does not show any of the defining features in its spectrum (see the definition in section 1.1 of Merc et al. 2021). Instead, we suggest that the very long orbital period, and the location in the
Figure 24: J1224+1841 is a transitional object between nova-like variables and dwarf novae: the ZTF light curve shows both outbursts and a drop-out (red). Outbursts appear to occur also during the faint state.
Figure 23: The ZTF data of J1150+4041 shows a very clear and strong signal at an unusually long period of 141.65(5) h. The highest peak in the periodogram from the ZTF light curve is the second harmonic and the folded light curve shows a double humped profile.
Figure 25: J1244+5936 (Gaia16adi) is an eclipsing system with a very low accretion rate. Top panel: ZTF light curve (\(r\)-band). Middle panel: Light curve folded at the orbital period of 1.75 h, and rebinned into 30 phase bins. Note the eclipse of the disc at \(\Phi_{\rm orb}\simeq 0.4\) and the eclipse of the hot spot at \(\Phi_{\rm orb}\simeq 0.45\). Bottom panel: SDSS spectrum showing H\(\gamma\) and H\(\delta\) absorption lines from the white dwarf and TiO absorption bands from the donor in the red.
HR diagram are consistent with a CV in which the donor has evolved off the main sequence (e.g. GK Per; Alvarez-Hernandez et al., 2021).
#### 5.1.9 J1549+1739
The SDSS spectrum shows single-peaked Balmer emission lines and a very strong He ii emission line on top of a strong blue continuum (Fig. 27). There is no sign of the white dwarf or the donor. J1549+1739 is located in the nova-like variable area of the HR diagram. Whilst there are no outbursts in the CRTS and ZTF light curves, the system meanders on time scales of about a year between \(\simeq 17.5-19\) mag. This is probably a nova-like variable but the strong He ii and its light curve are unusual. Spectroscopic follow up is encouraged to explore the spectral changes between the bright and faint states, and to determine the orbital period of the system.
#### 5.1.10 J1712+3218
The spectrum (Fig. 28) shows Balmer and helium emission lines, including strong He ii, and the Bowen blend above a blue continuum. Narrow absorption lines of Mg, Na and Ca are detected, which may be from the donor star. J1712+3218 is located in the nova-like variable area of the HR diagram. The ZTF light curve shows variability with clear eclipsing behaviour. Drake et al. (2014) reported an orbital period of 17.998 h and concluded that this is a Beta Persei system. We determine \(P_{\rm orb}=8.9599(1)\) h from the eclipses in the ZTF light curve. We also have unpublished MDM time series spectroscopy confirming the orbital period. We classify J1712+3218 as a nova-like variable. This is an unusually long period for a nova-like variable although it would be consistent with an early-type donor that is visible even though the disc is in a high state.
#### 5.1.11 J1726+5432
This system has two SDSS spectra obtained at MJDs 51813 and 51997. Both spectra show single-peaked Balmer, He i and He ii emission lines as well as the Balmer jump in emission (Fig. 29). The SED shows that it has historically been brighter. There is no _Gaia_ parallax. It is above the white dwarf cooling sequence in the SDSS \(u-g,g-r\) diagram, although the SDSS photometry was obtained during a brighter state. The ZTF light curve shows no outbursts but low-level variability. We find an unusually long period of 15.2783(3) h from the ZTF light curve which, together with the helium-rich spectrum may indicate an evolved donor. In the absence of outbursts we cannot assign a sub-type and have classified this generically as a CV.
#### 5.1.12 J1831+4202
He i absorption lines are detected in the spectrum superimposed on a blue continuum (Fig. 30). There is no sign of the white dwarf or the donor. J1831+4202 has previously been classified as a CV (Girven et al., 2011) and a DB white dwarf (Genest-Beaulieu & Bergeron, 2019; Geier et al., 2019) however the DB classification is inconsistent with the variability in the _Gaia_ and ZTF light curves which indicates that this is an interacting binary. It is close to AM CVn in the HR diagram and we hence classify this system as an accretion-disc dominated AM CVn. The drop-outs in the _Gaia_ and ZTF light curves are unusual for this type of system.
#### 5.1.13 J2119+0332
Double-peaked Balmer and He i lines are visible in the spectrum as well as the Balmer jump in emission (Fig. 31). There are also Na absorption lines at \(\lambda\simeq\) 8190A and at \(\lambda\simeq\) 5890A together with broad Mg at \(\lambda\simeq\) 5170A suggesting a K-type donor. J2119+0332 is closer to the main sequence than the white dwarf cooling sequence in the HR diagram. There are no outbursts in either the CRTS or the ZTF light curves. We found ellipsoidal modulation and a reliable orbital period of 2.37223(3) h from the ZTF light curve, i.e. within the period gap. The period and relatively low luminosity are inconsistent with a main-sequence K-type donor and reminiscent of SDSS J001153.08\(-\)064739.2 (Rebassa-Mansergas et al., 2014) and QZ Ser (Thorstensen et al., 2002a) where the donor has undergone nuclear evolution on the main sequence before mass transfer to
Figure 26: The spectrum of the long-orbital-period (10.8 d) J1522+3913 contains narrow absorption features from a G-type donor. The helium emission lines suggest that the donor is evolved. There are none of the molecular and metallic absorption lines or Raman-scattered O vi lines that are two of the conditions in Merc et al. (2021) for a symbiotic star classification.
Figure 27: J1549+1739 is probably a nova-like variable. Top panel: The spectrum shows an unusually strong He ii line. Bottom panel: The ZTF light curve (\(r\)-band) shows repeated changes of magnitude on timescales of about a year.
the white dwarf commences. In these cases a period of rapid mass transfer has resulted in a stripped donor and a short period. The strong Na absorption lines are consistent with CNO burning.
### Non-CVs
The classifications of a number of systems that were previously identified as CVs have now, with the benefit of new data such as from _Gaia_ and ZTF, been revised. Most of the systems previously classified as CVs turned out to be either single white dwarfs or white dwarf plus main sequence binaries, in which emission lines from a nebular background or from the chromosphere of the companion were mistaken for signatures of accretion flows. The new classifications of these non-CV systems are listed in Table 2.
## 6 Discussion
### Distribution of periods
We have assembled periods for over half of the CVs in SDSS I to IV, either from the literature or from our analysis of the ZTF data. The main reasons why the remaining systems have no period measurements are either a low inclination (which will reduce the amplitude of any orbital modulation), faintness (i.e. low signal-to-noise ratio of the time-series photometry or spectroscopy), or sparse sampling of the available time-series data. With the possible exception of low \(\dot{M}\) short-period CVs (which do not have much orbital modu
Figure 30: J1831+4202 is an AM CVn with a helium-dominated absorption line spectrum and it exhibits low-amplitude brightness variations on time scales of about a year with occasional drop-outs. Top panel: The ZTF light curve (\(r\)- and \(g\)-band data shown in red and green respectively) and the _Gaia_ light curve (shown in blue). Bottom panel: Spectrum showing helium absorption lines and an absence of hydrogen lines.
Figure 29: J1726+5432 has an unusually long period of \(P_{\rm orb}=15.2783(3)\) h. Top panel: The periodogram peak at 3.14 cycles per day is at twice the orbital frequency. Bottom panel: Light curve folded at \(P_{\rm orb}\).
Figure 28: J1712+3218 is a nova-like variable with an unusually long orbital period. Top panel: ZTF light curve (\(r\)-band) showing eclipses. Middle panel: ZTF light curve folded on 8.96 h showing the eclipse. Bottom panel: The spectrum shows absorption lines that may be from an early-type donor (Mg is indicated).
lation from the hot spot and are inherently fainter) these factors are not likely to correlate with the period. The distribution of the SDSS CV periods is therefore assumed to be relatively unbiased, which we compare in Fig. 32 with that of version 7.24 (December 2015) of the Ritter & Kolb (2003) catalogue of CVs (hereafter R&K). This final version of the R&K catalogue lists 1429 CVs (including some from SDSS I to IV); however, it focuses on well-studied CVs, many of which were discovered by virtue of their outbursts.
Interestingly the period gap (Howell et al., 2001) is clearly evident in the period distribution of CVs from the R&K catalogue but is not so evident in the SDSS I to IV sample. The proportion of SDSS CVs with periods below three hours is also greater than in the R&K catalogue, presumably due to spectroscopic identification of CVs with infrequent outbursts as reported by Gansicke et al. (2009), and the fact that the SDSS CV sample reaches, on average, fainter apparent magnitudes than the R&K sample (19.7 and 18.1 respectively). This is consistent with the results obtained by Pala et al. (2020) and Inight et al. (2021). Figure 32 highlights that selection effects have to be taken carefully into account when constructing observed CV samples to develop, support, or disprove theories of CV evolution, such as the canonical "disrupted magnetic braking framework" (Rappaport et al., 1983; Knigge et al., 2011) as well as
\begin{table}
\begin{tabular}{l l l} \hline SDSS name & Notes & New type \\ \hline
002603.80-093021.0 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. Kleinman et al. (2013) classified the object as a (hydrogen) DA white dwarf. We found a ZTF period of 9.3086 h confirming that this is probably a binary. \\
034420.16+093006.8 & Parsons et al. (2015) found that this is a detached, eclipsing system. \\
035409.32+100924.4 & Kleinman et al. (2013) found that this was an eclipsing system including a DA white dwarf. This system is within the Eridanus superbubble (see fig. 1 in Ryu et al., 2006) and this is likely to be the source of the Balmer emission lines; the remaining spectrum is that of an isolated white dwarf. \\
053659.12+002215.1 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. Kleinman et al. (2013) classified the system as a DA white dwarf. This system is in the edge of the Eridanus superbubble (see fig. 1 in Ryu et al., 2006) and this is likely to be the source of the Balmer emission lines; the remaining spectrum is that of an isolated white dwarf. \\
075117.09+144423.5 & The spectrum shows emission lines but these are contamination from a nearby CV (PQ Gem) and the object is actually an M-dwarf. \\
102347.67+003841.2 & Wang et al. (2009) found that this object is a millisecond radio pulsar - a low mass X-ray binary including a neutron star. \\
105443.06+285032.7 & Rebassa-Mansergas et al. (2010) concluded that the object is a detached binary including a DA white dwarf. Thorstensen \& Skinner (2012) also determined that the spectrum was not that of a CV. We note that the weak Balmer emission lines appear to vary in strength between multiple SDSS spectra. \\
111703.53+182558.1 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. We classify the system as a detached binary containing a hot white dwarf and a main sequence star. The SDSS and LAMOST (He et al., 2017) spectra show that the emission lines vary in strength consistent with coming from the irradiated secondary at different phases of the orbit. We find a period of 9.33 h from the ZTF light curve. \\
121929.46+471522.8 & Gansicke et al. (2020) concluded that this was an isolated magnetic white dwarf with Zeeman-split Balmer line emission. \\
124959.76+035726.6 & Rebassa-Mansergas et al. (2010) concluded that the object is a detached binary including a DA white dwarf. Thorstensen \& Skinner (2012) also determined that the spectrum was not that of a CV. We find a period of 17.84 h from the ZTF light curve. \\
130236.21+060148.0 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. Kiman et al. (2019) found that it was an isolated system – either an M or L dwarf. \\
131424.68+455528.3 & Kepler et al. (2016) identified this as a CV from SDSS data release 12. However the spectrum only shows very narrow emission lines and there are no recorded outbursts. It is in a nearby galaxy, UGC 8320 (4.3 Mpc, Karachentsev et al., 2003), and so it is most likely to be a hot star with the emission lines arising from the background galaxy. \\
131709.07+245644.2 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. \\
132856.71+310846.0 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. \\
151500.56+191619.6 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. \\
155904.62+035623.4 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV. \\
202520.13+762222.4 & The spectrum is very noisy with only the H\(\alpha\) emission line clearly visible. There are no recorded outbursts and it is on the main sequence. The SED is consistent with this being a single M-dwarf. This is most likely to be a flaring main sequence star rather than a CV. \\
210241.09-004408.3 & Thorstensen \& Skinner (2012) determined that the spectrum was not that of a CV \\ \hline \end{tabular}
\end{table}
Table 2: Systems identified as CVs in the literature which we have reclassified as detached binaries or isolated stars. Unless otherwise stated the original classification was taken from Szkody et al. (2011). Each system may include a white dwarf (WD), M dwarf (M) or other type of main sequence star (MS).
suggests that the structure of the donor star may have no strong dependence upon the efficiency of magnetic braking (Andronov et al., 2003; El-Badry et al., 2022).
The orbital period distribution broken down by CV sub-type is shown in Fig. 33. From the top panel it can be seen that U Gem CVs typically have periods above the period gap whilst most SU UMa CVs are located below the gap. Later in the evolution process low accretion WZ Sge CVs approach the period minimum and subsequently evolve back towards longer periods as "period bouncers". The CVs below the period minimum have helium-rich donors which were partially evolved before accretion commenced or formed directly from a detached white dwarf/brown dwarf binary (Kolb & Baraffe, 1999; Littlefair et al., 2007; Podsiadlowski et al., 2003). Polars tend to have shorter periods whilst intermediate polars and pre-polars tend to have longer periods. The "period gap" is not so apparent in magnetic CVs (see for example Mason et al., 2019) and may be a result of less efficient magnetic braking due to the white dwarf magnetic field (Webbink & Wickramasinghe, 2002). Nova-like variables congregate above the period gap alongside classical novae. Many classical novae become nova-like variables following their eruptions although there are exceptions where they become dwarf novae in the period between eruptions (Honeycutt et al., 1995; Mroz et al., 2016). Short-period nova-like variables are very rare; BK Lyn is the most famous example: Patterson et al. (2013) speculate that this is because it is the result of a relatively recent (\(\sim\) 2000 yr ago) nova, and it has now morphed into an ER UMa dwarf nova.
The evolution of orbital periods of CVs across the HR diagram is shown in Fig. 34. CVs originate in or near the main sequence, with their luminosity dominated either by a luminous accretion disc or the donor star, and progressively migrate towards the white dwarf cooling sequence as their donor mass decreases and periods shorten. In most WZ Sge systems the contribution of the accretion disc to their optical flux is small, and that of their donor stars entirely negligible.
### Distribution of CV sub-types
The distribution of the SDSS I to IV CV sub-types is shown in Fig. 35, where we compare it with that of the 150 pc volume-limited sample (Pala et al., 2020) and also with that of the R&K catalogue.
The proportion of magnetic CVs (polars, pre-polars and intermediate polars) in our sample is 20 per cent which is consistent with the 18 per cent found in Ritter & Kolb (2003) but is much lower than the 34 per cent found in Pala et al. (2020). This may be due to the small sample size (42) in Pala et al. (2020) which has a statistical 2\(\sigma\) uncertainty of \(\pm\)19 per cent. By comparison our sample (507)
Figure 31: The spectrum of J2119+0332 shows Na and Mg lines consistent with a K-type star. Whilst bright enough to outshine the disc the spectrum is too dim for a main sequence K-type suggesting a nuclear evolved stripped donor. This hypothesis is corroborated by the short orbital period of 2.4 h.
Figure 32: Cumulative distribution of the periods of the CVs in SDSS I to IV. The distribution is compared with the Ritter and Kolb catalogue of CVs (version 7.24, Ritter & Kolb, 2003). The period minimum (red line) and “period gap” (pink rectangle) are identified (see text for details).
Figure 33: Distribution of the periods of the CV sub-types in SDSS I to IV. CVs evolve from right to left past the “period gap” (pink rectangle) to the period minimum (broken black line) and then back out to longer periods. Top panel: U Gem CVs typically have periods above the period gap whilst most SU UMa CVs have shorter periods below the gap. Middle panel: Polars tend to have shorter periods whilst intermediate polars and pre-polars tend to be longer. Bottom panel: Nova-like variables congregate above the period gap alongside classical novae.
has an uncertainty of \(\pm 0.03\) per cent. Alternatively the difference may be because the absolute luminosity of polars is on average lower than non-magnetic systems causing a selection effect in our magnitude-limited sample.
The proportion of novae and nova-likes is lower in Pala et al. (2020) than our sample. This is due to their intrinsic brightness rendering them visible at far greater distances than other CVs and hence there will be a higher proportion in our magnitude-limited sample. The proportion is greater still in Ritter & Kolb (2003) due to historic novae identified by their eruptions.
We found a smaller proportion of WZ Sge CVs than Pala et al. (2020) most probably because they are inherently faint and only observed at relatively short distances. The small proportion of WZ Sge CVs in Ritter & Kolb (2003) is due to inconsistent classification; Ritter & Kolb (2003) classify many WZ Sge CVs simply as SU UMa.
### Variability of CVs
The photometric variability of CVs is widely used as a selection criterion when searching for CV candidates. However, different CV sub-types display very different types of variability, both in terms of amplitude and time-scales, and hence specific variability selection methods may introduce strong biases towards (or against) individual sub-types.
To illustrate this point, we computed three variability criteria for the SDSS I to IV CVs: (1) The difference between the maximum and minimum magnitude over the full set of ZTF observations which will be sensitive to outbursts and eclipses, where we only included systems with at least 150 observations. (2) The reduced \(\chi^{2}\) of the ZTF photometry with respect to its median. (3) the _Gaia_\(G\)-band variability defined as (Guidry et al., 2021)
\[G_{\rm var}=\frac{\rm phot\_g\_mean\_flux\_error\times\sqrt{phot\_g\_n\_obs} }{\rm phot\_g\_mean\_flux} \tag{1}\]
The distributions of these three types of variability are shown in the _Gaia_ HR diagram in Fig. 36.
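The three criteria are simple to evaluate from the archival light curves. Below is a minimal Python sketch; the argument names, and the assumption that per-epoch ZTF magnitude errors are available alongside the magnitudes, are ours and not part of the ZTF or _Gaia_ data model.

```python
import numpy as np

def variability_metrics(ztf_mag, ztf_mag_err, gaia_flux, gaia_flux_err, gaia_n_obs):
    """Evaluate the three variability criteria for a single object.

    ztf_mag, ztf_mag_err : arrays of ZTF magnitudes and their errors
    gaia_flux, gaia_flux_err, gaia_n_obs : Gaia phot_g_mean_flux,
        phot_g_mean_flux_error and phot_g_n_obs for the same object
    """
    ztf_mag = np.asarray(ztf_mag, dtype=float)
    ztf_mag_err = np.asarray(ztf_mag_err, dtype=float)

    # (1) peak-to-peak amplitude, only used when there are at least 150 epochs
    amplitude = ztf_mag.max() - ztf_mag.min() if ztf_mag.size >= 150 else np.nan

    # (2) reduced chi-squared of the light curve about its median magnitude
    chi2_red = np.sum(((ztf_mag - np.median(ztf_mag)) / ztf_mag_err) ** 2) / (ztf_mag.size - 1)

    # (3) Gaia G-band variability index, Eq. (1)
    g_var = gaia_flux_err * np.sqrt(gaia_n_obs) / gaia_flux

    return amplitude, chi2_red, g_var
```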
The maximum amplitude of variability in the ZTF light curves (top panel) shows a concentration half-way between the white dwarf cooling track and the main sequence. This reflects the predominant population of SU UMa type CVs in this area of the HR diagram (see Fig. 15), which have frequent dwarf nova outbursts with amplitudes of \(\simeq 3-5\) mag, i.e. it is very likely that ZTF, despite the sparse sampling, will have caught them during an outburst. In contrast, the regions occupied predominantly by nova-like variables (\(G_{\rm BP}-G_{\rm RP}\simeq 0.25\), \(G_{\rm abs}\simeq 6\)) and WZ Sge systems (\(G_{\rm BP}-G_{\rm RP}\simeq 0.25\), \(G_{\rm abs}\simeq 12\)) show only relatively small variations between the minimum and maximum brightness detected by ZTF. This reflects their stable, hot (nova-like variables) or cool (WZ Sge CVs) accretion discs. However, a few systems in the region occupied by WZ Sge CVs show very large amplitude, \(\Delta g\simeq 8\) mag, variability, implying that ZTF observed them during one of their rare super-outbursts. The ZTF reduced \(\chi^{2}\) diagram shows overall a similar trend; however, the area occupied by nova-like variables exhibits a higher level of variability than in the maximum amplitude diagram, most likely because many of these systems have persistent short-term variability (flickering; Bruch, 1992), and on average higher quality ZTF light curves as they are intrinsically bright. Finally, the _Gaia_ variability diagram shows again a lack of variability in the area occupied by nova-like variables and WZ Sge stars, and overall a lower number of strongly variable systems. This is most likely due to the fact that the _Gaia_ data span a shorter baseline (\(\simeq 2.8\) yr), and have on average fewer data points than the ZTF photometry. Future _Gaia_ releases will extend the baseline to at least
Figure 34: HR diagram of the CVs in SDSS I to IV showing the orbital periods where available. CVs without a period are shown as empty black circles. Note that the shortest periods are close to the white dwarf cooling sequence and the longest periods are close to the main sequence.
Figure 35: Comparison of the distribution of sub-types between a volume-limited 150 pc sample (Pala et al., 2020), the SDSS I to IV sample and the R&K catalogue. Objects identified only as CVs or magnetic CVs (30 out of 507) without further classification have been omitted in each case. Selection effects are clearly evident with faint and/or non-outbursting sub-types less common in the R&K sample (note that many WZ Sge in the Ritter and Kolb sample are classified as SU UMa). The proportion of magnetic systems is also significantly higher in the 150 pc sample.
5 yr, possibly up to 10 yr, increasing the diagnostic power of the _Gaia_ variability index.
### Space Density
Measuring the space density of the various CV sub-types is fundamentally important for validating evolutionary models.
Assuming that the Galaxy is axially symmetrical the space density of a class of objects is a function of the height above the Galactic disc and the radial distance from the Galactic centre. Our analysis is sufficiently local that radial variations will be small and we henceforth ignore this factor. We further assume an exponential vertical distribution with scale height \(h\) where \(h\) is assumed to be dependent upon the typical age of the CV sub-type. The space density at a height \(z\) above the plane is therefore:
\[\rho\left(z\right)=\rho_{0}\times\exp\left(-\frac{\left|z\right|}{h}\right) \tag{2}\]
Our objective is to estimate the space density of the CV sub-types. Hence if the estimate of the number of a sub-type in a volume \(V\) is \(N_{\rm obs}\) the space density is given by:
\[\rho_{0}=\frac{N_{\rm obs}}{\int_{V}\exp\left(-\frac{\left|z\right|}{h}\right) dv} \tag{3}\]
which for a suitably large sample volume would become a constant dependent only on the choice of \(h\). This poses four challenges which are addressed in the following sections. (1) some CVs within the SDSS footprint would have been too bright or too faint to be observed by SDSS; (2) an appropriate value of \(h\) for each CV sub-type needs to be chosen; (3) SDSS was not complete and did not target a proportion of CVs within its footprint; (4) the SDSS footprint only achieved partial sky coverage and a method of extrapolating the results derived from the SDSS observations is needed.
#### 6.4.1 CV brightness
SDSS I to IV spectroscopy did not target any objects with \(g\lesssim 15\), which has been taken into account in section 6.4.3 below. At the faint end it will be apparent from the HR diagram (Fig. 34) that some sub-types have a larger intrinsic luminosity than others and will therefore be observable at greater distances by SDSS. We therefore sought to establish for each sub-type a limiting distance (\(R_{\rm lim}\)) within which they would have reliably been detected by SDSS. To derive this we first assume that CVs with \(g<21\) will be reliably detected from inspection of their SDSS spectrum if targeted; this is a conservative assumption as for example Paris et al. (2017) used \(g<22\). In the following, we decided to use the _Gaia_\(G\)-band magnitude rather than SDSS-g, as the multiple observations of _Gaia_ more adequately sample the average brightness of an individual system. The two passbands yield very similar apparent magnitudes for a given brightness. We calculated the mean (\(\mu\)) and standard deviation (\(\sigma\)) of the absolute _Gaia_\(G\) magnitude of each sub-type from the SDSS I to IV CVs (ignoring objects where the error in the Bailer-Jones et al. (2018) distance was greater than 20 per cent). By definition for any CV of a given sub-type with apparent magnitude \(m\), absolute magnitude \(M\) and distance \(d\) pc there is a 99.7 per cent probability that \(M<\mu+3\sigma\) hence
\[m-5\log_{10}(d)+5< \mu+3\sigma \tag{4}\] \[m< \mu+3\sigma+5\log_{10}(d)-5 \tag{5}\]
Figure 36: Variability of CVs related to position in the HR diagram. In each case older CVs which are approaching the white dwarf cooling sequence show less variability. Top panel: Distribution of maximum variability in the \(g\)-band ZTF light curve. Middle panel: Reduced chi-squared of the variability of the \(g\)-band ZTF light curve with respect to the median magnitude. Bottom panel: Variability in _Gaia_\(G\)-band magnitude (Eq. 1).
Now \(d<R_{\rm lim}\) is the largest value of \(d\) corresponding to \(m=21\):
\[21= \mu+3\sigma+5\log_{10}(R_{\rm lim})-5 \tag{6}\] \[5\log_{10}(R_{\rm lim})= 5+21-\mu-3\sigma\] (7) \[R_{\rm lim}= 10^{1+0.2(21-\mu-3\sigma)} \tag{8}\]
The values of \(R_{\rm lim}\) for each sub-type are shown in Table 3.
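Equation (8) is a one-line calculation. The sketch below, with the \(\mu\) and \(\sigma\) values copied from Table 3, reproduces the quoted limiting distances to within the rounding of the tabulated inputs.

```python
def limiting_distance(mu, sigma, m_lim=21.0):
    """Eq. (8): R_lim in pc for a sub-type with mean absolute magnitude mu and scatter sigma."""
    return 10.0 ** (1.0 + 0.2 * (m_lim - mu - 3.0 * sigma))

# (mu, sigma) pairs from Table 3
subtypes = {"U Gem": (7.83, 1.13), "SU UMa": (9.90, 0.94), "WZ Sge": (11.77, 0.39),
            "Polar": (10.33, 1.21), "Intermediate polar": (8.59, 1.02),
            "Nova-like variable": (5.77, 1.03)}
for name, (mu, sig) in subtypes.items():
    print(f"{name:20s} R_lim ~ {limiting_distance(mu, sig):.0f} pc")
```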
#### 6.4.2 Scale heights of the different CV sub-types
As we shall see the choice of scale height in the model has a significant effect upon the final estimates of the space density. Patterson (1984) used a single scale height of 150 pc for all sub-types in their analysis as did Duerbeck (1984) (who used \(125\pm 22\) pc). Pretorius et al. (2007a) assumed scale heights of 120, 260 and 450 pc (hereafter the P2007 model) for long-\(P_{\rm orb}\) systems (\(P_{\rm orb}>3\) h), normal short-\(P_{\rm orb}\) systems (\(P_{\rm orb}<3\) h) and period bouncers respectively. This was based on the assumption that these systems had increasing ages and therefore, as a result of scattering with other stars, increased space velocities resulting in higher average heights above the Galactic plane (Kolb & Stehle, 1996). We have used the assumptions of P2007 in our work. Where systems do not have an observed period we assign them to one of the three scale heights in P2007 based on their sub-type. U Gem CVs, nova-like variables and intermediate polars are assumed to be long-period CVs, short-period systems include SU UMa CVs, polars and WZ Sge CVs with the third category reserved for period bouncers. We attempted to analyse the SDSS I to IV data in order to support the scale height assumptions in P2007 but in practice our samples were not complete to a sufficient Galactic height.
#### 6.4.3 Completeness
SDSS I to IV used complex targeting rules focusing largely on quasars and not CVs (e.g. Richards et al., 2002; Ross et al., 2012; Dwelly et al., 2017). Fortunately CVs have similar colours to quasars and a significant proportion of quasar candidates were observed by SDSS. However, given that CV sub-types occupy different regions of the _Gaia_ HR diagram (and in the colour-colour diagrams used by SDSS for their target selection) the proportion of each sub-type observed by SDSS is not the same. Whilst in principle an analysis could estimate the degree of completeness for each sub-type, such an endeavour would be very complex. We have adopted an empirical approach by analysing version 7.24 of the R&K CV catalogue which contains 1429 CVs. We removed the 117 CVs in the R&K catalogue that were originally discovered by SDSS leaving 1312 systems. The apparent magnitude ("mag1") of these CVs in their normal state was processed and systems that were too bright (\(m<15\)) or too faint (\(m>21\)) were also removed3. From this we established that 208 CVs fell within the footprint of SDSS. We then established that 97 of these (46.6 per cent) were re-discovered by SDSS and we assumed that this proportion applies to all CVs and sub-types within the footprint. We then defined a correction factor \(C_{\rm corr}=2.14\) as the reciprocal of this proportion.
Footnote 3: Ideally, this analysis would have been done by sub-type but we discovered that the “type1” and “type2” fields in the R&K catalogue were unreliable; in particular many WZ Sge had been classified as SU UMa
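The correction factor itself follows from simple counting; a minimal sketch using the numbers quoted above:

```python
n_in_footprint = 208   # R&K CVs (15 < mag < 21, not discovered by SDSS) inside the SDSS footprint
n_recovered = 97       # of those, re-discovered by SDSS spectroscopy
completeness = n_recovered / n_in_footprint   # ~0.466
c_corr = 1.0 / completeness                   # ~2.14, applied to all sub-types
```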
Pala et al. (2020) used a different approach by quantifying the number of _Gaia_ objects within the SDSS footprint which fulfilled the quasar colour criteria for SDSS I and II and did not include any cut in the red colours. This resulted in 5300 objects and Pala et al. (2020) then found the proportion of these objects with an SDSS spectrum (53 per cent), of which \(\approx 6\) were predicted to be in the sample. Only four were found (three had a spectrum) from which they concluded that the completeness was 71 per cent (i.e. \(C_{\rm corr}=1.4\)). It is unsurprising that the completeness of the full SDSS I to IV sample is lower than that of the nearby, bright 150 pc sample.
#### 6.4.4 SDSS Footprint
The coordinates of the centres of the plates used in the "BOSS", "eBOSS" and "SDSS" surveys, together with their radius (1.49 degrees), define the coverage. However there are two problems to address; firstly the plates overlap and secondly CVs are not uniformly distributed in space but concentrated near the Galactic plane.
For convenience we define from the denominator of (3) an effective volume for a sphere radius \(R\):
\[V_{\rm eff}\left(h,R\right)=\int_{V(R)}\exp\left(-\frac{|z|}{h}\right)dv \tag{9}\]
To address the overlap problem we model the celestial sphere using \(2^{8}\) astropy HEALPix pixels (Gorski et al., 2005). Each HEALPix pixel has the same surface area (\(\Omega\) steradians) on the unit sphere and can be represented by a cone (Fig. 37). The effective volume of a sphere radius \(R\) is therefore the sum of the volume of the cones:
\[V_{\rm eff}(h,R)=\sum_{\rm All\,pixels}\int_{0}^{R}\exp\left(-\frac{|r\sin(b)|}{h}\right)(r^{2}\Omega\,dr) \tag{10}\]
The following substitutions are used:
\[A=\frac{|R\sin(b)|}{h} \tag{11}\]
Figure 37: Diagram depicting a slice of infinitesimal thickness \(dr\) through a cone subtending a solid angle (\(\Omega\)) at a radial distance (r). The cone represents one HEALPix pixel. The volume of the slice is \(r^{2}\Omega dr\).
\begin{table}
\begin{tabular}{l c c c c} \hline Type & \(N\) & \(\mu\) & \(\sigma\) & \(R_{\rm lim}\) (pc) \\ \hline U Gem & 16 & 7.83 & 1.13 & 899 \\ SU UMa & 66 & 9.90 & 0.94 & 451 \\ WZ Sge & 33 & 11.77 & 0.39 & 412 \\ Polar & 39 & 10.33 & 1.21 & 256 \\ Intermediate polar & 7 & 8.59 & 1.02 & 739 \\ Nova-like variable & 17 & 5.77 & 1.03 & 2676 \\ \hline \end{tabular}
\end{table}
Table 3: Absolute magnitudes and limiting distances of CV sub-types from SDSS I to IV. \(N\) is the number of SDSS I to IV objects with reliable distances (see text). \(\mu\), \(\sigma\) are the mean and standard deviation of the absolute _Gaia_\(G\) magnitudes.
\[x=\left(\frac{rA}{R}\right) \tag{12}\]
We then compared the coordinates (RA, Dec) of each HEALPix pixel with the coordinates of the centre of each plate to identify whether that pixel was covered by that plate. We then defined SDSS\({}_{\rm pix}\) as the de-duplicated set of HEALPix pixels that lay within the coverage of one or more plates. The effective volume of the SDSS sample is therefore:
\[\begin{split} V_{\rm eff}(h,R)&=\sum_{\rm SDSS_{pix}}\int_{0}^{A}\exp\left(-x\right)\left(\left(\frac{Rx}{A}\right)^{2}\Omega\left(\frac{R\,dx}{A}\right)\right)\\ &=\Omega R^{3}\sum_{\rm SDSS_{pix}}\left(\frac{1}{A^{3}}\right)\int_{0}^{A}x^{2}\exp\left(-x\right)\,dx\end{split} \tag{13}\]
Solving the integral analytically yields
\[\begin{split} V_{\rm eff}(h,R)&=\Omega R^{3}\sum_{ \rm SDSS_{pix}}\left(\frac{1}{A^{3}}\right)\left(2-\left(A^{2}+2A+2\right) \exp\left(-A\right)\right)\end{split} \tag{14}\]
which can be computed numerically for a given \(h\) and \(R\).
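A possible numerical implementation of Eq. (14) is sketched below using healpy and astropy. The selection of which pixels fall inside the de-duplicated plate coverage is omitted (an all-sky pixel list is used purely for illustration), and pixels lying almost exactly in the Galactic plane are given the analytic small-\(A\) limit of 1/3.

```python
import numpy as np
import healpy as hp
from astropy.coordinates import SkyCoord
import astropy.units as u

def effective_volume(h, R, b_pix, omega):
    """Eq. (14): effective volume (pc^3) summed over HEALPix pixels.
    h, R  : scale height and limiting radius in pc
    b_pix : Galactic latitudes (rad) of the pixel centres inside the footprint
    omega : solid angle of one pixel (sr)
    """
    A = np.abs(R * np.sin(b_pix)) / h
    integral = 2.0 - (A**2 + 2.0 * A + 2.0) * np.exp(-A)   # = int_0^A x^2 e^-x dx
    ratio = np.where(A > 1e-3, integral / np.maximum(A, 1e-3) ** 3, 1.0 / 3.0)
    return omega * R**3 * np.sum(ratio)

nside = 2**8
ipix = np.arange(hp.nside2npix(nside))
omega = hp.nside2pixarea(nside)                     # steradians per pixel
lon, lat = hp.pix2ang(nside, ipix, lonlat=True)     # pixel centres, treated here as RA, Dec in deg
b = SkyCoord(ra=lon * u.deg, dec=lat * u.deg).galactic.b.to_value(u.rad)
v_eff = effective_volume(h=260.0, R=300.0, b_pix=b, omega=omega)
```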
#### 6.4.5 Estimates of space density
Bringing the preceding sections together the three estimates of the space density (for \(h=120,260,450\) pc) are calculated from
\[\rho_{0}=\frac{N_{\rm obs}\times C_{\rm corr}}{V_{\rm eff}(h,R_{\rm lim})} \tag{15}\]
and the results are listed in Table 4.
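Equation (15) then reduces to a single line. The sketch below uses the WZ Sge numbers from Table 4 together with a placeholder effective volume; the real value comes from the footprint calculation sketched above.

```python
def space_density(n_obs, c_corr, v_eff):
    """Eq. (15): rho_0 = N_obs * C_corr / V_eff, in pc^-3."""
    return n_obs * c_corr / v_eff

v_eff_wz = 3.0e7   # pc^3 -- placeholder, standing in for V_eff(h=260 pc, R=412 pc) over the footprint
print(space_density(n_obs=38, c_corr=2.14, v_eff=v_eff_wz))   # ~2.7e-6 pc^-3
```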
Next, we need to discuss the accuracy of these results. From equation (15) the primary sources of error in \(\rho_{0}\) are in the correction factor and the scale height. The true scale height in each case will very likely lie between the two extreme values that we adopted, i.e. \(h=120\) pc and \(h=450\) pc (Table 4), and so a reasonable estimate for \(\rho_{0}\) (excepting possibly old WZ Sge systems) is \(\rho_{0}(260\) pc) with the extreme values forming a notional \(3\sigma\) error estimate. We have avoided quoting any statistical uncertainties on our space density estimates as the true uncertainties are likely dominated by the assumptions on the scale height.
It is interesting to compare these estimates with those based on different approaches used in previous works (Fig. 38), noting that the assumed scale heights have to be comparable.
Section 9.5 in Warner (1995) proposed \(\rho_{0}(103\,{\rm pc})=0.4\times 10^{-6}\,{\rm pc}^{-3}\) for U Gem type systems and \(\rho_{0}(176\,{\rm pc})=0.25\times 10^{-6}\,{\rm pc}^{-3}\) for SU UMa systems. Both values are lower than contemporary estimates presumably as earlier CV discoveries were primarily the result of following up transients and limited to brighter targets.
Pretorius et al. (2007b) estimated \(\rho_{0}(120\,{\rm pc})=(15^{+25}_{-8})\times 10^{-6}\,{\rm pc}^{-3}\) for all non-magnetic CVs based on a very small sample of four systems. Our results are towards the lower end of this range.
Pretorius & Knigge (2012) estimated a density of \(\rho_{0}(260\,{\rm pc})=(3.8^{+5.3}_{-1.9})\times 10^{-6}\,{\rm pc}^{-3}\) for non-magnetic CVs which compares with our value of \(\rho_{0}(260\,{\rm pc})=(4.29)\times 10^{-6}\,{\rm pc}^{-3}\) (by adding the individual estimates from Table 4).
Pretorius et al. (2013) estimated \(\rho_{0}=(1.3^{+0.6}_{-0.4})\times 10^{-6}\,{\rm pc}^{-3}\) for magnetic CVs, based upon a scale height of 120 pc for long period systems and 260 pc for short period systems, which is consistent with our estimates. They also estimated values for polars and intermediate polars which are consistent with this work.
Schwope (2018) estimated \(\rho_{0}(120\,{\rm pc})=0.13\times 10^{-6}\,{\rm pc}^{-3}\) for intermediate polars. This is significantly lower than our estimate.
Pala et al. (2020) estimated a composite \(\rho_{0}(280\,{\rm pc})=(4.8^{+0.6}_{-0.8})\times 10^{-6}\,{\rm pc}^{-3}\) for all CVs and our \(\rho_{0}(260\,{\rm pc})=5.78\times 10^{-6}\,{\rm pc}^{-3}\) is consistent with this noting that we use a larger correction factor (2.1 compared with 1.4 in Pala et al. (2020)). Pala et al. (2020) also provided incomplete (their footnote 9 states that they were unable to estimate the completeness of individual sub-types) estimates for \(\rho_{0}(260\,{\rm pc})\) for five sub-types. These values are lower than our estimates - partly due to (in)completeness but also because we use a larger correction factor (2.1 compared with 1.4 in Pala et al. (2020)). Our larger sample size may also have had an effect brought about by the effectiveness of SDSS in discovering short-period systems.
Inight et al. (2021) estimated \(\rho_{0}=(2.23^{+1.04}_{-0.6})\times 10^{-6}\,{\rm pc}^{-3}\) for their Gold Sample of 201 systems. The Gold Sample is a reliable subset of the known CVs (at least 305) within 300 pc and their estimate is therefore a lower limit for the true space density.
In principle, the fractions of sub-types from our space density estimates should reflect the true fractions in the underlying population without selection effects - as opposed to the number ratios in the magnitude limited SDSS I to IV sample. We therefore compared these fractions with those from the SDSS I to IV sample (Fig. 39), where the main difference is the larger proportion of WZ Sge stars and a lower fraction of SU UMa CVs when looking at the space density results. This is in line with the expectation that low-luminosity WZ Sge stars should be under-represented in a magnitude limited sample. Whilst demonstrating that the SDSS I to IV sample is subject to some selection effects, we also caution not to over-interpret the space density ratios because of the significant effect of scale height assumptions.
In summary, the long-standing discussion of the CV space density seems to converge to a value of \(\simeq 5\times 10^{-6}\)pc\({}^{-3}\). However, the uncertainties in scale heights continue to limit the accuracy of any estimates of space density. We have calculated space densities for six sub-types and Fig. 38 demonstrates that these estimates are consistent with other work.
### Low accretion rate systems
Period bouncers are CVs where the mass of the donor has shrunk below the level needed to sustain hydrogen fusion in the core (\(\simeq 0.061\)M\({}_{\sun}\)) and the donor has evolved to a brown dwarf. A consequence of this is that the orbital separation of the CV increases and \(P_{\rm orb}\) evolves back from the period minimum (often taken as \(\sim 76.2\) min, Knigge 2006) towards longer periods. Patterson (2011) identified a total of 22 candidate period bouncers, eight of which are in our sample (Table 5). Together with one polar, the 13 non-magnetic period-bouncers in Table 5 represent 14 per cent of the WZ Sge CVs and 2.6 per cent of our complete sample. The two new candidates from this work (J0904+4402 and J2131\(-\)0039) were identified because of their spectra (white dwarf visible and donor not apparent) and an orbital period significantly larger than the period minimum. We estimated the space density of period bouncers using 14 per cent of the space density of WZ Sge from Table 4 to be \(\rho_{0}(450)=0.27\times 10^{-6}\)pc\({}^{-3}\). Comparing this with \(\rho_{0}(P2007)=7.92\times 10^{-6}\)pc\({}^{-3}\) implies that only 3.4 per cent of CVs are period bouncers which is a long way from the 70 per cent
of all CVs predicted by Kolb (1993). Goliasch & Nelson (2015) modelled CV evolution to predict the intrinsic proportion of period bouncers as being \(38-60\) per cent of all CVs and the observed proportion (taking account of selection effects) as \(2.5-11.2\) per cent. Although period bouncers are intrinsically faint any observational bias is unlikely to fully account for this difference. As long as period bouncers maintain a minimum amount of mass transfer, and associated Balmer emission lines, SDSS spectroscopy is extremely efficient at identifying WZ Sge CVs, independent of their orbital period and their outburst frequency. The lack of observed period bouncers remains puzzling, and could be due to the donors in period bouncers either becoming detached or merging with the white dwarfs.
### Unusual state changes
In contrast to dwarf novae, which undergo quasi-regular outbursts because of thermal instabilities in their accretion discs, nova-like variables are known to exhibit state changes due to sustained changes in \(\dot{M}\) (Shafter et al., 1985; Leach et al., 1999; Rodriguez-Gil et al., 2007). Polars are also known to undergo state changes on a timescale of months to years (Kafka & Honeycutt, 2005; Ramsay et al., 2004; Simon, 2016), and low states have been observed occasionally in a small number of dwarf novae (Wood et al., 1995; Schreiber et al., 2002). The cause of the high/low state transitions is still debated, but a likely contender is star spots near the inner Lagrangian point (Livio & Pringle, 1994; King & Cannizzo, 1998; Hessman et al., 2000).
We have identified eight CVs (Figure 40) within the
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{2}{c}{Limiting} & \multicolumn{5}{c}{Space Density (\(10^{-6}\)pc\({}^{-3}\))} \\ Type & distance & \(N\) & \multicolumn{2}{c}{\(\rho_{0}(120)\)} & \(\rho_{0}(260)\) & \(\rho_{0}(450)\) & \(\rho_{0}(P2007)\) \\ \cline{2-7} U Gem & 899 & 10 & 0.68 & 0.16 & 0.09 & 0.63 \\ SU UMa & 451 & 30 & 4.21 & 1.71 & 1.19 & 1.88 \\ WZ Sge & 412 & 38 & 6.24 & 2.72 & 1.96 & 2.52 \\ Polar & 256 & 8 & 3.25 & 1.89 & 1.53 & 1.84 \\ IP & 739 & 6 & 0.50 & 0.14 & 0.08 & 0.32 \\ NL & 2676 & 23 & 1.01 & 0.12 & 0.04 & 0.97 \\ All CVs & 150 & 7 & 8.21 & 5.78 & 5.07 & 5.68 \\ All CVs & 300 & 52 & 13.92 & 7.23 & 5.61 & 7.92 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Analysis of the space density of CV sub-types for different assumptions of scale height. \(\rho_{0}\) (P2007) assumes the scale heights from Pretorius et al. (2007a). \(N\) is the number of SDSS I to IV objects that are closer than the limiting distance. Note that the values of \(\rho_{0}\) for individual sub-types are slightly understated (\(\approx 6\) per cent) due to the remaining unclassified CVs and dwarf novae. This does not apply to the “All CVs” values which have been calculated for 150 pc and 300 pc limiting distances to enable comparison with Pala et al. (2020) and Inight et al. (2021) respectively.
Figure 38: Comparison of the space density estimates for CVs from this work (bar chart) and published values. All estimates use the P2007 assumptions for scale height \(h\) unless otherwise stated. Left panel: Estimates for individual sub-types. Note that Pala et al. (2020) are understated as they do not take account of completeness. Right panel: Composite estimates. Inight et al. (2021) specifically excluded selection effects.
SDSS I to IV sample that display erratic light curve morphologies, neither showing disc outbursts nor clearly identifiable state changes, and which have spectra that are inconsistent with being either magnetic CVs or nova-like variables. These systems defy the standard CV classification scheme outlined in section 3. These eight systems exhibit significant brightness changes on timescales that indicate the cause is a change to \(\dot{M}\) rather than instabilities of their discs. Below, we discuss the properties of these systems.
J0838+4910 is below the period gap with \(P_{\rm orb}=1.69\) h. Thorstensen et al. (2015) rightly concluded that this was an SU UMa based on the measured orbital period and the CRTS light curve. The SDSS spectrum is consistent with an SU UMa classification. However the ZTF data is contradictory - showing no outbursts and two long-lasting faint states.
J0853+4848 has a spectrum that shows Balmer, He i and weak He ii emission lines. The white dwarf is not visible but the M-type donor is detected in the red part of the spectrum. The system has been brighter since the SDSS spectrum was obtained. There is no CRTS data and the ZTF data shows one small outburst but also state changes of up to two magnitudes. The period is 2.36 h which is in the period gap.
J0922+3103 (AR Cnc) and J1544+2553 have periods of 5.15 h and 6.03 h respectively and spectra which resemble U Gem CVs with double-peaked Balmer and helium lines, and clear signatures of their M-dwarf donors. However their light curves more closely resemble those of polars and they exhibit no outbursts. This ambiguity prevents any further classification beyond that of a CV. Palaversa et al. (2013) classified J1544+2553 as a nova-like using an automated process but this is not consistent with the strong lines in the spectrum, the detection of the donor and the position in the HR diagram.
J1239+2108 (IR Com) has previously been identified as a CV with erratic brightness changes (Richter et al., 1997), and SDSS obtained a spectrum when accretion had virtually ceased for an extended period (Manser and Gansicke, 2014), revealing both the white dwarf and the donor star. It is on the lower edge of the period gap with \(P_{\rm orb}=2.09\) h.
J1244+3004 (CSS 080427:124418+300401) was found in a bright state within the CRTS light curve for about ten years, albeit exhibiting outbursts (Thorstensen et al., 2015). The ZTF light curve (Fig. 40) shows a prolonged drop of \(\simeq 2\) mag. Its period of \(P_{\rm orb}=1.86\) h places it below the period gap.
J2309+2135 (V405 Peg) was identified by Schwope et al. (2002b) as the counterpart of a _ROSAT_ X-ray source, and based on the detection of the donor star, and the apparent magnitude, the authors speculated that V405 Peg may be as close as \(\simeq 30\) pc. Thorstensen et al. (2009) determined the period to be 4.26 h, and measured a parallax corresponding to a distance of \(\simeq 150\) pc - superseded by the _Gaia_ distance of \(\simeq 173\) pc. Schwope et al. (2014) obtained deep X-ray observations, which revealed a relatively low X-ray luminosity, and the authors noted that the system displays characteristics of both magnetic and non-magnetic CVs. The SDSS spectrum very clearly reveals the M-type donor star, with little contribution from either an accretion disc or the white dwarf.
J2330+3033 has a spectrum showing both Balmer and He i
Figure 39: Comparison of the distribution of sub-types between the SDSS I to IV sample and our predicted space densities using the P2007 scale height assumptions. This plot ignores sub-types for which we have not estimated a space density.
\begin{table}
\begin{tabular}{l l l l l} \hline SDSS name & CV type & Reference & \(P_{\rm orb}\) (h) & \(M_{\rm donor}\)(M\({}_{\sun}\)) \\ \hline _J0058-0107_ & WZ Sge & Kato et al. (2017) & 1.559 & q=0.079 \\ J0804+5103 & WZ Sge & Amantayeva et al. (2021) & 1.433 & 0.042 \\ J0843+2751 & WZ Sge & Patterson et al. (1998) & 1.439 & 0.02 \\ _J0904+4402_ & WZ Sge: & This work & 1.674 & \\ J1035+0551 & WZ Sge: & Littlefair et al. (2006) & 1.368 & 0.06 \\ J1057+2759 & WZ Sge: & McAllister et al. (2017) & 1.507 & 0.044 \\ _J1212+0136_ & Polar & Farihi et al. (2008) & 1.474 & \\ J1216+0520 & WZ Sge: & Southworth et al. (2006) & 1.646 & \(<\)0.04 \\ J1255+2642 & WZ Sge: & Patterson et al. (2005a) & 1.992 & \(<\)0.05 \\ J1433+1011 & WZ Sge & Savoury et al. (2011) & 1.302 & 0.0571 \\ J1435+2336 & WZ Sge: & Pala et al. (2022) & 1.296 & \\ J1507+5230 & WZ Sge & Littlefair et al. (2007) & 1.11 & 0.056 \\ _J2131-0039_ & WZ Sge: & This work & 1.67 & \\ J2304+0625 & WZ Sge & Nakata et al. (2014) & 1.616 & q=0.053 \\ \hline \end{tabular}
\end{table}
Table 5: Period bouncers within the coverage of SDSS I to IV. Candidate systems are shown in italics. The mass ratios of two systems were determined using the method from Kato and Osaki (2013); the donor mass is estimated by assuming a maximum white dwarf mass of 1.4M\({}_{\sun}\) and is therefore definitely low enough to form a brown dwarf. J1212+0136 is an interesting system as it was initially assumed to be a detached system with a white dwarf and low mass companion. It was only recognised as an accreting system from the detection of cyclotron emission and an X-ray observation which also confirmed its type.
Figure 40: We identified eight CVs with substantial and erratic changes in apparent magnitude over periods of months or years – suggesting commensurate changes in accretion rate. The light curves shown are the merged \(r\)- and \(g\)-band data from ZTF, with the exception of J1239+2108 for which we show the CRTS data. The green lines show the trend using five point boxcar smoothing. The changes are atypical as the spectra of these systems are inconsistent with either a magnetic CV or nova-like variable classification.
emission lines; the donor is clearly visible. It is eclipsing and Hardy et al. (2017) found \(P_{\rm orb}=3.74\) h. The CRTS and ZTF light curves show variability but no outbursts.
We note that the periods of these systems cluster both near the lower edge of the period gap, and in the \(\simeq 4-6\) h range, indicating widely varying evolutionary stages. The lack of outbursts is indicative of a low \(\dot{M}\). A low \(\dot{M}\) is corroborated by the detection of the donor stars in the short-period systems, as non-magnetic CVs within this period range are typically SU UMa systems with accretion disc dominated spectra. The long-period systems coincide with the period range where dwarf novae are usually found (we encourage the reader to inspect section 7.2 of Knigge et al., 2011 for a discussion of the inconsistency between the CV sub-type distribution in the \(3-6\) h period range and the theoretical predictions of \(\dot{M}\) and the critical accretion rate for disc instabilities to (not) occur). We speculate that these eight systems may either contain weakly magnetic white dwarfs, with fields that are sufficiently low so that they do not develop the hallmarks of polars or intermediate polars, but strong enough to affect the structure of the disc. However, we also note that a number of intermediate polars exhibit dwarf nova outbursts - so the tentative suggestion of weak fields does not explain the absence of outbursts. An alternative idea is that these systems are undergoing some type of state change in their donor stars, as possibly suggested by their location near the lower edge of the period gap, and in the range where dwarf novae and nova-like overlap above the period gap.
### Tertiary systems
Raghavan et al. (2010) suggest that around eight per cent of close binaries are part of a triple system and have a distant companion. They are significant as the distant star can influence the evolution of the inner binary through Kozai cycles (Fabrycky and Tremaine, 2007). We searched _Gaia_ EDR3 for such systems by identifying candidate stars within 30 arcsec of each of the CVs in our sample. Potential companions can be expected to have similar proper motions. However, proper motions, and hence differences between them, will scale inversely with distance. We therefore adopted the following metric where the denominator is a proxy for distance:
\[\sqrt{\frac{\left(\rm pmra_{cv}-pmra_{cand}\right)^{2}+\left(\rm pmdec_{cv}-pmdec_{cand}\right)^{2}}{\rm pmra_{cv}^{2}+pmdec_{cv}^{2}}}<0.05 \tag{16}\]
resulting in the eight systems in Table 6.
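The metric of Eq. (16) is straightforward to apply to the _Gaia_ proper motions of each CV and its neighbours. A minimal sketch follows; the numbers in the usage line are invented purely for illustration.

```python
import numpy as np

def proper_motion_metric(pm_cv, pm_cand):
    """Eq. (16): proper-motion difference between a CV and a candidate companion,
    normalised by the CV's total proper motion (a proxy for 1/distance).
    pm_cv, pm_cand : (pmra, pmdec) pairs in mas/yr."""
    d_ra = pm_cv[0] - pm_cand[0]
    d_dec = pm_cv[1] - pm_cand[1]
    return np.hypot(d_ra, d_dec) / np.hypot(pm_cv[0], pm_cv[1])

# a candidate within 30 arcsec is kept if the metric is below 0.05
is_companion = proper_motion_metric((-35.2, -12.8), (-34.9, -12.5)) < 0.05
```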
The projected separations, \(a\) were calculated by multiplying the angular separation by the Bailer-Jones et al. (2018) distance; the errors on the Bailer-Jones et al. (2018) distances are too large to estimate the radial separation. Estimating the binding energy is a useful test of whether these systems are gravitationally bound to the CV or not:
\[-E_{\rm bind}=G\,\frac{(M_{\rm WD}+M_{\rm d})M_{3}}{a} \tag{17}\]
A rough estimate is sufficient for this purpose and so we assume typical values of \(M_{\rm WD}=0.83\) M\({}_{\odot}\) and a donor of mass \(M_{\rm d}=0.14\) M\({}_{\odot}\). We estimate \(M_{3}\) from the position of the star in the _Gaia_ HR diagram, and find that the companion in seven of the tertiaries is on the main sequence; the companion to J1206+5100 is a white dwarf. The calculated values shown in Table 6, when compared with fig. 6b of Burgasser et al. (2007), show clearly that these eight CVs are in tertiary systems. These eight are only 1.6 per cent of our sample and should be considered a lower limit on the fraction of CVs in triple systems. Our search is likely to be incomplete at larger distances where _Gaia_ may not resolve close tertiaries or not detect them because they are too faint; in particular tertiaries with very low-mass or cool white dwarf companions would be hard to identify.
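A rough evaluation of Eq. (17) with the assumed masses is sketched below; the separation and tertiary mass in the example are hypothetical.

```python
import astropy.units as u
from astropy.constants import G, M_sun

def binding_energy(a_au, m3_msun, m_wd=0.83, m_donor=0.14):
    """Eq. (17): |E_bind| = G (M_WD + M_d) M_3 / a, using the typical masses assumed in the text."""
    a = (a_au * u.au).to(u.m)
    e_bind = G * (m_wd + m_donor) * M_sun * (m3_msun * M_sun) / a
    return e_bind.to(u.erg)

# e.g. a 0.5 M_sun main-sequence companion at a projected separation of 2000 au
print(binding_energy(a_au=2000.0, m3_msun=0.5))   # of order a few 10^42 erg
```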
## 7 Summary
We have reviewed the spectra of CVs observed by SDSS I to IV and publish 178 for the first time along with 59 new periods. We discovered 70 completely new CVs and classified or updated 262. These 70 new CVs from SDSS III and IV augment the 282 previously identified in SDSS I and II. This total is remarkably consistent with the early prediction of 400 from Szkody et al. (2002). We have analysed the results and position the different CV sub-types in evolutionary terms together with revised estimates of space density. The period bouncers and tertiary systems within our sample have been identified and the abundances compared with previous estimates.
## Acknowledgements
We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is [http://www.sdss3.org/](http://www.sdss3.org/).
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
Data from the Catalina Real-Time Transient Survey has been used (Drake et al., 2009).
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
This work makes use of observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
Some of the results in this paper have been derived using the healpy and HEALPix package.
BTG was supported by grant ST/T000406/1 from the Science and Technology Facilities Council (STFC). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101020057). This research was supported in part by the National Science Foundation under Grant No. PHY-1748958.
## Data availability
All data used in this article are available from the sources referenced in the text.
|
2302.13974 | Collective Quantum Approach to Surface Plasmon Resonance Effect | In this research we present a theory of the surface plasmon resonance (SPR)
effect based on the dual length-scale driven damped collective quantum
oscillations of the spill-out electrons in plasmonic material surface. The
metallic electron excitations are modeled using the Hermitian effective
Schr\"{o}dinger-Poisson system, whereas, the spill-out electron excitations are
modeled via the damped non-Hermitian effective Schr\"{o}dinger-Poisson system
adapted appropriately at the metal-vacuum interface. It is shown that, when
driven by an external field, the system behaves like a driven damped oscillator
in wavenumber domain, quite analogous to the driven damped mechanical
oscillation in frequency domain, leading to the collective surface spill-out
electron excitation resonance. In this model the resonance occurs when the
wavenumber of the driving pseudoforce matches that of the surface plasmon
excitations which can be either due to single-electrons or collective effects.
Current theory of SPR is based on longitudinal electrostatic excitations of the
surface electrons, instead of the polariton excitation parallel to the
metal-dielectric or metal-vacuum surface. Current theory may also be extended
to use for the localized surface plasmon resonance (LSPR) in nanometer sized
metallic surfaces in non-planar geometry. A new equation of state (EoS) for the
plasmon electron number density in quantum plasmas is obtained which limits the
plasmonic effects in high-density low-temperature electron gas regime, due to
small transition probability of electrons to the plasmon energy band. | M. Akbari-Moghanjoughi | 2023-02-19T04:07:24Z | http://arxiv.org/abs/2302.13974v1 | # Collective Quantum Approach to Surface Plasmon Resonance Effect
###### Abstract
In this research we present a theory of the surface plasmon resonance (SPR) effect based on the dual length-scale driven damped collective quantum oscillations of the spill-out electrons at the plasmonic material surface. The metallic electron excitations are modeled using the Hermitian effective Schrodinger-Poisson system, whereas the spill-out electron excitations are modeled via the damped non-Hermitian effective Schrodinger-Poisson system adapted appropriately at the metal-vacuum interface. It is shown that, when driven by an external field, the system behaves like a driven damped oscillator in the wavenumber domain, quite analogous to the driven damped mechanical oscillation in the frequency domain, leading to the collective surface spill-out electron excitation resonance. In this model the resonance occurs when the wavenumber of the driving pseudoforce matches that of the surface plasmon excitations, which can be due to either single-electron or collective effects. The current theory of SPR is based on longitudinal electrostatic excitations of the surface electrons, instead of the polariton excitation parallel to the metal-dielectric or metal-vacuum surface. The current theory may also be extended to the localized surface plasmon resonance (LSPR) in nanometer-sized metallic surfaces in non-planar geometry. A new equation of state (EoS) for the plasmon electron number density in quantum plasmas is obtained which limits the plasmonic effects in the high-density low-temperature electron gas regime, due to the small transition probability of electrons to the plasmon energy band.
Introduction
Plasmonics is a rapidly developing fundamental subject of research which combines theory and applications spanning physics, chemistry, engineering, biology, medicine and many other active interdisciplinary fields [1; 2; 3]. Plasmonics has had a long tradition since the discovery of the surface plasmon polariton in the 1950s. It attracted significant attention after the discovery of the surface-enhanced Raman scattering (SERS) effect in the 1970s [4] and has continued to grow exponentially during the last decade, due to explosive technological developments in nanofabrication and the low-dimensional miniaturized semiconductor industry [5; 6]. Many new fields of research, such as nanoplasmonics, nanophotonics [7; 8] and nano-optics [9], are emerging around the subject because of the ever-growing applications of plasmonics in futuristic nanotechnology, with ultrahigh sensitivity and terahertz-range dielectric response of electrons to environmental changes. Surface plasmon resonance (SPR) is a versatile technology used in detecting particular substances called analytes. SPR biosensors are also used in label-free detection of various important biomarkers [10]. Despite huge experimental advancement [11; 12; 13; 14; 15; 16] and success in the field, the theoretical developments remain classical. The best investigation tools for consistent results in plasmonic phenomena are numerical simulation and computational algorithms, which are time-consuming due to many-body complexity on the one hand and prone to error on the other. Furthermore, the classical Drude model and numerical simulations do not provide fundamental insight into the underlying atomic-scale physical phenomena at the quantum level.
Plasmonic systems, on the other hand, provide a variety of interesting optical properties depending on many parameters, such as the shape, size, periodicity and the material combination. The extreme sensitivity of electron oscillations to material configurations provides a great chance for industry technicians to invent a vast variety of electronic devices by manipulation of the involved parameters. Plasmonic devices are also main components of biosensors, optical modulators, photovoltaic technology etc. There are, however, technological obstacles and insufficient theoretical developments which limit the harvesting capacity of the plasmonic effects. By overcoming these limitations, plasmonic devices can find their full potential in optical emitters [17], plasmon focusing [18], nanoscale waveguiding [19], optical antennas [20], communication devices [21], solar cells [22], plasmonic sensors [23], modulators [24], nanoscale switches [25] and spasers [26]. Moreover, collective electron excitations may become the best candidates for developing effective hot electron generation [27] for solar energy conversion towards improved photovoltaic
and catalytic designs via the collective energy transport phenomena. The new plasmonic energy conversion may require several important considerations regarding the plasmonic materials, integrated architecture and device fabrication methods [28]. Recent investigations reveal that guiding and intensification of light by employing the right plasmonic geometries leads to more effective solar-cell designs in which light is more efficiently absorbed in a single quantum well, or in a single layer of quantum dots or molecular chromophores [29].
Plasmonics is also an appealing theoretical field of research, since, fundamental electronic excitations control almost every aspect of a solid, such as conductivity, heat and electrical transport [30]. The electrons uniquely interact with the lattice periodicity ans electromagnetic field to produce complex energy band structure which determines the thermodynamic behavior of a solid [31]. However, understanding the fundamental plasmonic behavior of interacting electron gas requires the improvement of advanced quantum many body [32; 33] and quantum electrodynamic [34] theories. Although, the foundation of the theory has been laid by previous pioneering works [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51], the quantum mechanical treatment of electron system are usually based on various limiting approximations, such as, the single-electron energy dispersion and random-phase approximation. Due to complex nature of plasmonic phenomenon and the delicate interplay between the individual electrons and their collective excitation, some important aspects of the phenomenon are underestimated when the single-electron and collective behavior of the electron system are not treated in equal footing. For instance, there has been an intense recent debate [52; 53; 54; 55; 56; 57; 58; 59; 60] on the inconsistent prediction of attractive force between static screened ions in quantum plasmas between the linearized quantum hydrodynamic [61; 62; 63; 64; 65; 66; 67] and density functional theory (DFT) [68] treatments. While these theories can be reconciled when corrections are taken into account [69], the fundamental theory of plasmonics must go well beyond these correction and approximations. In an unpublished resent research the Lindhard theory is used with a more general dual length-scale energy dispersion which includes both single-electron as well as collective behavior [70]. It is seen that, the obtained results of such adjustment deviates substantially from prediction of both conventional quantum hydrodynamic and the Lindhard theory. The quantum hydrodynamic theory which is based on the Wigner-Poisson system has shown many success in predicting collective phenomena in dense electron environments [71; 72; 73; 74; 75; 76; 77; 78; 79; 80]. However, uses the independent single-electron Hamiltonian to generalize to many body effects and therefore ignores the energy band structure effect due to the electrostatic interactions. A more elaborative treatment in a recent research the Schrodinger-Poisson model [81] has been used to obtain the generalized energy dispersion relation from which collective phenomena is
studied on both the single-electron and collective scales at the same time. Some interesting new collective quantum effects have been predicted within the Schrodinger-Poisson model [82; 83; 84; 85; 86; 87; 88]. In this research we develop a new theory of SPR based on the Schrodinger-Poisson model of driven collective quantum excitations in a half-space. The paper is organized in the following manner. The linear quantum excitations are modeled in Sec. II. The model for spill-out electron excitations is provided in Sec. III. Driven collective excitations at the surface are considered in Sec. IV, and conclusions are drawn in Sec. V.
## II Linear collective quantum excitations
The elementary collective quantum excitations of the electron gas can be described using various models, such as kinetic, density functional, and hydrodynamic theories. Here we use the effective Schrodinger-Poisson system, within the framework of the quantum hydrodynamic model [89], in order to study plasmonic excitations in the vacuum-metal-vacuum configuration. The one-dimensional collective dynamic state of the electrostatically interacting metallic electrons follows the Hermitian coupled pseudoforce system [88]
\[i\hbar\frac{\partial N(\mathbf{r},t)}{\partial t}=-\frac{\hbar^{2 }}{2m}\Delta N(\mathbf{r},t)-e\phi(\mathbf{r})N(\mathbf{r},t)+\mu N(\mathbf{r },t), \tag{1a}\] \[\Delta\phi(\mathbf{r})=4\pi e\left[|N(\mathbf{r})|^{2}-n_{0} \right], \tag{1b}\]
in which \(\mathcal{N}(\mathbf{r},t)=\psi(\mathbf{r},t)\exp[iS(\mathbf{r},t)/\hbar]\) denotes the collective state function, in combination with the electrostatic potential \(\phi(\mathbf{r})\), characterizing the probability density of the electron system, which is related to the local electron number density via the definition \(n(\mathbf{r})=\psi(\mathbf{r})\psi^{*}(\mathbf{r})\). Here \(n_{0}\) denotes the neutralizing positive background charge density in the metallic slab region of length \(L\), and the electron fluid momentum is given by \(\mathbf{p}(\mathbf{r},t)=\nabla S(\mathbf{r},t)\). The electrostatic potential \(\phi(\mathbf{r})\) plays the essential role of coupling the Schrodinger equation to Poisson's equation.
In the slow electron gas compression limit, the chemical potential \(\mu\) is related to the electron number-density via the isothermal equation of state (EoS) [69]
\[n_{e}(\mu,T)=\frac{2^{1/2}m^{3/2}}{\pi^{2}\hbar^{3}}\int_{0}^{+ \infty}\frac{\sqrt{\varepsilon}d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1}, \tag{2a}\] \[P_{e}(\mu,T)=\frac{2^{3/2}m^{3/2}}{3\pi^{2}\hbar^{3}}\int_{0}^{+ \infty}\frac{\varepsilon^{3/2}d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1}. \tag{2b}\]
where \(\beta=1/k_{B}T\) with \(T\) being the equilibrium electron temperature and \(P_{e}\) the quantum statistical electron gas pressure. Note that the simple thermodynamic identity, \(n_{e}\nabla\mu=\nabla P_{e}(n_{e})\), holds.
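As an illustration of how the EoS (2) can be handled in practice, the short Python sketch below (an addition for clarity, not part of the original analysis; CGS-Gaussian units and SciPy are assumed) evaluates \(n_{e}\) and \(P_{e}\) by direct quadrature. Inverting \(n_{e}(\mu,T)=n_{0}\) numerically then yields the equilibrium chemical potential \(\mu_{0}\) used below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit   # expit(x) = 1/(1+exp(-x)); avoids overflow in the Fermi factor

# CGS-Gaussian constants (an assumption of this sketch)
hbar, m_e, k_B = 1.0546e-27, 9.1094e-28, 1.3807e-16

def electron_density(mu, T):
    """n_e(mu, T) of Eq. (2a) by direct numerical quadrature."""
    beta = 1.0 / (k_B * T)
    pref = np.sqrt(2.0) * m_e**1.5 / (np.pi**2 * hbar**3)
    val, _ = quad(lambda e: np.sqrt(e) * expit(-beta * (e - mu)), 0.0, np.inf)
    return pref * val

def electron_pressure(mu, T):
    """P_e(mu, T) of Eq. (2b)."""
    beta = 1.0 / (k_B * T)
    pref = 2.0**1.5 * m_e**1.5 / (3.0 * np.pi**2 * hbar**3)
    val, _ = quad(lambda e: e**1.5 * expit(-beta * (e - mu)), 0.0, np.inf)
    return pref * val
```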
For the one-dimensional model in the stationary equilibrium limit, \(p=0\), the wavefunction \(\psi(x,t)\) may be separated into space and time variables, leading to the following linearized system after appropriate normalization of the variables [84]
\[i\hbar\frac{d\psi_{1}(t)}{dt}=\varepsilon\psi_{1}(t), \tag{3a}\] \[\frac{d^{2}\Psi_{1}(x)}{dx^{2}}+\Phi_{1}(x)+E\Psi_{1}(x)=0,\] (3b) \[\frac{d^{2}\Phi_{1}(x)}{dx^{2}}-\Psi_{1}(x)=0, \tag{3c}\]
where \(\mu_{0}\) is the equilibrium chemical potential and \(\varepsilon\) is the collective quantum-state energy eigenvalue. Note that the linear expansion scheme, \(\{\psi^{0}=1,\phi^{0}=0,\mu^{0}=\mu_{0}\}\), is employed to normalize the functionals \(\Psi(\mathbf{r})=\psi(\mathbf{r})/n_{0}\) and \(\Phi(\mathbf{r})=e\phi(\mathbf{r})/E_{p}\) and the variables \(x=x/l_{p}\) and \(t=t\omega_{p}\), in which \(\omega_{p}=\sqrt{4\pi e^{2}n_{0}/m}\) is the conventional plasmon frequency, \(l_{p}=1/k_{p}\) is the characteristic plasmon length with \(k_{p}=\sqrt{2mE_{p}}/\hbar\) being the corresponding plasmon wavenumber, and \(E_{p}=\hbar\omega_{p}\) is the plasmon energy. The normalized energy is \(E=\epsilon-\sigma\), with \(\epsilon=\varepsilon/E_{p}\), and \(\sigma=\mu_{0}/E_{p}\) denotes the energy scale measured from the Fermi energy level, \(E_{F}=\hbar^{2}(3\pi^{2}n_{0})^{2/3}/2m\), of the degenerate electron gas. The Fourier analysis of the coupled pseudoforce system (3) admits the generalized matter wave energy dispersion, \(E=k^{2}+1/k^{2}\), with the first and second terms characterizing the single-electron and collective excitations, respectively. The presence of electrostatic coupling between electrons leads to the dual length-scale plasmonic excitations of the electron gas.
The real-valued solutions to the time-independent system (3) follow
\[\Phi_{1}(x)=\frac{1}{\alpha}\left[\left(c_{1}k_{21}^{2}+c_{3} \right)\cos(k_{11}x)-\left(c_{1}k_{11}^{2}+c_{3}\right)\cos(k_{21}x)\right] \tag{4a}\] \[+\frac{2}{\alpha}\left[\frac{(c_{2}k_{21}^{2}+c_{4})\sin(k_{11}x )}{k_{11}}-\frac{(c_{2}k_{11}^{2}+c_{4})\sin(k_{21}x)}{k_{21}}\right].\] (4b) \[\Psi_{1}(x)=1+\frac{1}{\alpha}\left[\left(c_{1}+c_{3}k_{21}^{2} \right)\cos(k_{21}x)-\left(c_{1}+c_{3}k_{11}^{2}\right)\cos(k_{11}x)\right]\] (4c) \[+\frac{2}{\alpha}\left[\frac{(c_{2}+c_{4}k_{21}^{2})\sin(k_{21}x )}{k_{21}}-\frac{(c_{2}+c_{4}k_{11}^{2})\sin(k_{11}x)}{k_{11}}\right], \tag{4d}\]
where \(c_{i}\) are the integration constants characterizing the boundary conditions, with \(\alpha=\sqrt{E^{2}-4}\), and \(k_{11}=\sqrt{(E-\alpha)/2}\) and \(k_{21}=\sqrt{(E+\alpha)/2}\) being the wavenumbers of the collective and single-electron excitations, which satisfy the complementarity-like relation \(k_{11}k_{21}=1\).
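The dispersion relation and the complementarity-like property can be checked numerically; the following minimal Python sketch (an illustrative addition, not from the original text) verifies both for a sample normalized energy \(E>2\).

```python
import numpy as np

def branch_wavenumbers(E):
    """Wave-like (k11) and particle-like (k21) wavenumbers for a normalized
    energy E > 2 on the dispersion E = k^2 + 1/k^2."""
    alpha = np.sqrt(E**2 - 4.0)
    return np.sqrt((E - alpha) / 2.0), np.sqrt((E + alpha) / 2.0)

k11, k21 = branch_wavenumbers(2.5)
assert np.isclose(k11 * k21, 1.0)                 # complementarity-like relation
assert np.isclose(k11**2 + 1.0 / k11**2, 2.5)     # wave-like branch lies on the dispersion
assert np.isclose(k21**2 + 1.0 / k21**2, 2.5)     # particle-like branch does as well
```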
Figure 1 shows the variations of various plasmon parameters in terms of the electron number density. Figure 1(a) depicts the variation of the normalized electron temperature, \(\theta=T/T_{p}\), where \(T_{p}=E_{p}/k_{B}\) is the characteristic plasmon temperature, with electron concentration for different values of the electron temperature, \(T\). It is noted that the plasmon temperature, \(T_{p}\), despite its
Figure 1: (a) The variation in normalized plasmon temperature, \(\theta\), with electron concentration, for different values of the electron temperature. (b) Variation of the plasmon length with the electron number density. (c) Variation of normalized chemical potential \(\sigma=\mu_{0}/E_{P}\) with free electron concentration, for different values of the normalized plasmon temperature. (d) Changes in the plasmon energy of the electron gas with electron concentration. The increase in thickness of curves in plots (a) and (c) refer to increases in the varied parameter above given frames.
deceptive name, has nothing to do with the real temperature and, analogous to the Fermi temperature, depends solely on the electron number density of the electron gas. It is remarked that \(\theta\) decreases sharply with increasing electron number density, being larger for larger temperature values. The variation of the plasmon length \(l_{p}\) with the electron concentration is shown in Fig. 1(b). The plasmon length is also seen to decrease with increasing electron density, with typical values of a few tenths of a nanometer in the metallic electron density range. Figure 1(c) depicts the variation of the normalized chemical potential with electron concentration for different values of \(\theta\). The normalized chemical potential, \(\sigma\), varies appreciably with the electron number density only for lower \(\theta\) values. Figure 1(d) shows the dependence of the plasmon energy on the electron number density. In the fully degenerate electron gas regime this parameter has typical values of a few electron volts and increases sharply as the electron number density is increased.
An important note is to be made here regarding the standard definitions of the plasmon parameters in the degenerate electron regime. Conventionally, the plasmon frequency, hence the plasmon energy, is defined in terms of the equilibrium number density of free electrons. However, in the low-temperature or, equivalently, high-electron-number-density regime, the majority of metallic electrons are packed below the Fermi level, which prevents them from contributing collectively. It will be shown that, in order to correctly describe the plasmonic phenomena in the metallic regime, these parameter definitions must be altered according to the plasmon number density, \(n_{p}\), based on the transition probability of electrons to the so-called plasmon energy band. According to the new energy band definition for collective excitations, one is able to compute the spill-out electron density at the surface of metals and nanoparticles, which is considered in the following section.
## III Spill-out electron excitations
The spill-out electrons in region 2 have recently been modeled through the following non-Hermitian pseudoforce system [85]
\[\frac{d^{2}\Psi_{2}(x)}{dx^{2}}+2\kappa\frac{d\Psi_{2}(x)}{dx}+ \Phi_{2}(x)+E\Psi_{2}(x)=0, \tag{5a}\] \[\frac{d^{2}\Phi_{2}(x)}{dx^{2}}+2\kappa\frac{d\Phi_{2}(x)}{dx}- \Psi_{2}(x)=0, \tag{5b}\]
in which \(\kappa\) is the normalized damping parameter characterizing the spatial decay of the collective state functions on the vacuum side of the metallic sandwich. Although this parameter may have a complex dependence on other parameters, it can be directly measured using a Langmuir probe. This damping parameter plays the analogous role of the tunneling parameter in the
single-electron quantum system. The only difference in this case is that the collective tunneling of electrons leading to the electron spill-out effect also has an oscillatory density behavior alongside the damped character. Different scenarios of the damping are considered in Ref. [84]. Recently it has been shown that electrostatic interactions among the spill-out electrons, in combination with the single-electron excitations, lead to a new photo-plasmonic phenomenon and to collective tunneling of the electrons on the irradiated metallic surface through the photo-plasmonic energy level [87].
The real-valued solutions to the coupled non-Hermitian system (5) follow
\[\Phi_{2}(x)=\frac{e^{-\kappa x}}{\alpha}\left[\left(d_{1}k_{21}^{2 }+d_{3}\right)\cos(k_{22}x)-\left(d_{1}k_{11}^{2}+d_{3}\right)\cos(k_{12}x)\right] \tag{6a}\] \[+\frac{e^{-\kappa x}}{\alpha k_{12}}\left[\left(\alpha-E\right) \left(d_{1}\kappa+d_{2}\right)-2\left(d_{3}\kappa+d_{4}\right)\right]\sin(k_{1 2}x)\] (6b) \[+\frac{e^{-\kappa x}}{\alpha k_{22}}\left[\left(\alpha+E\right) \left(d_{1}\kappa+d_{2}\right)+2\left(d_{3}\kappa+d_{4}\right)\right]\sin(k_{ 22}x).\] (6c) \[\Psi_{2}(x)=\frac{e^{-\kappa x}}{\alpha}\left[\left(d_{1}+d_{3}k _{21}^{2}\right)\cos(k_{22}x)-\left(d_{1}+d_{3}k_{11}^{2}\right)\cos(k_{12}x)\right]\] (6d) \[+\frac{2e^{-\kappa x}}{\alpha k_{22}}\left[\kappa d_{1}+d_{2}+k_ {21}^{2}\left(\kappa d_{3}+d_{4}\right)\right]\sin(k_{22}x)\] (6e) \[-\frac{2e^{-\kappa x}}{\alpha k_{12}}\left[\kappa d_{1}+d_{2}+k_ {11}^{2}\left(\kappa d_{3}+d_{4}\right)\right]\sin(k_{12}x), \tag{6f}\]
where \(d_{i}\) are integration constants set by the boundary conditions, and \(k_{12}=\sqrt{k_{11}^{2}-\kappa^{2}}\) and \(k_{22}=\sqrt{k_{21}^{2}-\kappa^{2}}\) are the characteristic generalized wavenumbers of the spill-out electron excitations, admitting the new energy dispersion relation \(E=(k^{2}+\kappa^{2})+1/(k^{2}+\kappa^{2})\), which includes the pseudo-damping effect. The complementarity-like relation in this case is \(k_{12}k_{22}=\sqrt{\kappa^{4}-E\kappa^{2}+1}\), which reduces to the undamped relation \(k_{11}k_{21}=1\) in the limit \(\kappa=0\).
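The damped wavenumbers and the modified complementarity-like relation can be verified in the same way; a brief illustrative sketch (our addition), assuming \(E>2\) and \(\kappa<k_{11}\) so that both damped wavenumbers are real, is:

```python
import numpy as np

E, kappa = 2.5, 0.3                      # sample values with E > 2 and kappa < k11
alpha = np.sqrt(E**2 - 4.0)
k11, k21 = np.sqrt((E - alpha) / 2.0), np.sqrt((E + alpha) / 2.0)
k12, k22 = np.sqrt(k11**2 - kappa**2), np.sqrt(k21**2 - kappa**2)

# modified complementarity-like relation of the damped (spill-out) excitations
assert np.isclose(k12 * k22, np.sqrt(kappa**4 - E * kappa**2 + 1.0))
# each damped branch still satisfies E = (k^2 + kappa^2) + 1/(k^2 + kappa^2)
assert np.isclose((k12**2 + kappa**2) + 1.0 / (k12**2 + kappa**2), E)
```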
The plasmonic gas has some distinct thermodynamic properties which can be obtained quite analogously to those of the free electron gas. The number of plasmon modes is obtained via \(N(k)=4\pi k^{3}/3\), resulting in the plasmon density of states (DoS), \(g(k)=(dN/dk)/|dE/dk|\). Given that the electron occupation of the plasmon modes is dictated by the Pauli exclusion principle, we have \(F(k,\kappa,\theta)=1/[1+\exp(E/\theta)]\) in normalized form. Therefore, the thermodynamic quantities such as the normalized plasmon electron number density \(n_{p}(\theta,\kappa)\), the internal energy of the plasmon electrons, \(U_{p}(\theta,\kappa)\), and the heat capacity contribution from plasmonic excitations, \(c_{p}(\theta,\kappa)\), are given
as
\[n_{p}(\theta,\kappa) = \int\limits_{0}^{\infty}\frac{g(k,\kappa)d_{k}E(k,\kappa)dk}{1+\exp[ E(k,\kappa)/\theta]},\ \ g(k,\kappa)=\frac{2\pi k(k^{2}+\kappa^{2})^{2}}{\left|\left(k^{2}+\kappa^{2} \right)^{2}-1\right|}, \tag{7a}\] \[U_{p}(\theta,\kappa) = \int\limits_{0}^{\infty}\frac{g(k,\kappa)E(k,\kappa)d_{k}E(k, \kappa)dk}{1+\exp[E(k,\kappa)/\theta]},\ \ E(k,\kappa)=(k^{2}+\kappa^{2})+1/(k^{2}+\kappa^{2}),\] (7b) \[c_{p}(\theta,\kappa) = \int\limits_{0}^{\infty}g(k,\kappa)E(k,\kappa)d_{\theta}F(k, \kappa,\theta)d_{k}E(k,\kappa)dk, \tag{7c}\]
As previously noted, there is a fundamental difference between the plasmon electron number density, \(n_{p}\), and that of the free electrons, \(n_{0}\), appearing in (7) and in the Fermi energy, respectively. The former quantity refers to the electrons excited to the plasmon level, whereas the latter is the total electron number density at the equilibrium temperature. In fully degenerate quantum plasmas only a fraction of the free electrons below the Fermi energy are excited to the plasmonic level, depending on the plasma density-temperature regime, and contribute to the collective oscillations.
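Since \(g(k,\kappa)\,|d_{k}E|\) reduces to \(dN/dk=4\pi k^{2}\) by the DoS definition above, the integrals in Eq. (7) can be evaluated by standard quadrature. The sketch below (an illustrative addition, not part of the original work; a finite wavenumber cutoff replaces the infinite upper limit) computes \(n_{p}\) and \(U_{p}\) in normalized units.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def E_disp(k, kappa):
    """Damped plasmon dispersion E(k, kappa) = (k^2 + kappa^2) + 1/(k^2 + kappa^2)."""
    s = k**2 + kappa**2
    return s + 1.0 / s

def occupation(k, kappa, theta):
    """Fermi-like occupation F = 1/[1 + exp(E/theta)] of the plasmon band."""
    return expit(-E_disp(k, kappa) / theta)

def plasmon_density(theta, kappa, k_max=30.0):
    """Normalized plasmon electron density n_p(theta, kappa) of Eq. (7a);
    g(k, kappa)|dE/dk| has been simplified to dN/dk = 4*pi*k^2."""
    val, _ = quad(lambda k: 4.0 * np.pi * k**2 * occupation(k, kappa, theta), 0.0, k_max)
    return val

def plasmon_internal_energy(theta, kappa, k_max=30.0):
    """Internal energy U_p(theta, kappa) of the plasmon electrons, Eq. (7b)."""
    val, _ = quad(lambda k: 4.0 * np.pi * k**2 * E_disp(k, kappa)
                  * occupation(k, kappa, theta), 0.0, k_max)
    return val
```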
In Fig. 2 we show the variations of the thermodynamic quantities corresponding to electrons which are excited to the plasmon energy band. The generalized plasmon energy dispersion for different values of the damping parameter, \(\kappa\), is depicted in Fig. 2(a). The dashed curve corresponds to the free electron energy dispersion. For the undamped case, \(\kappa=0\), the plasmon conduction band minimum takes place at \(k=1\) (in plasmon wavenumber units), with an energy band gap for \(0<E<2\) (in plasmon energy units) in which the plasmon excitations are unstable. For values of \(E(k,0)>2\) the collective excitations are of dual-tone nature, while for \(E\simeq 2\) the quantum beating behavior takes place, for which the collective and single-electron excitation scale-lengths match approximately. The dispersion curve consists of two distinct branches joining at \(k=1\), with the left/right branch corresponding to the wave-like/particle-like excitations. For damped plasmons, \(\kappa\neq 0\) (as is the case for spill-out electrons), the wave-like oscillations become unstable above a critical energy value, \(E_{c}=\kappa^{2}+1/\kappa^{2}\), and only single-electron excitations are allowed. However, the damping parameter does not significantly affect the particle-like oscillation branch. The plasmon electron DoS is shown in Fig. 2(b) for different values of the damping parameter. The dashed line corresponds to the free electron model. There is a Van-Hove-like singularity at the critical wavenumber value, \(k=\sqrt{1-\kappa^{2}}\), which moves to lower wavenumbers with increasing damping parameter. This feature is quite analogous to the well-known Van-Hove singularity in the DoS at the Fermi level of most solids [30; 31].
It is clearly remarked that in the small wavelength limit the plasmon electron DoS approaches the value of the free electron gas. The latter is because in this limit the single-electron excitations dominate the collective ones. Figure 2(c) depicts the plasmon occupation number, showing a peak which coincides with \(k=1\) for \(\kappa=0\), with the dashed curve corresponding to the free electron occupation function peaking at \(k=0\). The plasmon occupation number peak position, however, moves to lower \(k\)-values as the damping parameter is increased, and
Figure 2: (a) The plasmon energy dispersion profiles for different damping parameter values. (b) The plasmon density of states (DoS) for different values of the damping parameter, \(\kappa\). (c) The plasmon energy band occupation function for different values of the damping parameter. (d) The plasmon heat capacity contribution for different values of \(\kappa\). The dashed curves indicate the corresponding curves for the free electron model. The increase in thickness of curves in plots refer to increases in the varied parameter above given frames.
for the total damping \(\kappa=1\) the occupation number peaks at \(k=0\). Note that the long wavelength plasmon modes become more occupied as the damping parameter increases, and the occupation function approaches the single-electron dashed curve, as expected due to the domination of these excitations. The variation of the plasmonic heat capacity with the normalized temperature is shown in Fig. 2(d) for different values of the damping parameter. The corresponding single-electron heat capacity (dashed curve) shows an increased contribution at lower temperatures, because at low temperature the electron transitions to the plasmon band are greatly diminished. It is also remarked that the plasmon damping has a significant effect on the plasmon density only at higher electron temperatures.
## IV Surface plasmon resonance effect
In order to study the surface plasmon resonance effect, we consider a more generalized driven coupled pseudoforce system
\[\frac{d^{2}\Psi_{2}(x)}{dx^{2}}+2\kappa\frac{d\Psi_{2}(x)}{dx}+ \Phi_{2}(x)+E\Psi_{2}(x)=F\cos(qx), \tag{8a}\] \[\frac{d^{2}\Phi_{2}(x)}{dx^{2}}+2\kappa\frac{d\Phi_{2}(x)}{dx}- \Psi_{2}(x)=0, \tag{8b}\]
in which \(F\) and \(q\) are the normalized amplitude and wavenumber of the driving pseudoforce, respectively. The real-valued complementary solutions of the system (8) read
\[\Phi(x)=\Phi_{2}(x)-\frac{Fe^{-\kappa x}}{\alpha\eta_{1}\eta_{2}} \left[\eta_{1}\left(q^{2}-k_{21}^{2}\right)\cos(k_{22}x)-\eta_{2}\left(q^{2}- k_{11}^{2}\right)\cos(k_{12}x)\right] \tag{9a}\] \[+\frac{F\kappa e^{-\kappa x}}{\alpha\eta_{1}\eta_{2}}\left[\eta_ {1}\left(q^{2}+k_{21}^{2}\right)\frac{\sin(k_{22}x)}{k_{22}}-\eta_{2}\left(q^ {2}+k_{11}^{2}\right)\frac{\sin(k_{12}x)}{k_{12}}\right]\] (9b) \[+\frac{F}{\eta_{1}\eta_{2}}\left\{\left[\left(q^{2}-k_{11}^{2} \right)\left(q^{2}-k_{21}^{2}\right)-4q^{2}\kappa^{2}\right]\cos(qx)+2q\kappa \left(k_{11}^{2}+k_{21}^{2}-2q^{2}\right)\sin(qx)\right\},\] (9c) \[\Psi(x)=\Psi_{2}(x)+\frac{Fe^{-\kappa x}}{\alpha\eta_{1}\eta_{2} }\left[\eta_{1}k_{21}^{2}\left(q^{2}-k_{21}^{2}\right)\cos(k_{22}x)-\eta_{2}k_ {11}^{2}\left(q^{2}-k_{11}^{2}\right)\cos(k_{12}x)\right]\] (9d) \[-\frac{F\kappa e^{-\kappa x}}{\alpha\eta_{1}\eta_{2}}\left[\eta_ {1}k_{21}^{2}\left(q^{2}+k_{21}^{2}\right)\frac{\sin(k_{22}x)}{k_{22}}-\eta_{ 2}k_{11}^{2}\left(q^{2}+k_{11}^{2}\right)\frac{\sin(k_{12}x)}{k_{12}}\right]\] (9e) \[-\frac{F}{\eta_{1}\eta_{2}}\left\{\left[1-\left(k_{11}^{2}+k_{21} ^{2}-q^{2}\right)\left(q^{2}+4\kappa^{2}\right)\right]q^{2}\cos(qx)+2q\kappa \left(1-q^{4}-4\kappa^{2}q^{2}\right)\sin(qx)\right\}, \tag{9f}\]
where \(\eta_{1}=(q^{2}-k_{11}^{2})^{2}+4q^{2}\kappa^{2}\) and \(\eta_{2}=(q^{2}-k_{21}^{2})^{2}+4q^{2}\kappa^{2}\), with \(\Phi_{2}\) and \(\Psi_{2}\) being the solutions of the homogeneous system given in (5). The solutions in regions 1 (metallic) and 2 (vacuum) can be matched via appropriate boundary conditions. Assuming \(c_{1}=c_{2}=c_{4}=0\) and \(c_{3}=1\)
without loss of generality, we arrive at the following continuous solutions in the region 1
\[\Phi_{1}(x)=\frac{1}{\alpha}\left[\cos(k_{11}x)-\cos(k_{21}x)\right], \tag{10a}\] \[\Psi_{1}(x)=1+\frac{1}{\alpha}\left[k_{21}^{2}\cos(k_{21}x)-k_{11 }^{2}\cos(k_{11}x)\right]\!. \tag{10b}\]
with the normalized wavefunction defined as \(\Psi_{N1}(x)=\Psi_{1}(x)/\Psi_{N}\), where \(\Psi_{N}\) is the normalizing factor such that \(\int_{-L}^{0}\left[\Psi_{N1}^{2}(x)\right]dx=1\). The normalization factor then reads
\[\Psi_{N}=\left\{1+\frac{a_{1}^{2}+a_{2}^{2}}{2}+\frac{a_{1}^{2}}{ 8Lk_{11}}\sin(2k_{11}L)+\frac{a_{2}^{2}}{8Lk_{21}}\sin(2k_{21}L)\right. \tag{11a}\] \[\left.+\frac{a_{1}}{L}\left[\frac{1}{k_{11}}+\frac{a_{2}k_{11} \cos(k_{21}L)}{k_{11}^{2}-k_{21}^{2}}\right]\sin(k_{11}L)+\frac{a_{2}}{L} \left[\frac{1}{k_{21}}-\frac{a_{1}k_{21}\cos(k_{11}L)}{k_{11}^{2}-k_{21}^{2}} \right]\sin(k_{21}L)\right\}^{1/2}\!\!, \tag{11b}\]
where \(a_{1}=(\alpha-E)/2\alpha\) and \(a_{2}=(\alpha+E)/2\alpha\). The \(d_{i}\) coefficients of the wavefunctions in the spill-out region are given as
\[d_{1}=\frac{F\left\{\alpha\left(q^{4}+1\right)+k_{21}^{2}\eta_{1 }-k_{11}^{2}\eta_{2}-q^{2}\left[\eta_{1}-\eta_{2}+\alpha\left(k_{11}^{2}+k_{2 1}^{2}+4\kappa^{2}\right)\right]\right\}}{2\eta_{1}\eta_{2}\left(k_{11}^{2}-k_{ 21}^{2}\right)}, \tag{12a}\] \[d_{2}=\frac{F\kappa\left\{\alpha k^{4}\left(3k_{11}^{2}-3k_{21}^ {2}-\alpha\right)-\left(k_{11}^{2}-k_{21}^{2}+\alpha\right)\left[\left(\alpha+ k_{21}^{2}\eta_{1}\right)-k_{11}^{2}\eta_{2}\right]\right\}}{\alpha\eta_{1}\eta_{2} \left(k_{11}^{2}-k_{21}^{2}\right)}\] (12b) \[+\frac{F\kappa\left(k_{11}^{2}-k_{21}^{2}+\alpha\right)\left\{2 \alpha\kappa^{4}-k^{2}\left[\eta_{1}-\eta_{2}+\alpha\left(k_{11}^{2}+k_{21}^{ 2}\right)\right]\right\}}{\alpha\eta_{1}\eta_{2}\left(k_{11}^{2}-k_{21}^{2} \right)},\] (12c) \[d_{3}=-\frac{\left(k_{21}^{2}-k_{11}^{2}+\alpha\right)}{2\left( k_{11}^{2}-k_{21}^{2}\right)}\] (12d) \[+\frac{F\left\{k_{21}^{2}\eta_{1}\left(q^{2}-k_{21}^{2}\right)-k_ {11}^{2}\eta_{2}\left(q^{2}-k_{11}^{2}\right)-q^{2}\alpha\left[1-\left(k_{11}^ {2}+k_{21}^{2}-q^{2}\right)\left(q^{2}+4\kappa^{2}\right)\right]\right\}}{2 \eta_{1}\eta_{2}\left(k_{11}^{2}-k_{21}^{2}\right)},\] (12e) \[d_{4}=\frac{2\kappa\left[\left(k_{21}^{2}+\alpha\right)-k_{11}^{2 }\right]\eta_{1}\eta_{2}-F\kappa\left[k_{11}^{4}+\alpha q^{4}\left(k_{11}^{2}+ k_{21}^{2}-12\kappa^{2}\right)\right]}{\eta_{1}\eta_{2}\left(k_{11}^{2}-k_{21}^{2} \right)},\] (12f) \[+\frac{F\kappa\left\{k_{21}^{4}\eta_{1}+3\alpha q-q^{2}\left[ \alpha+3\left(k_{21}^{2}\eta_{1}-k_{11}^{2}\eta_{2}\right)+4\alpha\kappa^{2} \left(k_{11}^{2}+k_{21}^{2}\right)\right]\right\}}{\eta_{1}\eta_{2}\left(k_{11}^ {2}-k_{21}^{2}\right)}. \tag{12g}\]
Ignoring the transient part of the solutions in region 2, we arrive at the following simple steady state solutions far beyond the metallic interface
\[\Phi_{ss}(x)=-A\sin(qx+\gamma),\ \ \gamma=\arctan\left[\frac{\left(q^{2}-k_{11}^{2} \right)\left(q^{2}-k_{12}^{2}\right)-4q^{2}\kappa^{2}}{2\kappa q\left(k_{11}^ {2}+k_{12}^{2}-2q^{2}\right)}\right], \tag{13a}\] \[\Psi_{ss}(x)=A\sin(qx+\delta),\ \ \delta=\arctan\left[\frac{\left(k_{11}^{2}+k_{12}^{2}-q^{2} \right)\left(q^{2}+4\kappa^{2}\right)-1}{2(\kappa/q)\left(q^{4}+4\kappa^{2}q^{ 2}-1\right)}\right]. \tag{13b}\]
where \(A=F/\eta_{1}\eta_{2}\) is the amplitude of the steady state solutions.
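To make the dual pseudo-resonance structure explicit, the following illustrative Python sketch (our addition, not from the original text) evaluates the steady-state amplitude \(A=F/\eta_{1}\eta_{2}\) as a function of the driver wavenumber \(q\); its two maxima sit near \(q=k_{11}\) and \(q=k_{21}\).

```python
import numpy as np

def resonance_amplitude(q, E, kappa, F=1.0):
    """Steady-state amplitude A = F/(eta1*eta2) of Eq. (13) vs driver wavenumber q."""
    alpha = np.sqrt(E**2 - 4.0)
    k11sq, k21sq = (E - alpha) / 2.0, (E + alpha) / 2.0
    eta1 = (q**2 - k11sq)**2 + 4.0 * q**2 * kappa**2
    eta2 = (q**2 - k21sq)**2 + 4.0 * q**2 * kappa**2
    return F / (eta1 * eta2)

q = np.linspace(0.05, 2.0, 2000)
A = resonance_amplitude(q, E=2.5, kappa=0.05)
# the two peaks of A lie near q = k11 and q = k21 (the dual pseudo-resonance of Fig. 3)
```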
Figure 3 depicts the amplitude of the steady state solution as a function of the input wavenumber for different values of the normalized quasiparticle energy orbital, \(E\). It is clearly remarked that there is a dual resonance feature occurring as both the wave-like and the particle-like wavenumbers
match with that of the driving pseudoforce. However, the present plasmon resonance effect is caused by wavenumber matching, unlike conventional cases which take place due to frequency matching; hence, it is called the pseudo-resonance effect. Figure 3(a) shows that an increase in the quasiparticle energy leads to further separation of the resonance peaks, while
the amplitude of both peaks decreases. On the other hand, an increase in the damping parameter leads to a sharp decrease in the peak magnitudes, whereas their positions are left unaffected, as shown in Fig. 3(b). Figure 3(c) depicts the phase angle between the steady state solution and the driving pseudoforce as a function of the driving wavenumber. It is remarked that the angle changes sign at the resonant wavenumbers, which is a known characteristic feature of resonant coupled systems. The phase angle variation between the driving force and the electrostatic energy function is shown in Fig. 3(d). It is remarked that there is a phase lag between the wavefunction and the electrostatic energy for small driving wavenumbers, whereas for large wavenumbers no phase lag occurs.
Figure 4 shows the local wavefunction and electrostatic energy in the metallic and vacuum regions, connected by the boundary conditions, for given values of the energy orbital, \(E\), the damping parameter, \(\kappa\), and the pseudoforce magnitude, \(F\). The dual-tone nature of the oscillations is clearly apparent in the metallic region. In the absence of the driving pseudoforce, the wavefunction in the spill-out region is of damped oscillatory type, as shown in Fig. 4(a). However, as remarked from Fig. 4(b), in the presence of the pseudoforce the oscillations extend further into the vacuum. Such plane-wave oscillations beyond the metallic interface may be an indication of resonant collective electron tunneling or amplified electromagnetic radiation when the wavenumber of the quasiparticle energy orbital is matched; the corresponding driving pseudoforce can be either an electron beam or an electromagnetic wave directed onto the metallic surface at the resonant angle. Figure 4(c) shows the electrostatic energy distribution around the metal-vacuum interface. The oscillations in the metallic region are again dual-tone and are strongly damped on the vacuum side. Figure 4(d) shows a similar feature to Fig. 4(b) for the electrostatic potential energy in the presence of the driving pseudoforce.
The solutions to the pseudoforce system studied so far represent pure collective quantum states. The collective statistical quantities corresponding to an equilibrium state are, however, those of mixed quantum states, which can be obtained via the standard averaging of the given quantity over all possible quasiparticle energy orbitals using the appropriate occupation function and the DoS. For instance, the local electron distribution is given as
\[\langle n(x)\rangle=\frac{\int\limits_{2}^{\infty}\left|\Psi_{N}(x)\right|^{2 }g(E,\kappa)f(E,\theta)dE}{\int\limits_{2}^{\infty}g(E,\kappa)f(E,\theta)dE}, \tag{14}\]
where the density of states in the plasmonic energy band is given in terms of the energy eigenvalues, \(E\), as
\[g(E,\kappa)=\frac{\pi(E+\alpha)^{2}\sqrt{E-2\kappa^{2}+\alpha}}{\sqrt{2}\left[E \left(E+\alpha\right)-4\right]}. \tag{15}\]
Figure 4: (a) The wavefunction profile in metallic and vacuum region for given parameters in the absence of driving pseudoforce. (b) The wavefunction profile in metallic and vacuum region for given parameters in the presence of driving pseudoforce. (c) The electrostatic energy profile in metallic and vacuum region for given parameters in the absence of driving pseudoforce. (d) The electrostatic energy profile in metallic and vacuum region for given parameters in the presence of driving pseudoforce.
On the other hand, the electrostatic energy distribution is
\[\langle\Phi(x)\rangle=\frac{\int\limits_{2}^{\infty}\Phi(x)g(E,\kappa)f(E,\theta )dE}{\int\limits_{2}^{\infty}g(E,\kappa)f(E,\theta)dE}. \tag{16}\]
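The mixed-state averages of Eqs. (14) and (16) can be evaluated numerically; a minimal sketch (our addition, with a finite upper energy cutoff in place of the infinite limit) is given below. Any helper such as `Phi_of_x_and_E` is hypothetical and merely stands for the quantity being averaged.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def dos_E(E, kappa):
    """Plasmon-band density of states g(E, kappa) of Eq. (15)."""
    alpha = np.sqrt(E**2 - 4.0)
    return (np.pi * (E + alpha)**2 * np.sqrt(E - 2.0 * kappa**2 + alpha)
            / (np.sqrt(2.0) * (E * (E + alpha) - 4.0)))

def band_average(quantity, theta, kappa, E_max=60.0):
    """<Q> = int(Q g f dE) / int(g f dE) over the plasmon band 2 < E < E_max,
    as in Eqs. (14) and (16); the 1/sqrt(E-2) edge behaviour of g is integrable
    and handled adaptively by quad."""
    weight = lambda E: dos_E(E, kappa) * expit(-E / theta)
    num, _ = quad(lambda E: quantity(E) * weight(E), 2.0, E_max)
    den, _ = quad(weight, 2.0, E_max)
    return num / den

# e.g. <Phi(x)> of Eq. (16) at a fixed x would read
# band_average(lambda E: Phi_of_x_and_E(x, E), theta, kappa)
```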
Figure 5 shows the local equilibrium electron number density and electrostatic potential energy distribution in the metal-vacuum junction for given plasma parameters, in the absence and presence of the driving pseudoforce. Figure 5(a) reveals that in the metallic region the density distribution is strongly oscillatory, with the amplitude of oscillations increasing toward the metallic edge. Before the metal edge, in the metallic region, a strong electron depletion occurs, leading to a pronounced electric dipole; hence, an abrupt change in the dielectric properties is expected at the metal-vacuum interface. Beyond the metal interface, however, a sharp oscillatory decrease in the electron concentration occurs in the absence of the driving pseudoforce. Figure 5(b) shows the electron number density in the presence of the pseudoforce with given amplitude and driver wavenumber and otherwise similar parameters as in Fig. 5(a). It is remarked that the presence of the driving pseudoforce does not significantly affect the plasmon excitations in the metallic region. However, the electron density of the spill-out region extends far beyond the interface. Note that the amplitude of the resonant spill-out electron excitations depends on the damping parameter and the temperature, in addition to the driving force amplitude and wavenumber. Figure 5(c) depicts the spatial electrostatic energy distribution in the metallic and vacuum regions in the absence of the driving pseudoforce. Similar features as in Fig. 5(a) are present in Fig. 5(c), showing that the electrostatic potential energy drops sharply before the metallic edge due to the electron depletion. It is also remarked that the electrostatic energy penetrates into the vacuum even in the absence of a driving pseudoforce, but quickly fades further away from the interface. The spill-out of electrons has the same oscillatory behavior as static charge screening and can produce a Friedel-like feature at the metallic edge. The latter feature can have a fundamental effect on scanning probe microscopy (SPM) measurements and on quantum device operation, such as in the scanning tunneling microscope (STM).
Figures 6(a) and 6(b) show the schematic density plot of the electron density distribution in the absence and presence of the driving pseudoforce (which is assumed to be radiation). In Fig. 6(a) the electron depletion has led to the formation of a strong post-interface dipole. The surface plasmon resonance is shown in Fig. 6(b), where the driving pseudoforce has been applied. It is remarked that the surface charge oscillations are significantly manipulated by the presence of the pseudoforce. Figure 6(c) shows the schematic of the surface plasmon resonance
Figure 5: (a) The quantum statistically averaged electron number density distribution in metal-vacuum configuration for given normalized temperature and damping parameter, in the absence of external driving pseudoforce. (b) The quantum statistically averaged electron number density distribution in metal-vacuum configuration for given normalized temperature and damping parameter, in the presence of external driving pseudoforce. (c) The quantum statistically averaged electrostatic potential energy distribution in metal-vacuum configuration for given normalized temperature and damping parameter, in the absence of external driving pseudoforce. (d) The quantum statistically averaged electrostatic potential energy distribution in metal-vacuum configuration for given normalized temperature and damping parameter, in the presence of external driving pseudoforce.
in the vacuum-metal-vacuum sandwich. Note that the vacuum side may be replaced with a dielectric layer with dielectric permittivity ratio \(\epsilon_{r}\). In that case the plasmon frequency definition is replaced with \(\omega_{p}=\sqrt{4\pi e^{2}n_{0}/(\epsilon_{r}m_{e})}\). Each region in the plot is characterized by the two functions corresponding to that region. This is contrary to the conventional quantum theory, which uses
Figure 6: (a) The density plot of electron distribution around the metal-vacuum interface in the absence of driving pseudoforce showing the large pre-interface electric dipole. (b) The density plot of electron distribution around the metal-vacuum interface in the presence of driving pseudoforce for given values of normalized temperature, chemical potential, damping parameter and driving parameters. (c) The schematic plot of vacuum-metal-vacuum sandwich in metal slab configuration. (d) Dynamics of spill-out electrons in the damped plasmon energy band structure.
only the wavefunction to characterize the single-electron behavior in a given potential. The appropriately irradiated surface spill-out electrons give rise to resonant electron-photon interactions and to collective tunneling or an amplified reflected radiation intensity. Figure 6(d) shows the electron dynamics in the energy band structure of the plasmonic device. At an equilibrium temperature a finite number of electrons are excited to the plasmon energy band, in which collective oscillations among the excited electrons take place. The plasmon band plays the role of a stirring room for producing hot electrons, where the electrons behave collectively, analogously to the starling murmuration effect in nature. Note that at low electron concentrations or relatively high temperatures the great majority of electrons are excited to the plasmon band. However, in the metallic dense electron regime only a fraction of the electrons below the Fermi energy level take part in the collective phenomena. Therefore, the effective plasmon frequency of the quantum system, which is defined erroneously based on the Fermi electron density, must be reformulated according to the new equation of state (EoS) for plasmon electrons at a given equilibrium temperature.
Statistical averages of other quantities, such as the (pseudo)resonance amplitude, \(A_{s}=\left\langle A\right\rangle\), and the (pseudo)resonance power, \(P_{s}\), can be obtained accordingly. The pseudoresonance power may be written as
\[P=\frac{F^{2}q}{\eta_{1}\eta_{2}}\cos(qx)\cos(qx+\delta), \tag{17}\]
Averaging over the complete cycle one gets
\[\bar{P}=\frac{F^{2}q\cos\delta}{2\eta_{1}\eta_{2}}, \tag{18}\]
where \(\cos\delta\) is the pseudoresonance power coefficient given by
\[\cos\delta=\frac{q\left[\left(k_{11}^{2}+k_{21}^{2}-q^{2}\right)\left(q^{2}+4 \kappa^{2}\right)-1\right]}{\sqrt{q^{2}[\left(k_{11}^{2}+k_{21}^{2}-q^{2} \right)\left(q^{2}+4\kappa^{2}\right)-1]^{2}+4\kappa^{2}(q^{4}+4q^{2}\kappa^{ 2}-1)^{2}}}. \tag{19}\]
The quantum statistically averaged pseudoresonance power over all energy orbital leads to
\[\left\langle\bar{P}\right\rangle=\frac{\int\limits_{2}^{\infty}\bar{P}f(E, \theta)g(E,\kappa)dE}{\int\limits_{2}^{\infty}f(E,\theta)g(E,\kappa)dE}. \tag{20}\]
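An illustrative numerical sketch of Eqs. (18) and (19) (our addition; the band average of Eq. (20) can then be taken with a helper like the `band_average` sketched after Eq. (16)) is:

```python
import numpy as np

def power_coefficient(q, E, kappa):
    """Pseudoresonance power coefficient cos(delta) of Eq. (19)."""
    alpha = np.sqrt(E**2 - 4.0)
    k11sq, k21sq = (E - alpha) / 2.0, (E + alpha) / 2.0
    a = q * ((k11sq + k21sq - q**2) * (q**2 + 4.0 * kappa**2) - 1.0)
    b = 2.0 * kappa * (q**4 + 4.0 * q**2 * kappa**2 - 1.0)
    return a / np.sqrt(a**2 + b**2)

def mean_power(q, E, kappa, F=1.0):
    """Cycle-averaged pseudoresonance power of Eq. (18)."""
    alpha = np.sqrt(E**2 - 4.0)
    k11sq, k21sq = (E - alpha) / 2.0, (E + alpha) / 2.0
    eta1 = (q**2 - k11sq)**2 + 4.0 * q**2 * kappa**2
    eta2 = (q**2 - k21sq)**2 + 4.0 * q**2 * kappa**2
    return F**2 * q * power_coefficient(q, E, kappa) / (2.0 * eta1 * eta2)
```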
Figure 7 shows the variation of the quantum statistically averaged pseudoresonance parameters of the driven spill-out electrons. The variation of the steady-state damped driven solution amplitude is depicted in Fig. 7(a) for different values of the damping parameter and fixed other
parameters. It is remarked that the amplitude is maximized close to the plasmon wavenumber for small damping. The amplitude decreases sharply and its maximum shifts toward smaller driving wavenumbers as the damping increases. The temperature dependence of the pseudoresonance amplitude is shown in Fig. 7(b). It is remarked that, close to the plasmon wavenumber
Figure 7: (a) The variation of the pseudoresonance amplitude with the driver wavenumber for different values of the damping parameter. (b) Temperature dependence of the pseudoresonance amplitude for different values of the driver wavenumber. (c) The power coefficient of the pseudoresonance for various damping levels. (d) The power loss/gain in the pseudoresonance in terms of the driver wavenumber for different damping values. The increase in thickness of curves in plots refers to increases in the varied parameter above given frames.
driving, the resonance amplitude is maximized at small temperatures. However, the latter feature appears insignificant for wavenumbers further away from the critical wavenumber. The variation of the power coefficient, \(\cos\delta\), is shown with respect to the driving wavenumber for different damping parameter values in Fig. 7(c). This quantity changes sign at the critical driving wavenumber \(q=1\), whereas it is maximal at small \(q\) and at approximately \(q=1.5\). It is also remarked that this quantity is not significantly affected by the change in the damping parameter. In Fig. 7(d) the variation of the pseudoresonance power is depicted for different damping parameter values. This quantity changes sign close to the critical wavenumber, indicating a power loss/gain below/above the corresponding critical point. It is remarked that the magnitude of the power loss/gain varies strongly with the damping parameter.
The electron density profiles for different wavenumber regimes are compared in Fig. 8. It is remarked that the electron density profile close to the driven metallic interface strongly depends on the driver wavenumber regime. For a small wavenumber, \(q=0.2\), in Fig. 8(a), the resonance amplitude is very small, with the transient state extending to moderate distances beyond the interface. For \(q=0.8\) the same feature repeats with an increased resonance amplitude. Close to the critical wavenumber the transient surface density has increased significantly. Finally, for the overcritical wavenumber value \(q=1.2\) the electron accumulation in front of the metallic surface is further increased. It seems that the sign of the pseudoresonance power has a close relation to the momentum transfer to the surface electrons.
In Fig. 9 we show some important quantities regarding the DoS of the plasmon energy band, the EoS of the plasmon electrons, and the real plasmon wavelength defined in terms of the number density of the electrons excited to the plasmon band. As already mentioned, the plasmon scale parameter \(l_{p}\) depicted in Fig. 1 is defined through the total electron concentration in the metal. However, the correct definition must be via the EoS of the excited electrons. The energy DoS of the plasmon band, as given in Eq. (15), is plotted in Fig. 9(a). It is seen that the DoS diverges at the minimum plasmon conduction energy, \(E=2\), and has a minimum which shifts toward this critical value as the damping parameter increases. However, the increase in the damping parameter leads to an overall decrease of the plasmon DoS. Note that for large energies the DoS approaches the free electron value \(g_{0}(E_{0})=2\pi\sqrt{E_{0}}\), where \(E_{0}=k^{2}\) is the normalized free electron energy dispersion. The number density of plasmon electrons is shown in Fig. 9(b) in terms of the total electron density, for different values of the electron temperature. The dashed line in (b) indicates the plasmon saturation limit \(n_{p}=n_{0}\). The plasmon electron density is obtained using the following
conventional definition
\[n_{p}(\sigma,\theta,\kappa)=n_{0}(\sigma,\theta)\frac{\int_{2}^{\infty}g(E,\kappa )f(E,\theta)dE}{\int_{0}^{\infty}g_{0}(E_{0})f(E_{0},\theta)dE_{0}}, \tag{21}\]
where \(n_{0}=n_{c}\theta^{6}\text{Li}_{3/2}[-\text{exp}(\sigma/\theta)]^{4}\) is the total electron density given by (2), with \(n_{c}=16e^{6}m^{3}/\pi^{3}\hbar^{6}\) being the characteristic electron number density, Li being the polylogarithm function, and \(f(E,\theta)=1/[1+\text{exp}(E/\theta)]\) being the plasmon electron occupation function. It is remarked that, for a given electron temperature, up to a lower critical value of \(n_{0}\) we have \(n_{p}=n_{0}\), that is,
Figure 8: The electron number density distribution of driven surface plasmon resonance for different values the driving wavenumber and similar other parameters. (a) Small under-critical regime \(q=0.2\). (b) Under-critical regime \(q=0.8\). (c) Near-critical wavenumber regime \(q=1\). (d) Over-critical wavenumber regime \(q=1.2\).
all the electrons in the gas contribute to the collective phenomena, whereas with an increase of \(n_{0}\) beyond this critical value \(n_{p}\) sharply falls off and vanishes at another, upper critical value of \(n_{0}\). Hence, for a given temperature there is a cutoff plasmon electron density, and the plasmon electron density attains its maximum
Figure 9: (a) The plasmon energy density of states for different values of the damping parameter. (b) The plasmon electron number density versus the total electron number density of the electron gas for different values of the electron temperature. (c) Variation of the plasmon wavelength \(\lambda_{p}=2\pi/k_{p}\) with respect to temperature for different values of the total electron number density. (d) Variation of the plasmon wavelength when different dielectrics replace the vacuum. The dashed line in (b) indicates the plasmon saturation limit \(n_{p}=n_{0}\). The increase in thickness of curves in plots refers to increases in the varied parameter above given frames.
value before this cutoff value. The latter phenomenon is an important feature of quantum plasmon excitations which is usually ignored in plasmonic research. The cutoff electron density increases with increasing electron gas temperature. The variation of the plasmon wavelength \(\lambda_{p}=2\pi/k_{p}\) (in nanometer units) is depicted in Fig. 9(c) with respect to the total electron temperature (in units of 100 K) for different total electron number density values. It is shown that for larger electron concentrations the plasmon wavelength increases significantly at lower temperatures and sharply decreases at higher temperatures. The variation of \(\lambda_{p}\) with temperature is shown in Fig. 9(d) for the case where the vacuum side is replaced by a given semiconductor. The dielectric values are \(\epsilon_{r}(AlN)\simeq 4.53\), \(\epsilon_{r}(ZnSe)\simeq 5.4\), \(\epsilon_{r}(SiC)\simeq 6.7\) and \(\epsilon_{r}(GaP)\simeq 9.09\), respectively, in the plot from thin to thick curves.
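The plasmon fraction implied by Eq. (21) can likewise be evaluated numerically; the sketch below (our addition, restating the DoS of Eq. (15) for self-containedness and using a finite upper energy cutoff) returns the ratio \(n_{p}/n_{0}\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def dos_E(E, kappa):
    """Plasmon-band DoS g(E, kappa) of Eq. (15)."""
    alpha = np.sqrt(E**2 - 4.0)
    return (np.pi * (E + alpha)**2 * np.sqrt(E - 2.0 * kappa**2 + alpha)
            / (np.sqrt(2.0) * (E * (E + alpha) - 4.0)))

def plasmon_fraction(theta, kappa, E_max=80.0):
    """Ratio n_p/n_0 of Eq. (21): fraction of electrons excited to the plasmon band."""
    f = lambda E: expit(-E / theta)                                       # occupation f(E, theta)
    num, _ = quad(lambda E: dos_E(E, kappa) * f(E), 2.0, E_max)           # plasmon band
    den, _ = quad(lambda E: 2.0 * np.pi * np.sqrt(E) * f(E), 0.0, E_max)  # free-electron g0
    return num / den   # the text additionally caps n_p at n_0 (saturation limit in Fig. 9b)
```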
In Fig. 10 we show the variations of the surface plasmon resonance wavelength in nanometers in terms of various electron gas parameters. Figure 10(a) shows that the resonance wavelength increases sharply with the total electron number density, falling in the visible range for electron densities \(10^{19}\)cm\({}^{-3}\)\(<n_{0}<10^{20}\)cm\({}^{-3}\) at room temperature and \(\kappa=0.2\). Figure 10(b) reveals that \(\lambda_{res}\) decreases sharply with increasing electron temperature for a given electron concentration. In Fig. 10(c) the variation of the resonance wavelength is studied in terms of the damping parameter for given values of the electron density and temperature. It is remarked that an increase in this parameter leads to an increase in the resonance wavelength. Finally, Fig. 10(d) reveals that an increase of the dielectric permittivity of the substance replacing the vacuum leads to an increase in the resonance wavelength for given values of the other parameters. Note that in the current model of the surface plasmon resonance we have assumed that the spill-out electrons have the same normalized temperature as the metal electrons. This is far from exact, because the plasmon temperature depends on the plasmon electron density, which itself is calculated from the normalized temperature. Therefore, for a correct theoretical modeling, the plasmon spill-out electron density can be measured directly using a Langmuir probe, and then a better estimate for the plasmon energy of the spill-out region can be obtained.
Finally, in Fig. 11 we show the variation of the characteristic (rather than conventional) energy and length scales with the total number density of the electron gas, calculated based on the electrons excited to the plasmon energy band for a given electron temperature. The dashed curves correspond to the free electron dispersion relation. It is remarked from Fig. 11(a) that the plasmon energy has a maximum value depending on the electron temperature, and the maximum plasmon energy shifts to higher electron densities as the temperature increases. The overall plasmon energy value also increases with increasing electron temperature. It is noted that
the plasmon energy of the free electron gas (dashed curve) increases monotonically. Figure 11(b) reveals that the plasmon wavelength, which is a characteristic scale of the surface plasmon resonance effect, has a minimum value shifting to higher electron densities as the temperature increases. This minimum also decreases substantially with increasing electron temperature. The plasmon length of the free electron gas (dashed curve) decreases monotonically. Figure 11(c) shows the plasmon electron pressure in terms of the electron temperature for different total electron
Figure 11: (a) The variation of plasmon energy with respect to the total number density of electron gas as calculated based on the excited electron number density. (b) The variation of plasmon wavelength (resonance scale-length) with respect to the total number density of electron gas as calculated based on the excited electron number density. (c) The plasmon pressure in terms of electron temperature for different total electron number density. (d) The plasmon pressure in terms of total electron number density for different electron temperature. The dashed curves correspond to free electron energy dispersion. The increase in thickness of curves in plots refer to increases in the varied parameter above given frames.
concentrations. The plasmon pressure of the free electron gas (dashed curve) increases monotonically with increasing electron temperature. It can be shown that for large values of the temperature the plasmon pressure obeys a simple relation of the form \(P_{p}=T+10^{\tau}\), \(\tau\simeq 1.1122\). It is also remarked that an increase in the total electron density lowers the plasmon pressure, and that the free electron pressure corresponding to the largest value (dashed curve) is even lower. Figure 11(d) depicts the plasmon pressure as a function of the total electron number density for different electron temperatures. It is shown that the plasmon pressure of an arbitrary degenerate electron gas is higher for lower total electron number densities and higher temperatures. The pressure of the free electron gas (dashed curve) is slightly lower, corresponding to \(T=800\)K.
## V Conclusion
In this research we developed a theory of the surface plasmon resonance effect based on the effective Schrodinger-Poisson system. The Hermitian and non-Hermitian models were used to model, respectively, the metallic and spill-out electron excitations, which are shown to be of dual-scale type. The quantum statistically averaged resonance parameters were obtained and discussed in terms of various plasmon parameters. In particular, it was shown that the conventional definitions of the plasmon parameters should be corrected for a dense electron gas based on the electron transition probability into the plasmon energy band. It was shown that the resonance wavelength falls into the visible and infrared regions for typical lower degenerate electron number densities at room temperature. The current theory can further elucidate the underlying physical mechanism involved in collective quantum resonance and unveils the dominant effect of electrostatic interactions in collective phenomena.
## VI Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. |
2307.07107 | Graph Positional and Structural Encoder | Positional and structural encodings (PSE) enable better identifiability of
nodes within a graph, rendering them essential tools for empowering modern
GNNs, and in particular graph Transformers. However, designing PSEs that work
optimally for all graph prediction tasks is a challenging and unsolved problem.
Here, we present the Graph Positional and Structural Encoder (GPSE), the
first-ever graph encoder designed to capture rich PSE representations for
augmenting any GNN. GPSE learns an efficient common latent representation for
multiple PSEs, and is highly transferable: The encoder trained on a particular
graph dataset can be used effectively on datasets drawn from markedly different
distributions and modalities. We show that across a wide range of benchmarks,
GPSE-enhanced models can significantly outperform those that employ explicitly
computed PSEs, and at least match their performance in others. Our results pave
the way for the development of foundational pre-trained graph encoders for
extracting positional and structural information, and highlight their potential
as a more powerful and efficient alternative to explicitly computed PSEs and
existing self-supervised pre-training approaches. Our framework and pre-trained
models are publicly available at https://github.com/G-Taxonomy-Workgroup/GPSE.
For convenience, GPSE has also been integrated into the PyG library to
facilitate downstream applications. | Semih Cantürk, Renming Liu, Olivier Lapointe-Gagné, Vincent Létourneau, Guy Wolf, Dominique Beaini, Ladislav Rampášek | 2023-07-14T01:04:18Z | http://arxiv.org/abs/2307.07107v2 | # Graph Positional and Structural Encoder
###### Abstract
Positional and structural encodings (PSE) enable better identifiability of nodes within a graph, as in general graphs lack a canonical node ordering. This renders PSEs essential tools for empowering modern GNNs, and in particular graph Transformers. However, designing PSEs that work optimally for a variety of graph prediction tasks is a challenging and unsolved problem. Here, we present the graph positional and structural encoder (GPSE4), a first-ever attempt to train a graph encoder that captures rich PSE representations for augmenting any GNN. GPSE can effectively learn a common latent representation for multiple PSEs, and is highly transferable. The encoder trained on a particular graph dataset can be used effectively on datasets drawn from significantly different distributions and even modalities. We show that across a wide range of benchmarks, GPSE-enhanced models can significantly improve the performance in certain tasks, while performing on par with those that employ explicitly computed PSEs in other cases. Our results pave the way for the development of large pre-trained models for extracting graph positional and structural information and highlight their potential as a viable alternative to explicitly computed PSEs as well as to existing self-supervised pre-training approaches.
## 1 Introduction
Graph neural networks (GNN) are the dominant paradigm in graph representation learning [29; 11], spanning diverse applications across many domains in biomedicine [77], molecular chemistry [73], and more [16; 31; 33; 45]. For most of its relatively short history, GNN algorithms were developed within the message-passing neural network (MPNN) framework [23], where vertices exchange internal states within their neighborhoods defined by the graph structure, which are typically sparse. Despite being computationally efficient, the sparsity leveraged by MPNN has raised many fundamental limits, such as the 1-WL bounded expressiveness [75], under-reaching [5], and over-squashing [3; 66]. More recently, leveraging the success of the Transformer model in natural language processing [67], graph Transformer (GT) models were developed as a new paradigm for GNN to address the above limitations [15]. Attending to all pairs of nodes in a graph circumvents the aforementioned sparsity-induced limitations of MPNN, but it also discards all inductive biases relating to the graph struc
ture [6], which MPNNs leverage well. Thus, reintroducing such inductive bias via positional and structural encodings (PSE) has been one of the most essential steps that led to the early success of GT [54; 82; 47; 40; 15; 78].
While many different types of PSEs have been hand-crafted and used by various GT models, a consensus on the best PSE to use in a general context has yet to be reached. In fact, the optimal choice of PSE often depends on the specific task at hand. For example, random walk encodings are typically effective for small molecular tasks [54; 17], while graph Laplacian eigenvectors might be more suitable for tasks that involve long-range dependencies [18; 16]. Therefore, developing a systematic approach to learn a unified encoding that integrates various PSEs remains an open challenge.
Encoding information that is transferable between certain graph datasets and tasks is typically achieved via self-supervised learning (SSL) methods. Pre-training GNNs via SSL has proven effective in learning graph features in data-abundant settings, which can then be transferred to downstream tasks [32; 70; 74]. However, SSL on graphs is plagued by a critical drawback: SSL that performs well on one downstream task may not help or even lead to negative transfer on another, and the success of graph SSL methods typically hinges on whether the data for the pre-training and downstream tasks are well-aligned [64; 32; 70].
**Contributions** In this work, we approach the above-mentioned problems by introducing the graph positional and structural encoder (\(\mathsf{GPSE}\)). We summarize our main contributions as follows.
1. We propose \(\mathsf{GPSE}\), the first attempt at training a graph encoder that extracts rich _positional and structural representations_ from graph structures, which can be applied to any MPNN or GT model as a replacement for explicitly constructed PSEs.
2. We show the superior performance of \(\mathsf{GPSE}\) over traditionally used PSEs across a variety of benchmarks, even achieving SOTA performance on ZINC and Peptides-struct datasets.
3. Extensive experiments demonstrate that \(\mathsf{GPSE}\) is highly _transferable_ across graphs of different sizes, connectivity patterns, and modalities.
### Related work
Positional and structural encodings (PSE)_Positional encoding_ was originally implemented as a series of sinusoidal functions in the Transformer model to capture the ordinal position of words in a sentence [67]. However, capturing the position of nodes in a graph is harder, as in general, such a canonical ordering does not exist in graphs. Many recent works on graph Transformers (GT) use the graph Laplacian eigenvectors as the positional encodings [15; 54; 40; 30], which are direct analogues to the sinusoids in Euclidean space [61]. Other methods for encoding positional information include electrostatic potential encodings [40], shortest-path distances [78], directed sub-graph SVD [60], tree-based encodings [58], etc. _Structural encodings_, on the other hand, have been developed particularly on graph-structured data to encode rich local and global connectivity patterns. The random walk encoding [42; 17; 54; 12] is among the most remarkable and successful structural encoding, which has been proven to be powerful enough to distinguish \(r\)-regular graphs with high probabilities [42]. It has also shown great performance when used with GT models, particularly on small molecular graph benchmarks [54; 15; 17]. Other notable structural encodings include the heat kernel [40; 47], subgraph counting [8; 83], node degree centralities [78], etc. The usage of PSEs has also demonstrated success in MPNN, besides GT, as additional node features that are combined with the original graph features before feeding into the models [16; 44; 17; 71; 17]. Despite this great amount of work in developing different PSEs and ways to extract invariants from them further [44; 13], it is still unclear how to systematically encode information from multiple types of PSEs to effectively augment GNNs.
Self-supervised learning (SSL)The general goal of SSL is learning to extract useful graph representations from large amounts of unlabeled, but typically featured, graphs [74; 46]. One popular approach to achieve this is _contrastive learning_, which aims to embed corrupted or augmented views of the same graph into more similar representations than those from other graphs. Most contrastive graph pre-training methods generally differ in (i) what type of graphs and associated domain-specific information they use, such as 3D and geometric molecular structure [63; 43; 25; 81; 2], (ii) how they generate contrastive views [32; 80; 76; 85], and (iii) what contrastive loss they use [69; 80; 36]. _Predictive learning_, on the other hand, aims to recover graph properties that are (a) intrinsic, such as
the adjacency matrix [38; 27], or (b) extrinsic, such as molecular motif predictions [55]. However, the above-mentioned works have limited transferability as they all rely on domain-specific graph features or self-supervision. A fully generalizable and transferable SSL graph encoding method that can be trained on _unfeatured_ and _unlabeled_ graph datasets is currently lacking.
## 2 Methods
Our core idea is to train an MPNN as the graph encoder to extract rich positional and structural representations of any query graph based _solely_ on its graph structure (Figure 1A). To achieve this, we design a collection of PSEs encompassing a broad range of encodings and use them as self-supervision to train the encoder (Figure 1B). Once the encoder is trained on graphs from a particular dataset, it can extract PSE representations for any downstream dataset as the encodings to augment any downstream model (Figure 1C).
For downstream tasks, we primarily build on top of a powerful graph Transformer framework, GPS [54], that leverages the advantages of both the inductive bias of the local message passing [6] and the expressiveness of the global attention mechanisms [67]. We also demonstrate GPSE's usefulness for more general MPNN in our experiments.
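Since GPS serves as the main downstream backbone throughout, a minimal sketch of the hybrid-layer idea may help fix intuition: a local message-passing branch restricted to the adjacency, plus a global self-attention branch over all node pairs, combined with residual connections. This is a simplified stand-in written in plain PyTorch, not the actual GPS implementation; the linear local update, the layer width, and the normalization placement are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class HybridLayer(nn.Module):
    """Local message passing on the adjacency plus global self-attention over all nodes."""
    def __init__(self, d, n_heads=4):
        super().__init__()
        self.local = nn.Linear(2 * d, d)                  # stand-in for an MPNN update
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, h, adj):
        # local branch: mean-aggregate neighbors given by the (dense) adjacency
        agg = adj @ h / adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h_local = self.local(torch.cat([h, agg], dim=-1))
        # global branch: all-pairs attention, independent of the graph structure
        h_global, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return self.norm(h + h_local + h_global.squeeze(0))
```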
### Self-supervision via positional and structural encodings (PSE)
We design a diverse collection of six PSEs for GPSE to learn against, including the Laplacian eigenvectors (4) and eigenvalues (4), the electrostatic positional encodings (7), the random walk structural encodings (20), the heat kernel structural encodings (20), and the cycle counting graph encodings (7). In short, positional encodings capture the relative position of each node in the graph, while structural encodings describe the local connectivity patterns around a node (Figure 1). See Appendix A for their precise mathematical definitions.
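To make these targets concrete, the sketch below computes two of them for a small graph using only `numpy` and `networkx`: a random-walk structural encoding, taken here as the diagonal return probabilities of the first few random-walk steps, and a Laplacian-eigenvector positional encoding from the symmetric normalized Laplacian. The dimension arguments and normalizations are illustrative assumptions; the definitions actually used as GPSE's targets are those in Appendix A.

```python
import numpy as np
import networkx as nx

def random_walk_se(G, k=20):
    """k-step return probabilities: diagonal of RW^t for t = 1..k, one vector per node."""
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1, keepdims=True)
    rw = A / np.clip(deg, 1, None)                     # row-stochastic random-walk matrix
    p_t, feats = np.eye(len(G)), []
    for _ in range(k):
        p_t = p_t @ rw
        feats.append(np.diag(p_t))
    return np.stack(feats, axis=1)                     # shape (num_nodes, k)

def lap_eig_pe(G, k=4):
    """First k non-trivial eigenvectors of the symmetric normalized Laplacian."""
    A = nx.to_numpy_array(G)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(A.sum(axis=1), 1, None)))
    L = np.eye(len(G)) - d_inv_sqrt @ A @ d_inv_sqrt
    _, evecs = np.linalg.eigh(L)
    return evecs[:, 1:k + 1]                           # each eigenvector's sign is arbitrary

G = nx.barbell_graph(4, 2)        # small toy graph with 10 nodes
print(random_walk_se(G).shape)    # (10, 20) structural targets
print(lap_eig_pe(G).shape)        # (10, 4) positional targets
```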
### GPSE architecture
It is a commonly held view that major obstructions to learning for MPNNs are over-smoothing and over-squashing. Our GPSE is no exception to these challenges. In this section, we present different architectural choices and their relationship with these two phenomena, as well as a theoretical justification behind these choices. Please refer to Appendix C for technical definitions.
Figure 1: Overview of Graph Positional and Structural Encoder (GPSE) training and application.
Evidence suggests that over-smoothing and squashing in graph networks relate to the graph's curvature [66; 50; 24]. There are many versions of graph curvature [20; 51; 62; 66] but intuitively, they encode how a node's neighborhood looks like a clique (positive curvature), a grid (zero curvature), or a tree (negative curvature). A clique's high connectivity leads to rapid smoothing, while a tree's exponentially increasing size of \(k\)-hop neighborhood causes over-squashing. These phenomena are in competition, and negating both is impossible in an MPNN using graph rewiring (architectures using modified adjacency for node aggregation). However, there seems to be a _sweet spot_ where both effects are not minimized but the sum of the two effects is minimized. This minimum is sought in Giraldo et al. [24] and Nguyen et al. [50]. Some of our following choices of architecture are justified by this search of a sweet spot in the smoothing-squashing trade-off.
**Deep GNN** As several of the target PSEs, such as the Laplacian eigenvectors, require a global view of the query graph, it is crucial for the encoder to capture long-range dependencies accurately. To accomplish this, we need to use an unconventionally deep MPNN with 20 layers. However, if a graph network suffers from over-smoothing, having this many layers will result in approximately uniform node features [52; 41].
**Residual connection** A first attempt at reducing the smoothing is to exploit the proven ability of residual connections to reduce over-smoothing [41].
**Gating mechanism** We claim that using a gating mechanism in the aggregation helps reduce over-smoothing even further. Indeed, the gating mechanism allows the network to reduce the weight of some edges and, in the limit, effectively re-wire the graph by completely or partially ignoring some edges. Essentially, we argue that the gating mechanism can act as a graph sparsification device, which decreases the graph curvature and has been shown to alleviate over-smoothing [24; 56].
**Virtual node** In addition, we use a virtual node (VN) [23] to enable global message passing; as the virtual node has access to the states of all nodes, it allows for (a) better representation of graph-level information and (b) faster propagation of information between nodes that are further apart, and thus faster convergence of states. In technical terms, adding the virtual node drastically increases the connectivity of the graph and in turn increases its curvature (Appendix C, Proposition 1), which consequently decreases over-squashing. Alternatively, one can easily see that the Cheeger constant (another measure of _bottleneckness_ [66]) of the graph increases after adding the virtual node.
**Random node features** One critical question is whether an MPNN is expressive enough to learn all the target PSEs. In particular, some PSEs, such as the Laplacian eigenvalues, may require distinguishability beyond 1-WL [21]. For example, the multi-sets of Laplacian eigenvalues of two disjoint triangles versus a hexagon are different, although this graph pair is commonly known to be 1-WL indistinguishable. Despite the known 1-WL expressiveness limitation of a standard MPNN with constant node features [75], many recent works have shown empirical evidence that random node features can help MPNNs surpass 1-WL expressiveness [57; 1; 37]. Thus, we base our encoder architecture on an MPNN coupled with random input node features, as shown in Figure 1A.
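As a quick numerical illustration of the 1-WL example mentioned above (a minimal check, not part of the GPSE pipeline), the snippet below compares the Laplacian spectra of two disjoint triangles and a hexagon: both graphs are 2-regular on six nodes, so 1-WL color refinement cannot tell them apart, yet their eigenvalue multisets differ.

```python
import numpy as np
import networkx as nx

two_triangles = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
hexagon = nx.cycle_graph(6)

def laplacian_spectrum(G):
    A = nx.to_numpy_array(G)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian L = D - A
    return np.sort(np.linalg.eigvalsh(L))

# Both graphs are 2-regular on six nodes, so 1-WL assigns every node the same
# color in both; their Laplacian spectra nevertheless differ.
print(laplacian_spectrum(two_triangles))    # multiset {0, 0, 3, 3, 3, 3}
print(laplacian_spectrum(hexagon))          # multiset {0, 1, 1, 3, 3, 4}
```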
We argue and later validate that, together, the above architectural design choices lead to an effective graph encoder that finds the balance between smoothing and squashing (§3.4), and even has an elevated expressiveness due to the random features (§3.3).
### Training GPSE
Given a query graph structure \(G=(V,E)\), we first generate a \(k\)-dimensional feature \(\mathbf{X}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) from a standard normal distribution for each node, which is then passed through a linear projection layer to match the \(d\)-dimensional hidden dimension. Then, the projected features and the graph structure are processed by \(N\) message-passing layers with residual connections [41], resulting in the final GPSE representations (Figure 1). To train the encoder using the target PSEs, we decode the GPSE representations using multiple independent MLP heads, one per PSE, and compute the loss based on the sum of \(\ell_{1}\) and cosine similarity losses. This learning approach falls into the category of predictive learning. We note that contrastive approaches are infeasible for learning PSE representations as it is undesirable to obtain a representation that is insensitive to structural perturbations.
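A minimal PyTorch sketch of this training recipe is given below: random input node features, a linear projection, a deep residual gated message-passing stack with a virtual-node-style global update, and one MLP head per target PSE trained with the sum of \(\ell_1\) and cosine-similarity losses. The dense-adjacency message passing, the exact gating form, and the head widths are simplifying assumptions rather than the actual GPSE implementation; the six output dimensions follow the PSE list in §2.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedLayer(nn.Module):
    """Residual, edge-gated message passing on a dense adjacency (simplified GatedGCN-style)."""
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(d, d)
        self.gate = nn.Linear(2 * d, d)
        self.update = nn.Linear(2 * d, d)

    def forward(self, h, adj):
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        gate = torch.sigmoid(self.gate(pair)) * adj.unsqueeze(-1)    # soft edge re-weighting
        agg = (gate * self.msg(h).unsqueeze(0)).sum(dim=1)
        return h + F.relu(self.update(torch.cat([h, agg], dim=-1)))  # residual connection

class GPSESketch(nn.Module):
    def __init__(self, d=512, n_layers=20, rand_dim=20, pse_dims=(4, 4, 7, 20, 20, 7)):
        super().__init__()
        self.rand_dim = rand_dim
        self.proj = nn.Linear(rand_dim, d)
        self.layers = nn.ModuleList(GatedLayer(d) for _ in range(n_layers))
        self.vn = nn.Linear(d, d)                                    # virtual-node-style global update
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, k)) for k in pse_dims)

    def forward(self, adj):
        h = self.proj(torch.randn(adj.size(0), self.rand_dim))       # random input node features
        for layer in self.layers:
            h = layer(h, adj)
            h = h + self.vn(h.mean(dim=0, keepdim=True))             # broadcast a global state
        return h, [head(h) for head in self.heads]                   # encodings + per-PSE predictions

def pse_loss(preds, targets):
    """Sum of L1 and cosine-similarity losses over all PSE heads."""
    total = 0.0
    for p, t in zip(preds, targets):
        total = total + F.l1_loss(p, t) + (1.0 - F.cosine_similarity(p, t, dim=-1)).mean()
    return total
```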
**Training dataset** PCQM4MV2 [33] is a typical choice of pre-training dataset for molecular tasks. However, since GPSE only extracts features from the graph structures (for example, methane, CH4, would be treated as the same graph as silane, SiH4), the number of training samples reduces to
273,920 after extracting unique graphs. Instead, we use MolPCBA [31] as the training dataset for GPSE, which contains 323,555 unique molecular graphs, with an average number of nodes of 25. We randomly select 5% validation and 5% testing data that is fixed across runs, and the remaining data for training.
## 3 Experiments
**GPSE successfully predicts a wide range of target PSEs** The self-supervised goal of our GPSE is to learn graph representations from which it is possible to recover predefined positional and structural encodings. For each PSE type, we quantify the prediction performance in terms of the coefficient of determination (R\({}^{2}\)) scores, as presented in Table 1. When trained on a 5% (16,177) subset of MolPCBA molecular graphs, our GPSE achieves 0.9790 average test R\({}^{2}\) score across the 6 PSEs. In Table 1, we also highlight the importance of several architectural choices, as indicated by the difference in test performance when replacing selected model components in GPSE by alternative choices. Further, we empirically show that the test performance improves asymptotically as the number of training samples increases (§3.4), achieving 0.9979 test R\({}^{2}\) when training on 90% (291,199) of MolPCBA unique graphs. These results demonstrate the ability of the GPSE to extract rich positional and structural information from a query graph, as well as its ability to learn from an increasing amount of data.
### Enhancing performance on molecular graph datasets
In this series of experiments, we demonstrate that GPSE is a general augmentation to a wide range of GNN models and is a viable alternative to existing SSL pre-training approaches.
**GPSE-augmented GPS achieves SOTA performance on ZINC** In this set of experiments, we compare the performance of the GPS model augmented with our GPSE encodings versus the same model using (a) no PSE, (b) random features as PSE, and (c) LapPE and RWSE (two PSEs from §2.1) on four common molecular property prediction benchmarks [16; 31; 33]. For ZINC [26] and PCQM4Mv2 [31], we use their subset versions following [16] and [54], respectively.
\begin{table}
\begin{tabular}{l l|c c c c c c} \hline \hline \multicolumn{2}{c|}{**ZINC (subset)**} & \multicolumn{2}{c}{**PCQM4Mv2 (subset)**} & \multicolumn{2}{c}{**MolHIV**} & \multicolumn{2}{c}{**MolPCBA**} \\ & \multicolumn{2}{c}{**MAE \(\downarrow\)**} & \multicolumn{2}{c}{**MAE \(\downarrow\)**} & \multicolumn{2}{c}{**AUROC \(\uparrow\)**} & \multicolumn{2}{c}{**AP \(\uparrow\)**} \\ \hline GCN [39] & 0.3670 \(\pm\) 0.0110 & – & 0.7599 \(\pm\) 0.0119 & 0.2424 \(\pm\) 0.0034 \\ GIN [75] & 0.5260 \(\pm\) 0.0510 & – & 0.7707 \(\pm\) 0.0149 & 0.2703 \(\pm\) 0.0023 \\ CIN [7] & 0.0709 \(\pm\) 0.0060 & – & **0.8094 \(\pm\) 0.0057** & – \\ CRAW [65] & 0.0850 \(\pm\) 0.0040 & – & **0.2986 \(\pm\) 0.0025** \\ \hline GPS+rand & 0.8766 \(\pm\) 0.0107 & 0.4768 \(\pm\) 0.0171 & 0.6210 \(\pm\) 0.0444 & 0.0753 \(\pm\) 0.0045 \\ GPS+none & 0.1182 \(\pm\) 0.0049 & 0.1329 \(\pm\) 0.0030 & 0.7798 \(\pm\) 0.0077 & 0.2869 \(\pm\) 0.0012 \\ GPS+LaPPE & 0.1078 \(\pm\) 0.0084 & 0.1267 \(\pm\) 0.0004 & 0.7736 \(\pm\) 0.0097 & 0.2939 \(\pm\) 0.0016 \\ GPS+RWSE & 0.0700 \(\pm\) 0.0040 & 0.1230 \(\pm\) 0.0008 & 0.7880 \(\pm\) 0.0101 & 0.2907 \(\pm\) 0.0028 \\ \hline GPS+GPSE & **0.0648 \(\pm\) 0.0030** & **0.1196 \(\pm\) 0.0004** & 0.7815 \(\pm\) 0.0133 & 0.2911 \(\pm\) 0.0036 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance in four molecular property prediction benchmarks, averaged over 10 seeds.
\begin{table}
\begin{tabular}{l l|c c c c c c} \hline \hline \multicolumn{2}{c|}{**Ablated setting**} & GPSE default & Overall & Elastic-PE & LapPE & RWSE & HKdiagSE & EigValSE & CycleSE \\ \hline
10 layers & 20 layers & 0.9585 & 0.9376 & 0.9302 & 0.9645 & 0.9622 & 0.9543 & 0.9701 \\
128 dim & 512 dim & 0.9688 & 0.9484 & 0.9501 & 0.9729 & 0.9734 & 0.9706 & 0.9739 \\ \hline GCN & GatedGCN & 0.0409 & 0.0408 & 0.0325 & 0.0424 & 0.0396 & 0.0483 & 0.0410 \\ GIN & GatedGCN & 0.6095 & 0.6953 & 0.4180 & 0.6237 & 0.6349 & 0.4002 & 0.6391 \\ GATv2 & GatedGCN & 0.9580 & 0.9560 & 0.9476 & 0.9643 & 0.9530 & 0.9679 & 0.9561 \\ \hline No VN & VN & 0.9478 & 0.9340 & 0.9359 & 0.9552 & 0.9479 & 0.9314 & 0.9568 \\ Shared MLP head & Indep. MLP heads & 0.9751 & 0.9619 & 0.9644 & 0.9802 & 0.9764 & 0.9714 & **0.9778** \\ \hline GPSE & **0.9790** & **0.9638** & **0.9725** & **0.9837** & **0.9808** & **0.9818** & 0.9774 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Held-out positional and structural encodings prediction performance when trained on 5% MolPCBA (R\({}^{2}\) scores \(\uparrow\)). Ablated settings of the GPSE architecture are listed and compared to the full GPSE settings. The performance of the full GPSE architecture is shown in the bottom row.
We first highlight that, with \(\mathsf{GPSE}\)-augmented input features, GPS achieves a new SOTA result on ZINC, reaching 0.0648 MAE (Table 2). Moreover, we show that \(\mathsf{GPSE}\) always improves the baseline by a margin similar to or larger than that of standard PSEs.
**\(\mathsf{GPSE}\) as a universal PSE augmentation** The utility of \(\mathsf{GPSE}\) encodings is not specific to GPS. On the ZINC (12k subset) benchmark, we show that augmenting different MPNN methods and the Transformer with \(\mathsf{GPSE}\) universally results in significant improvements: a 56.24% reduction in test MAE on average compared to baselines that do not make use of any PSE (Table 3).
**Feature augmentation using \(\mathsf{GPSE}\) vs. SSL pre-training** Our \(\mathsf{GPSE}\) feature augmentation approach is closely related to the SSL pre-training approaches [32, 80, 74, 76] in that both try to transfer knowledge from a large pre-training dataset to another, typically smaller, dataset for downstream evaluation. Yet, at the same time, our approach is a notable departure from previous SSL approaches in two distinct aspects:
1. The trained \(\mathsf{GPSE}\) model is only used as a feature extractor that can be coupled with _any_ type of downstream prediction model, which will be trained from scratch.
2. \(\mathsf{GPSE}\) extracts representations _solely_ from the graph structure and does not make use of the domain-specific features such as atom and bond types [32], allowing \(\mathsf{GPSE}\) to be utilized on general graph datasets.
To compare the performance of SSL pre-training and \(\mathsf{GPSE}\) feature augmentation, we follow Sun et al. [64] and use a selection of five MoleculeNet [72, 32] datasets. For the downstream model, we use the same GNN architecture, GINE, as Hu et al. [32], which is an edge-attribute-aware variant of GIN [75], with five layers and a 300-dimensional hidden width. Finally, the features extracted by \(\mathsf{GPSE}\) are concatenated with the initial atom embeddings and are then fed into the GINE model.
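The augmentation step itself is little more than a feature concatenation; the sketch below illustrates it, with `gpse`, `atom_embed`, and the downstream GINE model treated as placeholders for whatever encoder, embedding table, and network are actually used, and with the encoder kept frozen as described above.

```python
import torch

@torch.no_grad()                          # the pre-trained encoder is frozen
def extract_pse(gpse, adj):
    h, _ = gpse(adj)                      # keep only the latent encodings, not the PSE heads
    return h

def augmented_node_features(atom_embed, atom_types, gpse, adj):
    # Concatenate the learned atom embeddings with the extracted GPSE encodings;
    # the downstream GINE model is then trained from scratch on these features.
    return torch.cat([atom_embed(atom_types), extract_pse(gpse, adj)], dim=-1)
```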
We note that \(\mathsf{GPSE}\) augmented GINE achieves the best performance on two out of the five datasets against previously reported performances (Table 4). Moreover, \(\mathsf{GPSE}\) augmentation improves performance over the baseline across _all_ five datasets, unlike some previously reported results that showed negative transfer effects. Together, these results corroborate with the findings from [64] that rich features can make up for the benefits of SSL pre-training. In our case, the \(\mathsf{GPSE}\) encodings act as the rich features that contain positional and structural information from the graphs.
In future work, the proposed \(\mathsf{GPSE}\) can be combined with other SSL methods since we have shown its ability to improve the expressivity of GINE.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **BBBP** & **BACE** & **Tox21** & **ToxCast** & **SIDER** \\ \hline No pre-training (baseline) [32] & \(65.8\pm 4.5\) & \(70.1\pm 5.4\) & \(74.0\pm 0.8\) & \(63.4\pm 0.6\) & \(57.3\pm 1.6\) \\ \hline Self-supervised pre-trained [32] & \(68.8\pm 0.8\) & \(79.9\pm 0.9\) & \(\mathbf{76.7\pm 0.4}\) & \(64.2\pm 0.5\) & \(61.0\pm 0.7\) \\ GraphCL pre-trained [80] & \(69.7\pm 0.7\) & \(75.4\pm 1.4\) & \(73.9\pm 0.7\) & \(62.4\pm 0.6\) & \(60.5\pm 0.9\) \\ InfoGraph pre-trained [70] & \(66.3\pm 0.6\) & \(64.8\pm 0.8\) & \(68.1\pm 0.6\) & \(58.4\pm 0.6\) & \(57.1\pm 0.8\) \\ JOAO2 pre-trained [70] & \(66.4\pm 0.9\) & \(67.4\pm 0.7\) & \(68.2\pm 0.8\) & \(57.0\pm 0.5\) & \(59.1\pm 0.7\) \\ GraphLG [76] & \(\mathbf{72.5\pm 0.8}\) & \(\mathbf{83.5\pm 1.2}\) & \(75.7\pm 0.5\) & \(63.5\pm 0.7\) & \(61.2\pm 1.1\) \\ \hline GPSE augmented & \(66.4\pm 0.1\) & \(78.3\pm 2.7\) & \(74.9\pm 0.4\) & \(\mathbf{65.7\pm 0.6}\) & \(\mathbf{61.5\pm 0.8}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance on MoleculeNet small datasets (scaffold test split), evaluated in AUROC (%) \(\uparrow\). Red indicates worse than baseline performance.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & GCN & GatedGCN & GIN & GINE & Transformer & Avg. reduction \\ \hline none & \(0.288\pm 0.004\) & \(0.236\pm 0.008\) & \(0.285\pm 0.004\) & \(0.118\pm 0.005\) & \(0.686\pm 0.017\) & – \\ \hline rand & \(1.27\pm 0.340\) & \(1.228\pm 0.012\) & \(1.239\pm 0.011\) & \(0.877\pm 0.011\) & \(1.451\pm 0.002\) & N/A \\ LapPE & \(0.209\pm 0.008\) & \(0.194\pm 0.006\) & \(0.214\pm 0.004\) & \(0.108\pm 0.008\) & \(0.501\pm 0.145\) & \(21.12\%\) \\ RWSE & \(0.181\pm 0.003\) & \(0.167\pm 0.002\) & \(0.175\pm 0.003\) & \(0.070\pm 0.004\) & \(0.219\pm 0.007\) & \(42.75\%\) \\ \(\mathsf{GPSE}\) & \(\mathbf{0.129\pm 0.003}\) & \(\mathbf{0.113\pm 0.003}\) & \(\mathbf{0.124\pm 0.002}\) & \(\mathbf{0.065\pm 0.003}\) & \(\mathbf{0.189\pm 0.016}\) & \(\mathbf{56.24\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Four PSE augmentations combined with five different GNN models evaluated on ZINC (12k subset) dataset. Performance is evaluated as mean absolute error (MAE \(\downarrow\)) and averaged over \(4\) seeds.
### Transferability across diverse graph benchmarking datasets
GPSE can be used on arbitrary types of graphs as it is trained using the graph structures alone, in contrast with the SSL pre-trained methods. Here, we show that GPSE is transferable to general graph datasets apart from molecular datasets, even under extreme out-of-distribution (OOD) cases.
**Transferability to molecular graph sizes** We use the two peptides benchmarking datasets (Peptides-struct and Peptides-func) from the long-range graph benchmarks [18] to test whether GPSE can still work when the downstream (macro-)molecular graphs are significantly larger than those used for training GPSE. Despite this difference in graph sizes, GPS+GPSE outperforms the original GPS that uses explicitly computed PSEs (Table 5). More strikingly, GPS+GPSE results in a new SOTA performance for Peptides-struct, surpassing Graph MLP-Mixer [30]. The improved performance due to GPSE emphasizes its ability to help better extract global information from the query graphs by providing a more informative initial encoding for the global attention mechanism embedded in the GPS model.
**Transferability to graph connectivity patterns** We further test whether GPSE generalizes to graph connectivity patterns distinct from those of its training dataset. Particularly, we use the superpixel graph benchmarking datasets CIFAR10 and MNIST from Dwivedi et al. [16], which are \(k\)-nearest neighbor graphs with \(k\!=\!8\), significantly differing from the molecular graph connectivity patterns. Again, GPS+GPSE achieves performance comparable to GPS using explicitly computed positional encoders (Table 5).
**Transferability to extreme OOD node-classification benchmarks** Taking the evaluation of GPSE one step further, we test its ability to provide useful information for transductive node classification tasks, where the graphs contain hundreds of thousands of nodes and are thus completely out of distribution relative to the GPSE training dataset. We only use standard baseline methods as a proof of concept, including GCN [39], GraphSAGE [28], and GATv2 [68, 10]. Remarkably, GPSE successfully improves the baseline performance of SAGE and GATv2 on the arXiv dataset, and it never results in any noticeable negative transfer (Table 6).
Meanwhile, the indifference in performance on the Proteins dataset is not unexpected, as the connectivity structures of the protein interaction network do not contribute to the proteins' functions meaningfully. Instead, what matters are the identity of the proteins' interacting partners, commonly referred to as _homophily_ in the graph representation learning community [84] or more generally known as the _Guilt-by-Association_ principle in the network biology community [14]. This result provides valuable insights into the usefulness of GPSE as an augmentation: It is more beneficial when the underlying graph structure contains informative signals to the downstream tasks.
### Expressiveness of GPSE encodings
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{2}{c}{**arXiv**} & \multicolumn{2}{c}{**Proteins**} \\ & +GPSE? & **ACC** (\%) \(\uparrow\) & **AUROC** (\%) \(\uparrow\) \\ \hline GCN [31] & ✗ & \(71.74\pm 0.29\) & \(79.91\pm 0.24\) \\ & ✓ & \(71.67\pm 0.12\) & \(79.62\pm 0.12\) \\ SAGE [31] & ✗ & \(71.74\pm 0.29\) & \(80.35\pm 0.07\) \\ & ✓ & \(72.19\pm 0.32\) & \(80.14\pm 0.22\) \\ GAT(E)v2 [10] & ✗ & \(71.69\pm 0.21\) & \(83.47\pm 0.13\) \\ & ✓ & \(72.17\pm 0.42\) & \(83.51\pm 0.11\) \\ \hline _Best of above_ & ✗ & \(71.74\pm 0.29\) & \(83.47\pm 0.13\) \\ & ✓ & \(72.19\pm 0.32\) & \(83.51\pm 0.11\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Out-of-distribution transferability to OGB node classification benchmarks.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & **Peptides-struct** & **Peptides-func** & **CIFAR10** & **MNST** \\ & **MAE**\(\downarrow\) & **AP**\(\uparrow\) & **ACC** (\(\uparrow\)) \(\uparrow\) & **ACC** (\(\uparrow\)) \(\uparrow\) \\ Avg. \# nodes & 150.9 & 150.9 & 117.6 & 70.6 \\ Avg. connectivity & 0.022 & 0.022 & 0.069 & 0.117 \\ \hline GIN & – & – & \(55.26\pm 1.53\) & 96.49 \(\pm\) 0.25 \\ GINE & 0.3547 \(\pm\) 0.0045 & 0.5498 \(\pm\) 0.0079 & – & – \\ GatedGCN [9] & 0.3420 \(\pm\) 0.0013 & 0.5864 \(\pm\) 0.0077 & 67.31 \(\pm\) 0.31 & 97.34 \(\pm\) 0.14 \\ Graph MLP-Mixer [30] & 0.2475 \(\pm\) 0.0020 & 0.6920 \(\pm\) 0.0054 & 72.46 \(\pm\) 0.36 & 98.35 \(\pm\) 0.10 \\ \hline GPS+(RMSE/LapPE) [54] & 0.2500 \(\pm\) 0.0005 & 0.6535 \(\pm\) 0.0041 & 72.30 \(\pm\) 0.36 & 98.05 \(\pm\) 0.13 \\ GPS+GPSE & 0.2464 \(\pm\) 0.0025 & 0.6688 \(\pm\) 0.0151 & 72.31 \(\pm\) 0.25 & 98.08 \(\pm\) 0.13 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Out-of-distribution transferability to graph size and connectivity pattern. The GPSE pre-training dataset MolPCBA contains graphs with 26 nodes and 0.093 connectivity on average.
Given that \(\mathsf{GPSE}\) can recover different PSEs so well (Table 1), it is natural to wonder whether it can boost standard MPNN expressiveness. We first confirm that \(\mathsf{GPSE}\) encodings surpass 1-WL distinguishability by observing a clear visual separation of \(\mathsf{GPSE}\) encodings on 1-WL indistinguishable graph pairs (Figure E.1).
To more rigorously and systematically study the expressivity of \(\mathsf{GPSE}\) encodings, we perform two synthetic benchmarks [16, 1] that require beyond-1-WL power, using GIN [75], an MPNN model whose expressivity is bounded by 1-WL. Indeed, we find that \(\mathsf{GPSE}\) provides extra power to the base MPNN model to correctly distinguish graph isomorphism classes (Table 7). This expressivity boost by \(\mathsf{GPSE}\) is remarkable, considering that (1) \(\mathsf{GPSE}\) is pre-trained on MolPCBA, whose graph structures are not designed to be 1-WL indistinguishable like these synthetic graphs, and (2) naively adding random features to the input does not provide such a noticeable improvement.
We further point out that, in fact, augmenting the base GIN model using common PSEs like LapPE and RWSE readily achieves nearly perfect graph isomorphism classification, corroborating previous theoretical results on distance encodings [42] and spectral invariants [21]. This finding partially explains why \(\mathsf{GPSE}\) provides additional power and also why previous methods using LapPE achieve perfect classification on these tasks [30]. Finally, we note the performance difference between LapPE/RWSE and \(\mathsf{GPSE}\) is not unexpected, as random input features only act as a patch to the MPNN expressivity limitation rather than fully resolving it. Thus, developing powerful and scalable \(\mathsf{GPSE}\) models that losslessly, to the extent they can, capture the latent semantics of various PSEs is a valuable avenue to explore in the future.
### Ablation studies
**\(\mathsf{GPSE}\) makes good use of the depth and the global message passing from the VN** Despite the commonly known issue of MPNN over-smoothing as the number of message-passing layers increases, we observe that \(\mathsf{GPSE}\) does not over-smooth thanks to the gating mechanism and the residual connections in GatedGCN [9], in contrast with more typical MPNN layers like GIN [75]. Indeed, \(\mathsf{GPSE}\) benefits from both the global message passing by the VN and the model depth, as shown in Figure 2, confirming our theoretical justifications for the architectural choices in §2.2.
\(\mathsf{GPSE}\) **benefits from the wide variety of pre-training target PSE in downstream tasks** Since \(\mathsf{GPSE}\) is trained to capture latent semantics for recovering a wide range of PSEs, it mitigates the reliance on manually selecting task-specific PSEs, a major shortcoming of graph Transformer models such as GPS [54] and Graphormer [78] which could be task specific. For instance, RWSE typically performs well for molecular tasks, while LapPE could be more useful for long-range dependency tasks [54, 18]. Here, we investigate whether a particular type of PSE contributes more or less to \(\mathsf{GPSE}\) by testing the downstream performance of PCQM4Mv2 and MolHIV using different variants of \(\mathsf{GPSE}\) that excludes one type of PSE during training. We observe from Table F.2 that excluding any type of PSE generally reduces its performance in the downstream tasks slightly, indicating the usefulness of different PSEs' semantics to the downstream tasks at various levels.
**Asymptotic behavior with respect to the \(\mathsf{GPSE}\) training sample sizes** We perform a scaling law experiment with respect to the training sample sizes, from 5% to 80% of the MolPCBA. As shown in Figure F.1, the testing loss (Appendix B) reduces as the training sample increases. This asymptotic behavior suggests that \(\mathsf{GPSE}\) can further benefit from the increasing amount of training data.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**CSL**} & \multicolumn{2}{c}{**EXP**} \\ \cline{2-5} & Train & Test & Train & Test \\ \hline GIN & \(10.0\pm 0.0\) & \(10.0\pm 0.0\) & \(49.8\pm 1.8\) & \(48.7\pm 2.2\) \\ GIN+rand & \(11.6\pm 3.7\) & \(12.7\pm 6.4\) & \(51.0\pm 2.0\) & \(51.3\pm 2.9\) \\ GIN+GPSE & \(98.2\pm 1.5\) & \(42.9\pm 7.9\) & \(84.6\pm 6.8\) & \(68.3\pm 7.5\) \\ \hline GIN+LaPE & \(100.0\pm 0.0\) & \(92.5\pm 4.2\) & \(99.9\pm 0.2\) & \(99.5\pm 0.8\) \\ GIN+RWSE & \(100.0\pm 0.0\) & \(100.0\pm 0.0\) & \(99.7\pm 0.2\) & \(99.7\pm 0.6\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Synthetic graph benchmarks with ten times stratified five-fold cross-validation evaluated in ACC (%) \(\uparrow\).
**The choice of GPSE pre-training dataset affects its downstream performance minimally** We reevaluate the performance of GPSE on PCQM4MV2 and ZINC when trained on several other choices of molecular graph datasets, including GEOM [4], ZINC 250k [26], PCQM4MV2 [33], and ChEMBL [22]. On ZINC, we observe that GPSE performance is variable across different training datasets (Table F.3). Particularly, training GPSE on ChEMBL and MolPCBA, the two largest datasets here, results in much better performance than using other, relatively smaller datasets. The superior downstream performance achieved using larger pre-training datasets aligns well with our asymptotic results above, where a larger amount of training samples results in a more accurate GPSE for capturing PSEs, hence leading to better downstream performance. However, we did not observe the same performance difference in the PCQM4MV2 subset downstream task, indicating that the training size is not always the most crucial factor for good performance, an observation similar to Sun et al. [64].
Finally, we investigate whether finetuning the GPSE model specifically on the downstream dataset could further improve its downstream performance (Table F.3). Similar to the above findings, we see that further fine-tuning GPSE helps in a task-specific manner, generally providing slight improvements to the ZINC task but less so to the PCQM4MV2 task. Together, these results of GPSE being minimally affected by different options of pre-training datasets and further fine-tuning to specific downstream datasets reemphasize that GPSE learns _general_ and _transferable_ knowledge about various PSEs.
## 4 Discussion
**Why does GPSE improve over precomputed PSEs?** Our results demonstrate that GPSE encodings can improve upon augmenting GNNs with precomputed PSEs in downstream tasks. The fact that we can _recover_ the target PSEs in pretraining (Table 1) already accounts for why we can match the original PSEs. Why we _improve_ upon them, meanwhile, can be attributed to our joint encoding: Learning to encode a diverse collection of PSEs leads to a general embedding space that abstracts both local and global perspectives of the query graph, which are more readily useable by the downstream model compared to the unprocessed PSEs.
**What advantages does GPSE bring over traditional SSL pre-training?** In addition to being less prone to negative transfer and having competitive performance (Table 4), GPSE provides a few more advantages over traditional SSL pre-training methods: (1) GPSE uses randomly generated features instead of dataset-specific graph features, thus can be applied to arbitrary graph datasets; (2) GPSE is only used as a feature extractor and hence does not impose any constraint on the downstream model. Despite these differences, we emphasize that GPSE can be complementary to traditional SSL to further enhance the prediction performance, for example, by using GPSE encodings as input features to the SSL pre-training.
**Why is GPSE transferable to OOD data?** The transferability of GPSE to OOD data is uncommon in the graph SSL pre-training literature, particularly for applications with molecular graphs. We hypothesize that GPSE's transferability is a consequence of the choice of its predictive self-supervision tasks, that contain a mixture of both local and global intrinsic graph information. This encourages GPSE to capture global invariances using local information, hence allowing it to extract valuable representations on graphs that are different in sizes and connectivity from the training graphs.
**When does GPSE help, and when does it not?** We argue that GPSE provides essential information to the model when the downstream task requires positional or structural information of the graph or better node identifiability in general, which is typically the case for molecular property predictions [32]. Conversely, for downstream tasks that do not rely on such information, such as protein function prediction using the protein interaction network (Table 6), the direct benefits from GPSE are not apparent.
## 5 Conclusion
We have introduced the GPSE, a unifying graph positional and structural encoder for augmenting any graph learning dataset while being applicable to all graph Transformers and message-passing GNN models. GPSE extracts rich node encodings by learning to predict a diverse collection of predefined PSEs in the initial self-supervised training phase on a set of unattributed graphs. We demonstrated a superior performance of GPSE encodings over the explicitly constructed PSEs on a variety of graph learning benchmarks. Furthermore, GPSE showed great transferability across diverse benchmarking
datasets and even achieved a new SOTA performance on the Peptides-struct long-range benchmark, whose graph structures are significantly different from those in the MolPCBA dataset, that was used to train the GPSE. Our study opens up exciting opportunities for learning a graph encoder as a unified PSE to augment GNN models, and we hope our work will motivate future studies toward learning even more powerful graph PSE encoders to advance graph analysis.
**Limitations and future directions** Despite the effectiveness of our GPSE model design choices, the model is currently too large to be trained efficiently on graph datasets with over one million graphs. As GPSE asymptotically achieves perfect PSE recovery, it is a promising future direction to make GPSE more efficient and thus allow it to be trained on billion-scale molecular graph datasets [53, 35].
**Broader Impact** We do not foresee immediate societal impact of GPSE as a general graph encoder.
|
2306.07548 | From quantum loop superalgebras to super Yangians | The goal of this paper is to generalize a statement by Drinfeld, asserting
that Yangians can be constructed as limit forms of the quantum loop algebras,
to the super case. We establish a connection between quantum loop superalgebra
and super Yangian of the general linear Lie superalgebra $\mathfrak{gl}_{M|N}$
in RTT type presentation. In particular, we derive the
Poincar\'e-Birkhoff-Witt(PBW) theorem for the quantum loop superalgebra
$\mathrm{U}_q\big(\mathfrak{Lgl}_{M|N}\big)$. Additionally, we investigate the
application of the same argument to twisted super Yangian of the
ortho-symplectic Lie superalgebra. For this purpose, we introduce the twisted
quantum loop superalgebra as a one-sided coideal of
$\mathrm{U}_q\big(\mathfrak{Lgl}_{M|2n}\big)$ with respect to the
comultiplication. | Hongda Lin, Yongjie Wang, Honglian Zhang | 2023-06-13T05:39:37Z | http://arxiv.org/abs/2306.07548v2 | # From quantum loop superalgebras to super Yangians
###### Abstract
The goal of this paper is to generalize to the super case a statement of Drinfeld that Yangians can be constructed as limit forms of quantum loop algebras. We establish a connection between the quantum loop superalgebra and the super Yangian of the general linear Lie superalgebra \(\mathfrak{gl}_{M|N}\) in the \(RT\,T\)-presentation. We also apply the same argument to the twisted super Yangian of the ortho-symplectic Lie superalgebra. For this purpose, we introduce the twisted quantum loop superalgebra as a one-sided coideal of the quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|2n})\) with respect to the comultiplication. There is a PBW type basis of this coideal sub-superalgebra derived from its \(\mathbb{A}\)-form.
1 Department of Mathematics, Shanghai University, Shanghai 200444, China
2 School of Mathematics, Hefei University of Technology, Hefei, Anhui, 230009, China.
**keywords**: super Yangian, twisted super Yangian, twisted quantum loop superalgebra, PBW type basis.
## 1 Introduction
Quantum groups, which first appeared in the theory of quantum integrable systems, were formalized independently by Drinfeld and Jimbo as certain special Hopf algebras around 1985 [8, 16], and were found to have close connections with many fields in mathematics and physics, such as representation theory, algebraic combinatorics, and low-dimensional topology. Briefly, a quantum group is a \(q\)-deformation of the universal enveloping algebra of a semi-simple Lie algebra or a Kac-Moody algebra. There are two important families of quantized enveloping algebras with rich representation theory: Yangians and quantum loop (affine) algebras. Drinfeld [9] proposed the Drinfeld realizations of Yangians and quantum loop algebras, which were widely used in studying the classification of their highest weight representations and finite-dimensional representations [6]. Meanwhile, Yangians and quantum loop algebras admit a third presentation, the \(RT\,T\)-presentation, which originated from the work of Faddeev, Reshetikhin and Takhtajan [11]. The \(RT\,T\)-presentation admits a natural Hopf algebra structure, which is convenient for studying tensor representations.
Sklyanin [31] first introduced the twisted version of Yangians to study boundary conditions for quantum integrable systems; they were later named twisted Yangians by Olshanskii [27]. These algebras are constructed from the classical symmetric pairs \((\mathfrak{gl}_{N},\mathfrak{o}_{N})\) and \((\mathfrak{gl}_{N},\mathfrak{sp}_{N})\). There are two methods to realize a twisted Yangian [23]: one way is to define it as a subalgebra of the Yangian associated with \(\mathfrak{gl}_{N}\) by using the usual transposition; another way is to construct an abstract algebra in terms of generators with the quaternary relation and the symmetry relation. It is well known that Yangians are, in some sense, limit forms of the quantum loop algebras; this was stated by Drinfeld in 1986 [10], and a complete proof was given in [13] and [14]. The analogous statement relating the twisted quantum loop algebras and the twisted Yangians of type \(\mathbf{B},\mathbf{C},\mathbf{D}\) has been obtained by Conner and Guay [5] in terms of the \(RTT\)-presentation.
As a supersymmetric generalization of quantum groups, quantum supergroups (also called quantum superalgebras) were introduced in [3, 33]. The study on quantum affine superalgebras was carried out by many scholars in related fields. On one hand, Yamane [34] had systematically investigated the Serre-type presentations of type \(\mathbf{A}\)-\(\mathbf{G}\) (quantum) affine Lie superalgebras and introduced the Drinfeld-Jimbo realization of quantum affine superalgebras. Recently, the authors in [18] constructed a minimalistic presentation of type \(\mathbf{A}\) quantum affine superalgebra and proved the isomorphism between Drinfeld-Jimbo realization and Drinfeld realization. On the other hand, Y.-Z. Zhang in [38] investigated the super RS construction and generalized the Ding-Frenkel [7] homomorphism to quantum affine superalgebra associated with \(\mathfrak{gl}_{M|N}\). These results can be naturally generalized to quantum loop superalgebras which is commonly regarded as a quotient superalgebra of quantum affine superalgebras by the center.
In the early 1990s, Nazarov [26] introduced a super-analogue \(Y(\mathfrak{gl}_{M|N})\) by extending the definition of the Yangian \(Y(\mathfrak{gl}_{M})\) in terms of the \(RTT\)-presentation. The finite-dimensional irreducible representations were classified by R. B. Zhang [37]. Based on the \(RTT\)-presentation and the Gauss decomposition technique, Gow obtained the Drinfeld realization of \(Y(\mathfrak{gl}_{M|N})\) with respect to the distinguished Borel subalgebra [12]. Likewise, one may define various realizations of the super Yangians starting from non-conjugate Dynkin diagrams, which are known to be pairwise isomorphic due to [32]. The parabolic presentation of the super Yangian of the general linear Lie superalgebra associated with arbitrary Borel subalgebras was established by Y.-N. Peng in [28, 29]. The definitions of the super Yangian and the twisted super Yangian associated with the ortho-symplectic Lie superalgebras \(\mathfrak{osp}_{M|N}\) were introduced in [1] and [4], respectively. Recently, a complete description of the finite-dimensional irreducible representations of the orthosymplectic Yangian \(Y(\mathfrak{osp}_{M|N})\) was obtained by Molev in a series of papers [19, 20, 22], and the Drinfeld presentations were established in [21, 24].
Inspired by the works of Conner, Guay and Ma [5, 14], one naturally expects that there is a direct connection between (twisted) super Yangians and (twisted) quantum loop superalgebras. In this paper, the first target is to construct a filtered superalgebra with respect to a certain filtration of the quantum loop superalgebra, which is isomorphic to the quantum super Yangian of type \(\mathfrak{gl}_{M|N}\). Indeed, we use the fact that the \(\mathbb{A}\)-form of the quantum loop superalgebra tends to the related universal enveloping superalgebra in the classical limit \(q\to 1\). Owing to the established non-super version [8, 17], this fact is commonly accepted, but there is still no systematic proof in the super case. For this reason, it is necessary to study the \(\mathbb{A}\)-form in the first part of this paper. The second main goal is to generalize a similar result to the twisted case. In order to investigate the twisted case, we need to introduce a new twisted quantum loop superalgebra to approach the twisted super Yangian in [4]. Our work in this part is to prove that this twisted quantum loop superalgebra shares many important properties with the non-super case [25], especially the PBW theorem. The orthosymplectic Yangians \(Y(\mathfrak{osp}_{M|N})\) will be treated in an upcoming paper.
This paper is organized as follows. In Section 2, we introduce some essential notation and review the definitions of the super Yangian \(Y_{h}(\mathfrak{gl}_{M|N})\) and the quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\). We develop an \(RT\,T\)-version \(\mathbb{A}\)-form \(\mathrm{U}_{\mathbb{A}}\) of \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\). One of the main results of this paper is to prove that the super Yangian \(Y_{h}(\mathfrak{gl}_{M|N})\) is isomorphic to the graded algebra of \(\mathrm{U}_{\mathbb{A}}\). In Section 3, we recall the definition of the twisted super Yangian \(Y_{h}(\mathfrak{osp}_{M|2n})\). We also define the twisted quantum loop superalgebra as a sub-superalgebra of \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) with respect to a super-involution and construct its PBW type basis, inspired by [25, 36]. Finally, we prove that the twisted super Yangian \(Y_{h}(\mathfrak{osp}_{M|2n})\) is isomorphic to a graded algebra based on the twisted quantum loop superalgebra.
Throughout this paper, all superspaces, associative (super)algebras and Lie (super)algebras are over \(\mathbb{C}\). We write \(\mathbb{Z}/2\mathbb{Z}=\{\tilde{0},\tilde{1}\}\) and the parity of an element \(x\) is denoted by \(|x|\). If both \(\mathcal{A}\) and \(\mathcal{B}\) are associative superalgebras, then \(\mathcal{A}\otimes\mathcal{B}\) is understood as the associative superalgebra with graded multiplication
\[(a_{1}\otimes b_{1})(a_{2}\otimes b_{2})=(-1)^{|a_{2}||b_{1}|}a_{1}a_{2} \otimes b_{1}b_{2},\text{ for homogeneous elements }a_{2}\in\mathcal{A},b_{1}\in \mathcal{B}.\]
## 2 Super Yangian and Quantum loop superalgebra
In this section, we will review some basic definitions of the \(RT\,T\)-presentation of the Super Yangian \(Y(\mathfrak{gl}_{M|N})\) and the quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lagl}_{M|N})\) to fix notations.
For nonnegative integers \(M\) and \(N\), we set \(I_{M|N}:=\{1,\ldots,M+N\}\), which is simply denoted by \(I\), on which we define the parity of \(i\in I\) to be
\[|i|:=\begin{cases}\tilde{0},&\text{if }\;1\leq i\leq M,\\ \tilde{1},&\text{if }\;M+1\leq i\leq M+N.\end{cases}\]
Let \(V:=\mathbb{C}^{M|N}\) be the superspace with standard basis \(v_{i}\) of parity \(|i|\) for \(i\in I\). Thus \(\mathrm{End}(V)\) is an associative superalgebra with standard basis \(E_{ij}\) of parity \(|i|+|j|\) for \(i,j\in I\). Under the standard supercommutator, \(\mathrm{End}(V)\) is also a Lie superalgebra, which is denoted by \(\mathfrak{gl}_{M|N}\).
We fix _the standard Cartan sub-superalgebra_ \(\mathfrak{h}:=\mathrm{Span}_{\mathbb{C}}\{h_{i}:=E_{ii}\mid i\in I\}\). Let \(\{\varepsilon_{1},\ldots,\varepsilon_{M+N}\}\) be the basis of \(\mathfrak{h}^{*}\) dual to \(\{h_{1},\ldots,h_{M+N}\}\) with respect to the nondegenerate symmetric bilinear form \((\cdot|\cdot)\) given by \((\varepsilon_{i}|\varepsilon_{j})=(-1)^{|i|}\delta_{ij}\). Then \(P:=\mathbb{Z}\varepsilon_{1}\oplus\cdots\oplus\mathbb{Z}\varepsilon_{M+N}\) is _the weight lattice_. For \(i\in I^{\prime}_{M|N}=I\backslash\{M+N\}\) (simply denoted by \(I^{\prime}\)), we denote by \(\alpha_{i}=\varepsilon_{i}-\varepsilon_{i+1}\) the simple roots. Therefore the set \(\Pi=\{\alpha_{i}\mid i\in I^{\prime}\}\) is the root base and the set \(\Pi^{\vee}=\{\alpha_{i}^{\vee}\mid i\in I^{\prime}\}\) is the coroot base with simple coroots \(\alpha_{i}^{\vee}=h_{i}-(-1)^{|i|+|i+1|}h_{i+1}\). _The special linear Lie superalgebra \(\mathfrak{sl}_{M|N}\)_ is a sub-superalgebra of \(\mathfrak{gl}_{M|N}\) generated by \(\alpha_{i}^{\vee}\) and \(e_{i}=E_{i,i+1}\), \(f_{i}=E_{i+1,i}\) for \(i\in I^{\prime}\). The Cartan matrix associated to the standard Borel sub-superalgebra of \(\mathfrak{sl}_{M|N}\) is \(A=(a_{ij})_{i,j\in I^{\prime}}\) with \(a_{ij}=(\alpha_{i}|\alpha_{j})\), and the Dynkin diagram is the distinguished diagram of type \(A\), in which \(\alpha_{M}\) is the unique isotropic odd simple root.
### Super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|N})\)
We recall the definition and some basic facts of the super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|N})\), more details can be found in [26]. The super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|N})\) is an associative superalgebra over \(\mathbb{C}\) generated by the generators \(\{t_{ij}^{(m)}|i,j\in I,m\geq 1\}\) with the parity \(|t_{ij}^{(m)}|=|i|+|j|\) and satisfing the following relation
\[\left[\,t_{ij}^{(m)},\,t_{kl}^{(n)}\,\right]=(-1)^{|i||j|+|i||k|+|j||k|}\sum_{r =0}^{\min(m,n)-1}\left(t_{kj}^{(r)}t_{il}^{(m+n-1-r)}-t_{kj}^{(m+n-1-r)}\,t_{il} ^{(r)}\right). \tag{2.1}\]
Equivalently,
\[\left[\,t_{ij}^{(m+1)},\,t_{kl}^{(n)}\,\right]-\left[\,t_{ij}^{(m)},\,t_{kl}^{ (n+1)}\,\right]=(-1)^{|i||j|+|i||k|+|j||k|}\left(t_{kj}^{(m)}\,t_{il}^{(n)}-t_ {kj}^{(n)}\,t_{il}^{(m)}\right). \tag{2.2}\]
for all \(i,j,k,l\in I\) and \(n,m\geq 0\).
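For orientation, it is worth recording the lowest nontrivial case of (2.1): taking \(m=n=1\), only the \(r=0\) term survives and \(t^{(0)}_{ab}=\delta_{ab}\), so that

\[\left[\,t_{ij}^{(1)},\,t_{kl}^{(1)}\,\right]=(-1)^{|i||j|+|i||k|+|j||k|}\left(\delta_{kj}\,t_{il}^{(1)}-\delta_{il}\,t_{kj}^{(1)}\right).\]

In particular, the span of the \(t_{ij}^{(1)}\) is closed under the super-bracket and realizes a copy of \(\mathfrak{gl}_{M|N}\) inside \(\mathrm{Y}(\mathfrak{gl}_{M|N})\); the precise identification with the elements \(E_{ij}\) involves a sign convention which we do not fix here.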
The \(RTT\)-presentation of the super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|N})\) can be given as follows. The rational \(R\)-matrix is \(\mathrm{R}(u)=1\otimes 1-\frac{1}{u}P\in\mathrm{End}(V^{\otimes 2})\), where \(P=\sum\limits_{i,j\in I}(-1)^{|j|}E_{ij}\otimes E_{ji}\). The \(R\)-matrix \(\mathrm{R}(u)\) satisfies the quantum Yang-Baxter equation
\[\mathrm{R}^{12}(u)\mathrm{R}^{13}(u+v)\mathrm{R}^{23}(v)=\mathrm{R}^{23}(v) \mathrm{R}^{13}(u+v)\mathrm{R}^{12}(u),\]
where \(\mathrm{R}^{ij}:V\otimes V\otimes V\longrightarrow V\otimes V\otimes V\) is a map acting on the \((i,j)\)-th position by \(\mathrm{R}\). Let
\[t_{ij}(u)=\sum\limits_{m\geq 0}t_{ij}^{(m)}u^{-m}\in\mathrm{Y}(\mathfrak{gl}_{M|N })\left[\left[\,u^{-1}\right]\,\right],\ \ \ \text{with}\ \ t_{ij}^{(0)}=\delta_{ij},\]
and
\[T(u)=\sum\limits_{i,j\in I}(-1)^{|i||j|+|j|}E_{ij}\otimes t_{ij}(u).\]
Then the defining relation (2.1) can be rewritten as
\[\mathrm{R}^{12}(u-v)\,T^{1}(u)\,T^{2}(v)=T^{2}(v)\,T^{1}(u)\mathrm{R}^{12}(u- v), \tag{2.3}\]
where \(T^{1}(u)=\sum\limits_{i,j\in I}(-1)^{|i||j|+|j|}E_{ij}\otimes 1\otimes t_{ij}(u)\) and \(T^{2}(u)=\sum\limits_{i,j\in I}(-1)^{|i||j|+|j|}1\otimes E_{ij}\otimes t_{ij }(u)\) are in \(\mathrm{End}(V^{\otimes 2})\otimes\mathrm{Y}(\mathfrak{gl}_{M|N})\left[\left[u^{-1} \right]\,\right].\)
The super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|N})\) is a Hopf superalgebra endowed with the comultiplication and antipode given by
\[\Delta_{1}\big{(}T(u)\big{)}=T(u)\otimes T(u)\ \ \text{and}\ \ S_{1}\big{(}T(u) \big{)}=T(u)^{-1},\ \ \text{respectively}.\]
Let \(\hbar\in\mathbb{C}^{*}\) be the deformation parameter. We rescale the generators of \(Y(\mathfrak{gl}_{M|N})\) by setting \(\hat{t}^{(m)}_{i,j}=\hbar^{m-1}\,t^{(m)}_{i,j}\) for all \(i,j\in I,m\geq 1\), which turns relation (2.2) into the following form
\[\left[\hat{t}^{(m+1)}_{ij},\,\hat{t}^{(n)}_{kl}\right]-\left[\hat{t}^{(m)}_{ij},\,\hat{t}^{(n+1)}_{kl}\right]=(-1)^{|i||j|+|i||k|+j||k|}\hbar\left(\hat{t}^{(m )}_{kj}\,\hat{t}^{(n)}_{il}-\hat{t}^{(n)}_{kj}\,\hat{t}^{(m)}_{il}\right). \tag{2.4}\]
The new superalgebra is called the _quantum super Yangian_ and is denoted by \(Y_{\hbar}(\mathfrak{gl}_{M|N})\). The superalgebras \(Y_{\hbar}(\mathfrak{gl}_{M|N})\) for all \(\hbar\in\mathbb{C}^{*}\) are pairwise isomorphic. It is a deformation of \(U(\mathfrak{gl}_{M|N}[x])\), recovered in the limit \(\hbar\to 0\).
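To see this limit concretely, note that at \(\hbar=0\) relation (2.4) says that the bracket \(\big[\hat{t}^{(m)}_{ij},\hat{t}^{(n)}_{kl}\big]\) depends only on \(m+n\); since the \(m=1\) case of (2.1) is unaffected by the rescaling, a straightforward induction (we only sketch the standard argument here) gives, at \(\hbar=0\),

\[\left[\,\hat{t}^{(m)}_{ij},\,\hat{t}^{(n)}_{kl}\,\right]=(-1)^{|i||j|+|i||k|+|j||k|}\left(\delta_{kj}\,\hat{t}^{(m+n-1)}_{il}-\delta_{il}\,\hat{t}^{(m+n-1)}_{kj}\right),\]

which is the super-bracket of \(\mathfrak{gl}_{M|N}[x]\) once \(\hat{t}^{(m)}_{ij}\) is matched, up to signs, with \(E_{ij}\otimes x^{m-1}\).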
### Quantum loop superalgebra \(\mathrm{U}_{q}\big{(}\mathfrak{Lgl}_{M|N}\big{)}\)
Let \(\mathfrak{Lgl}_{M|N}=\mathfrak{gl}_{M|N}\otimes\mathbb{C}[x,x^{-1}]\) be the loop superalgebra associated to the general linear Lie superalgebra. It is a Lie superalgebra generated by the elements \(e_{i,r}\), \(f_{i,r}\), \(h_{j,r}\) for \(i\in I^{\prime}\), \(j\in I\). Here \(z_{r}=z\otimes x^{r}\) for \(r\in\mathbb{Z}\) and \(z\in\mathfrak{gl}_{M|N}\). The Perk-Schultz solution of the quantum Yang-Baxter equation associated with the tensor product of natural representation \(V\otimes V\) over \(\mathrm{U}_{q}(\mathfrak{gl}_{M|N})\) is given by (c.f. [30, 35])
\[\Re_{q}(u,v)= \sum_{i,j\in I}(uq^{\delta_{ij}}_{i}-vq^{-\delta_{ij}}_{i})E_{ii} \otimes E_{jj}+u\sum_{i<j}\big{(}q_{i}-q^{-1}_{i}\big{)}E_{ji}\otimes E_{ij}\] \[+v\sum_{i<j}\big{(}q_{j}-q^{-1}_{j}\big{)}\,E_{ij}\otimes E_{ji} \in\big{(}\mathrm{End}\big{(}V^{\otimes 2}\big{)}\big{)}[u,v],\]
which satisfies the following quantum Yang-Baxter equation
\[\Re_{q}^{12}(u,v)\Re_{q}^{13}(u,w)\Re_{q}^{23}(v,w)=\Re_{q}^{23}(v,w)\Re_{q}^ {13}(u,w)\Re_{q}^{12}(u,v).\]
Here \(q\in\mathbb{C}\) is non-zero and not a root of unity, \(q_{i}:=q^{(-1)^{|i|}}\), and \(\Re_{q}^{ij}\) \((1\leq i<j\leq 3)\) is obtained from \(\Re_{q}\) similarly as in Section 2.1.
Setting \(\Re_{q}=\Re_{q}(1,0)\) and \(\widetilde{\Re}_{q}=P\Re_{q}^{-1}P\), we have \(\Re_{q}(u,v)=u\Re_{q}-v\widetilde{\Re}_{q}\) and \(\Re_{q}-\widetilde{\Re}_{q}=\big{(}q-q^{-1}\big{)}\,P\). The quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) is then defined via the Faddeev-Reshetikhin-Takhtajan presentation as follows.
**Definition 2.1**.: The _quantum loop superalgebra_\(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) is an associative superalgebra generated by the set of generators \(\{T^{(r)}_{ij},\widetilde{T}^{(r)}_{ij}\big{|}\,\,i,j\in I,r\in\mathbb{Z}_{ \geq 0}\}\) with the parity \(\big{|}\,T^{(r)}_{ij}\big{|}=\big{|}\,\widetilde{T}^{(r)}_{ij}\big{|}=|i|+|j|\), satisfying the following \(RT\,T\)-relations
\[T^{(0)}_{ji}=\widetilde{T}^{(0)}_{ij}=0,\,\,\mathrm{if}\,\,\,1 \leq i<j\leq M+N, \tag{2.5}\] \[T^{(0)}_{ii}\,\widetilde{T}^{(0)}_{ii}=\widetilde{T}^{(0)}_{ii}T^ {(0)}_{ii}=1,\,\,\mathrm{if}\,\,\,i\in I,\] (2.6) \[\Re_{q}^{12}(u,v)T^{1}(u)\,T^{2}(v)=T^{2}(v)\,T^{1}(u)\Re_{q}^{12 }(u,v),\] (2.7) \[\Re_{q}^{12}(u,v)\,\widetilde{T}^{1}(u)\,\widetilde{T}^{2}(v)= \widetilde{T}^{2}(v)\,\widetilde{T}^{1}(u)\Re_{q}^{12}(u,v),\] (2.8) \[\Re_{q}^{12}(u,v)\,\widetilde{T}^{1}(u)\,T^{2}(v)=T^{2}(v)\, \widetilde{T}^{1}(u)\Re_{q}^{12}(u,v), \tag{2.9}\]
where the matrices \(T(u)\) and \(\widetilde{T}(u)\) have the forms
\[T(u)=\sum_{i,j\in I}E_{ij}\otimes T_{ij}(u)\ \ \ \text{with}\ \ T_{ij}(u)=\sum_{r\geq 0 }T_{ij}^{(r)}u^{r},\]
and
\[\widetilde{T}(u)=\sum_{i,j\in I}E_{ij}\otimes\widetilde{T}_{ij}(u)\ \ \ \text{with}\ \ \widetilde{T}_{ij}(u)=\sum_{r\geq 0}\widetilde{T}_{ij}^{(r)}u^{-r}.\]
It is a Hopf superalgebra endowed with the comultiplication
\[\Delta_{2}(T_{ij}^{(r)})=\sum_{k\in I}\sum_{p=0}^{r}\varepsilon_{ik;kj}T_{ik}^ {(p)}\otimes T_{kj}^{(r-p)},\ \ \ \Delta_{2}(\widetilde{T}_{ij}^{(r)})=\sum_{k\in I}\sum_{p=0}^{r} \varepsilon_{ik;kj}\,\widetilde{T}_{ik}^{(p)}\otimes\widetilde{T}_{kj}^{(r-p)},\]
where \(\varepsilon_{ab;cd}=(-1)^{(|a|+|b|)(|c|+|d|)}\).
We often use a more explicit expression of the generators for relations (2.7)\(-\)(2.9). For example, we can rewrite relation (2.7) as
\[\begin{split}& q_{i}^{-\delta_{ik}}T_{ij}^{(r+1)}T_{kl}^{(s)}-q_{i }^{\delta_{ik}}T_{ij}^{(r)}T_{kl}^{(s+1)}-\varepsilon_{ij;kl}\left(q_{j}^{- \delta_{jl}}T_{kl}^{(s)}T_{ij}^{(r+1)}-q_{j}^{\delta_{jl}}T_{kl}^{(s+1)}T_{ij }^{(r)}\right)\\ =&\varepsilon_{ik;kl}\left(q_{k}-q_{k}^{-1}\right) \left(\delta_{k<i}\,T_{kj}^{(r)}T_{il}^{(s+1)}-\delta_{j<l}\,T_{kj}^{(s+1)}\,T _{il}^{(r)}+\delta_{l<k}\,T_{kj}^{(r+1)}\,T_{il}^{(s)}-\delta_{l<j}\,T_{kj}^{( s)}\,T_{il}^{(r+1)}\right).\end{split} \tag{2.10}\]
_Remark 2.2_.: For \(N=0\), we set \(O\) to be the matrix \(\sum\limits_{i=1}^{M}E_{i,M+1-i}\), then the \(R\)-matrix \(\Re_{q}(u,v)\) satisfies the following equation
\[(O\otimes O)\Re_{q}(u,v)(O\otimes O)=uvR(-u^{-1},-v^{-1}),\]
where \(R(u,v)\) is the non-super trigonometric \(R\)-matrix (c.f. [25, Formula 3.4]). Then the algebra \(X\) generated by the coefficients of the matrix elements of the matrices \(OT(-u^{-1})O\) and \(O\widetilde{T}(-u^{-1})O\) with relations (2.5)\(-\)(2.9) is isomorphic to \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M})\). Similarly, it coincides with the quantum loop algebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{N})\) for \(M=0\).
The quantum loop superalgebra has another equivalent presentation, called the Drinfeld realization. A standard method to construct the Drinfeld realization of the quantum loop superalgebra is the Ding-Frenkel homomorphism via the \(RT\,T\)-realization. The elements \(T(u)\) and \(\widetilde{T}(u)\) can be expressed uniquely, in \(\mathrm{End}(V)\otimes\mathrm{U}_{q}\big{(}\mathfrak{Lgl}_{M|N}\big{)}\,\big{[}\big{[}u,u^{-1}\big{]}\big{]}\), by the Gauss decomposition below; more details can be found in [7, 35, 38].
\[\begin{split}& T(u)=\left(1\otimes 1+\sum_{i<j}E_{ji}\otimes f_{ji}^{ +}(u)\right)\left(\sum_{j}E_{jj}\otimes k_{j}^{+}(u)\right)\left(1\otimes 1+ \sum_{i<j}E_{ij}\otimes e_{ij}^{+}(u)\right),\\ &\widetilde{T}(u)=\left(1\otimes 1+\sum_{i<j}E_{ji}\otimes f_{ji}^{ -}(u)\right)\left(\sum_{j}E_{jj}\otimes k_{j}^{-}(u)\right)\left(1\otimes 1+ \sum_{i<j}E_{ij}\otimes e_{ij}^{-}(u)\right),\end{split} \tag{2.11}\]
where \(k_{j}^{\pm}(u)=\sum_{r\in\mathbb{Z}_{\geq 0}}k_{j,\mp r}^{\pm}u^{\pm r}\) are invertible for every \(j\in I\). Let
\[X_{i}^{+}(u)=\sum_{r\in\mathbb{Z}}X_{i,r}^{+}u^{-r}=e_{i,i+1}^{+}(u)-e_{i,i+1}^{ -}(u),\quad X_{i}^{-}(u)=\sum_{r\in\mathbb{Z}}X_{i,r}^{-}u^{-r}=f_{i+1,i}^{+}(u )-f_{i+1,i}^{-}(u),\]
where \(i\in I^{\prime}\). Using the formal power series \(X_{i}^{\pm}(u)\) and \(k_{j}^{\pm}(u)\), we give the defining relations of the quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) as follows.
**Definition 2.3**.: The _quantum loop superalgebra_ \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) is an associative \(\mathbb{C}(q)\)-superalgebra generated by \(X_{i}^{\pm}(u)\), \(k_{j}^{\pm}(u)\) with \(i\in I^{\prime}\), \(j\in I\), satisfying the relations
\[k_{j,0}^{+}k_{j,0}^{-}=k_{j,0}^{-}k_{j,0}^{+}=1,\quad k_{j,0}^{\pm}=q_{j}^{\pm h_{j}}, \tag{2.12}\] \[k_{j}^{\pm}(u)k_{j^{\prime}}^{\pm}(v)=k_{j^{\prime}}^{\pm}(v)k_{j}^{\pm}(u),\quad k_{j}^{\pm}(u)k_{j^{\prime}}^{\mp}(v)=k_{j^{\prime}}^{\mp}(v)k_{j}^{\pm}(u), \tag{2.13}\] \[k_{j}^{\pm}(u)X_{i}^{\epsilon}(v)k_{j}^{\pm}(u)^{-1}=X_{i}^{\epsilon}(v),\ \ \mathrm{if}\ \ j\neq i,i+1, \tag{2.14}\] \[k_{i}^{\pm}(u)X_{i}^{\epsilon}(v)k_{i}^{\pm}(u)^{-1}=\left(\frac{uq_{i}-vq_{i}^{-1}}{u-v}\right)^{-\epsilon}X_{i}^{\epsilon}(v), \tag{2.15}\] \[k_{i+1}^{\pm}(u)X_{i}^{\epsilon}(v)k_{i+1}^{\pm}(u)^{-1}=\left(\frac{uq_{i+1}^{-1}-vq_{i+1}}{u-v}\right)^{-\epsilon}X_{i}^{\epsilon}(v), \tag{2.16}\] \[k_{l}^{\pm}(u)X_{M}^{\epsilon}(v)k_{l}^{\pm}(u)^{-1}=\left(\frac{uq-vq^{-1}}{u-v}\right)^{-\epsilon}X_{M}^{\epsilon}(v),\ \ \mathrm{if}\ \ l=M,M+1, \tag{2.17}\] \[X_{i}^{\pm}(u)X_{i^{\prime}}^{\pm}(v)=X_{i^{\prime}}^{\pm}(v)X_{i}^{\pm}(u),\ \ \mathrm{if}\ \ i^{\prime}\neq i,i\pm 1, \tag{2.18}\] \[X_{i}^{\pm}(u)X_{i}^{\pm}(v)=\frac{uq_{i}^{\pm 1}-vq_{i}^{\mp 1}}{uq_{i}^{\mp 1}-vq_{i}^{\pm 1}}X_{i}^{\pm}(v)X_{i}^{\pm}(u), \tag{2.19}\] \[X_{i}^{\pm}(u)X_{i+1}^{\pm}(v)=\left(\frac{uq_{i}-vq_{i}^{-1}}{u-v}\right)^{\mp 1}X_{i+1}^{\pm}(v)X_{i}^{\pm}(u), \tag{2.20}\] \[X_{M}^{\pm}(u)X_{M}^{\pm}(v)+X_{M}^{\pm}(v)X_{M}^{\pm}(u)=0, \tag{2.21}\] \[\left[X_{i}^{+}(u),\,X_{i^{\prime}}^{-}(v)\right]=\delta_{ii^{\prime}}(q_{i}-q_{i}^{-1})\left\{\delta\left(\frac{v}{u}\right)k_{i+1}^{-}(v)k_{i}^{-}(v)^{-1}-\delta\left(\frac{u}{v}\right)k_{i+1}^{+}(u)k_{i}^{+}(u)^{-1}\right\}, \tag{2.22}\] \[\left[\left[\left[X_{M-1}^{\epsilon}(u_{1}),\,X_{M}^{\epsilon}(v_{1})\right]_{q},\ X_{M+1}^{\epsilon}(u_{2})\right]_{q^{-1}},\ X_{M}^{\epsilon}(v_{2})\right]+\left\{v_{1}\leftrightarrow v_{2}\right\}=0, \tag{2.24}\]
where \(i,i^{\prime}\in I^{\prime}\), \(j,j^{\prime}\in I\), \(\epsilon=\pm\) or \(\pm 1\), the formal power series \(\delta(u)=\sum_{r\in\mathbb{Z}}u^{r}\), and the notation
\[\left[x,y\right]_{q}=xy-(-1)^{|x||y|}qyx\]
for homogeneous elements \(x,y\in\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\).
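For example, at \(q=1\) this bracket reduces to the ordinary super-commutator, \([x,y]_{1}=xy-(-1)^{|x||y|}yx\).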
Let \(\mathbb{A}\) be the localization of \(\mathbb{C}[q]\) at the ideal \((q-1)\), given by
\[\mathbb{A}=\left\{\frac{f(q)}{g(q)}\Big{|}f(q),g(q)\in\mathbb{C}[q],g(1)\neq 0 \right\}.\]
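For instance, \(q^{-1}\) and \(\frac{1}{q+1}\) belong to \(\mathbb{A}\), whereas \(\frac{1}{q-1}\) does not, since its denominator vanishes at \(q=1\).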
For simplicity, we denote
\[\left[q^{\lambda};\ n\right]=\frac{q^{\lambda}q^{n}-1}{q-1},\quad\Phi_{i}^{\pm}(u)=\sum_{r\geq 0}\Phi_{i,\pm r}^{\pm}u^{\mp r}=k_{i+1}^{\mp}(u)k_{i}^{\mp}(u)^{-1},\] \[\breve{X}_{i,n}^{\pm}=\frac{X_{i,n}^{\pm}}{q_{i}-q_{i}^{-1}},\quad\breve{\Phi}_{i,n}=\frac{\Phi_{i,n}^{+}-\Phi_{i,n}^{-}}{q_{i}-q_{i}^{-1}},\quad\breve{k}_{j,\mp\ell}^{\pm}=\frac{k_{j,\mp\ell}^{\pm}}{q_{j}-q_{j}^{-1}},\]
for \(\lambda\in\mathbb{P}^{\vee}\), \(a\in\mathbb{C}(q)\), \(n\in\mathbb{Z}\), \(\ell\in\mathbb{Z}_{>0}\), \(i\in I^{\prime}\) and \(j\in I\) (c.f. [2, 15]).
**Definition 2.4**.: The \(\mathbb{A}\)-form \(\mathrm{U}_{\mathbb{A}}\) denotes the \(\mathbb{A}\)-sub-superalgebra of \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) generated by the elements \(q_{j}^{\pm h_{j}}\), \(\left[q_{j}^{\pm h_{j}};\ 0\right]\), \(\breve{k}_{j,\mp\ell}^{\pm}\), and \(\breve{X}_{i,n}^{\pm}\) with \(i\in I^{\prime}\), \(j\in I\), \(n\in\mathbb{Z}\), \(\ell\in\mathbb{Z}_{>0}\).
Let \(\mathrm{U}_{\mathbb{A}}^{+}\) (resp. \(\mathrm{U}_{\mathbb{A}}^{-}\)) be the \(\mathbb{A}\)-sub-superalgebra of \(\mathrm{U}_{\mathbb{A}}\) generated by the elements \(\breve{X}_{i,n}^{+}\) (resp. \(\breve{X}_{i,n}^{-}\)), and let \(\mathrm{U}_{\mathbb{A}}^{0}\) be the \(\mathbb{A}\)-sub-superalgebra of \(\mathrm{U}_{\mathbb{A}}\) generated by the elements \(q_{j}^{\pm h_{j}}\), \(\left[q_{j}^{\pm h_{j}};\ 0\right]\), and \(\breve{k}_{j,\mp\ell}^{\pm}\). The triangular decomposition of superalgebra \(\mathrm{U}_{\mathbb{A}}\) can be given by
\[\mathrm{U}_{\mathbb{A}}\cong\mathrm{U}_{\mathbb{A}}^{-}\otimes\mathrm{U}_{ \mathbb{A}}^{0}\otimes\mathrm{U}_{\mathbb{A}}^{+}.\]
For the non-super case, we refer to [15].
**Lemma 2.5**.: _For any \(j\in I\), \(n\in\mathbb{Z}\), we have \(\left[1;\ n\right]\in\mathbb{A}\) and \(\left[\ q_{j}^{h_{j}};\ n\right]\in\mathrm{U}_{\mathbb{A}}^{0}\)._
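For example, for \(n>0\) one computes \([1;\,n]=\frac{q^{n}-1}{q-1}=1+q+\cdots+q^{n-1}\in\mathbb{A}\) and \([1;\,-n]=-q^{-n}\left(1+q+\cdots+q^{n-1}\right)\in\mathbb{A}\); the second claim then follows from the identity \(\left[q_{j}^{h_{j}};\,n\right]=q^{n}\left[q_{j}^{h_{j}};\,0\right]+[1;\,n]\) used in the proof of Lemma 2.7 below.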
**Proposition 2.6**.: _The generators of \(\mathrm{U}_{\mathbb{A}}\) satisfy the following relations,_
\[q_{j}^{h_{j}}q_{j}^{-h_{j}}=q_{j}^{-h_{j}}q_{j}^{h_{j}}=1, \tag{2.25}\] \[q_{j}^{h_{j}}\breve{X}_{i,r}^{+}q_{j}^{-h_{j}}=q_{j}^{\pm(\delta _{ij}-\delta_{i+1,j})}\breve{X}_{i,r}^{\pm},\] (2.26) \[\left[\breve{k}_{j,\mp n}^{\pm},\breve{k}_{j^{\prime},\mp m}^{\pm }\right]=\left[\breve{k}_{j,\mp n}^{\pm},\breve{k}_{j^{\prime},\pm m}^{\mp} \right]=0,\] (2.27) \[\left[\ q_{j}^{h_{j}};\ 0\right]\breve{X}_{i,r}^{\pm}=\breve{X}_{i,r}^{ \pm}\left[\ q_{j}^{h_{j}};\ \pm(\delta_{ij}-\delta_{i+1,j})\right],\] (2.28) \[\breve{k}_{j,\mp n}^{\pm}\breve{X}_{i,r}^{\varepsilon}=\breve{X} _{i,r}^{\varepsilon}\breve{k}_{j,\mp n}^{\pm},\ \text{if }\ j\neq i,i+1,\] (2.29) \[q_{l}^{\pm(\delta_{i+1,l}-\delta_{il})}\breve{k}_{l,\mp 1}^{\pm} \breve{X}_{i,r\pm 1}^{+}-\breve{X}_{i,r\pm 1}^{+}\breve{k}_{l,\mp 1}^{\pm}=\pm \left(\delta_{il}-\delta_{i+1,l}\right)q_{l}^{\pm h_{l}}\breve{X}_{i,r}^{+},\] (2.30) \[\breve{k}_{l,\mp 1}^{\pm}\breve{X}_{i,r\pm 1}^{-}-q_{l}^{\pm( \delta_{i+1,l}-\delta_{il})}\breve{X}_{i,r\pm 1}^{-}\breve{k}_{l,\mp 1}^{\pm}=\pm \left(\delta_{i+1,l}-\delta_{il}\right)\breve{X}_{i,r}^{-}q_{l}^{\pm h_{l}}\] (2.31) \[q_{l}^{\pm(\delta_{il}-\delta_{i+1,l})}\breve{k}_{l,\mp n}^{\pm} \breve{X}_{i,r}^{+}-\breve{X}_{i,r}^{+}\breve{k}_{l,\mp n}^{\pm}=q_{l}^{\mp( \delta_{il}-\delta_{i+1,l})}\breve{k}_{l,\mp(n+1)}^{\pm}\breve{X}_{i,r\pm 1}^{+}- \breve{X}_{l,r\pm 1}^{+}\breve{k}_{l,\mp(n+1)}^{\pm},\] (2.32) \[\breve{k}_{l,\mp n}^{\pm}\breve{X}_{i,r}^{-}-q_{l}^{\pm(\delta_{il }-\delta_{i+1,l})}\breve{X}_{i,r}^{-}\breve{k}_{l,\mp n}^{\pm}=\breve{k}_{l, \mp(n+1)}^{\pm}\breve{X}_{i,r\pm 1}^{-}-q_{l}^{\mp(\delta_{il}-\delta_{i+1,l})}\breve{X}_{i,r\pm 1}^{-} \breve{k}_{l,\mp(n+1)}^{\pm},\] (2.33) \[\left[\breve{X}_{i,r}^{+},\breve{X}_{i^{\prime},s}^{-}\right]= \delta_{i\bar{l}^{\prime}}\breve{\Phi}_{i,r+s},\] (2.34) \[\left[\breve{X}_{i,r+1}^{\pm},\breve{X}_{i^{\prime},s}^{\pm} \right]_{q^{\pm a_{i\bar{l}^{\prime}}}}+\varepsilon_{i,l+1;l^{\prime},\bar{l }^{\prime}+1}\left[\breve{X}_{i^{\prime},s+1}^{\pm},\breve{X}_{i,r}^{\pm} \right]_{q^{\pm a_{i\bar{l}^{\prime}}}}=0,\quad\text{if }\ a_{i\bar{l}^{\prime}}\neq 0,\] (2.35) \[\left[\breve{X}_{i,r}^{\pm},\breve{X}_{i^{\prime},s}^{\pm}\right]=0, \quad\text{if }\ a_{i\bar{l}^{\prime}}=0,\] (2.36) \[\left[\breve{X}_{i,r_{1}}^{\varepsilon},\left[\breve{X}_{i,r_{2}}^{ \varepsilon},\breve{X}_{i\pm 1,s}^{\varepsilon}\right]_{q}\right]_{q^{-1}}+\left\{r_{1} \leftrightarrow r_{2}\right\}=0,\quad\text{if }\ i\neq M,\] (2.37) \[\left[\left[\left[\breve{X}_{M-1,r_{1}}^{\varepsilon},\breve{X}_{M, s_{1}}^{\varepsilon}\right]_{q},\breve{X}_{M,r_{2}}^{\varepsilon}\right]_{q^{-1}},\breve{X}_{M,s_{2}}^{ \varepsilon}\right]+\left\{s_{1}\leftrightarrow s_{2}\right\}=0. \tag{2.38}\]
_where \(r,s,r_{1},s_{1},r_{2},s_{2}\in\mathbb{Z}\), \(n,m\in\mathbb{Z}_{>0}\), \(j,j^{\prime}\in I\), \(i,i^{\prime}\in I^{\prime}\), \(l=i,i+1\) and \(\epsilon=\pm\)._
**Lemma 2.7**.: _The element \(q^{\lambda}\equiv 1\ modulo\ (q-1)\mathrm{U}_{\mathbb{A}}\) for \(\lambda\in\mathbb{P}^{\vee}\). Moreover, for any non-zero integer \(n\), \(\left[q^{\lambda};n\right]\equiv\left[q^{\lambda};\,0\right]+n\ modulo\ (q-1)\mathrm{U}_{\mathbb{A}}\)._
Proof.: Since \(\left[q_{j}^{h_{j}};\,0\right]\in\mathrm{U}_{\mathbb{A}}\), we have
\[q_{j}^{h_{j}}-1=(q-1)\left[q_{j}^{h_{j}};\,0\right]\in(q-1)\mathrm{U}_{ \mathbb{A}}.\]
This implies \(q^{\lambda}\equiv 1\ modulo\ (q-1)\mathrm{U}_{\mathbb{A}}\) for any \(\lambda\in\mathbb{P}^{\vee}\). In addition, one gets
\[\left[q_{j}^{h_{j}};\,n\right]=q^{n}\left[q_{j}^{h_{j}};\,0\right]+\left[1;\,n \right]\equiv\left[q_{j}^{h_{j}};\,0\right]+n\]
modulo \((q-1)\mathrm{U}_{\mathbb{A}}\).
Define \(\mathrm{U}_{1}=\mathrm{U}_{\mathbb{A}}/(q-1)\mathrm{U}_{\mathbb{A}}\). The following theorem is crucial.
**Theorem 2.8**.: _The superalgebra \(\mathrm{U}_{1}\) is isomorphic to \(\mathrm{U}(\mathfrak{Lgl}_{M|N})\). In other words, there exists a surjective superalgebra homomorphism \(\psi:\mathrm{U}_{\mathbb{A}}\longrightarrow\mathrm{U}(\mathfrak{Lgl}_{M|N})\) such that_
\[q\mapsto 1,\ \check{X}_{i,r}^{+}\mapsto e_{i,r},\ \check{X}_{i,r}^{-} \mapsto f_{i,r},\ \left[q_{j}^{h_{j}};\,0\right]\mapsto(-1)^{|j|}h_{j},\ \check{k}_{i,\mp n}^{\pm}\mapsto\pm h_{i,\mp n}\]
_where \(r\in\mathbb{Z}\), \(n\in\mathbb{Z}_{>0}\), \(j\in I\), \(i\in I^{\prime}\), and \(ker\psi=(q-1)\mathrm{U}_{\mathbb{A}}\)._
Proof.: The universal enveloping superalgebra \(\mathrm{U}(\mathfrak{Lgl}_{M|N})\) is generated by \(e_{i,r},f_{i,r}\), \(h_{j,r}\), with the parity \(|e_{i,r}|=|f_{i,r}|=|i|+|i+1|\) and \(|h_{j,r}|=0\), for \(i\in I^{\prime}\), \(j\in I\) and \(r\in\mathbb{Z}\), satisfying the following non-trivial relations
\[\left[h_{j,r},\,h_{j^{\prime},s}\right]=0, \tag{2.39}\] \[\left[h_{j,r},\,e_{i,s}\right]=\left(\delta_{ji}-\delta_{j,i+1} \right)e_{i,r+s},\quad\left[h_{j,r},\,f_{i,s}\right]=\left(\delta_{j,i+1}- \delta_{ji}\right)f_{j,r+s},\] (2.40) \[\left[e_{i,r},\,f_{i^{\prime},s}\right]=\delta_{ii^{\prime}} \alpha_{i,r+s}^{\vee},\] (2.41) \[e_{i,r}e_{i^{\prime},s}=e_{i^{\prime},s}e_{i,r},\quad f_{i,r}f_{ i^{\prime},s}=f_{i^{\prime},s}f_{i,r},\quad\text{if}\ \ i^{\prime}\neq i,i\pm 1,\] (2.42) \[\left[e_{i,r_{1}},\,\left[e_{i,r_{2}},\,e_{i\pm 1,s}\right] \right]+\left\{r_{1}\mapsto r_{2}\right\}=\left[f_{i,r_{1}},\,\left[f_{i,r_{2} },\,f_{i\pm 1,s}\right]\right]+\left\{r_{1}\mapsto r_{2}\right\}=0,\quad\text{if} \ i\neq M,\] (2.43) \[e_{M,r}e_{M,s}+e_{M,s}e_{M,r}=f_{M,r}f_{M,s}+f_{M,s}f_{M,r}=0,\] (2.44) \[\left[\left[\left[e_{M-1,s},\,e_{M,r_{1}}\right],\,e_{M+1,s_{2}} \right],\,e_{M,r_{2}}\right]+\left\{r_{1}\mapsto r_{2}\right\}=0,\] (2.45) \[\left[\left[\left[f_{M-1,s_{1}},\,f_{M,r_{1}}\right],\,f_{M+1,s_{ 2}}\right],\,f_{M,r_{2}}\right]+\left\{r_{1}\mapsto r_{2}\right\}=0. \tag{2.46}\]
where \(r,s,r_{1},s_{1},r_{2},s_{2}\in\mathbb{Z}\), \(j,j^{\prime}\in I\), \(i,i^{\prime}\in I^{\prime}\).
To prove \(\psi\) is a homomorphism, it is enough to show that all relations in Proposition 2.6 are invariant under \(\psi\). Relations (2.25) and (2.26) are trivial. The correspondence of \(\psi\) between generators of \(\mathrm{U}_{\mathbb{A}}\) and \(\mathrm{U}(\mathfrak{Lag}\mathfrak{l}_{M|N})\) implies that relations (2.35)\(-\)(2.38)(resp. relation (2.27)) are equivalent to the relations (2.42)\(-\)(2.46) (resp. relation (2.39) except for the pair
\((r,s)=(0,0)\)). We are left to check relations (2.28) - (2.34). Relation (2.28) follows from the second statement of Lemma 2.7 and relation (2.40) for \(r=0\). Relations (2.29)\(-\) (2.33) follow from relation (2.40). For relation (2.34), we have
\[\check{\Phi}_{i,\pm n}\equiv(-1)^{|i|+|i+1|}\check{k}_{i+1,\pm n}^{\mp}-\check{ k}_{i,\pm n}^{\mp}\]
modulo \((q-1)U_{\text{A}}\) by using induction on \(n>0\). If \(n=0\),
\[\check{\Phi}_{i,0}=\frac{k_{i+1,0}^{-}\big{(}k_{i,0}^{-}\big{)}^{-1}-k_{i+1,0} ^{+}\big{(}k_{i,0}^{+}\big{)}^{-1}}{q_{i}-q_{i}^{-1}}=(-1)^{|i|}\frac{1+q_{i}^ {-a_{i}^{\vee}}}{1+q^{-1}}\left(q_{i+1}^{-h_{i+1}}\left[q_{i}^{h_{i}};\,0 \right]+\left[q_{i+1}^{-h_{i+1}};\,0\right]\right).\]
Then \(\psi(\check{\Phi}_{i,\pm n})=\alpha_{i,\pm n}^{\vee}\) and relation (2.34) can be checked.
The surjectivity of \(\psi\) is clear, so it remains to prove \(\ker\psi=(q-1)\mathrm{U}_{\mathbb{A}}\). It is clear that \(\ker\psi\supseteq(q-1)\mathrm{U}_{\mathbb{A}}\). We claim that relations (2.25)\(-\)(2.34) imply relations (2.39)\(-\)(2.46) modulo \((q-1)\mathrm{U}_{\mathbb{A}}\). By the previous discussion, we only need to prove relation (2.39) for \((r,s)=(0,0)\) and relation (2.40). Relation (2.39) in this case is trivial. Relations (2.29)\(-\)(2.33) imply
\[\left[\check{k}_{j,\pm n}^{\mp},\,\check{X}_{i,r}^{\epsilon}\right]\equiv \left[\check{k}_{j,\pm 1}^{\mp},\,\check{X}_{i,r\pm(n-1)}^{\epsilon}\right]=\pm \epsilon\big{(}\delta_{i,j}-\delta_{i+1,j}\big{)}\check{X}_{i,r\pm n}^{\epsilon}\]
modulo \((q-1)\text{U}_{\text{A}}\) for \(n\geq 1\), \(r\in\mathbb{Z}\) and \(\epsilon=\pm\). For relation (2.40) in the case of \(r=0\), one has
\[\left[\,q_{j}^{h_{j}};\,0\,\right]\check{X}_{i,r}^{\pm}-\check{X} _{i,r}^{\pm}\left[\,q_{j}^{h_{j}};\,0\,\right] =\check{X}_{i,r}^{\pm}\left(\left[q_{j}^{h_{j}};\,\pm(\delta_{i,j }-\delta_{i+1,j})\right]-\left[q_{j}^{h_{j}};\,0\,\right]\right)\] \[=\check{X}_{i,r}^{\pm}q_{j}^{h_{j}}\left[1;\,\pm\big{(}\delta_{i,j }-\delta_{i+1,j}\big{)}\right]\equiv\pm\big{(}\delta_{i,j}-\delta_{i+1,j}\big{)} \check{X}_{i,r}^{\pm}q_{j}^{h_{j}}\]
modulo \((q-1)\text{U}_{\text{A}}\) from relation (2.28). Therefore, \(ker\psi\subseteq(q-1)\text{U}_{\text{A}}\).
_Remark 2.9_.: We can construct the Drinfeld realization of quantum loop superalgebra \(\text{U}_{q}(\mathfrak{L}\mathfrak{sl}_{M|N})\) by taking \(K_{i}^{\pm}(u)=(-1)^{|i|}\Phi_{i}^{\pm}(uq_{i}^{i-M})\) and \(x_{i}^{\pm}(u)=\left(q_{i}-q_{i}^{-1}\right)^{-1}X_{i}^{\pm}(uq_{i}^{i-M})\) as in [2] with \(q^{c}=q^{d}=1\). Note that \(K_{i,0}^{\pm}=(-1)^{|i|}q_{i}^{\pm\alpha_{i}^{\vee}}\) for \(i\in I^{\prime}\) and define the elements \(a_{i;\pm\ell}(\ell>0)\) by
\[K_{i}^{\pm}(u):=\sum_{r\geq 0}K_{i,\pm r}^{\pm}u^{\mp r}=K_{i,0}^{\pm}\exp \left(\pm\left(q-q^{-1}\right)\sum_{\ell>0}a_{i;\pm\ell}u^{\mp\ell}\right).\]
Then the \(\mathbb{A}\)-form of \(\text{U}_{q}(\mathfrak{L}\mathfrak{sl}_{M|N})\) is generated by the elements \(x_{i,r}^{\pm},\,a_{i;\pm\ell},\,q_{i}^{a_{i}^{\vee}},\,\left[q_{i}^{a_{i}^{ \vee}};\,0\right]\), and \((q-q^{-1})^{-1}K_{i,\pm\ell}^{\pm}\) with \(i\in I^{\prime}\), \(r\in\mathbb{Z}\), \(\ell\in\mathbb{Z}_{>0}\), satisfying certain relations listed in [2].
In order to investigate the relationship between the quantum loop superalgebra and the super Yangian, we need to rewrite the \(\mathbb{A}\)-form of the quantum loop superalgebra associated with \(\mathfrak{gl}_{M|N}\) in terms of the \(RTT\)-generators. Here we use the isomorphism between the two equivalent presentations. The Gauss decomposition (2.11) shows that
\[T_{jj}^{(0)}=q_{j}^{h_{j}},\qquad\widetilde{T}_{jj}^{(0)}=q_{j}^{-h_{j}},\]
and for \(r>0\), we have
\[\left(q_{j}-q_{j}^{-1}\right)^{-1}T^{(r)}_{jj} \equiv\breve{k}^{+}_{j,-r}, \left(q_{j}-q_{j}^{-1}\right)^{-1}\widetilde{T}^{(r)}_{jj} \equiv\breve{k}^{-}_{j,r},\] \[\left(q_{i}-q_{i}^{-1}\right)^{-1}T^{(r)}_{i,i+1} \equiv q_{i}^{h_{i}}\breve{X}^{+}_{i,-r}, \left(q_{i}-q_{i}^{-1}\right)^{-1}\widetilde{T}^{(r)}_{i,i+1} \equiv-q_{i}^{-h_{i}}\breve{X}^{+}_{i,r},\] \[\left(q_{i}-q_{i}^{-1}\right)^{-1}T^{(r)}_{i+1,i} \equiv\breve{X}^{-}_{i,-r}q_{i}^{h_{i}}, \left(q_{i}-q_{i}^{-1}\right)^{-1}\widetilde{T}^{(r)}_{i+1,i} \equiv-\breve{X}^{-}_{i,r}q_{i}^{-h_{i}},\] \[\left(q_{i}-q_{i}^{-1}\right)^{-1}T^{(0)}_{i,i+1} \equiv q_{i}^{h_{i}}\breve{X}^{+}_{i,0}, \left(q_{i}-q_{i}^{-1}\right)^{-1}\widetilde{T}^{(0)}_{i+1,i} \equiv-\breve{X}^{-}_{i,0}q_{i}^{-h_{i}},\]
modulo \((q-1)\mathrm{U}_{\mathbb{A}}\). Then the superalgebra \(\mathrm{U}_{\mathbb{A}}\) is generated by the following elements
\[\gamma^{(0)}_{ii}=(-1)^{|i|}\frac{T^{(0)}_{i,i}-1}{q-1},\quad \widetilde{\gamma}^{(0)}_{ii}=(-1)^{|i|}\frac{\widetilde{T}^{(0)}_{ii}-1}{q- 1},\]
and for \(r>0\) or \(1\leq i\neq j\leq M+N\),
\[\gamma^{(r)}_{ij}=\frac{T^{(r)}_{ij}}{q_{i}-q_{i}^{-1}},\quad \widetilde{\gamma}^{(r)}_{ij}=\frac{\widetilde{T}^{(r)}_{ij}}{q_{j}-q_{j}^{-1 }}.\]
Therefore, the homomorphism \(\psi\) of Theorem 2.8 from \(\mathrm{U}_{\mathbb{A}}\) to \(\mathrm{U}(\mathfrak{Lgl}_{M|N})\) can be rewritten as
\[\gamma^{(0)}_{ij}\mapsto E_{ij},\quad\widetilde{\gamma}^{(0)}_{ji}\mapsto-E_{ji},\quad\text{for $1\leq i\leq j\leq M+N$},\]
and
\[\gamma^{(r)}_{ij}\mapsto E_{ij}x^{-r},\quad\widetilde{\gamma}^{(r )}_{ij}\mapsto-E_{ij}x^{r},\quad\text{for $i,j\in I,r>0$}.\]
### From quantum loop superalgebra to super Yangian
It is well known that Yangians can be constructed as limit forms of quantum loop algebras; the long-awaited proofs were finally given by Gautam and Toledano Laredo in [13] and by Guay and Ma in [14]. Similar statements for twisted Yangians and twisted quantum loop algebras were established by Conner and Guay [5]. In this subsection, motivated by [5, 14], we aim to establish an isomorphism between the super Yangian \(\mathrm{Y}_{\hbar}(\mathfrak{gl}_{M|N})\) and a limit form of the quantum loop superalgebra \(\mathrm{U}_{q}(\mathfrak{Lgl}_{M|N})\) in terms of the \(RTT\)-presentation. In order to prove the isomorphism, we first introduce some notation. Consider the following sequence of superalgebra homomorphisms
\[\mathrm{U}_{\mathbb{A}}\twoheadrightarrow\mathrm{U}_{\mathbb{A}}/(q-1)\mathrm{ U}_{\mathbb{A}}\stackrel{{\simeq}}{{\longrightarrow}}\mathrm{U}( \mathfrak{Lagl}_{M|N}).\]
Let \(\psi^{\prime}\) be the composite as above. We define the notations \(\Gamma^{(r,m)}_{ij}\) and \(\widetilde{\Gamma}^{(r,m)}_{ij}(r,m\geq 0)\) in the following way. Denote
\[\Gamma^{(0,0)}_{ij}=\gamma^{(0)}_{ij},\quad\widetilde{\Gamma}^{(0,0)}_{ji}=\widetilde{\gamma}^{(0)}_{ji}\quad\text{if $i\leq j$},\]
\[\Gamma^{(0,0)}_{ij}=-(-1)^{|i|+|j|}\widetilde{\gamma}^{(0)}_{ij},\quad\widetilde{\Gamma}^{(0,0)}_{ji}=-(-1)^{|i|+|j|}\gamma^{(0)}_{ji},\quad\text{if $i>j$.}\]
For \(r>0\), we set
\[\Gamma^{(r,0)}_{ij}=\gamma^{(r)}_{ij},\quad\widetilde{\Gamma}^{(r,0)}_{ij}= \widetilde{\gamma}^{(r)}_{ij},\]
and recursively
\[\Gamma^{(r,m+1)}_{ij}=\Gamma^{(r+1,m)}_{ij}-\Gamma^{(r,m)}_{ij},\quad \widetilde{\Gamma}^{(r,m+1)}_{ij}=\widetilde{\Gamma}^{(r+1,m)}_{ij}- \widetilde{\Gamma}^{(r,m)}_{ij}.\]
It is clear that \(\psi^{\prime}\left(\Gamma^{(r,m)}_{ij}\right)=(-1)^{m}E_{ij}x^{-(r+m)}(x-1)^{m}\) and \(\psi^{\prime}\left(\widetilde{\Gamma}^{(r,m)}_{ij}\right)=-E_{ij}x^{r}(x-1)^{m}\) for every \(r\geq 0\).
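For instance, for \(r\geq 1\) and \(m=1\) this can be checked directly: \(\Gamma^{(r,1)}_{ij}=\Gamma^{(r+1,0)}_{ij}-\Gamma^{(r,0)}_{ij}\), so \(\psi^{\prime}\big{(}\Gamma^{(r,1)}_{ij}\big{)}=E_{ij}x^{-(r+1)}-E_{ij}x^{-r}=-E_{ij}x^{-(r+1)}(x-1)\), in agreement with the general formula.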
For \(-m\leq r<0\), define \(\Gamma^{(-m,m)}_{ij}=(-1)^{m+1}\widetilde{\Gamma}^{(0,m)}_{ij}\), and again recursively \(\Gamma^{(r,m+1)}_{ij}=\Gamma^{(r+1,m)}_{ij}-\Gamma^{(r,m)}_{ij}\). This ensures that the definition of \(\Gamma^{(r,m)}_{ij}\) with \(r<0\) is compatible with the formula for \(\psi^{\prime}\big{(}\Gamma^{(r,m)}_{ij}\big{)}\) given above. Introduce a filtration on \(\mathrm{U}_{\mathbb{A}}\)
\[\mathsf{F}_{[0]}\supset\mathsf{F}_{[1]}\supset\mathsf{F}_{[2]}\supset\ldots\]
by setting \(\deg\Gamma^{(r,m)}_{i,j}=m\) and \(\deg\left(q-q^{-1}\right)=1\) and let \(\mathsf{F}_{[m]}\) be the span of all monomials of the form
\[\left(q-q^{-1}\right)^{m_{0}}\Gamma^{(-r_{1},m_{1})}_{i_{s},j_{s}}\Gamma^{(-r _{2},m_{2})}_{i_{s},j_{s}}\ldots\Gamma^{(-r_{k},m_{k})}_{i_{k},j_{k}},\]
where all \(i_{s},j_{s}\in I,0\leq r_{s}\leq\frac{1}{2}(m_{s}-1)\), \(m_{0},m_{s}\geq 0\) for \(1\leq s\leq k\), \(m_{0}+m_{1}+\cdots+m_{k}\geq m\). For \(r>0\), \(m\geq 0\), we have
\[\Gamma^{(r,m)}_{ij}=\Gamma^{(0,m)}_{ij}+\sum_{p=1}^{r}\binom{r}{p }\Gamma^{(0,m+p)}_{ij}\in\mathsf{F}_{[m]},\] \[\Gamma^{(1-m,m)}_{ij}=\Gamma^{(0,m)}_{ij}+\sum_{p=1}^{m-1}(-1)^{p }\binom{m-1}{p}\Gamma^{(-p,m+p)}_{ij}\in\mathsf{F}_{[m]},\] \[\widetilde{\Gamma}^{(0,m)}_{ij}=(-1)^{m+1}\Gamma^{(1-m,m)}_{ij}-( -1)^{m+1}\Gamma^{(-m,m+1)}_{ij}\in\mathsf{F}_{[m]},\] \[\widetilde{\Gamma}^{(r,m)}_{ij}=\widetilde{\Gamma}^{(0,m)}_{ij}+ \sum_{p=1}^{r}\binom{r}{p}\widetilde{\Gamma}^{(0,m+p)}_{ij}\in\mathsf{F}_{[m]}.\]
Therefore,
\[(-1)^{m+1}\widetilde{\Gamma}^{(r,m)}_{ij}=\overline{\Gamma}^{(r,m)}_{ij}= \overline{\Gamma}^{(0,m)}_{ij}\in\mathsf{F}_{[m]}/\mathsf{F}_{[m+1]}. \tag{2.47}\]
Notice that \(\mathsf{F}_{[0]}=\mathrm{U}_{\mathbb{A}}\). Denote the associated graded superalgebra with respect to this filtration by \(\mathrm{grU}_{\mathbb{A}}\), that is, \(\mathrm{grU}_{\mathbb{A}}=\bigoplus_{m=0}^{\infty}\mathsf{F}_{[m]}/\mathsf{F}_{[m+1]}\). Now we give the main result as follows.
**Theorem 2.10**.: _There exists an isomorphism \(\varphi\) of superalgebras from the super Yangian \(\mathrm{Y}_{\hbar}(\mathfrak{gl}_{M|N})\) to the graded algebra \(\mathrm{grU}_{\mathbb{A}}\) such that \(\hat{t}_{ij}^{(m+1)}\mapsto(-1)^{|i||j|}\overline{\Gamma_{ij}^{(0,m)}}\) for \(i,j\in I\) and \(m\geq 0\)._
Proof.: The surjectivity of \(\varphi\) follows from (2.47). The injectivity of \(\varphi\) can be similarly proven (c.f. [5, Theorem 2.2]). We are left to show that \(\varphi\) is a homomorphism.
One gets from (2.10) for \(r,s\geq 1\),
\[q_{i}^{-\delta_{ik}}\Gamma_{ij}^{(r,1)}\Gamma_{kl}^{(s,0)}-q_{i} ^{\delta_{ik}}\Gamma_{ij}^{(r,0)}\Gamma_{kl}^{(s,1)}-\left(q_{i}^{\delta_{ik}} -q_{i}^{-\delta_{ik}}\right)\Gamma_{ij}^{(r,0)}\Gamma_{kl}^{(s,0)}\] \[\qquad\qquad-\varepsilon_{ij;kl}\left\{q_{j}^{-\delta_{jl}} \Gamma_{kl}^{(s,0)}\Gamma_{ij}^{(r,1)}-q_{j}^{\delta_{jl}}\Gamma_{kl}^{(s,1)} \Gamma_{ij}^{(r,0)}-\left(q_{j}^{\delta_{jl}}-q_{j}^{-\delta_{jl}}\right) \Gamma_{kl}^{(s,0)}\Gamma_{ij}^{(r,0)}\right\}\] \[= \,\varepsilon_{ik;kl}\left(q_{k}-q_{k}^{-1}\right)\times\left\{ \delta_{k<i}\Gamma_{kj}^{(r,0)}\Gamma_{il}^{(s,1)}+\delta_{i<k}\Gamma_{kj}^{( r,1)}\Gamma_{il}^{(s,0)}+\left(\delta_{k<i}+\delta_{i<k}\right)\Gamma_{kj}^{(r,0)} \Gamma_{il}^{(s,0)}\right.\] \[\qquad\qquad\left.-\left(\delta_{j<l}\Gamma_{kj}^{(s,1)}\Gamma_ {il}^{(r,0)}+\delta_{l<j}\Gamma_{kj}^{(s,0)}\Gamma_{il}^{(r,1)}+\left(\delta_{ j<l}+\delta_{l<j}\right)\Gamma_{kj}^{(s,0)}\Gamma_{il}^{(r,0)}\right)\right\}.\]
Using induction on \(m,n\geq 0\), we have
\[q_{i}^{-\delta_{ik}}\Gamma_{ij}^{(r,m+1)}\Gamma_{kl}^{(s,n)}-q_ {i}^{\delta_{ik}}\Gamma_{ij}^{(r,m)}\Gamma_{kl}^{(s,n+1)}-\left(q_{i}^{\delta _{ik}}-q_{i}^{-\delta_{ik}}\right)\Gamma_{ij}^{(r,m)}\Gamma_{kl}^{(s,n)}\] \[\qquad\qquad-\varepsilon_{ij;kl}\left\{q_{j}^{-\delta_{jl}} \Gamma_{kl}^{(s,n)}\Gamma_{ij}^{(r,m+1)}-q_{j}^{\delta_{jl}}\Gamma_{kl}^{(s,n+ 1)}\Gamma_{ij}^{(r,m)}-\left(q_{j}^{\delta_{jl}}-q_{j}^{-\delta_{jl}}\right) \Gamma_{kl}^{(s,n)}\Gamma_{ij}^{(r,m)}\right\}\] \[= \,\varepsilon_{ik;kl}\left(q_{k}-q_{k}^{-1}\right)\times\left\{ \delta_{k<l}\Gamma_{kj}^{(r,m)}\Gamma_{il}^{(s,n+1)}+\delta_{i<k}\Gamma_{kj}^{( r,m+1)}\Gamma_{il}^{(s,n)}+\left(\delta_{k<i}+\delta_{i<k}\right)\Gamma_{kj}^{(r,m)} \Gamma_{il}^{(s,n)}\right.\] \[\qquad\qquad-\left(\delta_{j<l}\Gamma_{kj}^{(s,n+1)}\Gamma_{il}^{ (r,m)}+\delta_{l<j}\Gamma_{kj}^{(s,n)}\Gamma_{il}^{(r,m+1)}+\left(\delta_{j<l}+ \delta_{l<j}\right)\Gamma_{kj}^{(s,n)}\Gamma_{il}^{(r,m)}\right)\right\}.\]
Taking the pair \((r,s)=(1,1)\), one gets,
\[q_{i}^{-\delta_{ik}}\Gamma_{ij}^{(1,m+1)}\Gamma_{kl}^{(1,n)}-q_ {i}^{\delta_{ik}}\Gamma_{ij}^{(1,m)}\Gamma_{kl}^{(1,n+1)}-\left(q_{i}^{\delta_{ ik}}-q_{i}^{-\delta_{ik}}\right)\Gamma_{ij}^{(1,m)}\Gamma_{kl}^{(1,n)}\] \[\qquad\qquad-\varepsilon_{ij;kl}\left\{q_{j}^{-\delta_{jl}} \Gamma_{kl}^{(1,n)}\Gamma_{ij}^{(1,m+1)}-q_{j}^{\delta_{jl}}\Gamma_{kl}^{(1,n+ 1)}\Gamma_{ij}^{(1,m)}-\left(q_{j}^{\delta_{jl}}-q_{j}^{-\delta_{jl}}\right) \Gamma_{kl}^{(1,n)}\Gamma_{ij}^{(1,m)}\right\}\] \[= \,\varepsilon_{ik;kl}\left(q_{k}-q_{k}^{-1}\right)\times\left\{ \delta_{k<i}\Gamma_{kj}^{(1,m)}\Gamma_{il}^{(1,n+1)}+\delta_{i<k}\Gamma_{kj}^{( 1,m+1)}\Gamma_{il}^{(1,n)}+\left(\delta_{k<i}+\delta_{i<k}\right)\Gamma_{kj}^{( 1,m)}\Gamma_{il}^{(1,n)}\right.\] \[\qquad\qquad-\left(\delta_{j<l}\Gamma_{kj}^{(1,n+1)}\Gamma_{il}^{ (1,m)}+\delta_{l<j}\Gamma_{kj}^{(1,n)}\Gamma_{il}^{(1,m+1)}+\left(\delta_{j<l}+ \delta_{l<j}\right)\Gamma_{kj}^{(1,n)}\Gamma_{il}^{(1,m)}\right)\right\}.\]
Since both sides of this relation are included in \(\mathrm{F}_{[m+n+1]}\) and \(\Gamma_{ij}^{(1,m)}=\Gamma_{ij}^{(0,m+1)}+\Gamma_{ij}^{(0,m)}\), one gets for all \(m,n\geq 0\),
\[q_{i}^{-\delta_{ik}}\Gamma_{ij}^{(0,m+1)}\Gamma_{kl}^{(0,m)}-q_{i} ^{\delta_{ik}}\Gamma_{ij}^{(0,m)}\Gamma_{kl}^{(0,n+1)}-\delta_{ik}(-1)^{|i|} \left(q-q^{-1}\right)\Gamma_{ij}^{(0,m)}\Gamma_{kl}^{(0,n)}\] \[\qquad\qquad-\varepsilon_{ij;kl}\left\{q_{j}^{-\delta_{jl}} \Gamma_{kl}^{(0,n)}\Gamma_{ij}^{(0,m+1)}-q_{j}^{\delta_{jl}}\Gamma_{kl}^{(0,n+ 1)}\Gamma_{ij}^{(0,m)}-\delta_{jl}(-1)^{|j|}(q-q^{-1})\Gamma_{kl}^{(0,n)} \Gamma_{ij}^{(0,m)}\right\}\] \[= \,\varepsilon_{ik;kl}(-1)^{|k|}\left(q-q^{-1}\right)\times\left\{ \left(\delta_{k<i}+\delta_{i<k}\right)\Gamma_{kj}^{(0,m)}\Gamma_{il}^{(0,n)}- \left(\delta_{j<l}+\delta_{l<j}\right)\Gamma_{kj}^{(0,n)}\Gamma_{il}^{(0,m)} \right\}.\]
modulo \(\mathrm{F}_{[m+n+2]}\). That is to say, we have in \(\mathrm{F}_{[m+n+1]}/\mathrm{F}_{[m+n+2]}\)
\[\overline{\Gamma_{ij}^{(0,m+1)}}\,\overline{\Gamma_{kl}^{(0,n)}}-\overline{\Gamma_{ij}^{(0,m)}}\,\overline{\Gamma_{kl}^{(0,n+1)}}-\varepsilon_{ij;kl}\left(\overline{\Gamma_{kl}^{(0,n)}}\,\overline{\Gamma_{ij}^{(0,m+1)}}-\overline{\Gamma_{kl}^{(0,n+1)}}\,\overline{\Gamma_{ij}^{(0,m)}}\right)\] \[=\,\varepsilon_{ik;kl}(-1)^{|k|}\left(\overline{q-q^{-1}}\right)\times\left(\overline{\Gamma_{kj}^{(0,m)}}\,\overline{\Gamma_{il}^{(0,n)}}-\overline{\Gamma_{kj}^{(0,n)}}\,\overline{\Gamma_{il}^{(0,m)}}\,\right).\]
It coincides with relation (2.4) in \(\mathrm{Y}_{h}(\mathfrak{gl}_{M|N})\) if we set \(\hbar=\overline{q-q^{-1}}\in\mathrm{F}_{[1]}/\mathrm{F}_{[2]}\).
## Twisted super Yangian and twisted quantum loop superalgebra
In this section, we focus on the case of the twisted super Yangian. We first review its definition; more details can be found in [1, 4]. Let \(N=2n\), \(I_{+}=\{1,2,\cdots,M\}\) and \(I_{-}=I\backslash I_{+}\). For each index \(i\in I\), we define \(\theta_{i}\) and \(\tilde{i}\) as follows.
\[\theta_{i}=\begin{cases}1,&\text{if }i\in I_{+},\\ (-1)^{i-M-1},&\text{if }i\in I_{-},\end{cases}\qquad\tilde{i}=\begin{cases}M+1-i,& \text{if }i\in I_{+},\\ 2M+2n+1-i,&\text{if }i\in I_{-}.\end{cases}\]
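For example, for \(M=2\) and \(n=1\) one has \(\theta_{1}=\theta_{2}=\theta_{3}=1\), \(\theta_{4}=-1\), and \(\tilde{1}=2\), \(\tilde{2}=1\), \(\tilde{3}=4\), \(\tilde{4}=3\).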
Then we define a super-transposition \(\iota\) for a matrix \(X=\sum_{i,j\in I}E_{ij}\otimes X_{ij}\) by
\[X^{t}=\sum_{i,j\in I}(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}E_{ \tilde{j}\tilde{i}}\otimes X_{ij}=\sum_{i,j\in I}E_{ij}\otimes X^{t}_{ij},\]
where
\[X^{t}_{ij}=(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}X_{\tilde{j}\tilde {i}}.\]
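In particular, for an elementary matrix one has \((E_{ab})^{t}=(-1)^{|a||b|+|a|}\theta_{a}\theta_{b}E_{\tilde{b}\tilde{a}}\) for \(a,b\in I\), which is the formula entering the super-involution \(\tau\) below.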
Similar to the usual transposition of non-super case, the equality \((X^{t})^{t}=X\) still holds. Moreover, \((XY)^{t}=Y^{t}X^{t}\) holds if either \(X\) or \(Y\) is a constant matrix. The notation \(\iota_{i}\) denotes the partial super-transposition by applying \(\iota\) to the \(i\)-th tensor factor, e.g. if \(X=X_{1}\otimes X_{2}\), \(X^{\iota_{1}}=(X_{1})^{t}\otimes X_{2}\) and \(X^{\iota_{2}}=X_{1}\otimes(X_{2})^{t}\). The super-involution \(\tau\) of \(\mathfrak{gl}_{M|2n}\) is given by
\[\tau(E_{ij})=-E^{t}_{ij}.\]
The fixed point sub-superalgebra \(\mathfrak{osp}_{M|2n}\) of \(\mathfrak{gl}_{M|2n}\) with respect to the super-involution \(\tau\) is generated by the elements \(E_{ij}-(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}E_{\tilde{j}\tilde{i}}\). We extend \(\tau\) to the polynomial current Lie superalgebra \(\mathfrak{gl}_{M|N}[x]=\mathfrak{gl}_{M|N}\otimes\mathbb{C}[x]\) and to the loop superalgebra \(\mathfrak{Lgl}_{M|N}\) by
\[\tau\left(E_{ij}\otimes x^{r}\right)=\tau\left(E_{ij}\right) \otimes(-x)^{r},\quad\text{for }r\geq 0,\quad\text{and}\quad\tau\left(E_{ij}\otimes x^{r}\right)= \tau\left(E_{ij}\right)\otimes x^{-r},\quad\text{for }r\in\mathbb{Z},\]
respectively.
### Twisted super Yangian \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\)
Similar to twisted Yangian case, the twisted super Yangian \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\) can be defined as a sub-superalgebra of the super Yangian \(\mathrm{Y}(\mathfrak{gl}_{M|2n})\).
**Definition 3.1**.: ([1, 4]) Let
\[S(u)=\sum_{i,j\in I}\sum_{m\geq 0}E_{ij}\otimes s^{(m)}_{ij}u^{-m}, \quad\text{with }s^{(0)}_{ij}=\delta_{ij}, \tag{3.1}\]
be the element of \(\mathrm{End}(V)\otimes\mathrm{Y}(\mathfrak{gl}_{M|2n})\big{[}\big{[}u^{-1}\big{]}\big{]}\) such that \(S(u)=T(u)\,(T(-u))^{t}\). The twisted super Yangian \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\) is the sub-superalgebra of \(\mathrm{Y}(\mathfrak{gl}_{M|2n})\) generated by the elements \(s^{(m)}_{ij}\) with \(i,j\in I\) and \(m\geq 0\).
The generators satisfy the following relations
\[\mathsf{R}^{12}(u-v)S^{1}(u)\left(\mathsf{R}^{12}(-u-v)\right)^{t_{1}}S^{2}(v)=S^{2}(v)\left(\mathsf{R}^{12}(-u-v)\right)^{t_{1}}S^{1}(u)\mathsf{R}^{12}(u-v), \tag{3.2}\] \[(S(-u))^{t}=S(u)+\frac{S(u)-S(-u)}{2u}. \tag{3.3}\]
We will refer to (3.2) and (3.3) as the _quaternary_ and _supersymmetric_ relations, respectively. The abstract superalgebra \(\mathcal{Y}\) generated by \(1\) and \(y^{(m)}_{ij}\) for \(i,j\in I\), \(m>0\), subject to the quaternary and supersymmetric relations with respect to the matrix \(Y(u)=\left(y_{ij}(u)\right)\), where \(y_{ij}(u)=\delta_{ij}+\sum_{m\geq 1}y^{(m)}_{ij}u^{-m}\), is isomorphic to the twisted super Yangian \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\). The twisted super Yangian \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\) is a left coideal of \(\mathrm{Y}(\mathfrak{gl}_{M|2n})\) with respect to the comultiplication \(\Delta_{1}\), namely
\[\Delta_{1}\left(s^{(m)}_{i,j}\right) =\sum_{x,y\in I}\sum_{p=1}^{m-1}\sum_{a=1}^{p-1}(-1)^{p-a}(-1)^{| j|+|y|}\theta_{j}\theta_{y}t^{(a)}_{i,x}t^{(p-a)}_{j,y}\otimes s^{(m-p)}_{x,y}\] \[+\sum_{x\in I}\sum_{p=1}^{m-1}t^{(p)}_{i,x}\otimes s^{(m-p)}_{x,j }+\sum_{y\in I}\sum_{p=1}^{m-1}(-1)^{p}(-1)^{|j|+|y|}\theta_{j}\theta_{y}t^{(p) }_{j,j}\otimes s^{(m-p)}_{x,y}\] \[+\sum_{x\in I}\sum_{a=0}^{m}(-1)^{m-a}(-1)^{|j|+|x|}\theta_{j} \theta_{x}\,t^{(a)}_{i,x}\,t^{(m-b)}_{j,\bar{x}}\otimes 1+1\otimes s^{(m)}_{i,j}.\]
_Remark 3.2_.: The map
\[s_{ij}(u)\mapsto\sum_{k\in I}(-1)^{|k||j|+|k|}\operatorname{sgn}\left(\frac{ 2M+2n+1}{2}-k\right)\operatorname{sgn}\left(\frac{2M+2n+1}{2}-j\right)t_{ik}( u)\,t_{j\bar{k}}(-u)\]
extends to an isomorphism between \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\) and the superalgebra \(Y(M|2n)^{+}\) defined in [4].
We can also define the superalgebra \(\mathrm{Y}^{tw}(\mathfrak{osp}_{M|2n})\) in terms of the rescaled generators \(\hat{s}^{(m)}_{ij}=\hbar^{1-m}s^{(m)}_{ij}\) with \(\hbar\in\mathbb{C}\backslash\{0\}\); the resulting superalgebra is denoted by \(\mathrm{Y}^{tw}_{\hbar}(\mathfrak{osp}_{M|2n})\). Moreover, it degenerates into \(\mathrm{U}(\mathfrak{osp}_{M|2n}[x])\) as \(\hbar\to 0\). As a sub-superalgebra of \(\mathrm{Y}_{\hbar}(\mathfrak{gl}_{M|2n})\), one gets that
\[\hat{s}^{(m)}_{ij}=\hbar\sum_{k\in I}\sum_{p=0}^{m}(-1)^{p}(-1)^{|k||i|+|j|} \theta_{k}\theta_{j}\,\hat{t}^{(m-p)}_{i\bar{k}}\hat{t}^{(p)}_{j\bar{k}},\quad \text{for $i,j\in I$ and $m\geq 0$.} \tag{3.4}\]
### Twisted quantum loop superalgebra \(\mathrm{U}^{tw}_{q}\)
In \(\mathrm{U}_{q}(\mathfrak{gl}_{M|2n})\), we define
\[S(u)=T(u)\left(\tilde{T}\left(u^{-1}\right)\right)^{t}=\sum_{i,j\in I}E_{ij} \otimes S_{ij}(u)\quad\text{with $S_{ij}(u)=\sum_{r\geq 0}S^{(r)}_{ij}u^{r}$.}\]
**Definition 3.3**.: The twisted quantum loop superalgebra \(\mathrm{U}^{tw}_{q}\) is an associative superalgebra generated by the coefficients \(S^{(r)}_{ij}\) of the matrix elements \(S_{ij}(u)\) for \(i,j\in I\).
It is clear that
\[S_{ij}^{(r)} =\sum_{k\in I}\sum_{p=0}^{r}(-1)^{|k||j|+|k|}\theta_{k}\theta_{j}T_{ ik}^{(r-p)}\widetilde{T}_{j\bar{k}}^{(p)}, \tag{3.5}\] \[S_{ij}^{(0)} =0,\quad\text{if}\;\;j\in I_{+},i\in I_{-}. \tag{3.6}\]
Set \(Q=P^{\iota_{1}}\). Note that \(PQ=QP=Q\), and \(P^{\iota_{1}\iota_{2}}=P\).
**Lemma 3.4**.: _The following relations hold_
\[\Re_{q}(u,v)=u\,v\Re_{q}(v^{-1},u^{-1}), \tag{3.7}\] \[\Re_{q}(u,v)\left(\Re_{q}(u^{-1},v)\right)^{\iota_{1}}=\left(\Re_{q}(u^{-1},v)\right)^{\iota_{1}}\Re_{q}(u,v), \tag{3.8}\] \[\left(\widetilde{T}^{1}(u)\right)^{\iota_{1}}\left(\Re_{q}^{12}(u,v)\right)^{\iota_{1}}T^{2}(v)=T^{2}(v)\left(\Re_{q}^{12}(u,v)\right)^{\iota_{1}}\left(\widetilde{T}^{1}(u)\right)^{\iota_{1}}, \tag{3.9}\] \[\Re_{q}^{12}(u,v)\left(\widetilde{T}^{1}(u^{-1})\right)^{\iota_{1}}\left(\widetilde{T}^{2}(v^{-1})\right)^{\iota_{2}}=\left(\widetilde{T}^{2}(v^{-1})\right)^{\iota_{2}}\left(\widetilde{T}^{1}(u^{-1})\right)^{\iota_{1}}\Re_{q}^{12}(u,v). \tag{3.10}\]
Proof.: Relations (3.7) and (3.8) can be checked by straightforward calculations. Applying the partial transposition \(\iota_{1}\) to both sides of (2.9), we get relation (3.9). For relation (3.10), we define the operator \(\sigma\) by \(\sigma(f)=P^{12}fP^{12}\) for \(f\in\operatorname{End}V^{\otimes 2}\otimes\operatorname{U}_{q}(\mathfrak{Lgl}_{M|2n})[[u^{\pm 1},v^{\pm 1}]]\). Applying \(\sigma\) to (2.8), we get
\[P\Re_{q}^{12}(u,v)P\widetilde{T}^{2}(u)\,\widetilde{T}^{1}(v)= \widetilde{T}^{1}(v)\widetilde{T}^{2}(u)P\Re_{q}^{12}(u,v)P. \tag{3.11}\]
Apply \(\iota_{2}\) on (3.11), we have
\[\left(\widetilde{T}^{2}(u)\right)^{\iota_{2}}Q\left(\Re_{q}^{12}(u,v)\right)^{\iota_{2}}Q\widetilde{T}^{1}(v)=\widetilde{T}^{1}(v)Q\left(\Re_{q}^{12}(u,v)\right)^{\iota_{2}}Q\left(\widetilde{T}^{2}(u)\right)^{\iota_{2}}. \tag{3.12}\]
Apply \(\sigma\) and \(\iota_{2}\) again on (3.12) and \(PQ=QP=Q\), we obtain
\[P\Re_{q}^{12}(u,v)P\left(\widetilde{T}^{2}(v)\right)^{\iota_{2}}\left(\widetilde{T}^{1}(u)\right)^{\iota_{1}}=\left(\widetilde{T}^{1}(u)\right)^{\iota_{1}}\left(\widetilde{T}^{2}(v)\right)^{\iota_{2}}P\Re_{q}^{12}(u,v)P. \tag{3.13}\]
Finally, apply \(\sigma\) on (3.13) and replace the pair \((u,v)\) with \((v^{-1},u^{-1})\), then we have relation (3.10) from (3.7).
Lemma 3.4 implies that the matrix \(S(u)\) satisfies the quaternary relation,
\[\Re_{q}^{12}(u,v)S^{1}(u)\left(\Re_{q}^{12}\left(u^{-1},v\right)\right)^{\iota_{1}}S^{2}(v)=S^{2}(v)\left(\Re_{q}^{12}\left(u^{-1},v\right)\right)^{\iota_{1}}S^{1}(u)\Re_{q}^{12}(u,v). \tag{3.14}\]
_Remark 3.5_.: Let \(O=\sum\limits_{k\in I}E_{k\bar{k}}\) and \(G=\sum\limits_{k\in I}(E_{2k-1,2k}+qE_{2k,2k-1})\). Then the subalgebras generated by the coefficients of the matrix elements of the matrices \(S^{\prime}(u)=(O\otimes 1)\,T(u)(O\otimes 1)\,\big{(}\widetilde{T}\left(u^{-1}\right)\big{)}^{t}\) and \(S^{\prime\prime}(u)=(O\otimes 1)\,T(u)(OG\otimes 1)\,\big{(}\widetilde{T}\left(u^{-1}\right)\big{)}^{t}\) are isomorphic to \(\operatorname{Y}_{q}^{tw}(\mathfrak{o}_{M})\) if \(n=0\) and to \(\operatorname{Y}_{q}^{tw}(\mathfrak{sp}_{2n})\) if \(M=0\), respectively. See [25] for the definition of the twisted \(q\)-Yangians \(\operatorname{Y}_{q}^{tw}(\mathfrak{o}_{M})\) and \(\operatorname{Y}_{q}^{tw}(\mathfrak{sp}_{2n})\).
_Remark 3.6_.: The twisted quantum loop superalgebra \(\mathrm{U}_{q}^{tw}\) is a left coideal of \(\mathrm{U}_{q}(\mathfrak{L}\mathfrak{g}\mathfrak{l}_{M|2n})\) such that
\[\Delta_{2}\left(S_{ij}^{(r)}\right)=\sum_{x,y\in I}\sum_{p=0}^{r}\sum_{a=0}^{p}(-1)^{|x|(|i|+|x|)+|y|(|j|+|y|)}\theta_{y}\theta_{j}T_{ix}^{(a)}\widetilde{T}_{\tilde{j}\tilde{y}}^{(p-a)}\otimes S_{xy}^{(r-p)}.\]
Next, we will show that the twisted quantum loop superalgebra \(\mathrm{U}_{q}^{tw}\) has a PBW type basis. Before that, we define an order \(\preceq\) on pairs and triples of integers: \((i_{1},j_{1})\preceq(i_{2},j_{2})\) if and only if either (1) \(i_{1}<i_{2}\), or (2) \(i_{1}=i_{2}\) and \(j_{1}\leq j_{2}\); and \((r_{1};i_{1},j_{1})\preceq(r_{2};i_{2},j_{2})\) if and only if either (1) \(r_{1}<r_{2}\), or (2) \(r_{1}=r_{2}\) and \((i_{1},j_{1})\preceq(i_{2},j_{2})\).
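For example, \((2,5)\preceq(3,1)\) since \(2<3\), and consequently \((0;2,5)\preceq(0;3,1)\preceq(1;1,1)\).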
**Lemma 3.7**.: _Let \(\mathbf{B}_{q}\) be the associative superalgebra over \(\mathbb{C}(q)\) generated by elements \(B_{ij}^{(r)}\) for \(i,j\in I\), \(r\in\mathbb{Z}_{\geq 0}\) and with the defining relation (3.14) respected to the matrix \(B(u)\). Here_
\[B(u)=\sum_{i,j\in I}E_{ij}\otimes B_{ij}(u)\ \ with\ \ B_{ij}(u)=\sum_{r\geq 0}B_{ ij}^{(r)}u^{r}.\]
_Define an order on the generators: \(B_{i_{1},j_{1}}^{(r_{1})}\preceq B_{i_{2},j_{2}}^{(r_{2})}\) if and only if \((r_{1};i_{1},j_{1})\preceq(r_{2};i_{2},j_{2})\). Then the ordered monomials, in which all the exponents are nonnegative integers,_
\[\left(B_{1,1}^{(0)}\right)^{t_{0,1,1}}\left(B_{1,2}^{(0)}\right)^{t_{0,1,2}} \cdots\left(B_{M+N,M+N}^{(r-1)}\right)^{t_{r-1;M+N,M+N}}\left(B_{1,1}^{(r)} \right)^{t_{r;1,1}}\left(B_{1,2}^{(r)}\right)^{t_{r;1,2}}\cdots \tag{3.15}\]
_span the superalgebra \(\mathbf{B}_{q}\). In particular, this statement still holds for \(\mathrm{U}_{q}^{tw}\)._
Proof.: Denote that
\[\alpha_{ijkl}(u,v) =(-1)^{|l||j|+|l||k|+|j||k|}\left(q_{k}^{\delta_{kj}}-u\nu q_{k} ^{-\delta_{kj}}\right)B_{i\tilde{j}}(u)B_{k\tilde{l}}(v)\] \[\quad+(-1)^{(|j|+|k|)|l|}\theta_{j}\theta_{k}\big{(}q-q^{-1} \big{)}\left(\delta_{k<j}+\delta_{k>j}\,u\nu\right)B_{i\tilde{k}}(u)B_{j\tilde {l}}(v),\] \[\beta_{ijkl}(v,u) =(-1)^{|k||l|+|j||k|+|j||l|}\left(q_{k}^{\delta_{kj}}-u\nu q_{k} ^{-\delta_{kj}}\right)B_{i\tilde{j}}(v)B_{k\tilde{l}}(u)\] \[\quad+(-1)^{(|j|+|k|)|l|}\theta_{j}\theta_{k}\big{(}q-q^{-1} \big{)}\left(\delta_{\tilde{k}<\tilde{j}}+\delta_{\tilde{k}>\tilde{j}}\,u\nu \right)B_{i\tilde{k}}(v)B_{j\tilde{l}}(u).\]
Then we can rewrite (3.14) as
\[\begin{split}&\left(uq_{i}^{\delta_{ik}}-vq_{i}^{-\delta_{ik}} \right)\alpha_{ijkl}(u,v)+\varepsilon_{lk;kj}\left(q_{k}-q_{k}^{-1}\right) \left(\delta_{k<i}u+\delta_{k>i}v\right)\alpha_{kjjl}(u,v)\\ =&\left(uq_{j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}} \right)\beta_{klij}(v,u)+\varepsilon_{kl;lj}\left(q_{l}-q_{l}^{-1}\right) \left(\delta_{j<\tilde{l}}u+\delta_{\tilde{j}>\tilde{l}}v\right)\beta_{kjil}( v,u).\end{split} \tag{3.16}\]
It suffices to show that any monomials in \(\mathbf{B}_{q}\) with the form
\[B_{i_{1},j_{1}}^{(r_{1})}B_{i_{2},j_{2}}^{(r_{2})}\cdots B_{i_{p},j_{p}}^{(r_{p})} \tag{3.17}\]
can be linearly represented by the ordered monomials (3.15). Introduce a filtration on the superalgebra \(\mathbf{B}_{q}\) and set \(\deg B_{ij}^{(r)}=r\). It suffices to prove that the conclusion of this lemma
still holds for the graded algebra \(\mathbf{B}^{\prime}_{q}:=gr\mathbf{B}_{q}\) with respect to this filtration. For convenience, we still denote the image \(\overline{B^{(r)}_{ij}}\) in \(\mathbf{B}^{\prime}_{q}\) by \(B^{(r)}_{ij}\), and we also denote \(B_{ij}(u)\) in the same way. The notations \(\alpha_{ijkl}(u,v)\) and \(\beta_{ijkl}(v,u)\) in relation (3.16) can be replaced in \(\mathbf{B}^{\prime}_{q}\) with
\[\alpha_{ijkl}(u,v) =(-1)^{|l||j|+|l||k|+|j||k|}q_{k}^{-\delta_{kj}}B_{i\bar{j}}(u)B_ {k\bar{l}}(v)\] \[\quad+(-1)^{(|\bar{j}|+|k|)|l|}\theta_{j}\theta_{k}\big{(}q-q^{-1} \big{)}\delta_{k>j}B_{i\bar{k}}(u)B_{j\bar{l}}(v),\] \[\beta_{ijkl}(v,u) =(-1)^{|k||l|+|j||k|+|j||l|}q_{k}^{-\delta_{kj}}B_{i\bar{j}}(v)B_ {k\bar{l}}(u)\] \[\quad+(-1)^{(|\bar{j}|+|k|)|i|}\theta_{j}\theta_{k}\big{(}q-q^{-1} \big{)}\delta_{\bar{k}>\bar{j}}B_{i\bar{k}}(v)B_{j\bar{l}}(u).\]
Furthermore, we consider the filtration on \(\mathbf{B}^{\prime}_{q}\) with \(\deg B^{(r)}_{ij}=i\). The graded algebra \(gr\mathbf{B}^{\prime}_{q}\) is defined by
\[gr\mathbf{B}^{\prime}_{q}=\bigoplus_{p\geq 0}\mathbf{B}^{\prime}_{[p+1]}/ \mathbf{B}^{\prime}_{[p]}\text{ with }\mathbf{B}^{\prime}_{[p]}:=\left\{X\in\mathbf{B}^{ \prime}_{q}|\deg(X)\leq p\right\}.\]
Relation (3.16) in \(\mathbf{B}^{\prime}_{q}\) has the form
\[\begin{split}&(-1)^{|l||j|+|l||k|+|j||k|}q_{k}^{-\delta_{kj}} \left(uq_{i}^{\delta_{ik}}-vq_{i}^{-\delta_{ik}}\right)B_{i\bar{j}}(u)B_{k \bar{l}}(v)+Q_{1}\\ &+\left(q-q^{-1}\right)q_{i}^{-\delta_{ij}}\left(\delta_{k<i}u+ \delta_{k>i}v\right)B_{k\bar{j}}(u)B_{i\bar{l}}(v)+Q_{2}\\ =&(-1)^{|l||j|+|i||l|+|j||l|}q_{i}^{-\delta_{il}} \left(uq_{j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}}\right)B_{k\bar{l}}(v)B_{i\bar {j}}(u)+Q_{3}\\ &+\varepsilon_{ik;j}l\left(q-q^{-1}\right)q_{i}^{-\delta_{ij}} \left(\delta_{j<\bar{l}}u+\delta_{j>\bar{l}}v\right)B_{k\bar{j}}(v)B_{i\bar{l} }(u)+Q_{4},\end{split} \tag{3.18}\]
where
\[\begin{split}& Q_{1}=-(-1)^{(|\bar{j}|+|k|)|l|}\theta_{j}\theta_{ k}\big{(}q-q^{-1}\big{)}\left(uq_{i}^{\delta_{ik}}-vq_{i}^{-\delta_{ik}} \right)\delta_{k>j}B_{i\bar{k}}(u)B_{j\bar{l}}(v),\\ & Q_{2}=-(-1)^{|l||j|}\varepsilon_{ij;kl}\theta_{i}\theta_{j} \left(q-q^{-1}\right)^{2}\left(\delta_{k<i}u+\delta_{k>i}v\right)\delta_{i>j} B_{k\bar{l}}(u)B_{j\bar{l}}(v),\\ & Q_{3}=-(-1)^{|l||k|+|k||l|}\theta_{i}\theta_{l}\big{(}q-q^{-1} \big{)}\left(uq_{j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}}\right)\delta_{i>\bar{ l}}B_{k\bar{l}}(v)B_{l\bar{j}}(u),\\ & Q_{4}=-(-1)^{|l||k|+|k||l|+|j||l|}\theta_{i}\theta_{j}\left(q-q ^{-1}\right)^{2}\left(\delta_{j<\bar{l}}u+\delta_{j>\bar{l}}v\right)\delta_{i >\bar{j}}B_{k\bar{l}}(v)B_{j\bar{l}}(u).\end{split}\]
Since the inequality \(\tilde{a}>\tilde{b}\) for \(a,b\in I\) is not simply equivalent to \(a>b\) or to \(a<b\), we work with \(B_{i\bar{j}}(u)B_{k\bar{l}}(v)\) in the following four cases: (C1) \(i\in I_{\pm}\), \(j,l\in I_{\mp}\); (C2) \(i,j,l\in I_{\pm}\) simultaneously; (C3) \(i,j\in I_{\pm}\), \(l\in I_{\mp}\); (C4) \(i,l\in I_{\pm}\), \(j\in I_{\mp}\). We say that \(B_{i\bar{j}}(u)B_{k\bar{l}}(v)=\sum B^{(r)}_{i\bar{j}}B^{(s)}_{k\bar{l}}u^{r}v^{s}\) satisfies the condition "\(\mathbf{LR}\)" (resp. "\(\mathbf{BLR}\)") in \(\mathbf{B}^{\prime}_{q}\) if, for all \(r,s\geq 0\), the monomial \(B^{(r)}_{i\bar{j}}B^{(s)}_{k\bar{l}}\) can be linearly represented by monomials \(B^{(r_{1})}_{i_{1},j_{1}}B^{(r+s-r_{1})}_{i_{2},j_{2}}\) of degree \(r+s\) with the order \(i_{1}<i_{2}\) (resp. \((i_{1},j_{1})\preceq(i_{2},j_{2})\)).
**Case C1**. In this case, the notations \(\alpha_{ijkl}(u,v)\), \(\beta_{ijkl}(v,u)\) in relation (3.16) can be replaced in \(g\,r\,\mathbf{B}_{q}^{\prime}\) by
\[\alpha_{ijkl}(u,v)=(-1)^{|l||j|+|l||k|+j||k|}\,q_{j}^{-\delta_{jl} }B_{i\bar{j}}(u)B_{k\bar{l}}(v),\] \[\beta_{ijkl}(v,u)=(-1)^{|k||l|+j||k|+j||l|}\,q_{j}^{-\delta_{jl}}B _{i\bar{j}}(v)B_{k\bar{l}}(u).\]
Then we have for \(i>k\)
\[q_{k}^{-\delta_{kj}}B_{i\bar{j}}(u)B_{k\bar{l}}(v)=\varepsilon_{ ij;kl}\frac{uq_{j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}}}{u-v}q_{i}^{-\delta_{ il}}B_{k\bar{l}}(v)B_{i\bar{j}}(u)\] \[\quad-\varepsilon_{ki;i,j}\,\frac{q_{i}-q_{i}^{-1}}{u-v}q_{i}^{- \delta_{ij}}\,\Big{\{}uB_{k\bar{l}}(u)B_{i\bar{l}}(v)+\varepsilon_{ik;jl}( \delta_{\bar{j}<\bar{l}}u+\delta_{\bar{j}>\bar{l}}v)B_{k\bar{j}}(v)B_{i\bar{l }}(u)\Big{\}}\,.\]
Take \(i=k\) and \(\bar{j}<\bar{l}\), we obtain
\[q_{i}^{-\delta_{il}}B_{i\bar{l}}(v)B_{i\bar{j}}(u)=-\frac{uq-vq^{-1}}{u-v}q_{i }^{-\delta_{ij}}B_{i\bar{j}}(u)B_{i\bar{l}}(v)+\frac{(q_{i}-q_{i}^{-1})u}{u-v} q_{i}^{-\delta_{ij}}B_{i\bar{j}}(v)B_{i\bar{l}}(u).\]
It means that \(B_{i\bar{j}}(u)B_{k\bar{l}}(v)\) with \(|i|\neq|j|\), \(|i|\neq|l|\) satisfies the condition "**BLR**". Next, take \(i=k\) and \(j=l\) simultaneously, we have in \(gr\,\mathbf{B}_{q}^{\prime}\)
\[B_{i\bar{j}}(u)B_{i\bar{j}}(v)+B_{i\bar{j}}(v)B_{i\bar{j}}(u)=0. \tag{3.19}\]
This implies that \([B_{i\bar{j}}^{(r)},B_{i\bar{j}}^{(s)}]=0\) for all \(r,s\geq 0\).
**Case C2**. In this case, \(|i|=|j|=|l|\). Then \(\bar{l}>\bar{j}\) means \(i<j\), and \(\bar{i}>\bar{l}\) means \(i<l\). We can deduce it in \(g\,r\,\mathbf{B}_{q}^{\prime}\) if \(i>j\) and \(i\geq l\). If \(i\leq j\) and \(i\geq l\), the terms \(Q_{1},Q_{2},Q_{3}\) vanish in (3.18), then we can deduce it since \(k<j\). We also can check the situation for \(i\leq j\) and \(i<l\) because the terms \(Q_{1},Q_{2}\) vanish again. If \(i>j\) and \(i<l\), the case \(|k|\neq|i|\) holds from **Case C1**. As for \(|k|=|i|\), take \(i>k\), then we have
\[q_{k}^{-\delta_{kj}}B_{i\bar{j}}(u)B_{k\bar{l}}(v)=B_{k\bar{l}}( v)B_{i\bar{j}}(u)+\frac{q_{i}-q_{i}^{-1}}{u-v}q_{i}^{-\delta_{ij}}\,\Big{\{}vB_{k \bar{j}}(v)B_{i\bar{l}}(u)-uB_{kj}(u)B_{i\bar{l}}(v)\Big{\}}+\theta_{i}\theta _{j}\] \[\times\frac{\big{(}q-q^{-1}\big{)}^{2}\,u}{u-v}B_{k\bar{l}}(u)B_{ j\bar{l}}(v)-\theta_{i}\theta_{l}\,\big{(}q_{i}-q_{i}^{-1}\big{)}\,B_{k\bar{l}}(v)B_{ l\bar{j}}(u)+\theta_{j}\theta_{k}(q_{i}-q_{i}^{-1})\delta_{k>j}B_{i\bar{k}}(u)B_{j \bar{l}}(v).\]
Here \(k\leq j\) means that the last term on the right-hand side vanishes, and thus the condition "**LR**" holds for \(B_{i\bar{j}}(u)B_{k\bar{l}}(v)\). If \(k>j\), it still holds since \(B_{i\bar{k}}(u)B_{j\bar{l}}(v)\) satisfies the condition "**LR**". Taking \(i=k\), we have for \(\bar{j}>\bar{l}\)
\[B_{i\bar{j}}(v)B_{i\bar{l}}(u)= \frac{uq-vq^{-1}}{u-v}B_{i\bar{l}}(u)B_{i\bar{j}}(v)-\frac{\big{(} q_{i}-q_{i}^{-1}\big{)}\,u}{u-v}B_{i\bar{l}}(v)B_{i\bar{j}}(u)\] \[+\theta_{i}\theta_{l}\big{(}q_{i}-q_{i}^{-1}\big{)}\,B_{i\bar{l} }(v)B_{j\bar{l}}(u)-\theta_{i}\theta_{l}\,\big{(}q_{i}-q_{i}^{-1}\big{)}\,\frac{ uq-vq^{-1}}{u-v}B_{i\bar{l}}(u)B_{l\bar{j}}(v),\]
where \(B_{i\bar{i}}(v)B_{j\bar{j}}(u)\) satisfies the condition "**BLR**". Now let \(i=k\) and \(j=l\) again, then we have in the whole \(\mathbf{B}_{q}^{\prime}\)
\[B_{i\bar{j}}(u)B_{i\bar{j}}(v)=B_{i\bar{j}}(v)B_{i\bar{j}}(u)+q_{i}^{\delta_{ij }}\theta_{i}\theta_{j}\left(q_{i}-q_{i}^{-1}\right)\left\{\delta_{i>j}B_{i\bar {i}}(u)B_{j\bar{j}}(v)-\delta_{i<j}B_{i\bar{i}}(v)B_{j\bar{j}}(u)\right\}.\]
If \(i\geq j\), we also have (3.19) in \(gr\mathbf{B}_{q}^{\prime}\). Otherwise, for \(r>s\), we have
\[B_{i\bar{j}}^{(r)}B_{i\bar{j}}^{(s)}=B_{i\bar{j}}^{(s)}B_{i\bar{j}}^{(r)}+q_{i }^{\delta_{ij}}\theta_{i}\theta_{j}\left(q_{i}-q_{i}^{-1}\right)B_{i\bar{i}}^{ (s)}B_{j\bar{j}}^{(r)},\]
where \((s;i,\bar{i})\preceq(r;j,\bar{j})\).
**Case C3**. In this case, \(|i|=|j|\) and \(|i|\neq|l|\). Then \(\bar{i}>\bar{j}\) means \(i<j\) but \(\bar{i}>\bar{l}\) means \(i>l\). If \(i\geq j\), we can give the same statement as **Case C1**. If \(i<j\), take \(i>k\), then one has
\[B_{i\bar{j}}(u)B_{k\bar{i}}(v)= \frac{uq_{j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}}}{u-v}q_{i}^{- \delta_{il}}B_{k\bar{i}}(v)B_{i\bar{j}}(u)-\frac{\left(q_{i}-q_{i}^{-1}\right) u}{u-v}B_{k\bar{j}}(u)B_{i\bar{l}}(v)\] \[+\frac{q_{k}-q_{k}^{-1}}{u-v}\left(\delta_{\bar{j}<\bar{l}}u+ \delta_{\bar{j}>\bar{l}}v\right)\left\{B_{k\bar{j}}(v)B_{i\bar{l}}(u)-(-1)^{|i |}\theta_{i}\theta_{j}(q-q^{-1})B_{k\bar{l}}(v)B_{j\bar{l}}(u)\right\}\] \[-(-1)^{|i|+|k|}\theta_{i}\theta_{l}\frac{q-q^{-1}}{u-v}\left(uq_{ j}^{\delta_{jl}}-vq_{j}^{-\delta_{jl}}\right)\delta_{i>l}B_{k\bar{i}}(v)B_{l \bar{j}}(u).\]
It satisfies the condition "**LR**" for \(i\leq l\). For \(i>l\), the condition "**LR**" still holds, by **Case C1** for \(|k|=|l|\) or by **Case C2** for \(|k|\neq|l|\). Taking \(i=k\), one gets
\[q_{i}^{-\delta_{il}}B_{i\bar{l}}(v)B_{i\bar{j}}(u)= \frac{uq-vq^{-1}}{u-v}B_{i\bar{j}}(u)B_{i\bar{l}}(v)+\theta_{i} \theta_{l}\left(q-q^{-1}\right)\delta_{i>l}B_{i\bar{l}}(v)B_{l\bar{j}}(u)\] \[+\frac{\left(q-q^{-1}\right)u}{u-v}\left\{\theta_{i}\theta_{j} \left(q-q^{-1}\right)B_{i\bar{l}}(v)B_{j\bar{l}}(u)-(-1)^{|i|}B_{i\bar{j}}(v) B_{i\bar{l}}(u)\right\},\]
where \(B_{i\bar{l}}(v)B_{l\bar{j}}(u)\) satisfies the condition "**BLR**" from **Case C1**.
**Case C4**. In this case, \(|i|=|l|\) and \(|i|\neq|j|\). Then \(\bar{i}>\bar{l}\) means \(i<l\) but \(\bar{i}>\bar{j}\) means \(i>j\). So we can deduce it from **Case C2** and **Case C3**.
Therefore, the monomials (3.17) with the order \((r_{1};i_{1},j_{1})\preceq(r_{2};i_{2},j_{2})\preceq\cdots\preceq(r_{p};i_{p}, j_{p})\) span the graded algebra \(\mathbf{B}_{q}^{\prime}\), which completes this proof.
Next, we set \(\lambda_{ij}^{(r)}=\left(q_{i}-q_{i}^{-1}\right)^{-1}S_{ij}^{(r)}\), except that for \(r=0\) and \(i=j\) we set \(\lambda_{ii}^{(0)}=(-1)^{|i|}(q-1)^{-1}\left(S_{ii}^{(0)}-1\right)\). Let \(\mathrm{U}_{\mathbb{A}}^{tw}\) be the \(\mathbb{A}\)-sub-superalgebra of \(\mathrm{U}_{q}^{tw}\) generated by all \(\lambda_{ij}^{(r)}\). Since all the generators \(\lambda_{ij}^{(r)}\) are contained in \(\mathrm{U}_{\mathbb{A}}\), \(\mathrm{U}_{\mathbb{A}}^{tw}\) is also an \(\mathbb{A}\)-sub-superalgebra of \(\mathrm{U}_{\mathbb{A}}\).
**Proposition 3.8**.: _Let \(\mathbf{B}_{q}\) be the (abstract) associative superalgebra generated by \(B_{ij}^{(r)}\) with relations (3.6) and (3.14). Let \(\mathbf{B}_{\mathbb{A}}\) be the \(\mathbb{A}\)-sub-superalgebra of \(\mathbf{B}_{q}\) generated by the elements_
\[b_{i\bar{l}}^{(0)}=(-1)^{|i|}\frac{B_{i\bar{l}}^{(0)}-1}{q-1},\quad b_{i\bar{l }}^{(r)}=\frac{B_{i\bar{j}}^{(r)}}{q_{i}-q_{i}^{-1}}\]
_with \(r>0\) or \(i\neq j\). Then \(\mathbf{B}_{\mathbb{A}}\) with the relation_
\[b_{ii}^{(0)}\equiv-b_{\bar{i}\bar{i}}^{(0)}\quad modulo\;(q-1) \mathbf{B}_{\mathbb{A}} \tag{3.20}\]
_is isomorphic to \(\mathrm{U}_{\mathbb{A}}^{tw}\)._
Proof.: The mapping \(\omega:b_{ij}^{(r)}\mapsto\lambda_{ij}^{(r)}\) from \(\mathbf{B}_{\mathbb{A}}\) to \(\mathrm{U}_{\mathbb{A}}^{tw}\) is an epimorphism clearly.
Consider another filtration on \(\mathrm{U}_{\mathbb{A}}\) by setting \(\deg\lambda_{ij}^{(r)}=\deg\widetilde{\lambda}_{ij}^{(r)}=r\) and \(\deg(q-q^{-1})=1\). The graded algebra \(gr_{2}\mathrm{U}_{\mathbb{A}}\) is defined by
\[gr_{2}\mathrm{U}_{\mathbb{A}}=\bigoplus_{p=0}^{\infty}\mathsf{F} _{[p]}^{\prime}/\mathsf{F}_{[p+1]}^{\prime}\;\text{with}\;\mathsf{F}_{[p]}^{ \prime}:=\left\{X\in\mathrm{U}_{\mathbb{A}}|\deg(X)\geq p\right\}.\]
The image of \(\lambda_{ij}^{(r)}\) in \(gr_{2}\mathrm{U}_{\mathbb{A}}\) satisfies
\[\overline{\lambda_{ij}^{(r)}}=\overline{\gamma_{ij}^{(r)}}+(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}\overline{\widetilde{\gamma}_{\bar{j}\bar{i}}^{(r)}}.\]
Then we have in the universal enveloping superalgebra \(\mathrm{U}(\mathcal{Q}\mathfrak{gl}_{M|2n})\)
\[\psi^{\prime}\left(\lambda_{ij}^{(r)}\right) =E_{ij}x^{-r}-(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}E_{\bar{j}\bar{i}}x^{r}\quad\text{for}\;r>0,\] \[\psi^{\prime}\left(\lambda_{ij}^{(0)}\right) =E_{ij}+\theta_{i}E_{\bar{j}\bar{i}}\quad\text{for}\;j\in I_{+},i\in I_{-},\] \[\psi^{\prime}\left(\lambda_{ij}^{(0)}\right) =E_{ij}\quad\text{for}\;i<j,\;\;\text{and}\;\;i,j\in I_{\pm},\] \[\psi^{\prime}\left(\lambda_{ij}^{(0)}\right) =-(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}E_{\bar{j}\bar{i}}\quad\text{for}\;i>j,\;\;\text{and}\;\;i,j\in I_{\pm},\] \[\psi^{\prime}\left(\lambda_{ii}^{(0)}\right) =E_{ii}-E_{\bar{i}\bar{i}}=-\psi^{\prime}\left(\lambda_{\bar{i}\bar{i}}^{(0)}\right),\]
where \(\psi^{\prime}\) is the morphism mentioned in Section 2.3. Denote the image of \(\gamma_{ii}^{(0)}\), \(\widetilde{\gamma}_{ii}^{(0)}\) in \(\mathrm{U}_{\mathbb{A}}^{tw}\) by \(\underline{\gamma}_{i\bar{i}}^{(0)},\widetilde{\underline{\gamma}}_{i\bar{i}}^ {(0)}\), respectively. Since
\[\gamma_{ii}^{(0)}+\widetilde{\gamma}_{ii}^{(0)}=(-1)^{|i|}(q-1)( \gamma_{ii}^{(0)})^{2}\left\{(q-1)\widetilde{\gamma}_{ii}^{(0)}+1\right\}\in(q -1)\mathrm{U}_{\mathbb{A}},\]
we have
\[\lambda_{ii}^{(0)}\equiv\underline{\gamma}_{ii}^{(0)}+\widetilde{ \underline{\gamma}}_{\bar{i}\bar{i}}^{(0)}\equiv-\lambda_{\bar{i}\bar{i}}^{(0) }\quad\text{modulo}\;\;(q-1)\mathrm{U}_{\mathbb{A}}^{tw}.\]
Let \(gr_{2}\mathrm{U}_{\mathbb{A}}^{tw}\) be the induced graded algebra on \(\mathrm{U}_{\mathbb{A}}^{tw}\).
We define the graded algebra \(gr\mathbf{B}_{\mathbb{A}}\) on \(\mathbf{B}_{\mathbb{A}}\) with degree \(\deg b_{ij}^{(r)}=r\) and \(\deg\big{(}q-q^{-1}\big{)}=1\) similarly. It is enough to show that the induced morphism \(\overline{\omega}\) between \(gr\mathbf{B}_{\mathbb{A}}\) and \(gr_{2}\mathrm{U}_{\mathbb{A}}^{tw}\) is injective. From relation (3.16) in \(b_{ij}^{(r)}\), we take \(i=k\), \(j=l\) and obtain that
\[b_{i\bar{j}}(u)b_{i\bar{j}}(v)\equiv(-1)^{|i|+|j|}b_{i\bar{j}}(v)b_{i\bar{j}}(u )\quad\text{modulo}\;\;(q-1)\mathbf{B}_{\mathbb{A}},\]
which implies that \(\left(b_{ij}^{(r)}\right)^{2}=0\) for any \(i,j\) with \(|i|+|j|=1\) and any non-negative integer \(r\). Then by Lemma 3.7, all monomials
\[\left(q-q^{-1}\right)^{r_{0}}\left(b_{i_{1},j_{1}}^{(r_{1})}\right)^{t_{r_{1}:i _{1},j_{1}}}\left(b_{i_{1},j_{2}}^{(r_{2})}\right)^{t_{r_{2}:i_{2},j_{2}}}\cdots \left(b_{i_{k},j_{k}}^{(r_{k})}\right)^{t_{r_{k}:i_{k},j_{k}}}\]
with the order \((r_{1};i_{1},j_{1})\preceq(r_{2};i_{2},j_{2})\preceq\cdots\preceq(r_{k};i_{k}, j_{k})\) and with the restrictions (3.6), (3.20) and
\[t_{r_{p};i_{p},j_{p}}\in\mathbb{Z}_{\geq 0}\ \ \text{for}\ \ |i_{p}|+|j_{p}|=0,\quad t_{r_{p};i_{p},j_{p}}\in\{0,1\}\ \ \text{for}\ \ |i_{p}|+|j_{p}|=1\]
span the superalgebra \(\mathbf{B}_{\mathbb{A}}\). For simplicity, we call monomials of this form the restricted ordered monomials in the \(b_{ij}^{(r)}\). This statement is also true for \(\mathrm{U}_{\mathbb{A}}^{tw}\).
Now we argue in the component \(\mathsf{F}_{[p]}^{\prime}/\mathsf{F}_{[p+1]}^{\prime}\). Suppose that
\[\sum_{\ell,\iota,J}c_{\ell;\iota,J}\cdot\overline{\mathsf{b}_{\ell;\iota,J}}=0,\]
where \(c_{\ell;\iota,J}\in\mathbb{A}\), and \(\overline{\mathsf{b}_{\ell;\iota,J}}\) is a restricted ordered monomial in \(\overline{b_{ij}^{(r)}}\) with respect to the sequences \(\iota=(i_{s})\), \(J=(j_{s})\) and
\[\ell=(r_{0},r_{1},\cdots,r_{1},r_{2}\cdots,r_{2},\cdots,r_{k},\cdots,r_{k}) \quad\text{with }r_{0}+\sum_{s}r_{s}\cdot t_{r_{s};i_{s},j_{s}}=p.\]
But all of the coefficients \(c_{\ell;\iota,J}\) vanish under \(\overline{\omega}\), since the restricted ordered monomials in the elements \(\psi^{\prime}(\lambda_{ij}^{(r)})\) are linearly independent in \(\mathrm{U}(\mathfrak{Lgl}_{M|2n})\). This proves the injectivity of \(\overline{\omega}\).
**Corollary 3.9**.: _The restricted ordered monomials in generators \(\lambda_{i,j}^{(r)}\) denoted in the above proof constitute a PBW type basis of \(\mathrm{U}_{\mathbb{A}}^{tw}\)._
**Corollary 3.10**.: _Under the hypotheses of Proposition 3.8, \(\mathbf{B}_{q}\) with the defining relations (3.6), (3.14) and (3.20) is isomorphic to \(\mathrm{U}_{q}^{tw}\). Moreover, the ordered monomials (3.15) in generators_
\[S_{ij}^{(r)}\ \ \text{for}\ \ r>0,\ \ S_{ij}^{(0)}\ \ \text{for}\ \ i,j\in I_{\pm},i\neq j\ \ \text{or}\ \ i\in I_{+},j\in I_{-},\ \text{and}\ \ S_{ii}^{(0)}\ \ \text{for}\ \ i<\tilde{i}\]
_with the exponents \(t_{r;i,j}\in\{0,1\}\) for \(|i|+|j|=1\) constitute a PBW type basis of \(\mathrm{U}_{q}^{tw}\)._
### From twisted quantum loop superalgebra to twisted super Yangian
This subsection is devoted to establishing the relationship between the twisted quantum loop superalgebra and the twisted super Yangian of the orthosymplectic Lie superalgebra. Indeed, we will prove that the restriction of \(\varphi\) in Theorem 2.10 to the twisted super Yangian \(\mathrm{Y}_{\hbar}^{tw}(\mathfrak{osp}_{M|2n})\) gives an isomorphism onto the graded algebra \(\overline{V}^{tw}\) derived from \(\mathrm{U}_{\mathbb{A}}^{tw}\).
We proceed by introducing the elements \(\Lambda^{(r,m)}_{ij}\) for \(r,m\geq 0\). Set \(\Lambda^{(r,0)}_{ij}=\lambda^{(r)}_{ij}\) for \(r\geq 0\), except that for \(r=0\), \(i\in I_{-}\) and \(j\in I_{+}\) we set \(\Lambda^{(0,0)}_{ij}=-(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}\lambda^{(0)}_{\bar{j}\bar{i}}\); and set recursively \(\Lambda^{(r,m+1)}_{ij}=\Lambda^{(r+1,m)}_{ij}-\Lambda^{(r,m)}_{ij}\) for \(m\geq 0\). Then for \(r\geq 1\)
\[\Lambda^{(r,m)}_{ij}= \left(q-q^{-1}\right)\sum_{k\in I}\sum_{p=1}^{r-1}(-1)^{|k||j|} \theta_{k}\theta_{j}\Gamma^{(r-p,m)}_{ik}\widetilde{\Gamma}^{(p,0)}_{\bar{j} \bar{k}}\] \[+\left(q-q^{-1}\right)\sum_{k\in I}\sum_{p=1}^{m}(-1)^{|k||j|} \theta_{k}\theta_{j}\Gamma^{(1,p-1)}_{ik}\widetilde{\Gamma}^{(r,m-p)}_{\bar{j} \bar{k}}\] \[+\left(q-q^{-1}\right)\sum_{k\geq i}(-1)^{|k||j|}\theta_{k} \theta_{j}\left(\frac{1}{1+q^{-1}}\right)^{\delta_{ik}}\Gamma^{(0,0)}_{ik} \widetilde{\Gamma}^{(r,m)}_{\bar{j}\bar{k}}\] \[+\left(q-q^{-1}\right)\sum_{\bar{k}\equiv\bar{j}}(-1)^{|k||j|} \theta_{k}\theta_{j}\left(\frac{1}{1+q^{-1}}\right)^{\delta_{jk}}\Gamma^{(r,m )}_{ik}\widetilde{\Gamma}^{(0,0)}_{\bar{j}\bar{k}}\] \[+\sum_{k\in I}(-1)^{|k||j|+|k|}\theta_{k}\theta_{j}\left\{\delta_ {ik}\widetilde{\Gamma}^{(r,m)}_{\bar{j}\bar{k}}+\delta_{jk}\Gamma^{(r,m)}_{ik} \right\}.\]
The elements \(\Lambda^{(r,m)}_{ij}\) are contained in \(\mathsf{F}_{[m]}\) for every \(m\geq 0\), and the first, third and fourth terms vanish in \(\mathsf{F}_{[m]}/\mathsf{F}_{[m+1]}\). The following lemma is a straightforward consequence of the statement above.
**Lemma 3.11**.: _It holds for \(r\geq 1\) and \(m\geq 0\),_
\[\Lambda^{(r,m)}_{ij}\equiv\left(q-q^{-1}\right)\sum_{k\in I}\sum_{p=1}^{m}(-1) ^{|k||j|}\theta_{k}\theta_{j}\Gamma^{(0,p-1)}_{ik}\widetilde{\Gamma}^{(0,m-p)} _{\bar{j}\bar{k}}+(-1)^{|i||j|+|i|}\theta_{i}\theta_{j}\widetilde{\Gamma}^{(0, m)}_{\bar{j}\bar{i}}+\Gamma^{(0,m)}_{ij}\]
_modulo \(\mathsf{F}_{[m+1]}\)._
**Theorem 3.12**.: _For \(m\geq 0\), we have \(\varphi\left(\hat{s}^{(m+1)}_{ij}\right)=(-1)^{|j|}\overline{\Lambda^{(1,m)}_{ ij}}\in\mathsf{F}_{[m]}/\mathsf{F}_{[m+1]}\), where \(\varphi\) is the isomorphism mentioned in Theorem 2.10._
Proof.: By relation (3.4) in \(\mathsf{Y}_{\hbar}(\mathfrak{gl}_{M|2n})\) and Lemma 3.11, we have \(\varphi\left(\hat{s}^{(m+1)}_{ij}\right)\in\mathsf{F}_{[m]}\) and
\[\varphi\left(\hat{s}^{(m+1)}_{ij}\right)=\hbar\sum_{k\in I}\sum_{ p=1}^{m}(-1)^{p}(-1)^{|k||i|+|j|}\theta_{k}\theta_{j}\varphi\left(\hat{t}^{(m+1-p)}_{ \bar{i}\bar{k}}\right)\varphi\left(\hat{t}^{(p)}_{\bar{j}\bar{k}}\right)\] \[\qquad\qquad\qquad+(-1)^{m+1}(-1)^{|i|+|j|}\theta_{i}\theta_{j} \varphi\left(\hat{t}^{(m+1)}_{\bar{j}\bar{i}}\right)+(-1)^{|l||j|+|j|}\varphi \left(\hat{t}^{(m+1)}_{ij}\right)\] \[\equiv\left(q-q^{-1}\right)\sum_{k\in I}\sum_{p=1}^{m}(-1)^{p}( -1)^{|k||j|+|j|}\theta_{k}\theta_{j}\Gamma^{(0,m-p)}_{ik}\Gamma^{(0,p-1)}_{ \bar{j}\bar{k}}\] \[\qquad\qquad\qquad+(-1)^{m+1}(-1)^{|i||j|+|i|+|j|}\theta_{i} \theta_{j}\Gamma^{(0,m)}_{\bar{j}\bar{j}}+(-1)^{|j|}\Gamma^{(0,m)}_{ij}\] \[\equiv(-1)^{|j|}\Lambda^{(1,m)}_{ij}\]
modulo \(\mathsf{F}_{[m+1]}\).
Let \(\mathsf{F}^{tw}_{[m]}\) be the span of elements of the form
\[(q-q^{-1})^{m_{0}}\Lambda^{(1,m_{1})}_{i_{1},j_{1}}\Lambda^{(1,m_{2})}_{i_{2},j_{ 2}}\ldots\Lambda^{(1,m_{k})}_{i_{k},j_{k}},\]
where \(m_{0}+\cdots+m_{k}\geq m\). Denote \(\overline{V}^{tw}=\bigoplus_{m=0}^{\infty}\mathsf{F}^{tw}_{[m]}/\mathsf{F}^{tw}_{[m+1]}\). Then \(\Lambda^{(r,m)}_{ij}\equiv\Lambda^{(1,m)}_{ij}\) still holds modulo \(\mathsf{F}^{tw}_{[m+1]}\), which follows from \(\Lambda^{(r,m)}_{ij}=\sum_{p=1}^{r-1}{r-1\choose p}\Lambda^{(1,m+p)}_{ij}+\Lambda^{(1,m)}_{ij}\in\mathsf{F}^{tw}_{[m]}\). Let \(\varphi^{tw}\) be the morphism that maps \(\hat{s}^{(m+1)}_{ij}\) to \((-1)^{|j|}\Lambda^{(1,m)}_{ij}\in\mathsf{F}^{tw}_{[m]}/\mathsf{F}^{tw}_{[m+1]}\). In \(\overline{V}^{tw}\), set \(\hbar=\overline{q-q^{-1}}\in\mathsf{F}^{tw}_{[1]}/\mathsf{F}^{tw}_{[2]}\). We can identify \(\varphi^{tw}\) with the restriction of \(\varphi\) to \(\Upsilon^{tw}_{\hbar}(\mathfrak{osp}_{M|2n})\).
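For instance, unfolding the recursion \(\Lambda^{(r,m+1)}_{ij}=\Lambda^{(r+1,m)}_{ij}-\Lambda^{(r,m)}_{ij}\), i.e. \(\Lambda^{(r+1,m)}_{ij}=\Lambda^{(r,m)}_{ij}+\Lambda^{(r,m+1)}_{ij}\), gives the first two nontrivial cases of this identity explicitly:
\[\Lambda^{(2,m)}_{ij}=\Lambda^{(1,m)}_{ij}+\Lambda^{(1,m+1)}_{ij},\qquad\Lambda^{(3,m)}_{ij}=\Lambda^{(2,m)}_{ij}+\Lambda^{(2,m+1)}_{ij}=\Lambda^{(1,m)}_{ij}+2\,\Lambda^{(1,m+1)}_{ij}+\Lambda^{(1,m+2)}_{ij},\]
and each summand \(\Lambda^{(1,m+p)}_{ij}\) lies in \(\mathsf{F}^{tw}_{[m+p]}\subseteq\mathsf{F}^{tw}_{[m]}\).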
Let \(\mu_{1}\) be the embedding of \(\Upsilon^{tw}_{\hbar}(\mathfrak{osp}_{M|2n})\) into \(\Upsilon_{\hbar}(\mathfrak{gl}_{M|2n})\) defined by relation (3.4). The embedding \(\mu_{2}\) can be defined by Lemma 3.11 similarly. As a consequence, we have the following commutative diagram
that is, \(\mu_{2}\cdot\varphi^{tw}=\varphi\cdot\mu_{1}\). Therefore we have the main result as follows.
**Theorem 3.13**.: _The morphism \(\varphi^{tw}\) is a superalgebraic isomorphism between \(\Upsilon^{tw}_{\hbar}(\mathfrak{osp}_{M|2n})\) and the graded algebra \(\overline{V}^{tw}\)._
## Acknowledgments
Y. Wang is partially supported by the National Natural Science Foundation of China (Nos. 11901146 and 12071026) and the Fundamental Research Funds for the Central Universities JZ2021HGTB0124. H. Zhang is partially supported by the National Natural Science Foundation of China (No. 12271332) and Shanghai Natural Science Foundation grant 22ZR1424600.
|
2306.10034 | Unlocking Insights into Business Trajectories with Transformer-based
Spatio-temporal Data Analysis | The world of business is constantly evolving and staying ahead of the curve
requires a deep understanding of market trends and performance. This article
addresses this requirement by modeling business trajectories using news
articles data. | Muhammad Arslan, Christophe Cruz | 2023-06-07T20:19:25Z | http://arxiv.org/abs/2306.10034v1 | # Unlocking Insights into Business Trajectories with Transformer-based Spatio-temporal Data Analysis
###### Abstract.
_The world of business is constantly evolving and staying ahead of the curve requires a deep understanding of market trends and performance. With the increasing availability of text data, it has become possible to gain insights into the health and trajectory of businesses and help decision-makers to make informed decisions. This work is based on the application of the Transformer-based spatio-temporal data analysis for analyzing vast amounts of business-related text data. The proposed method processes news articles that were retrieved from multiple sources from 2017 to 2021 inclusive. It categorizes them into different business categories (e.g. manufacturing, healthcare, construction, etc.) based on the company's interest. A company's taxonomy is used to understand their domains of interest. It then identifies the main topics in each article, which give us information about the events (e.g. launch of a company, regulatory changes, etc.) and their spatio-temporal evolution over time. The insights generated from this analysis can be used by businesses to make informed decisions, by investors to identify opportunities and assess risk, and by policymakers to understand the impact of regulations and economic policies on the business community._
Keywords: MULTI-LABEL TEXT CLASSIFICATION, TOPIC MODELING, TRANSFORMERS, BUSINESS TRAJECTORIES
## 1 Introduction
Transformer-based spatio-temporal data analysis is a cutting-edge approach that leverages the power of machine learning and natural language processing to analyze vast amounts of business-related text data (Fan et al. 2022). This approach allows us to not only analyze the performance of businesses over time but also understand how trends and performance vary across geographic territories. By combining data analysis with the latest advancements in natural language processing, we can gain a comprehensive view of business trends (Brasoveanu and Andonie 2020). It offers a powerful tool for unlocking insights into business trajectories, providing valuable information for businesses, investors, and policymakers.
To perform business data analysis, we need to develop a news data analyzer (Alawadh et al. 2023). A news data analyzer refers to a system that processes and analyzes news articles to extract relevant information and insights (Lau et al. 2021). The goal of a news data analyzer is typically to gain a better understanding of the events and trends in the news and to make sense of the large amounts of news data that are generated daily. A news data analyzer can use various techniques and algorithms (Ashtiani and Raahmei 2023), such as Natural Language Processing (NLP), Machine Learning (ML), and Data Mining, to process news articles and extract valuable information such as topics, etc. The analyzer can then present this information in a meaningful and useful way, such as through visualizations. The specific features and capabilities of a news data analyzer can vary depending on the use case and requirements, but the overall goal is to help users make sense of the news data and gain valuable insights and information.
In our case, a news data analyzer is a system that combines transformer-based text classification (Cunha et al. 2022) and dynamic topic models (Grootendorst 2022) and classifies news articles into different business categories. Business categories refer to different types of businesses (e.g., healthcare, transportation, agriculture, construction, and real estate) that exist in the market. For this study, the domains were shortlisted based on the company's interest. After categorizing the business news, it identifies the main topics in each text. In the context of business, many different topics can be covered in news articles. For instance, company news and performance, which deals with updates on the performance and financial results of individual companies, as well as news on mergers and acquisitions, partnerships, and other business developments. The insights generated by understanding the spatio-temporal evolution of these topics will help to make informed decisions.
This paper is organized as follows. Section 2 reviews the background of transformer-based text classification and dynamic topic models. Section 3 introduces the proposed method. The discussion is mentioned in Section 4 and the conclusion is described in Section 5.
## 2 Background
Text classification is a task in natural language processing (NLP) where the goal is to assign predefined labels to a given text based on its content (Kowsari et al., 2019). The traditional approach to text classification was based on hand-engineered features, such as the presence of certain keywords, and simple machine learning algorithms like Naive Bayes (Webb et al., 2010). With the advent of deep learning, the focus shifted towards end-to-end models that could automatically extract useful features from the raw text. One of the most successful such models is the Transformer, introduced in 2017 (Rothman, 2021).
Transformers are a type of neural network architecture that use self-attention mechanisms to process input sequences in a parallel manner (Rothman, 2021). In the context of NLP, they have proven to be highly effective at capturing long-range dependencies and semantic relationships between words in a sentence, which is essential for accurate text classification. Transformers have been applied to various NLP tasks, including text classification (Bhardwaj et al., 2021). In text classification, the input text can be assigned one or multiple labels (Liu et al., 2021). If a text can be assigned multiple labels, the task is known as multi-label text classification. Transformers can be applied to both single-label and multi-label text classification tasks. In the case of multi-label text classification, the output of the model is a binary vector, with each dimension corresponding to a label (Liu et al., 2021). The value of each dimension is either 0 or 1, indicating whether the text belongs to that label. To train a Transformer for multi-label text classification, a binary cross-entropy loss is used (Hande et al., 2021). The model is trained to predict the probability of each label, and the binary cross-entropy loss measures the difference between the true label and the predicted probability.
Dynamic topic models (DTMs) are a type of generative probabilistic model that can be used to analyze and track the evolution of topics in large collections of text documents over time (Grootendorst, 2022). In traditional topic models, such as Latent Dirichlet Allocation (LDA), topics are treated as fixed latent variables that generate the words in the documents (Blei et al., 2003). However, in dynamic topic models (e.g., BERTopic), topics are assumed to evolve over time, reflecting the changing interests and themes of the underlying text corpus. DTMs typically represent each document as a mixture of topics, and each topic is modeled as a probability distribution over words (Grootendorst, 2022). The model then assigns topics to the words in each document, and the topic distributions for each document are used to infer the topic evolution over time. DTMs are a flexible and powerful tool for analyzing text data over time and can be used to identify trends, track the spread of ideas and information, and understand how topics change and evolve (Yao and Wang, 2022). They have been applied to a wide range of text-based applications (Savin and Konop, 2022), including social media analysis, news analysis, and scientific literature analysis (Sestino and De Mauro, 2022).
In the existing literature, the intersection of transformer-based models and dynamic topic models (DTMs) has not been thoroughly investigated in the domain of business analysis. This research aims to fill this gap by exploring the benefits of
combining these two approaches for business analysis. The objective of this study is to show the efficacy of combining transformer-based models and DTMs for multi-label text classification and topic extraction, and to demonstrate the potential applications of this combination in the realm of business analysis.
## 3 Proposed Method
The proposed news data analyzer consists of four key stages:
1. Text preprocessing involves cleaning, transforming, and preparing text data for spatio-temporal analysis.
2. Text classification involves categorizing text data into predefined categories based on their content and characteristics.
3. Dynamic topic modeling involves identifying and tracking the evolution of topics over time within a text corpus.
4. Data visualization is the final stage, where the results of the previous stages are presented in an intuitive and graphical manner to facilitate understanding and interpretation of the business trajectories and their evolution over time and geographic regions.
### Text preprocessing
Preprocessing is a crucial step in the NLP pipeline that involves cleaning and transforming raw data into a format suitable for analysis. Its goal is to improve the quality and reliability of the data and to facilitate the analysis by reducing the amount of noise and irrelevant information. The choice of preprocessing techniques depends on the specific dataset and the problem being solved, and it can have a significant impact on the performance of the final model. In our case, a dataset of 26,509 news articles is used. The dataset is manually labelled with 1,800 different categories based on the company's taxonomy; each text is classified with labels that correspond to the labels of the taxonomy (an example is shown in Fig. 1). After preprocessing the news articles, a corpus of around 8,128,981 words is obtained. The system implements several preprocessing functions for English text, relying on the NLTK, RE, and String libraries. It first removes punctuation and digits from the text using regular expressions. Then, it converts the text to lowercase, tokenizes the text into words, removes stop words and duplicates, and removes small words. The final step is to join the preprocessed words back into a single text to feed the classification model.
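A minimal sketch of such a preprocessing function is given below; the exact regular expressions, the minimum word length and other details are illustrative assumptions rather than the system's actual code.

```python
# Illustrative preprocessing sketch (assumed implementation) using NLTK and re.
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# nltk.download("punkt"); nltk.download("stopwords")  # one-time resource downloads
STOP_WORDS = set(stopwords.words("english"))

def preprocess(text: str, min_len: int = 3) -> str:
    text = re.sub(r"[^\w\s]", " ", text)             # remove punctuation
    text = re.sub(r"\d+", " ", text)                 # remove digits
    tokens = word_tokenize(text.lower())             # lowercase and tokenize
    tokens = [t for t in tokens if t not in STOP_WORDS and len(t) >= min_len]
    tokens = list(dict.fromkeys(tokens))             # drop duplicate words, keep order
    return " ".join(tokens)                          # re-join into a single text
```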
### Multi-label text classification
Since each text in the dataset is labelled with multiple labels (see Fig. 1), we follow the multi-label classification setting. After preprocessing the text, a transformer-based multi-label text classification model is trained. The model uses a multi-layer neural network architecture and is trained with the binary cross-entropy loss function and the Adam optimizer. The implementation imports several libraries, including NumPy and Pandas for data processing and manipulation, and Keras, a high-level deep learning library, for building and training the model. It starts by loading the preprocessed text data with NumPy. The data is then tokenized using the Tokenizer class from Keras, and the tokenized text is converted into numerical sequences. The sequences are padded to the same length so that all the input data has the same shape, which is required for training deep learning models. Finally, the data is split into training and validation sets using NumPy slicing, with the split index set to 80% of the total number of sequences.
Next, the deep learning model is defined using the Keras functional API. The model consists of several layers, including an input layer, an embedding layer, a global max pooling layer, a dense layer with a Rectified Linear Unit (ReLU) activation function, and a dropout layer. The dropout layer helps to prevent overfitting of the model by randomly dropping out a fraction of the neurons during each training step. The final layer is a dense layer with a sigmoid activation function, which produces binary output values. The model is then compiled, where the loss function, optimizer, and evaluation metrics are specified. The loss function used is binary cross-entropy, which is suitable for binary classification problems. The optimizer used is the Adam optimizer with a learning rate of 0.001. The evaluation metric used is accuracy and precision. Finally, the model is trained using the _fit_ method. The training data is used along with several other parameters such as batch size and number of epochs. The validation data is also used to evaluate the performance of the model.
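A condensed sketch of this model is shown below; the vocabulary size, sequence length, embedding width, hidden units and dropout rate are illustrative assumptions, while the layer types, loss, optimizer and metrics follow the description above.

```python
# Condensed sketch of the multi-label classifier described above (assumed sizes).
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_LABELS = 50_000, 300, 1800   # assumed dimensions (1,800 labels)

inputs = keras.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)          # embedding layer
x = layers.GlobalMaxPooling1D()(x)                     # global max pooling
x = layers.Dense(256, activation="relu")(x)            # dense layer with ReLU
x = layers.Dropout(0.5)(x)                             # dropout against overfitting
outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)  # one sigmoid per label

model = keras.Model(inputs, outputs)
model.compile(
    loss="binary_crossentropy",                        # multi-label objective
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    metrics=["accuracy", keras.metrics.Precision()],
)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=..., epochs=...)
```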
### Dynamic topic modeling
To use BERTopic, a transformer-based dynamic topic modeling approach, we first convert the preprocessed news text into numerical data using the "sentence transformers" python package. This package converts text data into 512-dimensional vectors, providing document-level embeddings. We set the language to English since our corpus is in English. During training of the BERTopic model, topic probabilities are calculated. Once training is completed, the model outputs 66 business topics, each represented by the 10 most probable words (see Fig. 2).

Figure 1: Text labelled using the taxonomy of the company.
To visualize the evolution of topics in news articles over time, we use the "_topic_model.topics_over_time_" function, which requires the preprocessed news text corpus (docs), acquired topics (topics), timestamps for each article, global tuning for averaging topic representation with global topic representation, evolution tuning for averaging topic representation with the previous time's representation, and the number of bins for timestamps (nr_bins). The input values result in a plot (Fig. 3) showing the evolution of business topics over the past 5 years.
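Putting these steps together, a minimal sketch of the topic-modeling stage could look as follows; the embedding model name and the number of bins are assumptions, and the exact signature of `topics_over_time` may differ between BERTopic versions.

```python
# Illustrative sketch of the BERTopic pipeline described above (assumed details).
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

def build_topics(docs, timestamps):
    """docs: preprocessed article texts; timestamps: one publication date per article."""
    embedder = SentenceTransformer("distiluse-base-multilingual-cased-v1")  # assumed 512-dim model
    topic_model = BERTopic(embedding_model=embedder, language="english")
    topics, probs = topic_model.fit_transform(docs)
    over_time = topic_model.topics_over_time(
        docs, timestamps, nr_bins=20,                 # assumed number of time bins
        global_tuning=True, evolution_tuning=True,    # averaging options described above
    )
    fig = topic_model.visualize_topics_over_time(over_time)
    return topic_model, over_time, fig
```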
### Visualizations of business trajectories
The process of constructing business trajectory visualizations involved first performing the multi-label classification of news text. Once the text is categorized and labelled, it is important to extract the topics for understanding different types of business events. For instance, the launch of the solar energy industry represents a significant development in the field of renewable energy. This business event signifies the introduction of a new player in the market, offering innovative solutions for harnessing the power of the sun. The event is typically categorized under topic 11 of renewable energy and can be identified using keywords such as "solar," "power," "energy," "park," and others that are relevant to the industry (see Fig. 2).

Figure 2: Different words representing different topics.

Figure 3: The evolution of different topics over time.
It is crucial to analyze the volume (i.e., number) of news articles related to the launch of the solar energy industry. This information can provide valuable insights into the origin and evolution of the business event. By examining the number of articles published over different time periods, we can determine the level of public interest and media coverage surrounding the event. Additionally, tracking the geographic regions that the articles pertain to can give us an idea of the location and spread of the industry's development. This information can help us to identify the key players and driving forces behind the growth of the solar energy industry, providing valuable insights into the market and its potential for future growth. This information regarding the volume of news articles (see Fig. 4) was then utilized to create a business trajectory visualization (see Fig. 5) to show how the business topic evolved over time across different geographic regions for a better understanding of the overall business event.
Figure 4: Number of articles published on a topic.
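One simple way to compute the article-volume counts behind such trajectory plots is sketched below; the dataframe column names are assumptions about how the labelled articles are stored.

```python
# Hypothetical aggregation of article volume per topic, region and time bin.
import pandas as pd

def topic_volume(articles: pd.DataFrame, topic_id: int, freq: str = "Q") -> pd.DataFrame:
    """`articles` is assumed to have columns 'date', 'region' and 'topic'."""
    subset = articles[articles["topic"] == topic_id].copy()
    subset["period"] = pd.to_datetime(subset["date"]).dt.to_period(freq)
    counts = subset.groupby(["period", "region"]).size()
    return counts.rename("n_articles").reset_index()
```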
## 4 Discussion
When using Transformers for multi-label text classification, there are some important considerations we observed during this study, most importantly class imbalance and label correlations. These issues arise because of the huge number of labels (1,800 in our case) and the small number of text examples per label in the dataset. Class imbalance refers to the situation where some labels are much more frequent than others, which can lead to a model that is biased towards the more frequent labels. To address class imbalance, various techniques can be used, such as oversampling the minority class or using weighted loss functions. Label correlations refer to the situation where the presence of one label is often predictive of the presence of another label. This can be addressed by using techniques such as label powerset or label embedding, which represent the labels as a combination or embedding of multiple binary dimensions, respectively. However, these aspects concerning class imbalance and label correlations are not dealt with in this study.
Apart from that, visualizing business trajectories over time and geographic regions can provide several benefits (Arslan and Cruz 2023), which are:
1. _Understanding trends_: Visualizing the trajectory of a business over time helps to identify trends and patterns in its development, which can be useful for decision-making and planning.
2. _Spotting opportunities and challenges_: By seeing the business trajectory over time and space, it becomes easier to spot opportunities and challenges that the business may face. This information can be used to make informed decisions.
3. _Improved communication_: Visualizing business trajectories can help improve communication and collaboration among stakeholders, as it provides a clear and concise way to present information.
4. _Increased awareness_: By creating visual representations of business trajectories, it can increase awareness and understanding of the business among stakeholders.
5. _Enhanced analysis_: Visualizing the business trajectory over time and space allows for a more in-depth analysis of the business, as it provides a comprehensive view of its development and performance.

Figure 5: Business trajectory across geographic regions.
## 5 Conclusion
The application of Transformer-based spatio-temporal data analysis to business-related text data has proven to be an effective way to gain valuable insights into the health and trajectory of businesses. This method processes news articles and categorizes them into different business categories based on a company's interest. Later, the topics are extracted to understand different types of businesses over time. Visualizing hot and cold topics (referring to the current popularity and level of interest in a particular category) as business trajectories can greatly enhance and streamline business processes. By keeping track of these hot and cold topics, businesses can stay informed about the current state of the market and make informed decisions that align with current trends and consumer interests. This not only helps to identify potential opportunities for growth and innovation but also helps to mitigate any potential risks or challenges that may arise. Additionally, having a clear understanding of the current trends and topics can also inform and shape the direction of a company's marketing and communication strategies, allowing them to better connect with their target audience and stay relevant in the market.
## Acknowledgements
The authors thank the French company FirstECO for providing the taxonomy and the French government for the plan France Relance funding.
|
2303.15792 | SDAT: Sub-Dataset Alternation Training for Improved Image Demosaicing | Image demosaicing is an important step in the image processing pipeline for
digital cameras. In data centric approaches, such as deep learning, the
distribution of the dataset used for training can impose a bias on the
networks' outcome. For example, in natural images most patches are smooth, and
high-content patches are much rarer. This can lead to a bias in the performance
of demosaicing algorithms. Most deep learning approaches address this challenge
by utilizing specific losses or designing special network architectures. We
propose a novel approach, SDAT, Sub-Dataset Alternation Training, that tackles
the problem from a training protocol perspective. SDAT is comprised of two
essential phases. In the initial phase, we employ a method to create
sub-datasets from the entire dataset, each inducing a distinct bias. The
subsequent phase involves an alternating training process, which uses the
derived sub-datasets in addition to training also on the entire dataset. SDAT
can be applied regardless of the chosen architecture as demonstrated by various
experiments we conducted for the demosaicing task. The experiments are
performed across a range of architecture sizes and types, namely CNNs and
transformers. We show improved performance in all cases. We are also able to
achieve state-of-the-art results on three highly popular image demosaicing
benchmarks. | Yuval Becker, Raz Z. Nossek, Tomer Peleg | 2023-03-28T07:57:34Z | http://arxiv.org/abs/2303.15792v2 | Make the Most Out of Your Net: Alternating Between Canonical and Hard Datasets for Improved Image Demosaicing
###### Abstract
Image demosaicing is an important step in the image processing pipeline for digital cameras, and it is one of the many tasks within the field of image restoration. A well-known characteristic of natural images is that most patches are smooth, while high-content patches like textures or repetitive patterns are much rarer, which results in a long-tailed distribution. This distribution can create an inductive bias when training machine learning algorithms for image restoration tasks and for image demosaicing in particular. There have been many different approaches to address this challenge, such as utilizing specific losses or designing special network architectures. What makes our work unique is that it tackles the problem from a training protocol perspective. Our proposed training regime consists of two key steps. The first step is a data-mining stage where sub-categories are created and then refined through an elimination process to only retain the most helpful sub-categories. The second step is a cyclic training process where the neural network is trained on both the mined sub-categories and the original dataset. We have conducted various experiments to demonstrate the effectiveness of our training method for the image demosaicing task. Our results show that this method outperforms standard training across a range of architecture sizes and types, including CNNs and Transformers. Moreover, we are able to achieve state-of-the-art results with a significantly smaller neural network, compared to previous state-of-the-art methods.
## 1 Introduction
The task of image demosaicing is a crucial step in digital photography and refers to the process of reconstructing a full-resolution RGB color image from incomplete data obtained from the use of a color filter array (CFA), such as the Bayer pattern of GRBG. In digital cameras, CFA samples only a fraction of the image information, which makes the task of demosaicing complex and challenging. This task is part of a larger group of image restoration problems, such as denoising and deblurring, which all aim to recover a high-quality image from degraded input data. Image restoration problems are usually considered ill-posed, in the sense that it is usually not possible to determine a unique solution that can accurately reconstruct the original image. This is particularly true for image demosaicing due to the fact that the red, green, and blue color channels are sampled at different locations and rates, which can cause severe aliasing issues.
Figure 1: PSNR results of our approach (pink stars) applied on various architectures and compared to leading previous works [7, 25, 36, 39, 42, 38] over the Kodak [15] dataset. Using the suggested training method we achieve better results than all other networks with the same number of parameters. In addition, we achieve state-of-the-art results while having almost an order of magnitude fewer parameters than the previous state of the art.

In recent years, the use of convolutional neural networks
(CNNs) has shown significant success in addressing image restoration problems [16, 39, 42] in general, and for image demosaicing in particular [7, 12, 17, 22, 23, 32]. However, it is well known that inductive bias, which refers to the assumptions and prior knowledge that the model brings to the learning process, is necessary for the model to generalize [21]. But, in some cases, inductive bias can have the opposite effect and negatively impact the model's ability to generalize. This is true for deep neural networks as well and has been an extensive line of study, _e.g._[6, 26], just to mention a few. The bias can arise from various factors such as the network architecture, the training dataset, and others. Our focus in this study is on image restoration tasks, where a common inductive bias is the well observed phenomenon that natural images are mostly smooth, in the sense that nearby pixels tend to have similar values [10, 43]. This is also known as the notion that for natural images, patches typically exhibit a long-tailed distribution, where only a small portion of sample patches are considered as extremely difficult inputs (Zhang _et al_. [41] provide a survey regarding handling the long tail problem for various deep learning tasks). In the case of image demosaicing, this can lead the output of a model in regions that do not demonstrate this behavior to be incorrect in the formation
of common artifacts, such as zippers and moire. To address this challenge, previous research has proposed various solutions, such as mining examples of structures that are prone to artifacts [7], using guided custom losses [17], or employing custom architectures tailored specifically for the task [17, 19].

Figure 2: Qualitative comparison of our method with other top methods, RNAN and RSTCANet. The RNAN model has 9M parameters, while RSTCANet presents 3 different model sizes: B, S, and L, having 0.9M, 3.1M, and 7.1M parameters respectively. We use our suggested training scheme to train the B model. It is easy to see that our model is able to achieve the best restoration result while having the least amount of parameters.
In this paper, we approach this problem from a different angle and propose a novel training method that aims to overcome the limitations of previous methods. Initially, the method identifies difficult patch samples in the original training dataset and divides them into sub-categories. Then, the model is trained in an approach we call cyclic training, using the sub-categories to guide the learning process, in a non-standard scheme, alternating between the entire dataset and the generated sub-categories. In addition, there is a recent trend of developing low-capacity models (less than 50k parameters) for edge devices to perform image demosaicing [19, 25, 34]. We show that our method is able to effectively utilize the model's capacity and surpass recent relevant works across all benchmarks using a smaller number of parameters, showcasing a more efficient solution for low-capacity devices. In addition, we do not limit ourselves to low capacity models or to CNN architectures, as we apply our method and demonstrate improved performance over standard capacity models (approx. 1M parameters and higher). We test our method on an architecture proposed by Xing and Egiazarian [36], which is based on the Swin transformer of Liu [18]. We achieve state-of-the-art (SOTA) results on several benchmarks while using a 10x smaller model than the latest SOTA, as can be seen in Figure 1. To further highlight our training method we show that we are able to achieve our SOTA results while training on a significantly lower amount of data compared to other methods. Thus we demonstrate the effectiveness of our proposal in cases where data is scarce.
We summarize our main contributions as follows:
1. We introduce a novel training method that is able to explore the parameter space more effectively than standard training methods, thus decreasing inductive bias induced by the data.
2. We evaluate our training scheme over different model sizes and different types of architectures, showing great improvement all around and achieving SOTA results over several benchmarks.
A comparison of visual results can be found in Figure 2. More qualitative results can be found in the supplementary material.
## 2 Related Work
Image demosaicing aims to restore the missing information from the under-sampled CFA of the camera sensor, since in most cases each pixel only captures a single color. This is a well studied problem with numerous methods suggested for solving it. The majority of methods focus on the well known Bayer pattern, which captures only one of the following colors at each pixel: red, green, or blue. Initially most methods were model-based, depending on various image priors to help handle challenging areas (_i.e_. areas in the image that are not flat). Such priors could be searching for edges in the image and trying to avoid interpolating across them [15]. Another example could be leveraging the correlations between the different color channels. In some cases methods based on this prior initially interpolate the green channel, as it is sampled at twice the frequency of the red and blue channels, and the interpolated green channel is used thereafter as a guidance map for the rest [4, 37]. An additional common prior relies on the self-similarity properties (_e.g_. [28]) of natural images, thus completing the lost data from other similar patches in the image [3, 40].
Recently, as deep learning methods have become more and more popular, there has been an active line of study dedicated to performing image demosaicing alone, _e.g_. [30, 33, 36], while in other cases it is treated jointly with other tasks (such as image denoising), _e.g_. [17, 23, 35]. Most of the joint methods still train a network and analyze its performance for the image demosaicing task alone. In [7] the authors designed a joint demosaicing and denoising network using CNNs and constructed a dataset containing solely challenging patches they mined from online data, based on demosaicing artifacts. They then went on to train and test the network on cases without the presence of noise. Tan [33] proposed a CNN model that first uses bi-linear interpolation for generating the initial image and then throws away the input mosaic image. Given the initial image as the input, the model has two stages for demosaicing. The first stage estimates the green and red/blue channels separately, while the second stage estimates the three channels jointly. The idea of reconstructing the green channel separately, or reconstructing it first to use later on as a guidance map, is also used by several others, such as Cui [5] and Liu [17]. Other works compared the effects of convolution kernels of different sizes on the reconstruction, _e.g_. the work by Syu [30], which concluded that the larger the size of the convolution kernel, the higher the reconstruction accuracy. Inspired by [30], Gou [8] propose to adapt the inception block of [31] to reduce computation while still having a large receptive field. Another recent popular line of study suggests a general neural network architecture and trains it separately on various image restoration tasks (_i.e_. each task requires a separate model with different weights), such as [16, 38, 39, 42]. Zhang [42] suggest the RNAN architecture, a CNN network that has achieved SOTA results on the popular Kodak benchmark [15]. Xing and Egiazarian [36] suggest slight modifications to the SwinIR architecture of [16] and demonstrate its performance on the demosaicing task, achieving SOTA results on other benchmarks [9, 20, 40].
## 3 Method
Since natural images consist mainly of smooth patches [10, 14, 43], after performing an initial training the model is able to predict good results on most areas; however, it struggles with more complex areas, _e.g_. edges, repetitive patterns, etc. These challenging patches are part of the tail of the patch distribution in natural images. This challenge arises because the model converges to a local minimum with a strong inductive bias induced by the data. In order to alleviate this bias while still training on the entire dataset, we introduce the following method. Our method consists of two main steps: (1) constructing datasets of sub-categories and (2) utilizing a novel training scheme we denote as cyclic training. In the first step, we search for challenging sub-categories within the overall dataset, _i.e_. patches that are poorly reconstructed by the model that was trained over the entire dataset. We then perform an elimination process to keep only the sub-categories that will have a positive impact on the network results after our novel optimization scheme used in the second step. In the case of demosaicing, we found these are areas that suffer from artifacts such as moire, zipper, etc. In the second step, we introduce an optimization scheme that alternates between the selected sub-categories and the original dataset to improve generalization and achieve better results.
### Dataset Creation
**Generation of Sub-Categories** It is difficult to predict a priori, _i.e_. before the model is trained, which areas in the image will be challenging for the model to predict correctly. Therefore, as an initial step we train our model on the entire dataset. We will denote the initial predicted patch \(\hat{P}_{init}\) and the patch in the ground-truth RGB image as \(P\), both of size \(H_{p}\times W_{p}\times 3\). We then perform a mining stage. In this stage, we identify different sub-categories where the network struggles and create a pool of sub-categories to train on. The following custom metrics are used for this stage:
1. _Zipper metric_ - this metric \(d_{zip}\) is used to find zipper artifacts near edges in the image. First we calculate a mask \(M_{NE}\) to identify non-edge areas as follows \[M_{NE}:=\left|C(\hat{P}_{init})-C\left(P\right)\right|<\varepsilon_{1},\] (1) where \(C\) is some kind of edge detector, _e.g_. Canny, and \(\varepsilon_{1}\) is a predefined threshold. Then we are able to calculate \[d_{zip}:=\left\|\,\left|\,\left|\nabla\hat{P}_{init}\right|-\left|\nabla P\right|\,\right|\odot M_{NE}\right\|_{1},\] (2) where \(\left|\nabla\right|=\left|\frac{\partial}{\partial x}\right|+\left|\frac{\partial}{\partial y}\right|\), and \(\odot\) is the element-wise product.
2. _Grid metric_ - this metric \(d_{grid}\) is used to find grid-like artifacts in flat areas. \[d_{grid}:=\|2\cdot\left|\nabla\hat{P}_{init}\right|\odot\\ (\sigma(\frac{\alpha}{\left|\nabla P\right|})-0.5)\odot(\left| \nabla\hat{P}_{init}\right|>\varepsilon_{2})\|_{1},\] (3) where \(\sigma\) is the Sigmoid function, \(\alpha>0\) is a parameter that scales the values of the Sigmoid, and \(\varepsilon_{2}\) is another predefined threshold.
3. \(L_{1}\)_distance_ - a simple photometric distance. \[d_{L_{1}}:=\|\hat{P}_{init}-P\|_{1}\] (4)
4. _Perceptual metric_[13] - this metric \(d_{VGG/i,j}\) is used for capturing the perceptual distance between the source and predicted patches. \[d_{VGG/i,j}:=\|\phi_{i,j}(\hat{P}_{init})-\phi_{i,j}(P)\|_{2}^{2},\] (5) where \(\phi_{i,j}\) indicates the feature map obtained by the j-th convolution (after activation) before the i-th max-pooling layer within the pre-trained VGG19 network [29].
5. _Edge distance_ - This metric \(d_{edge}\) is used to find distortions around edges \[d_{edge}:=\|C(\hat{P}_{init})-C\left(P\right)\|_{1}.\] (6)

Figure 3: Examples of patches that are found via the different metrics. The top row depicts the ground truth patches, and the bottom the prediction of the network after vanilla training. (a) A grid patch, (b) a patch with zipper artifact, (c) a patch with high \(L1\) distance, (d) a patch with high edge distance, and (e) a patch with high perceptual distance.
Examples of the patches identified by the custom metrics presented above can be seen in Figure 3. Note that some sub-categories are generated by changing the threshold of the metric, such as in Equation (3).
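As a rough illustration (not our exact implementation), the mining step amounts to ranking patches by one of these metrics and keeping the worst-reconstructed ones; the sketch below uses the simple \(L_{1}\) distance of Equation (4), and the 5% keep-fraction matches the value used in our ablation experiments.

```python
# Illustrative sketch of mining hard patches with the L1 metric of Eq. (4);
# function names and the keep-fraction interface are assumptions.
import numpy as np

def l1_distance(pred_patch: np.ndarray, gt_patch: np.ndarray) -> float:
    # Eq. (4): photometric L1 distance between predicted and ground-truth patches
    return float(np.abs(pred_patch - gt_patch).sum())

def mine_hard_patches(pred_patches, gt_patches, keep_fraction=0.05):
    """Return indices of the worst-reconstructed patches (highest metric value)."""
    scores = np.array([l1_distance(p, g) for p, g in zip(pred_patches, gt_patches)])
    k = max(1, int(keep_fraction * len(scores)))
    return np.argsort(scores)[-k:]                    # indices of the hardest patches
```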
**Selecting Sub-Categories** Following the collection of sub-categories, we perform an additional step where we select the sub-categories for which training on them hinders the generalization of the model on the entire dataset. This procedure is illustrated in Figure 4 and is composed of the following two steps:
1. _Searching for inverse-correlation_: We conduct a brief training procedure over each sub-category, starting from the pre-trained model that was trained over the entire dataset. During each training session, we evaluate the validation loss for each of the different sub-categories, including the original dataset, throughout the training process. By observing a sub-category validation loss that is negatively correlated with the original dataset validation loss, in the sense of Spearman's rank correlation coefficient, we identify a sub-category that the model struggles to generalize using the bias induced by the original dataset. The model must discard its inductive bias to achieve convergence for that specific sub-category.
2. _Merging sub-categories_: Sub-categories with a positive correlation response between their validation losses (in the same sense as in step 1) are combined into a single sub-category. That is, training over one of the sub-categories does not harm the model's generalization over the other sub-category.
Pseudo code of the process can be found in the supplementary material.
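As a rough guide only (the actual pseudo code is given in the supplementary material), the two steps above could be implemented along the following lines with SciPy's Spearman rank correlation; the zero threshold and the greedy grouping strategy are illustrative assumptions.

```python
# Hypothetical sketch of the sub-category selection and merging steps.
from scipy.stats import spearmanr

def is_conflicting(subcat_val_losses, original_val_losses):
    """Step 1: keep a sub-category whose validation-loss curve, recorded during a
    short fine-tuning run, is negatively rank-correlated with the original dataset's."""
    rho, _ = spearmanr(subcat_val_losses, original_val_losses)
    return rho < 0

def merge_positively_correlated(val_loss_curves):
    """Step 2: greedily group sub-categories whose validation-loss curves are
    positively correlated with each other, since they induce a similar bias."""
    groups = []
    for name, curve in val_loss_curves.items():
        for group in groups:
            representative = val_loss_curves[group[0]]
            if spearmanr(curve, representative)[0] > 0:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups
```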
Figure 4: Our sub categories elimination process. (a) shows the first step, where each graph represents a training session over a mined category (x axis represent the number of epochs, y axis represents the validation error). The green line represents the validation error of the original dataset, and the red line represents the validation error of a sub-category. Categories with strong negative correlation convergence compared to the original dataset are selected and highlighted in light blue. Part (b) depicts an example of the second step. Each sub-category’s validation error is compared with the validation error of the other sub-categories. The objective is to merge the sub-categories that demonstrate positive correlation convergence, resulting in a similar inductive bias being induced in the model. These sub-categories are highlighted in light blue.
### Cyclic Training
Once the sub-category identification process is finished, we can proceed to the step that we call cyclic training. This training method, as depicted in Figure 5, involves alternating between a sub-category mined in the previous stage and the entire dataset during the training process. This alternation takes place every set number of epochs. To ensure stability during the training, at every dataset alternation we use a learning rate (LR) scheduler that starts with a low value and gradually increases it. When training with each LR we also track the validation loss for each of the different sub-categories, and then choose the model that had the lowest average loss across the validations as the model we use for the next alternation.
**Dataset Alternation** The dataset alternation stage in our training process involves training on only one specific sub-category of the dataset per iteration, as the mined categories are typically harder to converge on, as supported by [24], and as we also observed empirically.
Given a position in weight space, the probability of finding a local minimum for complex functions (in our case, the mined sub-categories) is much lower than for the original dataset. Therefore, we first train over a sub-category till convergence, then alternate to training over the original dataset, starting from the weight-space position obtained in the previous iteration. Additionally, when training on the original dataset, simple functions converge first and complex functions converge later, as demonstrated in prior research [24, 2]. Our models also exhibited a similar trend, initially generalizing over simple elements but having difficulty in converging over more complex ones. The initial convergence resulted in a strong inductive bias (in our case, towards smooth patches), hindering further convergence over complex functions. To address this issue, we alternate between a mined sub-category dataset and the original dataset during training. That way, we encourage the model to converge at the same rate over both datasets, resulting in a solution with reduced inductive bias. Even though every second alternation our model trains over the entire dataset, the mined sub-categories were constructed in a way that their average validation loss penalizes solutions with strong inductive biases, as shown in Figure 4. By doing so, we are able to effectively train the model using the entire dataset.
**Learning Rate Scheduler** Our method involves a non-standard learning rate protocol. Shifting between datasets significantly impacts the gradients at the current weight space position, altering the structure of the loss landscape and requiring readjustment of the learning rate. We begin with a low learning rate and gradually increase it. As mentioned above, and as depicted in Figure 5, only the model weights that achieved the lowest validation loss across all categories are carried over to the next alternation, filtering out unstable solutions that might arise from an unsuitable learning rate.
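The following is a schematic sketch of one possible implementation of this cyclic procedure (PyTorch-style, with the \(L_{1}\) loss used in our experiments); the phase length, the learning-rate ladder and all function names are illustrative assumptions rather than the exact training code.

```python
# Schematic sketch of cyclic training; loaders, model and hyper-parameters are placeholders.
import copy
import torch
import torch.nn.functional as F

def evaluate(model, loader):
    model.eval()
    with torch.no_grad():
        losses = [F.l1_loss(model(x), y).item() for x, y in loader]
    model.train()
    return sum(losses) / max(len(losses), 1)

def train_phase(model, loader, val_loaders, lrs=(1e-5, 1e-4, 5e-4), epochs_per_lr=5):
    """Train on one dataset with a low-to-high learning-rate ladder and keep the
    weights with the lowest average validation loss across all sub-categories."""
    best_state, best_val = copy.deepcopy(model.state_dict()), float("inf")
    for lr in lrs:                                     # start low, gradually increase
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs_per_lr):
            for mosaic, target in loader:
                optimizer.zero_grad()
                F.l1_loss(model(mosaic), target).backward()
                optimizer.step()
        avg_val = sum(evaluate(model, v) for v in val_loaders) / len(val_loaders)
        if avg_val < best_val:
            best_val, best_state = avg_val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)                  # carry only the best weights forward
    return model

def cyclic_training(model, full_loader, subcategory_loaders, val_loaders):
    for sub_loader in subcategory_loaders:             # one cycle per sub-category
        model = train_phase(model, sub_loader, val_loaders)   # phase 1: mined sub-category
        model = train_phase(model, full_loader, val_loaders)  # phase 2: entire dataset
    return model
```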
## 4 Experiments
In order to evaluate the effectiveness of our proposed method, we conducted experiments on the image demosaicing task using a CNN with a varying number of parameters. We compared our method with a standard training approach across different model sizes. Furthermore, we compared our results with existing works that focus on
the use of low capacity models for the image demosaicing task. In addition to a standard CNN architecture, we evaluated our method using a transformer-based architecture. Results were compared using four well established demosaicing benchmarks: Kodak [15], McMaster (MCM) [40], Urban100 [9], and CBSD68 [20].

Figure 5: An illustration of our cyclic training procedure. Each cycle consists of two training phases. The first consists of training over a specific sub-category for several epochs, while the second consists of training over the entire dataset. Each phase is initialized with the model weights that achieved the lowest validation loss across all sub-categories in the previous phase. Every cycle a different sub-category is selected. The number of cycles depends on the number of categories and the architecture.
**Training dataset and settings** For training we used the DIV2K [1] dataset. To further show the strength of our proposed training method, instead of training on the entire dataset as others do, we randomly chose only 70 (out of the 800 available) images to train all our models. We used the Adam optimizer [11] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=1\times 10^{-8}\), combined with our custom scheduler as explained in Section 3.2. We used crops of 64 \(\times\) 64 for the input and a batch size of 32. We used a random flip and transpose over both image axes during training. All models were trained using the \(L_{1}\) norm as the loss function.
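As a small illustration of the augmentation above (random flips and transpose over both image axes), one possible implementation is sketched below; it is an assumption about the pipeline, not the exact code used, and details such as keeping the Bayer phase consistent after flipping are omitted.

```python
# Possible sketch of the random flip/transpose augmentation for (mosaic, target)
# tensor pairs of shape C x H x W; probabilities and interface are assumptions.
import random
import torch

def augment(mosaic: torch.Tensor, target: torch.Tensor):
    if random.random() < 0.5:                          # horizontal flip
        mosaic, target = mosaic.flip(-1), target.flip(-1)
    if random.random() < 0.5:                          # vertical flip
        mosaic, target = mosaic.flip(-2), target.flip(-2)
    if random.random() < 0.5:                          # transpose the two spatial axes
        mosaic, target = mosaic.transpose(-1, -2), target.transpose(-1, -2)
    return mosaic, target
```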
**Model Architecture** Our experiments were conducted using two different architectures. One based on CNNs and the other transformer-based. The architecture for the CNN models has a similar structure to the overall structure of DemosaicNet suggested by Gharbi _et al_. in [7], with slight differences. We replaced each of the two convolution layers with 3 Inverted Linear bottlenecks (ILB) [27]. We obtain different architecture sizes by adjusting the expansion size parameters of the ILB. For the transformer architecture we implemented RSTCANet [36], which is based on SwinIR [16] architecture. We applied our method over two variants from [36], RSTCANet-B and RSTCANet-S with 0.9M and 3.1M parameters respectively.
### Comparison of Solutions
**Comparison between leading solutions for demosaicing task**. As can be seen in Table 1, our suggested method achieves state-of-the-art results on three key benchmarks with roughly two to three times fewer parameters than current state-of-the-art models. Additionally, our 0.9M parameter model outperforms previous state-of-the-art models on two benchmarks, despite being approximately x10 smaller in size.
**Low capacity models for demosaicing task**. We compare our method with recent studies that focus on achieving high performance for image demosaicing using low capacity models. In [25] Ramakrishnan _et al_. employs Neural Architecture Search (NAS) and optimization techniques, while in [34] Wang _et al_. propose a custom UNet-based architecture. Both approaches evaluate their solutions across different model capacities (_i.e_. number of model parameters) using established benchmarks. As indicated in Table 2, our training method consistently achieves the best results across all benchmarks while having the lowest number of parameters in each sub-range of parameters.
### Ablations
**Comparing different training methods.** To verify the contribution of our suggested training method the following ablation study compares our method effectiveness versus other training methods: (1) standard training over the entire dataset, (2) standard training over hard mined examples only (similar to [7]), (3) standard training over a uniform distribution, i.e. we compose a new dataset where the probability to sample a training example from the general
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Params.} & \multicolumn{2}{c|}{McMaster18} & \multicolumn{2}{c|}{Kodak24} & \multicolumn{2}{c|}{CBSD68} & \multicolumn{2}{c|}{Urban100} \\ \cline{3-10} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \hline IRCNN [39] & 1.5M & 37.84 & 0.9885 & 40.65 & 0.9915 & 40.31 & 0.9924 & 37.03 & 0.9864 \\ DRUNet[38] & 11M & 39.40 & 0.9914 & 42.30 & 0.9944 & 42.33 & 0.9955 & 39.22 & 0.9906 \\ RNAN [42] & 9M & 39.71 & 0.9725 & 43.09 & 0.9900 & 42.50 & 0.9929 & 39.75 & 0.9848 \\ xing [35] & 3.5M & 38.85 & 0.9904 & 42.23 & 0.9947 & - & - & 38.34 & 0.9895 \\ RSTCANet-B & 0.9M & 38.89 & 0.9902 & 42.11 & 0.9948 & 41.74 & 0.9954 & 38.52 & 0.9906 \\ RSTCANet-S & 3.1M & 39.58 & 0.9910 & 42.61 & 0.9951 & 42.36 & 0.9958 & 39.69 & 0.9924 \\ RSTCANet-L [36] & 7.1M & **39.91** & **0.9916** & 42.74 & 0.9952 & 42.47 & 0.9960 & 40.07 & 0.9931 \\ \hline RSTCANet-B **Ours** & 0.9M & 39.65 & 0.98 & 43.10 & 0.9957 & 42.54 & 0.9961 & 39.88 & 0.9915 \\ RSTCANet-S **Ours** & 3.1M & 39.72 & 0.990 & **43.13** & **0.9958** & **42.82** & **0.9963** & **40.14** & **0.993** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results for image demosaicing. We compare using PSNR and SSIM. Best results are in bold, and second best marked with underline.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Parameter & Method & Params. & KODAK & MCM & URBAN \\ \hline \hline \multirow{2}{*}{9K-12K} & ark lab [25] & 10K & 40.4 & 36.9 & - \\ & wang [34] & 11.7K & 40.8 & 36.75 & 36.4 \\ & **Ours** & 9.5K & **41.39** & **37.02** & **36.79** \\ \hline \multirow{2}{*}{16K-20K} & ark lab & 20K & 40.9 & 37.6 & - \\ & **Ours** & 16K & **42.05** & **37.80** & **37.52** \\ \hline \multirow{2}{*}{80K-200K} & ark lab & 200K & 41.2 & 38.2 & - \\ & wang & 183K & 41.72 & 37.91 & 37.7 \\ \cline{1-1} & **Ours** & 84K & **42.38** & **38.52** & **38.01** \\ \hline \end{tabular}
\end{table}
Table 2: PSNR results for image demosaicing for compact networks. Best results are in bold.
dataset and from the sub-categories is equal, and (4) performing our methods, but discarding the general dataset alternation phase. In all training types the sub-category metrics were utilized to select 5% of the crops with the highest error, with a crop size of 64 \(\times\) 64. The training procedure for the standard training methods (1)-(3) was similar, with an initial learning rate of \(5\times 10^{-4}\), that was reduced by half after every 100K iterations. The models were trained for a total of 2M iterations. As indicated in Table 3, our method surpasses all other training regimes, by a considerable margin. For standard training we conducted a wider experiment covering a wide range of model sizes using the same training procedure described above. The results are shown in Figure 6. It is easy to see how we are able to improve the networks' performance between 1-3.5[dB] depending on the initial model size (the smaller the network, the higher the performance boost) and the evaluation dataset.
**Comparison with a different number of categories** As outlined in Section 3.2, our approach involves switching between sub-categories tailored for each architecture, as detailed in the Section 3.1. In this experiment, we assess the effectiveness of our training method by randomly selecting one and two categories. The results, as shown in Table 4, demonstrate that the number of categories included in the alternation process has a significant impact on the performance outcomes.
## 5 Conclusion
In this paper, we addressed the challenge of inductive bias in the image demosaicing task and proposed a novel training method to improve the generalization of a model. Our method involves two main steps: defining and identifying sub-categories beneficial to model convergence, and an alternating training scheme. It demonstrated improved performance over standard training methods and achieved state-of-the-art results on several benchmarks using smaller models. Our approach also effectively utilized the model's capacity, surpassing recent relevant works across all benchmarks using fewer parameters. This result is crucial for edge devices, where the trend is towards low-capacity models. The method's success demonstrates the importance of considering the dataset structure in optimizing a model's training process. Our findings suggest that our proposed method can be applied to other image restoration tasks and can serve as a trigger for further research in this field.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Params.} & \multicolumn{2}{c|}{1 category} & \multicolumn{2}{c|}{2 categories} & \multicolumn{2}{c|}{All categories} \\ \cline{2-7} & Kodak & MCM & Kodak & MCM & Kodak & MCM \\ \hline
16K & 39.22 & 36.21 & 40.50 & 36.54 & 42.05 & 37.80 \\ \hline
176K & 40.50 & 37.25 & 41.25 & 37.77 & 42.41 & 38.67 \\ \hline \end{tabular}
\end{table}
Table 4: PSNR results over Kodak and MCM datasets using a different number of sub-categories for training. The number of categories chosen by the algorithm for the 16K model and the 176K model are 6 and 5 respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Params.} & \multicolumn{2}{c|}{Standard training} & \multicolumn{2}{c|}{Mined training} & \multicolumn{2}{c|}{Uniform} & \multicolumn{2}{c|}{Cyclic training} & \multicolumn{2}{c|}{Ours} \\ \cline{2-10} & Kodak & MCM & Kodak & MCM & Kodak & MCM & Kodak & MCM & Kodak & MCM \\ \hline
16K & 39.20 & 35.90 & 40.20 & 36.40 & 40.10 & 37.50 & 39.70 & 37.50 & **42.05** & **37.80** \\ \hline
176K & 40.55 & 37.20 & 40.90 & 37.50 & 41.10 & 38.20 & 41.00 & 38.00 & **42.41** & **38.67** \\ \hline \end{tabular}
\end{table}
Table 3: PSNR results of various training methods compared to ours over the Kodak and MCM datasets. We evaluated each training method using two architectures consisting 16K and 176K parameters.
Figure 6: our training method compared to a standard training over the (a) KODAK and (b) MCM datasets. The results of our training method are in pink stars and the results of regular training are in blue. |
2303.05581 | Open World Classification with Adaptive Negative Samples | Open world classification is a task in natural language processing with key
practical relevance and impact. Since the open or {\em unknown} category data
only manifests in the inference phase, finding a model with a suitable decision
boundary accommodating for the identification of known classes and
discrimination of the open category is challenging. The performance of existing
models is limited by the lack of effective open category data during the
training stage or the lack of a good mechanism to learn appropriate decision
boundaries. We propose an approach based on \underline{a}daptive
\underline{n}egative \underline{s}amples (ANS) designed to generate effective
synthetic open category samples in the training stage and without requiring any
prior knowledge or external datasets. Empirically, we find a significant
advantage in using auxiliary one-versus-rest binary classifiers, which
effectively utilize the generated negative samples and avoid the complex
threshold-seeking stage in previous works. Extensive experiments on three
benchmark datasets show that ANS achieves significant improvements over
state-of-the-art methods. | Ke Bai, Guoyin Wang, Jiwei Li, Sunghyun Park, Sungjin Lee, Puyang Xu, Ricardo Henao, Lawrence Carin | 2023-03-09T21:12:46Z | http://arxiv.org/abs/2303.05581v1 | # Open World Classification with Adaptive Negative Samples
###### Abstract
Open world classification is a task in natural language processing with key practical relevance and impact. Since the open or _unknown_ category data only manifests in the inference phase, finding a model with a suitable decision boundary accommodating for the identification of known classes and discrimination of the open category is challenging. The performance of existing models is limited by the lack of effective open category data during the training stage or the lack of a good mechanism to learn appropriate decision boundaries. We propose an approach based on adaptive negative samples (ANS) designed to generate effective synthetic open category samples in the training stage and without requiring any prior knowledge or external datasets. Empirically, we find a significant advantage in using auxiliary one-versus-rest binary classifiers, which effectively utilize the generated negative samples and avoid the complex threshold-seeking stage in previous works. Extensive experiments on three benchmark datasets show that ANS achieves significant improvements over state-of-the-art methods.
## 1 Introduction
Standard supervised classification assumes that all categories expected in the testing phase have been fully observed while training, _i.e._, every sample is assigned to one _known_ category as illustrated in Figure 1(a). This may not always be satisfied in practical applications, such as dialogue intention classification, where new intents are expected to emerge. Consequently, it is desirable to have a classifier capable of discriminating whether a given sample belongs to a known or an unknown category, _e.g._, the red samples in Figure 1(a). This problem can be understood as a \((C+1)\) classification problem, where \(C\) is the number of known categories and the additional category is reserved for _open_ (unknown) samples. This scenario is also known as multi-class open set recognition (Scheirer et al., 2014).
To discriminate the known from the open samples during inference, it is necessary to create a clear classification boundary that separates the known from the open category. However, the lack of open category samples during training makes this problem challenging. Current research in this setting mainly focuses on two directions. The first direction mainly estimates a tighter decision boundary between known classes to allow for the possibility of the open category. Existing works in this direction include the Local Outlier Factor (LOF) (Breunig et al., 2000; Zhou et al., 2022; Zeng et al., 2021), Deep Open Classification (DOC) (Shu et al., 2017) and Adaptive Decision Boundary (ADB) (Zhang et al., 2021). LOF and ADB calibrate the decision boundary in the feature space while DOC does it in the probability space.
The second direction deals with learning a better feature representation to make the boundary-seeking problem easier. In this direction,
Figure 1: Illustration of previous methods and our proposed one. \(C0\), \(C1\), and \(C2\) are three categories. The boundary is used to discriminate known and open (unknown) categories. (a) Boundary learned by supervised learning. (b) Optimal decision boundary. (c) ADB method which has a closed decision boundary, but may capture irrelevant points. (d) Proposed ANS method.
DeepUnk Lin and Xu (2019) and SEG Yan et al. (2020) added constraints to the feature space, SCL Zeng et al. (2021) and Zhou et al. (2022) fine-tuned the feature extractor backbone with contrastive learning. Zhan et al. (2021) considered introducing open samples from other datasets as negative, and Shu et al. (2021) generated samples with contradictory meanings using a large pretrained model. The latter two deliver large performance gains.
These improvements demonstrate the significance of negative samples in determining the boundary between the known and open categories. To accomplish the same in the absence of additional datasets or knowledge, we propose a novel negative-sample constraint and employ a gradient-based method to generate pseudo open category samples. As shown in Figure 1(d), negative samples are _adaptively_ generated for each category to closely bound each category.
Given the generated negative samples, we then empirically find that using auxiliary one-_versus_-rest binary classifiers can better capture the boundary between the known and the open category, relative to a \((C+1)\)-way classifier Zhan et al. (2021), where all the open categories, possibly distributed in multiple modes or arbitrarily scattered over the feature space, are categorized into one class.
Specifically, we first learn a \(C\)-category classifier on known category data. Then for each known category, we learn an auxiliary binary classifier, treating this category as positive and others as negative. During inference, one sample is recognized as open if all the binary classifiers predict it as negative, thus not belonging to any known category.
Our main contributions are summarized below:
* We propose a novel adaptive negative-sample-generation method for open-world classification problems without the need for external data or prior knowledge of the open categories. Moreover, negative samples can be added to existing methods and yield performance gains.
* We show that synthesized negative samples combined with auxiliary one-versus-rest binary classifiers facilitate learning better decision boundaries and requires no tuning (calibration) on the open category threshold.
* We conduct extensive experiments to show that our approach significantly improves over previous state-of-the-art methods.
## 2 Related Work
**Boundary Calibration** The classical local outlier factor (LOF) Breunig et al. (2000) method is a custom metric that calculates the local density deviation of a data point from its neighbors. However, there is not a principled rule on how to choose the outlier detection threshold when using LOF. Zeng et al. (2021); Zhou et al. (2022) added open category data into the validation set to estimate or grid-search the proper threshold. So motivated, Bendale and Boult (2016) fit the output logits of the classifier to a Weibull distribution, but still use a validation set that contains the open category to select the confidence threshold. Further, Shu et al. (2017) employed one-versus-rest binary classifiers and then calculates the threshold over the confidence score space by fitting it to a Gaussian distribution. This method is limited by the often inaccurate (uncalibrated) predictive confidence learned by the neural network Guo et al. (2017). Adaptive decision boundary Zhang et al. (2021), illustrated in Figure 1(c), was recently proposed to learn bounded spherical regions for known categories to contain known class samples. Though this post-processing approach achieves state-of-the-art performance, it still suffers from the issue that the tight-bounded spheres may not exist or cannot be well-defined in representation space. Due to the fact that high-dimensional data representations usually lie on a low-dimensional manifold Pless and Souvenir (2009), a sphere defined in a Euclidean space can be restrictive as a decision boundary. Moreover, the issue can be more severe if certain categories follow multimodal or skewed distributions.
**Representation Learning** DeepUnk Lin and Xu (2019) trains the feature extractor with Large Margin Cosine Loss Wang et al. (2018). SEG Yan et al. (2020) assumes that the known features follow the mixture Gaussian distribution. Zeng et al. (2021) and Zhou et al. (2022) applied supervised contrastive learning Chen et al. (2020) and further improve the representation quality by using \(k\)-nearest positive samples and negative samples collected from the memory buffer of MOCO He et al. (2020). Getting a better representation trained with known category data only is complementary to our work, since a better pretrained backbone can further improve our results. Recent works found that it may be problematic to learn features solely based on the known classes; thus, it is crucial to provide
samples from unknown classes during training. Specifically, Zhan et al. (2021) creates negative samples with mixup of training data and examples from an external dataset. Shu et al. (2021) generates open class samples using BART Lewis et al. (2019) and external text entailment information.
## 3 Methodology
**Problem Definition** Suppose we are given a training dataset \(\mathcal{D}=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{N},y_{N})\}\) consisting of \(N\) examples, where \(\mathbf{x}_{i}\) is an input text sequence and \(y_{i}\) its corresponding label, which belongs to a predefined set \(\mathcal{L}=\{l_{1},l_{2},\ldots,l_{C}\}\) with \(C\) categories, thus \(y_{i}\in\mathcal{L},\forall i\in[N]\), where \([N]\coloneqq[1,\ldots,N]\). In this paper, we use \([\cdot]\) to represent an index sequence. In an open world classification setting, the goal is to learn a model which categorizes a test instance into either one of the predefined \(C\) categories or the open category. In practice, the open category is denoted as a unique category \(l_{0}\).
### Known Category Classification
Following the setting suggested in Lin and Xu (2019); Zhang et al. (2021), we use BERT Devlin et al. (2018) as our feature extractor \(f_{\psi_{enc}}(\cdot)\). For each input text sequence \(\mathbf{x}\), where \(\mathbf{x}\) is represented as a sequence of tokens \([t_{1},t_{2},\ldots,t_{n}]\), we take the average of features \(z_{t_{i}}\) of each token \(t_{i}\) extracted from the BERT output layer as the sentence representation \(\mathbf{z}\). The training of \(f_{\psi_{enc}}(\cdot)\) is formulated as a multi-category classification problem by minimizing the cross-entropy loss \(\mathcal{L}^{cls}\):
\[\mathcal{L}^{cls}(\psi_{enc},\psi_{cls})= \tag{1}\] \[-\sum_{i\in[N]}\log\frac{\exp(f_{\psi_{cls}}^{y_{i}}(\mathbf{z}_{i})) }{\sum_{j=1}^{C}\exp(f_{\psi_{cls}}^{j}(\mathbf{z}_{i}))},\]
where \(f_{\psi_{cls}}(\cdot)\) is a classifier that takes \(\mathbf{z}\) as input and the output dimension is the number of known categories \(C\). \(f_{\psi_{cls}}^{j}(\mathbf{z})\) represents the output logit for the _j_-th category. A well-trained feature extractor \(f_{\psi_{enc}}(\cdot)\) and a high-quality classifier \(f_{\psi_{cls}}(\cdot)\) can extract representative features of each category and ensure good performance on the classification of known categories during the inference stage.
### Open Category Recognition
Once the classifier for the known categories is available, the task is to recognize samples from the open category _versus_ the ones from known categories. As mentioned above, directly using the known category classifier \(f_{\psi_{cls}}(\cdot)\) can result in poor performance Hendrycks and Gimpel (2016), while using a \((C+1)\) category classifier setting is complicated due to the need to find proper samples to obtain a suitable decision boundary for the open category Scheirer et al. (2012); Liang et al. (2018); Shu et al. (2021). In this work, building upon ideas from one-class classification Scholkopf et al. (2001); Ruff et al. (2018) and one-\(vs\)-all support vector machines Rifkin and Klautau (2004),
Figure 2: Illustration of the proposed ANS algorithm. (a) The training stage is divided into two blocks. Top: known class classification described in Sec. 3.1. Bottom: one-_versus_-rest classification (Sec. 3.2) on Category 1 as an example. The negative samples are from two sources, namely, known from other classes (in red) and synthesized samples (in yellow). (b) Inference arm described at the end of Section 3.2.
we propose a one-_versus_-rest framework via training simple binary classifiers for each predefined category. Based on this framework, we then introduce an effective open-sample generation approach to train these classifiers in Section 3.3.
We build an auxiliary one-_versus_-rest binary classifier for each known category, and take the \(m\)-th category as an example to illustrate. Given a text sequence \(\mathbf{x}\), we use the BERT pretrained with classification loss as the feature extractor \(f_{\psi^{enc}}(\cdot)\) to extract features \(\mathbf{z}\in\mathbb{R}^{d}\) to be fed to the binary classifiers, where \(d\) is the dimension of the feature space. Each category is provided with a binary classifier denoted as \(g_{\theta^{cls}_{m}}(\mathbf{z}):\mathbb{R}^{d}\rightarrow\mathbb{R}\), such that if \(g_{\theta^{cls}_{m}}(\mathbf{z})>0\) the input text \(\mathbf{x}\) belongs to the \(m\)-th category, and otherwise it belongs to one of the other categories. We parameterize the entire binary classification framework for the \(m\)-th category as \(\theta_{m}=(\psi^{enc},\theta^{cls}_{m})\).
To learn each binary classifier \(g_{\theta^{cls}_{m}}(\cdot)\) from training data \(\mathcal{D}\), we first construct a _positive_ set \(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N_{m}}\}\) using data points with label \(l_{m}\) from \(\mathcal{D}\) and a _negative_ set \(\{\hat{\mathbf{x}}_{1},\hat{\mathbf{x}}_{2},\ldots,\hat{\mathbf{x}}_{N-N_{m}}\}\) by data points not in category \(l_{m}\) but also from \(\mathcal{D}\). The total number of samples within category \(m\) is \(N_{m}\), and \(N-N_{m}\) is the number of remaining samples in \(\mathcal{D}\). Each binary classifier is optimized by minimizing the binary cross-entropy loss function \(\mathcal{L}^{rest}\):
\[\mathcal{L}^{rest}(\theta^{cls}_{m})=\sum_{i\in[N_{m}]}\log(1+ \exp(-g_{\theta_{m}}(\mathbf{x}_{i})))\] \[+\sum_{i\in[N-N_{m}]}\log(1+\exp(g_{\theta_{m}}(\hat{\mathbf{x}}_{i}) )). \tag{2}\]
During the inference phase, we have
\[\hat{y}=\begin{cases}\text{open,}&\text{ if }g_{\theta_{m}}(\mathbf{x}_{i})<0, \forall m\in[C];\\ \text{known,}&\text{ otherwise.}\end{cases}\]
We assign a testing example to the open category \(l_{0}\) if it is detected as unknown in all \(C\) binary classifiers. Otherwise, we pass the example to the known classifier to categorize the known categories. The entire inference pipeline is illustrated in Figure 2(b).
### Adaptive Negative Sampling (ANS)
In practice, it is problematic to learn a good binary classifier with only the aforementioned negative samples from \(\mathcal{D}\). The sample space of the open category is complicated, considering that new-category samples could originate from different topics and sources, relative to the known classes. So motivated, some existing methods introduce additional external data as negative samples.
To alleviate the issue associated with the lack of real negative samples, we propose to synthesize negative samples \(\tilde{\mathbf{x}}\). Considering that it is hard to create actual open-category text examples, we choose to draw _virtual_ text examples \(\tilde{\mathbf{z}}\) in the feature space Miyato et al. (2016); Zhu et al. (2019). Compared with the token space of text, the feature space is typically smoother Bengio et al. (2013), which makes it convenient to calculate gradients Wang et al. (2021).
For all known samples in a category \(l_{m}\), points that are away from these samples can be recognized as _negative_ to classifier \(g_{\theta^{cls}_{m}}(\cdot)\). The entire feature space \(\mathbb{R}^{d}\) contains essentially an uncountable set of such points, among which we are mostly interested in those near the known samples. Consequently, capturing these points will be helpful to characterize the decision boundary.
To give a mathematical description of these points, we assume that data usually lies on a low-dimensional manifold in a high-dimensional space Pless and Souvenir (2009). The low-dimensional manifold can be viewed as a local-Euclidean space, thus we can use the Euclidean metric to measure distances locally for each known data \(\mathbf{z}_{i}\). Under this assumption, the set of pseudo negative samples \(\mathcal{N}_{i}(r)\) for \(\mathbf{z}_{i}\), which we call _adaptive synthesized open set_, can be described as follows,
\[\mathcal{N}_{i}(r)=:\Big{\{}\tilde{\mathbf{z}}: r\leq\|\tilde{\mathbf{z}}-\mathbf{z}_{j}\|_{2};\|\tilde{\mathbf{z}}-\mathbf{z}_{i}\|_{2} \leq\gamma\cdot r,\] \[\forall j:y_{j}=m\Big{\}}, \tag{3}\]
where \(r\) is the distance radius and \(\gamma>1\) is a hyperparameter. Note that each known sample \(\mathbf{z}_{i}\) has an associated adaptive synthesized open set. As defined above, this set is subject to two inequalities. The first keeps synthesized samples away from all known samples within category \(m\). The second implies that the synthesized samples should not be too far from the known samples. An intuitive geometric interpretation is that when \(j=i\), the space implied by these two constraints is a spherical shell with inner radius \(r\) and outer radius \(\gamma\cdot r\).
To get the radius \(r\), we first calculate the covariance matrix \(\Sigma\) of \(\mathbf{z}\) using known samples from category \(m\) and choose \(r\), s.t. \(r\leq\sqrt{2\operatorname{Tr}(\Sigma)}\) and \(\gamma r\geq\sqrt{2\operatorname{Tr}(\Sigma)}\). This is under the consideration
that \(\sqrt{2\operatorname{Tr}(\Sigma)}\) is the average Euclidean distance between random samples drawn from a distribution with covariance matrix \(\Sigma\).
The estimation is supported by the following proposition,
**Proposition 1**: _The expectation of the Euclidean distance between random points sampled from a distribution with covariance matrix \(\Sigma\) is smaller than \(\sqrt{2\operatorname{Tr}(\Sigma)}\), i.e._
\[\mathbb{E}_{\mathbf{x},\mathbf{y}\sim\mathcal{D}(\mathbf{\mu},\Sigma)}\sqrt{\|\mathbf{x}-\mathbf{y }\|^{2}}\leq\sqrt{2\operatorname{Tr}\Sigma} \tag{4}\]
The proof can be found in the supplementary material. In our experiments, we fix \(\gamma=2\) and \(r=8\). The choice of \(r\) is tied to the covariance matrix of the features in representation space. The detailed justification for our selection is provided in Appendix A.3. Ablation studies (Figure 3) show that model performance is not very sensitive to the chosen \(r\).
**Binary Classification with ANS** According to Equation 3, each sample from a known category \(m\) contributes an adaptive synthesized set of open samples \(\mathcal{N}_{i}(r)\). The classifier \(g_{\theta_{m}^{cls}}(\cdot)\) is expected to discriminate them as negative. The corresponding objective function is the binary cross-entropy loss,
\[\mathcal{L}^{syn}(\theta_{m}^{cls})=\sum_{i\in[N_{m}]}\log(1+\exp(g_{\theta_{ m}^{cls}}(\tilde{\mathbf{z}}_{i}))),\]
where \(\tilde{\mathbf{z}}_{i}\) is sampled from \(\mathcal{N}_{i}(r)\) and \(N_{m}\) is the total number of known samples with category \(l_{m}\). However, there exist uncountably many points in \(\mathcal{N}_{i}(r)\). Randomly sampling one example from \(\mathcal{N}_{i}(r)\) is not effective. Alternatively, we choose the most representative ones, which are hard for the classifier to classify as _negative_. Consistent with this intuition, the \(\max(\cdot)\) operator is added to select the most challenging synthetic open sample distributed in \(\mathcal{N}_{i}(r)\).
\[\mathcal{L}^{syn}(\theta_{m}^{cls})= \tag{5}\] \[\sum_{i\in[N_{m}]}\max_{\tilde{\mathbf{z}}_{i}\in\mathcal{N}_{i}(r)} \log(1+\exp(g_{\theta_{m}^{cls}}(\tilde{\mathbf{z}}_{i}))).\]
Finally, the complete loss accounting for open recognition is summarized as:
\[\mathcal{L}^{open}=\mathcal{L}^{rest}+\lambda\mathcal{L}^{syn}, \tag{6}\]
where \(\lambda\) is a regularization hyperparameter.
Directly minimizing the objective function in Equation 6 subject to the constraint in Equation 3 is challenging. In the experiments, we adopt the projected gradient descent-ascent technique (Goyal et al., 2020) to solve this problem.
```
1:Input: Training data \(\mathcal{D}=\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n},\hat{\mathbf{x}}_{1},\cdots, \hat{\mathbf{x}}_{n}\). Parameters of current binary classifier \(\theta_{m}\).
2:Hyper-Parameters: Radius \(r\), step-size \(\eta\), number of gradient steps \(k\)
3:for Batch number \(B=1,\cdots,n_{0}\)do
4:\(X_{B}(\tilde{X}_{B})\): Collect a batch of positive (negative) samples.
5: Calculate loss \(\mathcal{L}_{1}=\mathcal{L}^{rest}(X_{B},\hat{X}_{B})\) using Eq. 2
6: Calculate the feature \(\mathbf{z}_{B}\) over the positive samples.
7:Adaptive Negative Sampling:
8: sample \(\mathbf{\epsilon}\sim\mathcal{N}(0,4\cdot\mathrm{diag}(\Sigma_{\mathbf{z}}))\)
9:for\(i=1,\cdots,k\)do
10: Calculate loss \(\ell(\mathbf{z}_{B}+\mathbf{\epsilon})\) using Eq. 5
11:\(\mathbf{\epsilon}=\mathbf{\epsilon}+\eta\frac{\nabla_{\mathbf{z}_{i}}\ell(\mathbf{z}_{B}+\mathbf{ \epsilon})}{\|\nabla_{\mathbf{z}_{i}}\ell(\mathbf{z}_{B}+\mathbf{\epsilon})\|}\)\(\triangleright\) Gradient Ascend
12:endfor
13: Calculate \(\alpha\) using Eq.7
14:\(\mathbf{\epsilon}=\frac{\alpha}{\|\mathbf{\epsilon}\|}\cdot\mathbf{\epsilon}\)
15: Calculate loss \(\mathcal{L}_{2}=\mathcal{L}^{syn}_{\theta_{m}^{cls}}(\mathbf{z}_{B}+\mathbf{\epsilon})\) using Eq. 5
16:\(\theta_{m}=\theta_{m}-\nabla_{\theta_{m}}(\mathcal{L}_{1}+\mathcal{L}_{2})\)\(\triangleright\) Gradient Descend
17:endfor
```
**Algorithm 1** Adaptive Negative Sampling.
**Projected Gradient Descent-Ascent** We use gradient descent to minimize the open recognition loss \(\mathcal{L}^{open}\) and gradient ascent to find the hardest synthetic negative samples \(\tilde{\mathbf{z}}\). The detailed steps are summarized in Algorithm 1. As illustrated in Figure 2(c), the sample \(\tilde{\mathbf{z}}_{i}^{\prime}=\tilde{\mathbf{z}}_{i}+\epsilon\) directly derived from gradient ascent (line 11 of Algorithm 1) might fall outside the constraint area \(\mathcal{N}_{i}(r)\). We then project to the closest \(\tilde{\mathbf{z}}_{i}\) within the constraint such that \(\tilde{\mathbf{z}}_{i}=\arg\min_{\mathbf{u}}\|\tilde{\mathbf{z}}_{i}^{\prime}-\mathbf{u}\|^{2},\forall\ \mathbf{u}\in\mathcal{N}_{i}(r)\) (Boyd et al., 2004). Unfortunately, a direct search within \(\mathcal{N}_{i}(r)\) defined in Equation 3 requires complex computation over the entire training data \(\mathcal{D}\). Based on our assumption that the training samples lie on a low-dimensional manifold and the empirical observation that \(\tilde{\mathbf{z}}_{i}^{\prime}\) is always closest to the corresponding positive point \(\mathbf{z}_{i}\) relative to other positive points, \(\mathcal{N}_{i}(r)\) can be further relaxed to the spherical shell around sample \(\mathbf{z}_{i}\): \(\mathcal{N}_{i}(r)=\{\tilde{\mathbf{z}}:r\leq\|\tilde{\mathbf{z}}-\mathbf{z}_{i}\|_{2}\leq\gamma\cdot r\}\). We can then directly find the synthetic negative sample via a projection along the radius direction, \(\tilde{\mathbf{z}}_{i}=\tilde{\mathbf{z}}_{i}^{\prime}+\alpha\frac{\tilde{\mathbf{z}}_{i}^{\prime}-\tilde{\mathbf{z}}_{i}}{\|\tilde{\mathbf{z}}_{i}^{\prime}-\tilde{\mathbf{z}}_{i}\|}\), where \(\alpha\) is adjusted to guarantee \(\tilde{\mathbf{z}}_{i}\in\mathcal{N}_{i}(r)\):
\[\alpha=\begin{cases}1,&\text{if }r\leq\|\tilde{\mathbf{z}}_{i}-\mathbf{z}_{i}\|_{2} \leq\gamma\cdot r\\ \frac{r}{\|\tilde{\mathbf{z}}_{i}-\mathbf{z}_{i}\|_{2}},&\text{if }\gamma\cdot r\leq\| \tilde{\mathbf{z}}_{i}-\mathbf{z}_{i}\|_{2}\\ \frac{r}{\|\tilde{\mathbf{z}}_{i}-\mathbf{z}_{i}\|_{2}},&\text{if }\|\tilde{\mathbf{z}}_{i}-\mathbf{z}_{i}\|_{2} \leq r\end{cases}\]
In the future, we would like to consider relaxing these constraints by only considering the nearest \(k\) points instead of all the points within a category.
## 4 Experiments
We conduct experiments on three datasets: Banking, CLINC and Stackoverflow. Details and examples of the datasets are found in Appendix A.4.
**Task Design** We apply the same settings as Shu et al. (2017) and Lin and Xu (2019). For each dataset, we sample \(25\%\), \(50\%\), \(75\%\) categories randomly and treat them as the _known category_. Any other categories out of the known categories are grouped into the _open category_. In the training and validation set, only samples within the known category are kept. All the samples in the testing set are retained, and the label of samples belonging to open categories is set to \(l_{0}\). Importantly, samples from the open category are never exposed to the model in the training and validation process.
**Evaluation Metrics** The model needs to identify the samples with the open category, as well as classify the known samples correctly. Following Shu et al. (2017); Zhang et al. (2021), we use accuracy and macro F1 as our evaluation metrics. The accuracy measures the overall performance, considering that open-world classification can be treated as a \(C+1\) classification problem. F1 is a binary classification metric mainly used for evaluating the performance of open category recognition. The F1-score reported in this paper is the mean of the macro F1-score per category (including the open category), where the positive category is the corresponding one and negatives are all the other categories. F1-known is the average over F1-score of all known categories. F1-open is the F1-score of the open category.
**Experimental Setting** We use the BERT-base-uncased model to initialize the feature extractor \(f_{\psi_{enc}}\) and freeze the first ten layers of BERT during training. Note that all results are the mean of ten trials with different random seeds. Other experimental details are included in Appendix A.5.
### Results
Table 1 compares our approach with previous state-of-the-art methods using accuracy and F1. Our implementation is based on Zhang et al. (2021).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\%} & \multicolumn{3}{c}{Banking} & \multicolumn{2}{c}{CLINC} & \multicolumn{2}{c}{Stackoverflow} \\ \cline{2-7} & Methods & Accuracy & F1-score & Accuracy & F1-score & Accuracy & F1-score \\ \hline \multirow{7}{*}{25} & MSP & 43.67 & 50.09 & 47.02 & 47.62 & 28.67 & 37.85 \\ & DOC & 56.99 & 58.03 & 74.97 & 66.37 & 42.74 & 47.73 \\ & OpenMax & 49.94 & 54.14 & 68.50 & 61.99 & 40.28 & 45.98 \\ & DeepUnk & 64.21 & 61.36 & 81.43 & 71.16 & 47.84 & 52.05 \\ & ADB & 78.85 & 71.62 & 87.59 & 77.19 & 86.72 & 80.83 \\ & SelfSup* & 74.11 & 69.93 & 88.44 & 80.73 & 68.74 & 65.64 \\ & Ours & **83.93** & **76.15** & **92.64** & **84.81** & **90.88** & **84.52** \\ \hline \multirow{7}{*}{50} & MSP & 59.73 & 71.18 & 62.96 & 70.41 & 52.42 & 63.01 \\ & DOC & 64.81 & 73.12 & 77.16 & 78.26 & 52.53 & 62.84 \\ \cline{1-1} & OpenMax & 65.31 & 74.24 & 80.11 & 80.56 & 60.35 & 68.18 \\ \cline{1-1} & DeepUnk & 72.73 & 77.53 & 83.35 & 82.16 & 58.98 & 68.01 \\ \cline{1-1} & ADB & 78.86 & 80.90 & 86.54 & 85.05 & **86.40** & 85.83 \\ \cline{1-1} & SelfSup* & 72.69 & 79.21 & 88.33 & 86.67 & 75.08 & 78.55 \\ \cline{1-1} & Ours & **81.97** & **83.29** & **90.23** & **88.01** & 86.08 & **85.90** \\ \hline \multirow{7}{*}{75} & MSP & 75.89 & 83.60 & 74.07 & 82.38 & 72.17 & 77.95 \\ \cline{1-1} & DOC & 76.77 & 83.34 & 78.73 & 83.59 & 68.91 & 75.06 \\ \cline{1-1} & OpenMax & 77.45 & 84.07 & 76.80 & 73.16 & 74.42 & 79.78 \\ \cline{1-1} & DeepUnk & 78.52 & 84.31 & 83.71 & 86.23 & 72.33 & 78.28 \\ \cline{1-1} & ADB & 81.08 & 85.96 & 86.32 & 88.53 & 82.78 & 85.99 \\ \cline{1-1} & SelfSup* & 81.07 & **86.98** & 88.08 & 89.43 & 81.71 & 85.85 \\ \cline{1-1} & Ours & **82.49** & 86.92 & **88.96** & **89.97** & **84.40** & **87.49** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of open world classification on three datasets with different known class proportions. * indicates the use of extra datasets during the training. The first five results of the baseline model are from Zhang et al. (2021). The results for SelfSup are from Zhan et al. (2021).
The baselines include threshold finding methods, MSP Hendrycks and Gimpel (2016), DOC Shu et al. (2017), OpenMax Bendale and Boult (2015), ADB Zhang et al. (2021); and feature learning methods, DeepUnk Lin and Xu (2019); and negative data generation method SelfSup Zhan et al. (2021). We did not include results from ODIST Shu et al. (2021) because their method relies on MNLI-pretrained BART, which is not currently public and their model performance drops dramatically if not coupled with ADB. Note that SelfSup uses additional datasets, without which the accuracy on 50% CLINC drops from 88.33 to 83.12.
Our approach performs better than most previous methods, even better than the method using additional datasets, with the greatest improvement on CLINC. This is in accordance with SelfSup Zhan et al. (2021), which also benefits the most on CLINC by adding negative samples. This implies that our synthesized negative samples are of high quality and could possibly be used as extra datasets in other methods.
The average performance gain in these three datasets decreases as the known category ratio increases, _i.e_., compared to the strongest baseline ADB, our accuracy improvements in the three datasets are \(5.08,3.11,1.42\) under the settings of \(25\%,50\%,75\%\). With more known categories available, the known negative samples become more diverse, allowing the model to better capture the boundaries of the positive known categories while reducing the impact of the synthetic samples.
The comparison with baselines on F1-open and F1-known can be found in Appendix A.3.
### Discussion
**Synthesized Negatives are Beneficial for a Variety of Structures** To investigate the contribution of the synthesized samples and the one-_versus_-rest structure, we performed experiments adding the synthesized samples to two well-known baselines, MSP Hendrycks and Gimpel (2016) and ADB Zhang et al. (2021), as shown in Table 3. Specifically, the \(C\)-way classifier in MSP and ADB is replaced by a (\(C+1\))-way classifier, with an extra head for the synthesized negative samples. See Appendix A.7 for details.
We observe that performance increases on all baselines with synthesized negative samples. The synthesized samples behave like data augmentation, leading to better representation of positive samples.
Further, synthesized samples benefit one-_versus_-rest the most. The difference, we believe, stems from the model's flexibility on boundary learning. The open category may contain sentences with various themes, making it difficult for a single head of a (\(C+1\))-way classifier to catch them all. This one-_versus_-rest flexibility comes at the cost of more classifier parameters. However, compared to the huge feature extractor BERT, the number of additional parameters is relatively small. Distillation techniques can be used to build a smaller model if
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \% & Methods & Acc & F1 & F1-open & F1-known \\ \hline \multirow{4}{*}{25} & Baseline (\(\lambda=0\)) & 57.05 & 53.52 & 63.83 & 52.84 \\ & + Gaussian Noise & 90.37 & 82.09 & 93.75 & 81.78 \\ & + Projection & 92.02 & 83.99 & 94.91 & 83.71 \\ & + Ascond (Ours) & 92.32 & 84.34 & 95.11 & 84.05 \\ \hline \multirow{4}{*}{50} & Baseline (\(\lambda=0\)) & 64.60 & 71.65 & 60.28 & 71.80 \\ & + Gaussian Noise & 88.01 & 86.90 & 89.82 & 86.86 \\ & + Projection & 90.22 & 88.18 & 92.01 & 88.12 \\ & + Ascond (Ours) & 90.23 & 88.22 & 92.02 & 88.17 \\ \hline \multirow{4}{*}{75} & Baseline (\(\lambda=0\)) & 76.17 & 83.63 & 63.23 & 83.81 \\ & + Gaussian Noise & 88.67 & 90.45 & 86.71 & 90.48 \\ \cline{1-1} & + Projection & 88.89 & 89.95 & 87.52 & 89.97 \\ \cline{1-1} & + Ascond (Ours) & 88.96 & 89.97 & 87.62 & 90.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on the negative samples generation. The results are conducted on CLINC with known classes proportion 25%, 50% and 75%. Baseline represents the experiment with no synthesized negative samples under the one-_versus_-rest framework.
Figure 3: Ablation study on the radius \(r\) used in adaptive negative sampling process under the setting with 50% known categories.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Methods & Add Neg. & Acc & F1 & F1-open & F1-known \\ \hline \multirow{3}{*}{MSP} & & 44.46 & 51.95 & 43.22 & 52.41 \\ & ✓ & 57.07 & 58.92 & 61.40 & 58.79 \\ \hline \multirow{3}{*}{ADB} & & 78.39 & 71.53 & 84.18 & 70.86 \\ & ✓ & 79.71 & 73.01 & 85.18 & 72.12 \\ \hline \multirow{3}{*}{One-vs} & & 51.33 & 41.40 & 50.83 & 43.54 \\ -Rest & ✓ & 80.11 & 73.35 & 85.57 & 72.71 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison when the synthesized negative data are added to different baselines. The experiments are conducted on Banking with 25% known categories.
necessary, for instance, where there are thousands of known categories.
**Adaptive Negative Sample Generation** Our adaptive negative sample generation consists of three modules: (a) adding Gaussian noise to the original samples (line \(8\) in Algorithm 1), (b) gradient ascent (lines \(10\sim 11\)), and (c) projection (lines \(13\sim 14\)). We add each module to the baseline in turn to study its importance. The baseline experiment uses the vanilla one-_versus_-rest framework described in Section 3.2, without the use of synthesized negative samples. Experiments are conducted on CLINC, as shown in Table 2.
The following describes our findings from each experiment: (\(i\)) Adding samples with noise as negative alleviates the overconfidence problem of the classifier and improves the results significantly. The noise level needs to be designed carefully since small noise blurs the distinction between known and open, while large noise is ineffective.
(\(ii\)) Constraining synthesized samples to \(\mathcal{N}(r)\) improves performance by keeping synthesized samples from being too close or too far away from positive known samples.
(\(iii\)) Adding a gradient ascent step further enhances performance. The improvement over the previous step is marginal. Our hypothesis is that the calculated gradient could be noisy, since the noise we add is isotropic and may be inconsistent with (outside of) the manifold of the original data.
**Radius \(r\) Analysis** In the adaptive negative sample generation process, the radius \(r\) and multiplier \(\gamma\) are two hyperparameters that determine the lower and upper bounds of the distance between the synthesized sample and the known sample. To investigate the impact of the radius, we fix \(\gamma\) to 2 and increase \(r\) from \(1\) to \(256\). Note that \(8\) is our default setting.
As illustrated in Figure 3, the performance gradually drops when the radius \(r\) increases, because the influence of the synthesized negative examples reduces as the distance between them and the positive samples grows. When the radius \(r\) decreases, the classifier may be more likely to incorrectly categorize positives as negatives because the synthesized negative samples are closer to the known positives, resulting in a decrease in accuracy and F1 score on the Banking and CLINC datasets. However, the performance on Stackoverflow improves. We hypothesize that there is a better data-adaptive way to estimate the radius \(r\) to improve the performance even further, for example, using the \(k\) nearest neighbors instead of all the data in a category. We leave this as an interesting future direction.
In summary, we observe that the performance is affected by the radius, but comparable results can be obtained for a wide value range. They are all better than the vanilla one-_versus_-rest baseline, which lacks the generated negative samples. The accuracy of baselines on Banking, CLINC and Stackoverflow is \(58.09\), \(71.80\) and \(64.58\), respectively.
**Visualization** Figure 4 shows the \(t\)-SNE representation of the features extracted from the second hidden layer of the one-_versus_-rest classifier \(g_{m}^{cls}\). Three randomly chosen known categories, each corresponding to a one-_versus_-rest classifier, yield the three panels. The known positive/negative samples (blue) are clustered because the features are extracted from the pretrained \(C\)-way classifier of Section 3.1. Open samples (pink) are scattered, some of which overlap with the known positives (see the middle panel). Our synthesized negatives work as expected; they are close to the known positives
Figure 4: \(t\)-SNE plots of the features extracted from the testing set of CLINC with 50% known categories. Each panel corresponds to a one-_versus_-rest classifier \(g_{\theta_{m}^{cls}}\) with a different known category \(m\) acting as the positive. Each panel’s lower right corner has a square that enlarges the known positive data region to show the effectiveness of the synthesized negative samples. (Best viewed in color).
and bridge the gap between the positive and other known categories.
## 5 Conclusions
We have introduced ANS, a pseudo open category sample generation approach for open-world classification problems. The generation process is free from extra datasets or prior knowledge. The synthesized samples are effective for improving existing methods. Combined with one-_versus_-rest framework, significant improvements are observed on three benchmarks. The gradient-based negative sample generation proposed can be applied to other NLP tasks such as out-of-scope discovery, which we leave as future work.
## 6 Limitations
First, both in terms of the number of training samples and the length of sentences, all the datasets that we employ in this study are relatively small and short. The method may not scale well to long texts. Second, further work should be done on the model's ability to handle increasingly complicated data. The training samples from the three benchmark datasets might be too simple; many input sentences even include the category descriptions, as shown in Table A.1.
## 7 Acknowledgement
This research was supported by DARPA, DOE, NIH, ONR and NSF.
|
2305.19226 | DFT analysis and demonstration of enhanced clamped Electro-Optic tensor
by strain engineering in PZT | We report $\approx$400\% enhancement in PZT Pockels coefficient on DFT
simulation of lattice strain due to phonon mode softening.The simulation showed
a relation between the rumpling and the Pockels coefficient divergence that
happens at -8\% and 25\% strain developed in PZT film.The simulation was
verified experimentally by RF sputter deposited PZT film on Pt/SiO$_2$/Si
layer.The strain developed in PZT varied from -0.04\% for film annealed at
530\degree C to -0.21\% for 600\degree C annealing temperature.The strain was
insensitive to RF power with a value of -0.13\% for power varying between
70-130 W. Pockels coefficient enhancement was experimentally confirmed by Si
Mach Zehnder interferometer loaded with PZT and probed with the co-planar
electrode.An enhancement of $\approx$300\% in Pockels coefficient was observed
from 2-8 pm/V with strain increasing from -0.04\% to -0.21\%. To the best of
our knowledge, this is the first time study and demonstration of strain
engineering on Pockels coefficient of PZT using DFT simulation, film
deposition, and photonic device fabrication. | Suraj, Shankar Kumar Selvaraja | 2023-05-30T17:18:56Z | http://arxiv.org/abs/2305.19226v1 | DFT analysis and demonstration of enhanced clamped Electro-Optic tensor by strain engineering in PZT
###### Abstract
We report \(\approx\)400% enhancement in PZT Pockels coefficient on DFT simulation of lattice strain due to phonon mode softening. The simulation showed a relation between the rumpling and the Pockels coefficient divergence that happens at -8% and 25% strain developed in PZT film. The simulation was verified experimentally by RF sputter deposited PZT film on Pt/SiO\({}_{2}\)/Si layer. The strain developed in PZT varied from -0.04% for film annealed at 530\({}^{\circ}\)C to -0.21% for 600\({}^{\circ}\)C annealing temperature. The strain was insensitive to RF power with a value of -0.13% for power varying between 70-130 W. Pockels coefficient enhancement was experimentally confirmed by Si Mach-Zehnder interferometer (MZI) loaded with PZT and probed with the co-planar electrode. An enhancement of \(\approx\)300% in Pockels coefficient was observed from 2-8 pm/V with strain increasing from -0.04% to -0.21%. To the best of our knowledge, this is the first study and demonstration of strain engineering on Pockels coefficient of PZT using DFT simulation, film deposition, and photonic device fabrication.
## 1 Introduction
Electro-optic (EO) effect in a material is the change in the effective refractive index upon application of the electric field. This effect has been exploited for various applications such as electro-optic modulators, sensors, etc. Perovskites with empirical formula as ABO\({}_{3}\) exhibit a large EO tensor and are a choice of materials for pure phase modulation at high frequency without optical losses at 1550 nm (communication wavelength)[1, 2, 3, 4, 5] as well as non-linear photonic applications[6, 7, 8]. The lossless property comes from a large bandgap of perovskites. Lithium Niobate (LNO) has been the most popular perovskite material being commercially used for intensity modulation. Recently, other perovskites such as Lead Zirconium Titanate (PZT) and Barium Titanate (BTO), which exhibit Pockels coefficient of at least an order higher than LNO, have attracted attention in optics. PZT has an ABO\({}_{3}\) structure with "Pb" corresponding to A (lattice vertices) and B corresponding to alternating "Zr" and "Ti" atoms at the body center of the tetragonal lattice with "O" atom at the face center. The non-centrosymmetric structure of PZT makes it suitable for optics and opto-MEMS with a high piezoelectric coefficient (d\({}_{33}\)\(>\) 1000 pm/V) and electro-optic (E-O) effect (r\(>\) 100 pm/V). The figure of merit that governs the efficiency of an electro-optic modulator based on perovskites are V\({}_{\pi}\)L and the Pockels coefficient. There have been experimental papers on PZT-based EO modulators that focus on the film crystallinity and photonic device design, but not much on the strain effects that develop in PZT due to interface lattice mismatch between PZT and buffer. PZT is integrated on Si with the help of a buffer layer that introduces either a tensile or a compressive strain on the PZT film. There have been theoretical works on Pockels coefficient calculation using DFT[9, 10, 11, 12, 13, 14, 15] and enhancement due to mode softening resulting from the strained lattice wherein the EO tensor is observed to increase sharply at the phonon mode softening strain value[16, 10, 17, 18, 19, 20]. There are reports on strain generation in PZT[21]. Since the PZT film deposition on Si involves a buffer layer, a theoretical estimate would enable us to vary the deposition parameter of the buffer layers, PZT film deposition, and substrate pre-treatment to attain a strain value that gives the Pockels coefficient divergence due to phonon mode softening. There are various deposition methods for PZT such as sol-gel, MOCVD, MBE, Evaporation, PLD, and Sputter[22, 23, 24, 25, 26, 27, 28, 29, 30]. We choose RF Sputter deposition due to its scalability, stoichiometry control, and fast deposition rate to corroborate our theoretical model. Parameters that could potentially affect the strain developed in the deposited PZT film are RF power, and ex-situ anneal.
Here we present the DFT simulation of strain engineering using first principles and Landau-Ginzburg-Devonshire theory and its effect on rumpling and E-O tensor. We confirm the simulation finding by using RF sputter-deposited PZT film. We calculate the lattice parameter and strain developed in the film using Bragg's law and the Williamson-Hall method from the XRD spectra of the deposited PZT film on Pt layer. The findings from the XRD data are used to fabricate Si MZI loaded with PZT and characterized for EO behavior. We confirm the strain effect with \(\approx\)300% increase in EO shift on strain engineering. To the best of our knowledge, this is the first work on PZT that gives a theoretical estimation as well as experimental demonstration, using film as well as photonic device fabrication, of the effect of strain engineering of PZT film on EO tensor. The paper starts with the theoretical background for DFT simulation followed by the simulation results. It is followed by a section discussing film deposition and concludes with photonic device fabrication and discussion.
## 2 Theoretical discussion
The linear dependence of the dielectric tensor \(\epsilon\) on the applied electric field and the linear electro-optic (E-O) tensor \(r_{ijk}\) is given by eq.1. The clamped E-O tensor, with no macroscopic strain \(\eta\) from the applied electric field, is a combination of the electronic and ionic contributions to the tensor given by eq.2 and eq.3.
\[\Delta(\epsilon^{-1})_{ij}=\sum_{k}r_{ijk}E_{k} \tag{1}\]
\[r_{ijk}^{el}=-\frac{8\pi}{n_{i}^{2}n_{j}^{2}}\chi_{ijl}^{(2)}|_{l=k} \tag{2}\]
\[r_{ijk}^{ion}=-\frac{4\pi}{\sqrt{\Omega}n_{i}^{2}n_{j}^{2}}\sum_{m}\frac{\alpha_{ij}^{m}p_{m,k}}{\omega_{m}^{2}} \tag{3}\]
\(\alpha\) denotes the Raman susceptibility due to mode \(m\), with \(\omega\), \(\Omega\) and \(p_{mk}\) denoting the transverse optic phonon frequency, unit cell volume and transverse optic mode polarities, respectively. \(\alpha\) is related to \(\chi^{(1)}\) as in eq.4 while \(p_{mk}\) is linked to infrared intensities as in eq.5.
Figure 1: (a)Simulation to determine the cut-off kinetic energy, Effect of variation of lattice strain on (b) lattice parameter a,b, and c, (c) rumpling in (001),(100) and (010) plane,(d) dielectric constant,(e) phonon mode softening and (f) r\({}_{ij}\) coefficient.
\[\alpha_{ij}^{m}=\sqrt{\Omega}\sum_{\kappa,\beta}\frac{\partial\chi_{ij}^{(1)}}{ \partial\tau_{\kappa,\beta}}u_{m}(\kappa\beta) \tag{4}\]
\[p_{m,k}=\sum_{\kappa,\beta}Z_{\kappa,\kappa\beta}^{*}u_{m}(\kappa\beta) \tag{5}\]
The net clamped E-O tensor is given by eq.6
\[r_{ijk}^{\eta}=r_{ijk}^{el}+r_{ijk}^{ion} \tag{6}\]
The net unclamped E-O tensor is given by eq.7 wherein the contribution of the elasto-optic coefficients \(p_{ij\alpha\beta}\) and the piezoelectric strain coefficients \(d_{\gamma\alpha\beta}\) is taken into account.
\[r_{ijk}^{\sigma}=r_{ijk}^{\eta}+\sum_{\mu,\nu=1}^{3}p_{ij\mu\nu}d_{\kappa\mu\nu} \tag{7}\]
Figure 2: (a)FESEM image of the fabricated PZT film at 560\({}^{\circ}\)C, 70 W RF power; XRD spectra of the deposited PZT film at varying (b) anneal temperature and (c) Deposition power;(d) variation of lattice parameter extracted from XRD spectra.
Landau-Ginzburg-Devonshire theory explains the relationship between a large Pockels coefficient and mode softening, with the free energy given by eq.8.
\[F(P,T)=F_{0}+\frac{1}{2}\alpha P^{2}+\frac{1}{4}\beta P^{4} \tag{8}\]
Here, \(F_{0}\) is the energy density at zero polarization, while \(\alpha\) and \(\beta\) are temperature- and strain-dependent coefficients, with the electric field given by eq.9.
\[E=\frac{\partial F}{\partial P}=\alpha P+\beta P^{3} \tag{9}\]
The linear susceptibility is the derivative of polarization with respect to the electric field given by eq.10 and dielectric permittivity by eq.11 with \(\chi(2)\) being the second derivative of polarization with respect to the electric field given in eq.12.
\[\chi^{(1)}=\frac{\partial P}{\partial E}=\left(\frac{\partial E}{\partial P} \right)^{-1}=\frac{1}{\alpha+3\beta P^{2}} \tag{10}\]
\[\epsilon=1+\chi^{(1)} \tag{11}\]
\[\chi^{(2)}=\frac{\partial^{2}P}{\partial E^{2}}=-\frac{\partial^{2}E}{ \partial P^{2}}\left(\frac{\partial P}{\partial E}\right)^{3}=-6\beta P(\chi^ {(1)})^{3} \tag{12}\]
The dielectric tensor is related to the Pockels effect as eq.13 with the Pockel's tensor following the divergence shown by \(\chi^{(1)}\).
\[rE=\Delta\left(\frac{1}{\epsilon}\right)\approx\frac{-\Delta\epsilon}{ \epsilon^{2}}=-\frac{-\chi^{(2)}E}{(1+\chi^{(1)})^{2}}=\frac{3\beta P(\chi^{(1 )})^{3}E}{(1+\chi^{(1)})^{2}} \tag{13}\]
As observed in the work of Xiao-hong et al. [1], the relative permittivity is related to the Zr:Ti stoichiometry, and near the morphotropic phase boundary the permittivity \(\epsilon\) increases abruptly in an orientation-dependent manner.
## 3 DFT Simulation
We perform DFT simulations of the PZT molecule using the "MIT Atomic-Scale Modeling Toolkit"[31]. The calculation was done using the Monkhorst-Pack grid with the GGA (Perdew-Burke-Ernzerhof) approximation and norm-conserving pseudopotentials to calculate the phonon frequencies and eigenvectors, along with the strain and EO tensor. We use the valence electron configurations 6s\({}^{2}\)5d\({}^{10}\)6p\({}^{2}\) for Pb, 4s\({}^{2}\)4p\({}^{6}\)5s\({}^{2}\)4d\({}^{2}\) for Zr, 3s\({}^{2}\)3p\({}^{6}\)4s\({}^{2}\)3d\({}^{2}\) for Ti, and 2s\({}^{2}\)2p\({}^{4}\) for O. As seen in Fig.1(a), the energy starts to asymptote beyond 60 Ry, giving an accurate result with reduced simulation time, and hence this cut-off is chosen for further calculations. Fig.1(b) and (c) show a cross-over between lattice parameters "a" and "b" at \(\approx\)5% strain, while the rumpling goes to zero for strain \(<\) -8% and \(>\)25% along \(<\)100\(>\) and \(<\)010\(>\) respectively. We observe a direct correlation between the rumpling, dielectric tensor, and E-O tensor, which diverge at the same values of strain (\(\approx\)400% increase in r\({}_{ij}\) at -8% strain) as seen in Fig.1(d),(e) and (f). The effect of mode softening can be seen in Fig.1(e), where the frequency of phonon vibration drops at -8% strain, leading through the \(\frac{1}{\epsilon^{2}}\) relation to a \(\approx\)400% increase in the EO tensor relative to the unstrained PZT molecule, as seen in Fig.1(f). The values shown in Fig.1(f) are an approximation, as rumpling is not considered in all directions, giving an indication of the enhancement that can be achieved with strain engineering. To experimentally verify the effect of strain, PZT was RF sputter deposited on Pt, which has an \(\approx\)-3% lattice mismatch with PZT.
## 4 Experimental
### PZT film deposition
RF sputtering was used to deposit PZT on Pt/SiO\({}_{2}\)/Si stack. Pt layer was predominantly (111) oriented. Strain variation of the deposited PZT was done by varying the deposition RF power from 70 to 130 W with an argon flow rate maintained at 30 sccm. An ex-situ anneal temperature varying from 500 to 800\({}^{\circ}\)C in an air ambiance was used to get perovskite
phase. Fig.2(a) shows the FESEM image of the annealed PZT sample at 560\({}^{\circ}\)C. Fig.2(c) and (d) shows the XRD spectra for the deposited PZT at varying temperature and deposition power that was used to calculate strain developed in the film using Williamson-Hall method given by eq.14.
\[\beta_{T}\cos(\theta)=\epsilon(4\sin(\theta))+\frac{K\lambda}{D} \tag{14}\]
where \(\beta_{T}\) is the total broadening of the peak,\(\epsilon\) is strain and D is the crystallite size. The lattice parameter is extracted from the XRD spectra using eq.15 for a tetragonal lattice.
\[\frac{1}{d^{2}}=\frac{h^{2}+k^{2}}{a^{2}}+\frac{l^{2}}{c^{2}} \tag{15}\]
where "d" is interplanar spacing, "a", "c" are the lattice constants, and (hkl) are the miller indices. "d" is calculated using Bragg's law given by
\[d=\frac{n\lambda}{2\sin(\theta)} \tag{16}\]
where \(\theta\) is the peak positions in XRD spectra.
Table 1 shows that the annealing temperature, rather than the RF deposition power, predominantly controls the strain, with the film most strained at 530\({}^{\circ}\)C. The trend in the simulated lattice parameter variation shown in Fig.1(b)
\begin{table}
\begin{tabular}{c c c} \hline RF Power & Anneal Temperature & Strain(\%) \\ \hline
70 W & 530\({}^{\circ}\)C & -0.21 \\
70 W & 560\({}^{\circ}\)C & -0.14 \\
70 W & 600\({}^{\circ}\)C & -0.04 \\
100 W & 560\({}^{\circ}\)C & -0.13 \\
130 W & 560\({}^{\circ}\)C & -0.13 \\ \hline \end{tabular}
\end{table}
Table 1: **Strain calculation for deposited PZT film**
Figure 3: Process flow used for Si MZI loaded with PZT EO modulator fabrication.
Figure 4: (a) FESEM and cross-section schematic of the fabricated device;(b) P-V loop of the deposited PZT film annealed at 530 and 560\({}^{\circ}\)C; EO shift on voltage application for (c) 600\({}^{\circ}\)C with -0.04% strain and (d) 530\({}^{\circ}\)C with -0.21% strain;(e) variation of measured Pockels coefficient with strain.
matches the experimental observation shown in Fig.2(d), with the shift in the "a" parameter attributed to the lattice relaxation allowed in the simulation, whereas the cell structure is assumed fixed in the experimental calculation.
### Photonic device fabrication and characterization
The experimental confirmation of the Pockels coefficient enhancement was done by making Si Mach-Zhender Interferometer(MZI) loaded with PZT along one arm with the probing done using co-planar electrodes. Fig.3 shows the process flow of the fabricated MZI device. A 90 nm shallow etch Si MZI is fabricated on an SOI substrate. It is followed by DC sputter deposition of Ti/Pt layer with a thickness of 20/70 nm. It is annealed in air ambiance at 650\({}^{\circ}\)C to obtain a predominantly (111) oriented Pt layer. PZT is RF sputter deposited at 70 W RF power with a 30 sccm argon flow rate and source-target distance of 6 cm with a deposition time of 2Hr. The deposited PZT film is air annealed at varying temperatures of 530\({}^{\circ}\)C, 560\({}^{\circ}\)C, and 600\({}^{\circ}\)C. Fig.4(a) shows the FESEM image and the schematic of the fabricated device with Si acting as the waveguiding medium and PZT as an optically active material. Fig.4(b) shows the polarization-Voltage(P-V) curve for the deposited film at 530\({}^{\circ}\)C and 560\({}^{\circ}\)C with 530\({}^{\circ}\)C annealed film showing higher polarization than film annealed at 560\({}^{\circ}\)C. Fig.4(c) and (d) correspond to PZT annealed at 600 and 530\({}^{\circ}\)C respectively with a DC optical spectrum shift of 2 pm/V and 8 pm/V respectively. This corresponds to an enhancement of electro-optic shift of \(\approx\)300% corresponding to strain change from -0.04% to -0.21% from Table.1 as seen in Fig.4(e).
## 5 Conclusion
We report \(\approx\)400% enhancement in the PZT Pockels coefficient on DFT simulation of lattice strain due to phonon mode softening. The simulation showed a relation between the rumpling and the Pockels coefficient divergence at a strain of -8% and 25%. The simulation was verified experimentally by RF sputter deposited PZT film on Pt/SiO\({}_{2}\)/Si layer. The strain developed in PZT varied from -0.04% for film annealed at 530\({}^{\circ}\)C to -0.21% for 600\({}^{\circ}\)C annealing temperature. The strain was insensitive to RF power with a value of -0.13% for power varying between 70-130 W. Pockels coefficient enhancement was experimentally confirmed by Si Mach-Zehnder interferometer (MZI) loaded with PZT and probed with the co-planar electrode. An enhancement of \(\approx\)300% in Pockels coefficient was observed from 2-8 pm/V with strain increasing from -0.04% to -0.21%. To the best of our knowledge, this is the first study and demonstration of strain engineering on Pockels coefficient of PZT using DFT simulation, film deposition, and photonic device fabrication.
## Acknowledgments
SKS thanks Professor Ramakrishna Rao chair fellowship.
## Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2306.16445 | Double Kerr-Schild spacetimes and the Newman-Penrose map | The Newman-Penrose map, which is closely related to the classical double
copy, associates certain exact solutions of Einstein's equations with self-dual
solutions of the vacuum Maxwell equations. Here we initiate an extension of the
Newman-Penrose map to a broader class of spacetimes. As an example, we apply
principles from the Newman-Penrose map to associate a self-dual gauge field to
the Kerr-Taub-NUT-(A)dS spacetime and we show that the result agrees with
previously studied examples of classical double copies. The corresponding field
strength exhibits a discrete electric-magnetic duality that is distinct from
its (Hodge star) self-dual property. | Kara Farnsworth, Michael Graesser, Gabriel Herczeg | 2023-06-28T18:00:01Z | http://arxiv.org/abs/2306.16445v3 | Double Kerr-Schild spacetimes and the Newman-Penrose map
###### Abstract
The Newman-Penrose map, which is closely related to the classical double copy, associates certain exact solutions of Einstein's equations with self-dual solutions of the vacuum Maxwell equations. Here we initiate an extension of the Newman-Penrose map to a broader class of spacetimes. As an example, we apply principles from the Newman-Penrose map to associate a self-dual gauge field to the Kerr-Taub-NUT-(A)dS spacetime and we show that the result agrees with previously studied examples of classical double copies. The corresponding field strength exhibits a discrete electric-magnetic duality that is distinct from its (Hodge star) self-dual property.
###### Contents
* 1 Introduction
* 2 Newman-Penrose map for single Kerr-Schild spacetimes
* 3 Newman-Penrose map for double Kerr-Schild spacetimes
* 4 Kerr-Taub-NUT-(A)dS metric
* 4.1 Scalar functions
* 4.2 Self-dual gauge field
* 4.3 Electric-magnetic duality
* 5 Discussion
Introduction
The double copy is a family of correspondences between gravitational and gauge theories with origins in string theory [1] that can be summarized schematically as
\[\text{gravity}=\text{gauge}\otimes\text{gauge}.\]
Years after the initial discovery that closed string amplitudes can be factorized into a product of open string amplitudes, a new manifestation of the double copy was discovered in the form of color-kinematics duality, where a kinematic identity analogous to the Jacobi identity for color factors was introduced to argue that the Kawai-Lewellen-Tye (KLT) relations of [1] are equivalent to a "diagram-by-diagram numerator'squaring' relation with gauge theory" [2]. This relationship greatly simplifies calculations in gravity, and has provided useful insight into gauge theory calculations. It has been proven at tree level for Yang-Mills theory and Einstein gravity amplitudes [3], and in various limits and cases with extra symmetry at higher loop level [4, 5, 6, 7, 8, 9], however a general exact equivalence at all orders has yet to be found. For a recent review on the subject of the double copy, see [10].
In an effort to understand how general this double copy relationship is, a proliferation of similar approaches has been initiated in recent years, particularly relating _exact_ non-perturbative classical solutions of gauge theory and gravity [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]. The first classical "Kerr-Schild" double copy was introduced in [11], where stationary, vacuum solutions of Einstein's equations of Kerr-Schild type were mapped to solutions of the vacuum Maxwell's equations. Since then, many other classical double copies have been proposed, including one called the Newman-Penrose map, which we introduced in [12] and elaborated upon in [13] using the language of twistors.
The Newman-Penrose map associates a self-dual solution of the vacuum Maxwell equations with a solution of the Einstein equations in Kerr-Schild form. Unlike other versions of the double copy, including the Kerr-Schild double copy, this map generically applies to gravitational solutions that are not necessarily stationary or vacuum. When other versions of the double copy and the Newman-Penrose map are both defined, they agree in all known examples, although a general proof of their equivalence has not been conclusively established.
In this note, we apply concepts from the Newman-Penrose map in an exploratory manner to a family of spacetimes for which the Newman-Penrose map is not well defined a priori. In particular, we consider the Kerr-Taub-NUT-(A)dS family, which is Petrov type D, and which can be given a double Kerr-Schild form after analytic continuation to Kleinian signature [90]. A straightforward application of the Newman-Penrose map to each of the null, geodesic and shear-free Kerr-Schild vectors leads to a complex self-dual gauge field whose real part agrees with previous double copy studies of the same metric. The resulting field strength is shown to exhibit an electric-magnetic duality, that acts to relate the real (imaginary) parts of the electric and magnetic fields to each other, and consequently is a symmetry of the solution that is distinct from its (Hodge star) self-dual property.
## 2 Newman-Penrose map for single Kerr-Schild spacetimes
In this section we give a concise overview of the Newman-Penrose map [12, 13] for single Kerr-Schild spacetimes in a general metric signature. We focus on the basic ingredients of the map that must be generalized in order to accommodate more complicated spacetimes.
The Newman-Penrose map consists of two main components:
* \(\Phi\): a scalar function, harmonic with respect to the background metric, which arises in a particular choice of null tetrad that is suitable for a generic1 Kerr-Schild spacetime. Footnote 1: By ‘generic,’ we mean that a tetrad of the form (3) exists whenever the Kerr-Schild vector is shear-free, geodesic and expanding. This includes nearly all known vacuum Kerr-Schild spacetimes with the exception of the PP wave spacetimes which are Petrov type N and have a non-expanding congruence.
* \(\hat{k}\): a differential spin-raising operator.
From these elements we can construct the gauge field
\[A=\hat{k}\Phi. \tag{1}\]
When \(\Phi\) is harmonic with respect to the flat background metric, the field strength \(F=dA\) constructed from the gauge field (1) is self-dual, and as a consequence automatically satisfies the vacuum Maxwell equations in flat spacetime.
The original formulation of the Newman-Penrose map applies to single Kerr-Schild spacetimes with a flat background
\[g_{\mu\nu}=g^{(0)}_{\mu\nu}+V\ell_{\mu}\ell_{\nu}, \tag{2}\]
where \(\ell_{\mu}\) is taken to be null with respect to both the full (\(g_{\mu\nu}\)) and background (\(g^{(0)}_{\mu\nu}\)) metrics. In Lorentzian signature, if the stress tensor satisfies \(T_{\mu\nu}\ell^{\mu}\ell^{\nu}=0\) (which includes vacuum solutions of Einstein's equations with or without a cosmological constant), then \(\ell^{\mu}\) is also geodesic with respect to both the full and background metrics (see theorem (32.1) of [91]). Moreover, when the vacuum Einstein equations are satisfied, it can be shown that \(g_{\mu\nu}\) is algebraically special and \(\ell^{\mu}\) is a repeated principle null direction of the Weyl tensor [92], so the Goldberg Sachs theorem ensures \(\ell^{\mu}\) is also shear-free [93]. Here, we wish to consider more general spacetimes, which may have an arbitrary metric signature, or even be complex, and which may be sourced by more general stress tensors. However, we will nevertheless assume that \(\ell^{\mu}\) is geodesic and shear-free. Any such spacetime admits a null tetrad of the form
\[\begin{split}\ell&=du+\tilde{\Phi}d\zeta+\Phi d \tilde{\zeta}+\Phi\tilde{\Phi}dv\,,\\ n&=dv+\frac{1}{2}V\ell,\\ m&=-(\tilde{\Phi}dv+d\tilde{\zeta})\\ \tilde{m}&=-(\Phi dv+d\zeta)\end{split} \tag{3}\]
where \(\ell=\ell_{\mu}dx^{\mu}\), etc. \(u,v,\zeta,\tilde{\zeta}\) are null coordinates2, and \(V\), \(\Phi\) and \(\tilde{\Phi}\) are scalar functions. In
Lorentzian signature, the tilded and untilded quantities are related by complex conjugation, however in general signature they are independent.
The full metric can then be written
\[g_{\mu\nu} =n_{\mu}\ell_{\nu}+\ell_{\mu}n_{\nu}-m_{\mu}\tilde{m}_{\nu}-\tilde{ m}_{\mu}m_{\nu} \tag{5}\] \[=g^{(0)}_{\mu\nu}+V\ell_{\mu}\ell_{\nu} \tag{6}\]
with \(g^{(0)}_{\mu\nu}dx^{\mu}dx^{\nu}=2(dudv-d\zeta d\tilde{\zeta})\). For vacuum spacetimes, \(V\) is determined by \(\Phi\) and \(\tilde{\Phi}\) and the mass \(M\), so \(\Phi\) and \(\tilde{\Phi}\) completely determine a vacuum solution up to the constant \(M\).
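The algebraic step from the tetrad (3) to the Kerr-Schild split (5)-(6) is pointwise and can be checked mechanically. A minimal sympy sketch of this check, with \(\Phi\), \(\tilde{\Phi}\) and \(V\) treated as formal symbols and one-forms represented by their components in the basis \((du,dv,d\zeta,d\tilde{\zeta})\):

```python
import sympy as sp

Phi, Phit, V = sp.symbols('Phi Phit V')

# one-forms of eq. (3), components in the basis (du, dv, dzeta, dzetab)
l  = sp.Matrix([1, Phi*Phit, Phit, Phi])              # ell
n  = sp.Matrix([0, 1, 0, 0]) + sp.Rational(1, 2)*V*l  # n = dv + (V/2) ell
m  = -sp.Matrix([0, Phit, 0, 1])                      # m  = -(Phit dv + dzetab)
mt = -sp.Matrix([0, Phi, 1, 0])                       # mt = -(Phi dv + dzeta)

# g_{mu nu} = n_mu l_nu + l_mu n_nu - m_mu mt_nu - mt_mu m_nu, eq. (5)
g = n*l.T + l*n.T - m*mt.T - mt*m.T

# flat background 2(du dv - dzeta dzetab)
g0 = sp.Matrix([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, -1, 0]])

print((g - g0 - V*l*l.T).expand())  # zero matrix, i.e. eq. (6)
```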
The shear-free and geodesic conditions imply that \(\Phi\) satisfies the following non-linear partial differential equations:
\[\left(\Phi\partial_{\zeta}-\partial_{v}\right)\Phi=0,\qquad(\Phi\partial_{u}- \partial_{\tilde{\zeta}})\Phi=0. \tag{7}\]
Importantly, these equations also imply that \(\Phi\) is harmonic with respect to the flat background metric
\[\Box_{0}\Phi=(\partial_{u}\partial_{v}-\partial_{\zeta}\partial_{\tilde{ \zeta}})\Phi=0. \tag{8}\]
This is also the (linearized) equation of motion for the zeroth copy biadjoint scalar field. Pushing this intuition further, we can construct a gauge field by acting on \(\Phi\) with the spin-raising operator
\[\hat{k}=-\frac{Q}{2\pi\epsilon_{0}}(dv\,\partial_{\zeta}+d\tilde{\zeta}\, \partial_{u}). \tag{9}\]
The gauge field defined by
\[A_{\rm NP}=\hat{k}\Phi \tag{10}\]
automatically satisfies the self-dual vacuum Maxwell equations when \(\Phi\) is harmonic with respect to the flat background metric. With this construction, we can define a map from any single Kerr-Schild metric with an expanding, shear-free, null geodesic congruence to a self-dual solution of the flat space vacuum Maxwell equations. This includes all vacuum Kerr-Schild spacetimes with an expanding repeated principle direction. However, unlike the Kerr-Schild double copy, the map also applies to non-vacuum and non-stationary spacetimes obeying these conditions. While there is no general proof of the equivalence of the Newman-Penrose map and the Kerr-Schild double copy, they have been shown to be equivalent where both formalisms apply [12].
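Both of the key statements above — that the non-linear equations (7) imply the harmonic condition (8), and that \(A_{\rm NP}=\hat{k}\Phi\) then solves the flat-space vacuum Maxwell equations — can be verified mechanically. A minimal sympy sketch of these checks (the overall factor \(-Q/2\pi\epsilon_{0}\) is dropped):

```python
import sympy as sp

u, v, z, zb = sp.symbols('u v zeta zetab')   # zeta, zetab treated as independent
Phi = sp.Function('Phi')(u, v, z, zb)
x = [u, v, z, zb]

# (a) eq. (7) implies eq. (8): Box_0 Phi is built from derivatives of the constraints
c1 = Phi*Phi.diff(z) - Phi.diff(v)
c2 = Phi*Phi.diff(u) - Phi.diff(zb)
box = Phi.diff(u, v) - Phi.diff(z, zb)
print(sp.simplify(box - (c2.diff(z) - c1.diff(u))))     # 0

# (b) A = dv Phi_zeta + dzetab Phi_u obeys d*F = 0 whenever Box_0 Phi = 0
A = [0, Phi.diff(z), 0, Phi.diff(u)]                    # components A_mu
F = sp.Matrix(4, 4, lambda i, j: sp.diff(A[j], x[i]) - sp.diff(A[i], x[j]))
ginv = sp.Matrix([[0, 1, 0, 0], [1, 0, 0, 0],
                  [0, 0, 0, -1], [0, 0, -1, 0]])        # inverse flat metric, |det g| = 1
Fup = ginv*F*ginv                                       # F^{mu nu}
div = [sum(sp.diff(Fup[m, n], x[m]) for m in range(4)) for n in range(4)]
# each component of the divergence is proportional to a derivative of Box_0 Phi:
print([sp.simplify(e) for e in (div[0] - 2*box.diff(z), div[1],
                                div[2] + 2*box.diff(u), div[3])])   # [0, 0, 0, 0]
```

The divergence of \(F^{\mu\nu}\) collapses onto derivatives of \(\Box_{0}\Phi\), so the flat-space Maxwell equations hold precisely when \(\Phi\) is harmonic.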
## 3 Newman-Penrose map for double Kerr-Schild spacetimes
To explore the generality of this framework, we attempt to apply the map to both double Kerr-Schild spacetimes and solutions in non-flat (maximally symmetric) backgrounds:
\[g_{\mu\nu}=\bar{g}_{\mu\nu}+\phi k_{\mu}k_{\nu}+\psi\ell_{\mu}\ell_{\nu}. \tag{11}\]
Here \(\bar{g}_{\mu\nu}\) is maximally symmetric, and we will write it in a manifestly conformally flat form for ease of comparison with the single Kerr-Schild case. This metric is in double Kerr-Schild form, so we have two scalar fields \(\phi\) and \(\psi\), as well as two vectors \(k_{\mu}\) and \(\ell_{\mu}\) which are assumed to each be null, geodesic, and shear-free with respect to the background metric, from which it follows that
they are null, shear-free and geodesic with respect to the full metric. The vectors \(k_{\mu}\) and \(\ell_{\mu}\) are also assumed to be orthogonal, which cannot be accomplished for real metrics in Lorentzian signature. Hence, our discussion of double Kerr-Schild metrics will be primarily in Kleinian signature unless otherwise noted, although it applies equally well to complex spacetimes where metric signature is not a well-defined notion.
In Kleinian signature, the background metric (which can be de Sitter (dS), anti-de Sitter (AdS), or flat) can be written in null coordinates as [94]
\[\bar{g}_{\mu\nu}dx^{\mu}dx^{\nu}=\frac{2}{\Delta^{2}}\left(dUdV-dXdY\right) \tag{12}\]
with \(\Delta=1-\frac{\lambda}{2}(UV-XY)\) and the sign of \(\lambda\) corresponding to dS (positive), AdS (negative) or zero (flat). In this signature, the coordinates \(X\) and \(Y\) are real and are not complex conjugates of one another, but they become conjugate upon analytic continuation back to Lorentzian signature.
In these coordinates, our null vectors can be normalized so that they take the form
\[k = dU+\tilde{\Phi}_{k}\,dX+\Phi_{k}\,dY+\Phi_{k}\,\tilde{\Phi}_{k} \,dV \tag{13}\] \[\ell = dU+\tilde{\Phi}_{\ell}\,dX+\Phi_{\ell}\,dY+\Phi_{\ell}\,\tilde{ \Phi}_{\ell}\,dV. \tag{14}\]
In analogy with the single Kerr-Schild case, the (real) scalar functions \(\Phi_{k}\) and \(\Phi_{\ell}\) will be used to construct the gauge fields associated with the vectors \(k\) and \(\ell\).
We now also have two spin-raising operators
\[\hat{k}_{e}=\frac{-Q_{e}}{2\pi\epsilon_{0}}\left(dV\partial_{X}+dY\partial_{U} \right),\quad\hat{k}_{m}=\frac{-Q_{m}}{2\pi\mu_{0}}\left(dV\partial_{X}+dY \partial_{U}\right). \tag{15}\]
The operator \(\hat{k}_{e}\) is identical to \(\hat{k}\) for single Kerr-Schild metrics and the operator \(\hat{k}_{m}\) is the magnetic analog. The total gauge field obtained from this double Kerr-Schild metric via the Newman-Penrose map is then the linear combination
\[A_{\rm NP}=\hat{k}_{e}\Phi_{\ell}+\hat{k}_{m}\Phi_{k}. \tag{16}\]
As was the case for single Kerr-Schild metrics, the self-dual gauge field (16) satisfies the vacuum Maxwell equations by construction, provided both \(\Phi_{k}\) and \(\Phi_{\ell}\) are harmonic with respect to the flat metric. Note that since the wave operator is not conformally invariant, \(\Phi_{k}\) and \(\Phi_{\ell}\) are _not_ solutions of the wave equation on an (A)dS background. However, since the vacuum Maxwell equations _are_ conformally invariant, the gauge field (16) is a solution on both the flat and maximally symmetric backgrounds. Generalizing this formalism to double Kerr-Schild metrics and maximally symmetric backgrounds then amounts to understanding when this harmonic condition holds.
Although we are not able to prove that the functions \(\Phi_{\ell}\) and \(\Phi_{k}\) are harmonic in general double Kerr-Schild spacetimes, for the Kerr-Taub-NUT-(A)dS family of solutions this question can be answered in the affirmative and the Newman-Penrose map outlined here can be constructed explicitly. This is the subject of the following section; here we state the main results. The function \(\Phi_{\ell}\) is shown to satisfy the following non-linear PDEs
\[\left(\Phi_{\ell}\partial_{X}-\partial_{V}\right)\Phi_{\ell}=0,\quad\left( \Phi_{\ell}\partial_{U}-\partial_{Y}\right)\Phi_{\ell}=0 \tag{17}\]
which are the same as the non-linear PDEs (7) but for the null coordinates of the curved spacetime. This result is not surprising, but also non-trivial, at least for these coordinates. It is not surprising
because the (A)dS background is conformally flat, and the geodesic and shear-free properties of \(k\) and \(\ell\) are conformally invariant. Hence, both essential properties of \(k\) and \(\ell\) required to define the Newman-Penrose map are preserved in passing to a maximally symmetric background. On the other hand, the explicit form for \(\Phi_{\ell}\) has a complicated dependence on both the cosmological constant and null coordinates, such that satisfying (17) is not trivial.
It follows from (17) that \(\Phi_{\ell}\) is harmonic with respect to the flat metric, and consequently the field strength obtained from the gauge potential \(A_{\ell}=\hat{k}\Phi_{\ell}\) is automatically self-dual and satisfies the vacuum Maxwell equations in both flat and conformally flat spacetimes. The real part of the gauge field \(\Re(A_{\ell})\) is shown to be gauge-equivalent to the original double copy prescription for obtaining a gauge field from the Kerr-Schild form of the metric.
As shown below, a similar story holds for \(\tilde{\Phi}_{\ell}\), \(\tilde{\Phi}_{k}\), and \(\Phi_{k}\) - they satisfy similar non-linear PDEs and are therefore each harmonic with respect to the flat background metric. For double Kerr-Schild spacetimes, _a priori_ there could exist multiple Newman-Penrose maps using different combinations of \(\Phi_{\ell}\), \(\tilde{\Phi}_{\ell}\), \(\tilde{\Phi}_{k}\), and \(\Phi_{k}\), however for this family of spacetimes these four functions are sufficiently related to one another that the possible different self-dual gauge fields obtained from acting with \(\hat{k}_{e}\) and \(\hat{k}_{m}\) are equivalent.
## 4 Kerr-Taub-NUT-(A)dS metric
A particular solution that admits a double Kerr-Schild form, and that contains many well-known solutions as various limits, is the Kerr-Taub-NUT-(A)dS metric. This solution can be written in a double Kerr-Schild form that is real-valued in \((2,2)\) signature after analytic continuation.
In Plebanski coordinates it is given as [90]
\[ds^{2}=d\bar{s}^{2}-\frac{2Mq}{q^{2}-p^{2}}\left[d\tilde{\tau}+p^{2}d\tilde{ \sigma}\right]^{2}-\frac{2Np}{q^{2}-p^{2}}\left[d\tilde{\tau}+q^{2}d\tilde{ \sigma}\right]^{2}. \tag{18}\]
Here \(M\) is the Schwarzschild mass, \(N\) is the NUT charge, and
\[d\bar{s}^{2}= \frac{\bar{\Delta}_{p}}{q^{2}-p^{2}}\left[d\tilde{\tau}+q^{2}d \tilde{\sigma}\right]^{2}+\frac{\bar{\Delta}_{q}}{q^{2}-p^{2}}\left[d\tilde{ \tau}+p^{2}d\tilde{\sigma}\right]^{2}+2\left[d\tilde{\tau}+q^{2}d\tilde{\sigma }\right]dp+2\left[d\tilde{\tau}+p^{2}d\tilde{\sigma}\right]dq \tag{19}\]
is the background (A)dS metric. Here \(\bar{\Delta}_{p}=\gamma-\epsilon p^{2}+\lambda p^{4}\) and \(\bar{\Delta}_{q}=-\gamma+\epsilon q^{2}-\lambda q^{4}\) with \(\lambda\) related to the scalar curvature via \(R=12\lambda\), and \(\gamma\) and \(\epsilon\) will be related to the rotational parameter \(a\).
In these coordinates the two orthogonal Kerr-Schild vectors take especially simple forms,
\[\ell =d\tilde{\tau}+p^{2}d\tilde{\sigma} \tag{20}\] \[k =d\tilde{\tau}+q^{2}d\tilde{\sigma} \tag{21}\]
which hereafter are referred to as the "mass" and "NUT" vectors, respectively.
To guide our intuition, we first transform this to twisted spheroidal coordinates using
\[dq =dr\] \[dp =d(a\cos\theta)\] \[d\tilde{\tau} =\frac{1}{(1-\lambda a^{2})}\left(dt-\frac{1}{(1-\lambda r^{2})} dr+ad\varphi\right)-\frac{a\cos^{2}\theta}{(1-\lambda a^{2}\cos^{2}\theta)\sin \theta}d\theta \tag{22}\] \[d\tilde{\sigma} =\frac{1}{(1-\lambda a^{2})}\left(-\lambda dt+\frac{\lambda}{(1- \lambda r^{2})}dr-\frac{1}{a}d\varphi\right)+\frac{1}{(1-\lambda a^{2}\cos^{ 2}\theta)a\sin\theta}d\theta.\]
With the choices \(\gamma=a^{2}\) and \(\epsilon=1+\lambda a^{2}\) the background metric becomes
\[\begin{split} d\bar{s}^{2}&=\frac{(1-\lambda r^{2})(1- \lambda a^{2}\cos^{2}\theta)}{(1-\lambda a^{2})}dt^{2}-\frac{(1-\lambda a^{2} \cos^{2}\theta)}{(1-\lambda r^{2})(1-\lambda a^{2})}dr^{2}\\ &\quad-\frac{(r^{2}-a^{2}\cos^{2}\theta)}{(1-\lambda a^{2}\cos^{ 2}\theta)}d\theta^{2}+\frac{(r^{2}-a^{2})\sin^{2}\theta}{(1-\lambda a^{2})}d \varphi^{2}+2\frac{a\sin^{2}\theta}{(1-\lambda a^{2})}d\varphi dr.\end{split} \tag{23}\]
After analytic continuation to Lorentzian signature (\(a\rightarrow-ia,\ \varphi\to i\varphi\)), this metric reduces to known results in particular limits: taking \(\lambda\to 0\) gives the Minkowski metric in twisted spheroidal coordinates used in [12], and taking \(a\to 0\) gives the (A)dS metric in the familiar static spheroidal coordinates. The parameter \(a\) is associated with the angular momentum of the black hole, as in the case of the Kerr metric. In these coordinates, the mass vector \(\ell\) and NUT vector \(k\) become
\[\ell =\frac{(1-\lambda a^{2}\cos^{2}\theta)}{(1-\lambda a^{2})}dt- \frac{(1-\lambda a^{2}\cos^{2}\theta)}{(1-\lambda r^{2})(1-\lambda a^{2})}dr+ \frac{a\sin^{2}\theta}{(1-\lambda a^{2})}d\varphi \tag{24}\] \[k =\frac{(1-\lambda r^{2})}{(1-\lambda a^{2})}dt-\frac{1}{(1- \lambda a^{2})}dr+\frac{(r^{2}-a^{2}\cos^{2}\theta)}{a\sin\theta(1-\lambda a^ {2}\cos^{2}\theta)}d\theta-\frac{(r^{2}-a^{2})}{a(1-\lambda a^{2})}d\varphi. \tag{25}\]
Once again we see that in the limit \(\lambda\to 0\), \(\ell\) matches with the Schwarzschild (\(a=0\)) or Kerr (\(a\neq 0\)) vectors used in [12] after analytic continuation. For the vector \(k\), the \(a\to 0\) limit is singular, however the functions \(\Phi_{k}\) and \(\tilde{\Phi}_{k}\) are not, as shown below.
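Since (23) is just the maximally symmetric background written in adapted coordinates, its scalar curvature must be the constant \(R=12\lambda\) quoted below (19). A brute-force sympy sketch of this consistency check (standard curvature conventions, assuming the form of (23) above; the final simplification may take a moment):

```python
import sympy as sp

t, r, th, ph, a, lam = sp.symbols('t r theta varphi a lambda', real=True)
x = [t, r, th, ph]
Xi, c, s = 1 - lam*a**2, sp.cos(th), sp.sin(th)

# background metric of eq. (23), coordinates (t, r, theta, varphi)
g = sp.zeros(4, 4)
g[0, 0] = (1 - lam*r**2)*(1 - lam*a**2*c**2)/Xi
g[1, 1] = -(1 - lam*a**2*c**2)/((1 - lam*r**2)*Xi)
g[2, 2] = -(r**2 - a**2*c**2)/(1 - lam*a**2*c**2)
g[3, 3] = (r**2 - a**2)*s**2/Xi
g[1, 3] = g[3, 1] = a*s**2/Xi          # from the 2 a sin^2(theta)/Xi dvarphi dr term
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij}
Gam = [[[sum(ginv[l, k]*(g[k, i].diff(x[j]) + g[k, j].diff(x[i]) - g[i, j].diff(x[k]))
             for k in range(4))/2 for j in range(4)] for i in range(4)] for l in range(4)]

def ricci(i, j):   # R_{ij} = d_l Gamma^l_{ij} - d_j Gamma^l_{il} + Gamma*Gamma terms
    return sp.simplify(sum(Gam[l][i][j].diff(x[l]) - Gam[l][i][l].diff(x[j])
                           + sum(Gam[l][l][k]*Gam[k][i][j] - Gam[l][j][k]*Gam[k][i][l]
                                 for k in range(4)) for l in range(4)))

Rscalar = sp.simplify(sum(ginv[i, j]*ricci(i, j) for i in range(4) for j in range(4)))
print(Rscalar)   # expected: 12*lambda
```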
### Scalar functions
In order to extract the scalar functions analogous to \(\Phi\) in the single Kerr-Schild case, we perform a second coordinate transformation
\[\begin{split} U&=\frac{\Delta}{\sqrt{2}}\bigg{[} \frac{1}{\sqrt{\lambda}}\frac{\sqrt{1-\lambda r^{2}}}{\sqrt{1-\lambda a^{2}}} \sqrt{1-\lambda a^{2}\cos^{2}\theta}\sinh(\sqrt{\lambda}t)-r\cos\theta\bigg{]} \\ V&=\frac{\Delta}{\sqrt{2}}\bigg{[}\frac{1}{\sqrt{ \lambda}}\frac{\sqrt{1-\lambda r^{2}}}{\sqrt{1-\lambda a^{2}}}\sqrt{1-\lambda a ^{2}\cos^{2}\theta}\sinh(\sqrt{\lambda}t)+r\cos\theta\bigg{]}\\ X&=\frac{\Delta}{\sqrt{2}}\frac{1}{\sqrt{(1- \lambda a^{2})}}(r-a)e^{\varphi}\sin\theta\\ Y&=\frac{\Delta}{\sqrt{2}}\frac{1}{\sqrt{(1- \lambda a^{2})}}(r+a)e^{-\varphi}\sin\theta\end{split} \tag{26}\]
with \(\Delta=1-\frac{\lambda}{2}(UV-XY)\) and \(r\) defined by the equation
\[\frac{(V-U)^{2}}{2r^{2}}+\frac{2XY}{r^{2}-a^{2}}(1-\lambda a^{2})=\Delta^{2}. \tag{27}\]
The limit \(\lambda\to 0\) recovers the flat spacetime transformation to null coordinates (4). In these null coordinates, the maximally symmetric background metric becomes
\[d\bar{s}^{2}=\frac{2}{\Delta^{2}}\left(dUdV-dXdY\right) \tag{28}\]
which is conformally flat and reduces to the flat space metric in null coordinates for \(\lambda\to 0\).
The mass and NUT vectors written in these coordinates are
\[\ell = \mathcal{N}_{\ell}\left[dU+\tilde{\Phi}_{\ell}dX+\Phi_{\ell}dY+ \Phi_{\ell}\tilde{\Phi}_{\ell}dV\right] \tag{29}\] \[k = \mathcal{N}_{k}\left[dU+\tilde{\Phi}_{k}dX+\Phi_{k}dY+\Phi_{k} \tilde{\Phi}_{k}dV\right] \tag{30}\]
with normalization factors
\[\mathcal{N}_{\ell} = \frac{(1+\cos\theta)}{2\Delta(1-\lambda r^{2})}\left[\sqrt{2}- \lambda rV-\frac{\lambda\Delta}{\sqrt{2}}(1-\cos\theta)\frac{(r^{2}-a^{2})}{1- \lambda a^{2}}\right] \tag{31}\] \[\mathcal{N}_{k} = \frac{(r+a)}{2a\Delta(1-\lambda a^{2}\cos^{2}\theta)}\left(\sqrt {2}-\lambda aV\cos\theta-\frac{\lambda\Delta}{\sqrt{2}}(1-\cos^{2}\theta) \frac{a(r-a)}{(1-\lambda a^{2})}\right) \tag{32}\]
and scalar functions
\[\Phi_{\ell} = \frac{\left[\frac{1}{\sqrt{2}}\lambda(r-a)(U+V+\sqrt{2}\Delta r) -2(1-\lambda ar)\right]}{2\mathcal{N}_{\ell}\Delta^{2}(r-a)(1-\lambda r^{2})}X \tag{33}\] \[\tilde{\Phi}_{\ell} = \frac{\left[\frac{1}{\sqrt{2}}\lambda(r+a)(U+V+\sqrt{2}\Delta r) -2(1+\lambda ar)\right]}{2\mathcal{N}_{\ell}\Delta^{2}(r+a)(1-\lambda r^{2})}Y \tag{34}\]
\[\Phi_{k} = \frac{\left[\frac{1}{\sqrt{2}}\lambda a(1-\cos\theta)\left(U+V+ \sqrt{2}\Delta a\cos\theta\right)+2(1-\lambda a^{2}\cos\theta)\right]}{2 \mathcal{N}_{k}\Delta^{2}a(1-\lambda a^{2}\cos^{2}\theta)(1-\cos\theta)}X \tag{35}\] \[\tilde{\Phi}_{k} = \frac{\left[\frac{1}{\sqrt{2}}\lambda a(1+\cos\theta)\left(U+V+ \sqrt{2}\Delta a\cos\theta\right)-2(1+\lambda a^{2}\cos\theta)\right]}{2 \mathcal{N}_{k}\Delta^{2}a(1-\lambda a^{2}\cos^{2}\theta)(1+\cos\theta)}Y. \tag{36}\]
It can be shown by a direct computation that \(\Phi_{\ell}\) satisfies the non-linear PDEs (17).
These four functions are not independent, but are all related to each other by a sequence of discrete transformations. Although not obvious from the form written above, the functions \(\tilde{\Phi}_{\ell}\) and \(\tilde{\Phi}_{k}\) are identical. By inspection, we also have
\[\tilde{\Phi}_{\ell}(a,U,V,X,Y) = \Phi_{\ell}(-a,U,V,Y,X), \tag{37}\] \[\tilde{\Phi}_{k}(a,U,V,X,Y) = -\Phi_{k}(a,-U,-V,Y,X),\]
or in spheroidal coordinates
\[\tilde{\Phi}_{\ell}(a,t,r,\cos\theta,\varphi) = \Phi_{\ell}(-a,t,r,\cos\theta,-\varphi), \tag{38}\] \[\tilde{\Phi}_{k}(a,t,r,\cos\theta,\varphi) = \Phi_{k}(a,-t,-r,\cos\theta,-\varphi).\]
The functions \(\Phi_{\ell}\) and \(\Phi_{k}\) which we will use for the Newman-Penrose map are then related by
\[\Phi_{\ell}(a,t,r,\cos\theta,\varphi)=\Phi_{k}(-a,-t,-r,\cos\theta,\varphi). \tag{39}\]
It then follows that \(\Phi_{k}\) satisfies the same non-linear PDEs (17) as \(\Phi_{\ell}\), while \(\tilde{\Phi}_{\ell}=\tilde{\Phi}_{k}\) satisfies the non-linear PDEs obtained from (17) by interchanging \(X\) with \(Y\). Each are therefore harmonic with
respect to the flat metric, and for these double Kerr-Schild spacetimes the Newman-Penrose map can be defined. This means that the gauge field constructed from these scalar functions via the Newman-Penrose map will automatically be a self-dual solution of the vacuum Maxwell equations in both flat and conformally flat backgrounds. The relation (39) also implies that the two parts of the gauge field are related via
\[A_{\ell}(Q_{e},a,t,r,\cos\theta,\varphi)=A_{k}(Q_{m},-a,-t,-r,\cos\theta,\varphi) \tag{40}\]
with \(A_{\ell}=\hat{k}_{e}\Phi_{\ell}\) and \(A_{k}=\hat{k}_{m}\Phi_{k}\).
### Self-dual gauge field
To obtain the self-dual gauge field from this metric via the Newman-Penrose map, we act with the operators defined in (15) (with the appropriate charges) on the functions \(\Phi_{k}\) and \(\Phi_{\ell}\) from the previous section, i.e.
\[A_{\text{NP}}=\hat{k}_{e}\Phi_{\ell}+\hat{k}_{m}\Phi_{k}. \tag{41}\]
We will set \(\epsilon_{0}=\mu_{0}=1\) from now on for ease of notation. In practice, we compute \(A_{\ell}\) and use the relationship (40) to find \(A_{k}\).
In order to compare with known results, we then analytically continue the gauge field to Lorentzian signature using \(a\rightarrow-ia,\ \varphi\to i\varphi,\ Q_{m}\to iQ_{m}\). Our total gauge field in Lorentzian signature is
\[\begin{split} A_{\text{NP}}=&\ \frac{(Q_{e}+iQ_{m})}{4\pi(r+ia\cos\theta)}\left[(1+\lambda iar\cos \theta)\,dt+dr+(r\cos\theta+ia)id\varphi\right]\\ &+A_{\text{NP, gauge}}\end{split} \tag{42}\]
with the pure gauge term
\[\begin{split} A_{\text{NP, gauge}}=&\ \frac{(Q_{e}+iQ_{m})}{2\pi}\frac{\sqrt{2}}{4r \Omega}\frac{(1-\lambda r^{2})}{(1+\lambda a^{2})}\bigg{(}2(1+\lambda a^{2})+ i\lambda\Delta a(r+ia)(1-\cos\theta)\bigg{)}\\ &\ \ \ \ \ \times\bigg{[}dt-\frac{1}{(1-\lambda r^{2})}dr-\frac{ia }{(1+\lambda a^{2}\cos^{2}\theta)}d(\cos\theta)\bigg{]}\\ &-\frac{(Q_{e}+iQ_{m})}{4\pi r}\left[dt-ia\frac{1}{(1+\lambda a^ {2}\cos^{2}\theta)}d(\cos\theta)\right]\\ &-\frac{(Q_{e}+iQ_{m})}{2\pi}\frac{1}{2}id\varphi-\frac{(Q_{e}+iQ _{m})}{2\pi}\frac{d\Delta}{(2-\Delta)}\\ &-\frac{(Q_{e}+iQ_{m})}{2\pi}\left[\frac{\sinh(\sqrt{\lambda}t) }{\cosh(\sqrt{\lambda}t)}\sqrt{\lambda}dt\right]+\frac{(Q_{e}+iQ_{m})}{2\pi} \frac{1}{2r}\left[\frac{(1+\lambda r^{2})}{(1-\lambda r^{2})}dr\right]\\ &+\frac{(Q_{e}+iQ_{m})}{2\pi}\left[\frac{(1-\lambda a^{2}\cos \theta)}{2(1+\cos\theta)(1+\lambda a^{2}\cos^{2}\theta)}d(\cos\theta)\right] \end{split} \tag{43}\]
and
\[\Omega=\frac{1}{\sqrt{2}}\bigg{[}2-\sqrt{\lambda}r(2-\Delta)\frac{\sinh(\sqrt {\lambda}t)}{\cosh(\sqrt{\lambda}t)}-\lambda\Delta r^{2}-\lambda\Delta a^{2} (1-\cos\theta)\frac{(1-\lambda r^{2})}{(1+\lambda a^{2})}\bigg{]}. \tag{44}\]
Finally, looking separately at the real and imaginary parts, we get
\[\begin{split}\Re(A_{\text{NP}})=&\ \frac{Q_{e}r}{4\pi(r^{2}+a^{2} \cos^{2}\theta)}\left[\left(1+\lambda a^{2}\cos^{2}\theta\right)dt+dr-a\sin^{2} \theta d\varphi\right]\\ &+\frac{Q_{m}a\cos\theta}{4\pi(r^{2}+a^{2}\cos^{2}\theta)}\left[ (1-\lambda r^{2})dt+dr-\frac{(r^{2}+a^{2})}{a}d\varphi\right]\end{split} \tag{45}\]
and
\[\begin{split}\Im(A_{\text{NP}})=&\ \frac{Q_{m}r}{4\pi(r^{2}+a^{2} \cos^{2}\theta)}\left[(1+\lambda a^{2}\cos^{2}\theta)dt+dr-a\sin^{2}\theta d \varphi\right]\\ &-\frac{Q_{e}a\cos\theta}{4\pi(r^{2}+a^{2}\cos^{2}\theta)}\left[ (1-\lambda r^{2})dt+dr-\frac{(r^{2}+a^{2})}{a}d\varphi\right].\end{split} \tag{46}\]
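As a small algebraic cross-check, one can verify symbolically that (45) and (46) are precisely the real and imaginary parts of the explicit (non-pure-gauge) part of (42). A minimal sympy sketch of this check:

```python
import sympy as sp

r, a, th, lam, Qe, Qm = sp.symbols('r a theta lambda Q_e Q_m', real=True)
rho2 = r**2 + a**2*sp.cos(th)**2

# dt, dr, dvarphi coefficients of the first line of eq. (42)
pre = (Qe + sp.I*Qm)/(4*sp.pi*(r + sp.I*a*sp.cos(th)))
A42 = [pre*(1 + lam*sp.I*a*r*sp.cos(th)), pre, pre*(r*sp.cos(th) + sp.I*a)*sp.I]

# eq. (45) and eq. (46)
e, m = Qe*r/(4*sp.pi*rho2), Qm*a*sp.cos(th)/(4*sp.pi*rho2)
ReA = [e*(1 + lam*a**2*sp.cos(th)**2) + m*(1 - lam*r**2),
       e + m,
       -e*a*sp.sin(th)**2 - m*(r**2 + a**2)/a]
e2, m2 = Qm*r/(4*sp.pi*rho2), Qe*a*sp.cos(th)/(4*sp.pi*rho2)
ImA = [e2*(1 + lam*a**2*sp.cos(th)**2) - m2*(1 - lam*r**2),
       e2 - m2,
       -e2*a*sp.sin(th)**2 + m2*(r**2 + a**2)/a]

# compare componentwise; sin^2 is eliminated by hand so only rational cancellation remains
print([sp.simplify((R + sp.I*Im - c).subs(sp.sin(th)**2, 1 - sp.cos(th)**2))
       for R, Im, c in zip(ReA, ImA, A42)])   # [0, 0, 0]
```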
For comparison, the Kerr-Schild double copy of a metric in double Kerr-Schild form (11) is defined as [14]
\[A_{\text{KS}}=\psi\ell+\phi k. \tag{47}\]
For the Kerr-Taub-NUT-(A)dS metric in \((2,2)\) signature (18), we can perform a slightly modified version of the coordinate transformation (22) (\(t\rightarrow-t\)) which does not change the background metric
\[dq =dr\] \[dp =d(a\cos\theta)\] \[d\tilde{\tau} =\frac{1}{(1-\lambda a^{2})}\left(-dt-\frac{1}{(1-\lambda r^{2}) }dr+ad\varphi\right)-\frac{a\cos^{2}\theta}{(1-\lambda a^{2}\cos^{2}\theta) \sin\theta}d\theta \tag{48}\] \[d\tilde{\sigma} =\frac{1}{(1-\lambda a^{2})}\left(\lambda dt+\frac{\lambda}{(1- \lambda r^{2})}dr-\frac{1}{a}d\varphi\right)+\frac{1}{(1-\lambda a^{2}\cos^{2 }\theta)a\sin\theta}d\theta.\]
In these coordinates the two scalar functions \(\phi\) and \(\psi\) are
\[\psi =\frac{2Mq}{q^{2}-p^{2}}=\frac{2Mr}{(r^{2}-a^{2}\cos^{2}\theta)} \tag{49}\] \[\phi =\frac{2Np}{q^{2}-p^{2}}=\frac{2Na\cos\theta}{(r^{2}-a^{2}\cos^{2 }\theta)}, \tag{50}\]
and the vectors are
\[\ell =\frac{(1-\lambda a^{2}\cos^{2}\theta)}{(1-\lambda a^{2})}dt+ \frac{(1-\lambda a^{2}\cos^{2}\theta)}{(1-\lambda r^{2})(1-\lambda a^{2})}dr- \frac{a\sin^{2}\theta}{(1-\lambda a^{2})}d\varphi \tag{51}\] \[k =\frac{(1-\lambda r^{2})}{(1-\lambda a^{2})}dt+\frac{1}{(1-\lambda a ^{2})}dr-\frac{(r^{2}-a^{2}\cos^{2}\theta)}{a\sin\theta(1-\lambda a^{2}\cos^{ 2}\theta)}d\theta-\frac{(r^{2}-a^{2})}{a(1-\lambda a^{2})}d\varphi. \tag{52}\]
After analytic continuation to Lorentzian signature \((a\to-ia,\ \varphi\to i\varphi,\ N\to iN)\), the Kerr-Schild gauge field for this metric is then (mapping \(M\to Q_{e}/8\pi\) and \(N\to Q_{m}/8\pi\))
\[A_{\text{KS}} =\frac{Q_{e}r}{4\pi(r^{2}+a^{2}\cos^{2}\theta)(1+\lambda a^{2})} \bigg{[}(1+\lambda a^{2}\cos^{2}\theta)dt+dr-a\sin^{2}\theta d\varphi\bigg{]}\] \[\quad+\frac{Q_{m}a\cos\theta}{4\pi(r^{2}+a^{2}\cos^{2}\theta)(1+ \lambda a^{2})}\bigg{[}(1-\lambda r^{2})dt+dr-\frac{(r^{2}+a^{2})}{a}d\varphi \bigg{]} \tag{53}\] \[\quad+A_{\text{KS, gauge}}\]
with
\[A_{\text{KS, gauge}}=\frac{Q_{e}}{4\pi(1+\lambda a^{2})}\bigg{[}\frac{\lambda r }{(1-\lambda r^{2})}dr\bigg{]}-\frac{iQ_{m}}{4\pi}\bigg{[}\frac{\cos\theta}{ \sin\theta(1+\lambda a^{2}\cos^{2}\theta)}d\theta\bigg{]}. \tag{54}\]
This agrees exactly with the real part of the Newman-Penrose map (45) up to an overall constant factor of \(\frac{1}{(1+\lambda a^{2})}\) and pure gauge terms.
In the \(\lambda\to 0\) limit the metric is the Kerr-Taub-NUT solution, and the gauge field becomes
\[\begin{split}\Re(A_{\text{NP}})\bigg{|}_{\lambda\to 0}& =\frac{Q_{e}r}{4\pi(r^{2}+a^{2}\cos^{2}\theta)}\left[dt+dr-a\sin^{2} \theta d\varphi\right]\\ &\quad+\frac{Q_{m}a\cos\theta}{4\pi(r^{2}+a^{2}\cos^{2}\theta)} \left[dt+dr-\frac{(r^{2}+a^{2})}{a}d\varphi\right]\end{split} \tag{55}\]
\[\begin{split}\Im(A_{\text{NP}})\bigg{|}_{\lambda\to 0}& =\frac{Q_{m}r}{4\pi(r^{2}+a^{2}\cos^{2}\theta)}\left[dt+dr-a\sin^{2} \theta d\varphi\right]\\ &\quad-\frac{Q_{e}a\cos\theta}{4\pi(r^{2}+a^{2}\cos^{2}\theta)} \left[dt+dr-\frac{(r^{2}+a^{2})}{a}d\varphi\right].\end{split} \tag{56}\]
The electric part of this solution in this limit is exactly equivalent to the one found in [12] (known as \(\sqrt{\text{Kerr}}\)) for the Kerr metric, and the addition of the NUT charge simply adds the magnetic dual part. The electric part of this gauge field was shown in [11, 29] to be sourced by an axisymmetric electric charge distribution rotating at a uniform rate about the \(z\)-axis (where \(z=r\cos\theta\)). We expect the magnetic part of this field to be similarly sourced by an axisymmetric magnetic charge distribution rotating at a uniform rate about the \(z\)-axis.
In the \(a\to 0\) limit, the gravitational solution is the Taub-NUT-(A)dS metric. The gauge solution can be written (with the addition/subtraction of pure gauge terms) as
\[\Re(A_{\text{NP}})\bigg{|}_{a\to 0} =\frac{Q_{e}}{4\pi r}dt+\frac{Q_{m}}{4\pi}(1-\cos\theta)d\varphi \tag{57}\] \[\Im(A_{\text{NP}})\bigg{|}_{a\to 0} =\frac{Q_{m}}{4\pi r}dt-\frac{Q_{e}}{4\pi}(1-\cos\theta)d\varphi. \tag{58}\]
The real part of this solution is a dyon (a Coulomb charge plus a magnetic monopole). This is equal to the gauge solution found via the Kerr-Schild double copy in [14] in flat space. The electric part of this solution also agrees with the Schwarzschild-(A)dS solution (up to pure gauge terms) found via the Kerr-Schild double copy in [20, 21].
In this limit, the \(\lambda\) dependence of the solution completely drops out. Although a priori in the \(a\to 0\) limit the \(\Phi\)'s appear to depend on \(\lambda\) through \(\Delta\), this dependence is completely eliminated upon expressing the \(\Phi\)'s in terms of the null coordinates \((U,V,X,Y)\) only. This is because the \(\Phi\)'s depend on \(r\) and \(\Delta\) only in the combination \(\Delta r\), and it is only this combination that appears in the quadratic equation (27) relating \(r\) to the null coordinates.
In other words,
\[\Phi(a=0,\lambda,t,r,\cos\theta,\varphi)=\Phi(a=0,\lambda=0,U,V,X,Y). \tag{59}\]
is independent of \(\lambda\). Again, this result isn't too surprising given that \(\Phi\) describes a shear-free null geodesic congruence, physical properties that are conformally invariant.
### Electric-magnetic duality
The field strength \(F_{\rm NP}=dA_{\rm NP}\) is of the form
\[F_{\rm NP}=(Q_{e}+iQ_{m})F \tag{60}\]
where both \(F_{\rm NP}\) and \(F\) are self-dual, with \(F_{\rm NP}=i\star_{0}F_{\rm NP}\) and \(F=i\star_{0}F\). This solution exhibits a discrete electric-magnetic duality that acts on the charges, separate from its Hodge star self-dual property. From the self-dual property of \(F\) it follows that
\[F=f+i\star_{0}f \tag{61}\]
for some real two-form \(f\), and also that
\[F_{\rm NP}=Q_{e}f-Q_{m}\star_{0}f+i(Q_{m}f+Q_{e}\star_{0}f). \tag{62}\]
This result implies a relation between the complex electric \(E\) and magnetic \(B\) fields (vector notation suppressed) of \(F_{\rm NP}\) and the real-valued electric \(E_{f}\) and magnetic \(B_{f}\) components of \(f\), namely
\[\begin{split}\Re(E)&=Q_{e}E_{f}-Q_{m}B_{f}\\ \Re(B)&=Q_{e}B_{f}+Q_{m}E_{f}\\ \Im(E)&=Q_{m}E_{f}+Q_{e}B_{f}\\ \Im(B)&=Q_{m}B_{f}-Q_{e}E_{f}\end{split} \tag{63}\]
where \(\star E=B\) and \(\star B=-E\) have been used. Comparing these equations gives \(\Im(E)=\Re(B)\) and \(\Im(B)=-\Re(E)\), which, taken together, is none other than the statement that \(F_{\rm NP}\) is a self-dual solution, i.e., \(E=iB\). Note however that \(\Re(E)\) and \(-\Re(B)\), which are not related by the Hodge star operation, are obtained from \(\Re(B)\) and \(\Re(E)\), respectively, by \((Q_{e},Q_{m})\rightarrow(-Q_{m},Q_{e})\), which is a counter-clockwise \(U(1)\) rotation on the electric and magnetic charge by an angle \(\theta=\pi/2\).
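The algebra leading from (60)-(62) to (63) is elementary and can be spelled out symbolically. Treating the electric and magnetic parts of \(f\) componentwise as formal symbols \(E_{f},B_{f}\) (vector notation suppressed, as above), and using \(\star E=B\), \(\star B=-E\) so that \(\star f\) has electric part \(B_{f}\) and magnetic part \(-E_{f}\), a short sympy sketch confirms (63) together with the self-duality statement \(E=iB\):

```python
import sympy as sp

Qe, Qm, Ef, Bf = sp.symbols('Q_e Q_m E_f B_f', real=True)

# F_NP = (Q_e + i Q_m)(f + i *f), with *f having electric part B_f and magnetic part -E_f
E = sp.expand((Qe + sp.I*Qm)*(Ef + sp.I*Bf))   # electric part of F_NP
B = sp.expand((Qe + sp.I*Qm)*(Bf - sp.I*Ef))   # magnetic part of F_NP

checks = [sp.re(E) - (Qe*Ef - Qm*Bf),   # eq. (63)
          sp.re(B) - (Qe*Bf + Qm*Ef),
          sp.im(E) - (Qm*Ef + Qe*Bf),
          sp.im(B) - (Qm*Bf - Qe*Ef),
          sp.im(E) - sp.re(B),          # Im(E) =  Re(B)
          sp.im(B) + sp.re(E),          # Im(B) = -Re(E)
          E - sp.I*B]                   # E = iB, i.e. F_NP is self-dual
print([sp.simplify(chk) for chk in checks])  # all zero
```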
Given such a self-dual \(F_{\rm NP}\), one can take \(\Re(F_{\rm NP})\) alone (or \(\Im(F_{\rm NP})\)), forget the other, and obtain a non-self-dual solution of the source-free Maxwell equations. The resulting field strength will also exhibit electric-magnetic duality.
For gravitational solutions on a flat background with vanishing rotational parameter \(a\), the object on the gauge field side describes a dyon which has manifest electric-magnetic duality. Here we find this electric-magnetic duality of the solution carries over to maximally symmetric spacetimes, including those having non-vanishing rotational parameter. The gravitational analog of electric-magnetic duality has been explored in a general setting using the Kerr-Taub-NUT spacetime as a case study in [95].
## 5 Discussion
In this paper we applied the intuition of the Newman-Penrose map to more general spacetimes, including those in double Kerr-Schild form and on maximally symmetric backgrounds. Although we do not have a rigorous formulation of the Newman-Penrose map for general double Kerr-Schild metrics, for a particular example metric we are able to reproduce the gauge field from the Kerr-Schild and Weyl double copies, as shown in table 1.
In particular, we studied the Kerr-Taub-NUT-(A)dS metric which can be put into double Kerr-Schild form after analytic continuation to Kleinian signature. In analogy with the Newman-Penrose map for single Kerr-Schild spacetimes with a flat background, we obtained scalar functions \(\Phi_{\ell}\) and \(\Phi_{k}\) for each of the null vectors by identifying null coordinates that make the conformal flatness and Lorentz invariance of the maximally symmetric background metric manifest. These scalar functions are shown to satisfy the non-linear PDEs sufficient for them to be harmonic in flat spacetime (but not conformally flat spacetime).
Acting with our spin-raising operator gives a gauge field, which, because of the harmonic conditions on the \(\Phi_{\ell}\) and \(\Phi_{k}\), satisfies the vacuum Maxwell equations in _both_ flat and conformally flat spacetime. We then compare our results with those of the Kerr-Schild and Weyl double copies and show they match up to pure gauge terms. We also find that the field strength obtained from the Newman-Penrose map gauge field displays a discrete electric-magnetic duality.
The fact that \(\Phi_{\ell}\) satisfies the non-linear PDEs implies it can be obtained from the zero set of a holomorphic function \(F\) (the "Kerr function") of twistor variables, one of the consequences of Kerr's Theorem [96, 97, 98, 99]. What this function is in general for the Kerr-Taub-NUT-(A)dS family of solutions is presently unknown. In the \(a=0,\lambda\neq 0\) limit it is seen to be identical to the function \(F_{Sch}\) of the Schwarzschild spacetime, and for \(\lambda=0\) it is a quadratic function of \(\Phi_{\ell}\) and invariants of the non-linear PDEs (and obtainable from \(F_{Sch}\) upon applying the Newman-Janis trick). Knowing that the Kerr function is holomorphic in twistor variables is insufficient information to fully fix it. For more general metrics of the single Kerr-Schild metric form, the Kerr function is obtained by integrating the Einstein equations [100] (see also [101]). Finding the Kerr function in general for double Kerr-Schild metrics, or even for the subset of the Kerr-Taub-NUT-(A)dS family of solutions, presumably involves similar steps.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Gravity solution & Gauge solution & KS/Weyl DC & NP Map \\ \hline \hline Schwarzschild & Coulomb & [11] & [12] \\ \hline Kerr & rotating disc of charge & [11] & [12] \\ \hline Photon Rocket & Lienard-Wiechert potential & [15] & [12] \\ \hline Taub-NUT & dyon & [14] & this work \\ \hline Kerr-Taub-NUT & rotating dyon & [14, 35] & this work \\ \hline Schwarzschild-(A)dS & Coulomb in (A)dS & [20, 21] & this work \\ \hline Kerr-(A)dS & rotating disc in (A)dS & [21] & this work \\ \hline Kerr-Taub-NUT-(A)dS & aligned fields in (A)dS & [78] & this work \\ \hline \end{tabular}
\end{table}
Table 1: Example metrics and the gauge fields obtained from them via either the Kerr-Schild/Weyl double copy or the Newman-Penrose map.

Our results suggest that the Newman-Penrose map might be generalizable to all double Kerr-Schild spacetimes admitting an integrable distribution of totally null vectors. For flat backgrounds, these can all be solved exactly [102]. The tetrad can be generalized to accommodate double Kerr-Schild spacetimes, but this only leads to a subset of the non-linear PDEs being satisfied, so there remains a conceptual obstruction to defining the Newman-Penrose map for general double Kerr-Schild spacetimes.
These results provide further evidence that the Newman-Penrose map is a manifestation of the double copy, with the potential to generically study non-vacuum and non-stationary solutions. In future work we hope to understand the relationship between the Newman-Penrose map and the Kerr-Schild double copy, which a priori appear very different, yet agree in all known examples. This promises to provide insight into the structure of the double copy more generally, and the gravitational and gauge theories to which it applies.
## Acknowledgements
The work of KF is supported by the Swiss National Science Foundation under grant no. 200021-205016. KF also acknowledges support from Simons Foundation Award Number 658908. The work of MG is supported by the LDRD program at Los Alamos National Laboratory and by the U.S. Department of Energy, Office of High Energy Physics, under Contract No. DE-AC52-06NA25396. The Los Alamos preprint number for this manuscript is LA-UR-23-26863. |
2308.11686 | Relativistic Hydrodynamics under Rotation: Prospects & Limitations from
a Holographic Perspective | The AdS/CFT correspondence, or holography, has provided numerous important
insights into the behavior of strongly-coupled many-body systems. Crucially, it
has provided a testing ground for the construction of new effective field
theories, especially those in the low frequency, long wavelength limit known as
hydrodynamics. We review the study of strongly-coupled rotating fluids using
holography, and we examine the hydrodynamics emerging from the study of
rotating Myers-Perry black holes. We discuss three regimes in which holographic
rotating fluids display either (1) hydrodynamic behavior of a boosted fluid,
(2) hydrodynamic behavior distinct from a boosted fluid, or (3) no obvious
hydrodynamic behavior. We describe techniques to obtain hydrodynamic and
non-hydrodynamic modes, and we compute the radius of convergence for the
hydrodynamic regimes. The limitations of hydrodynamics under rotation are
discussed alongside our findings. | Markus A. G. Amano, Casey Cartwright, Matthias Kaminski, Jackson Wu | 2023-08-22T15:28:23Z | http://arxiv.org/abs/2308.11686v2 | # Relativistic Hydrodynamics under Rotation: Prospects & Limitations from a Holographic Perspective
###### Abstract
The AdS/CFT correspondence, or holography, has provided numerous important insights into the behavior of strongly-coupled many-body systems. Crucially, it has provided a testing ground for the construction of new effective field theories, especially those in the low frequency, long wavelength limit known as hydrodynamics. In this article we continue the study of strongly-coupled rotating fluids using holography and hydrodynamics. We provide an overview of recent developments arising from the study of simply spinning Myers-Perry black holes. We review techniques to obtain hydrodynamic and non-hydrodynamic modes, describe how branch point singularities in the complex frequency and momentum space limit the radius of convergence of the hydrodynamic gradient expansion, and outline the relation of pole-skipping in the linear response functions to early time chaotic behavior.
###### Contents
* I Introduction
* I.1 Overview
* I.2 Summary of Fluid Results
* I.3 Gravity Results
* II The Holographic Setup
* II.1 Rotating background metric and BH thermodynamics
* II.2 Linearized metric fluctuation equations and separability
* II.3 General ansatz for spin 2 fields
* III Results
* III.1 QNMs dual to hydrodynamic and non-hydrodynamic eigenmodes
* III.1.1 Previous QNM results for large BHs
* III.1.2 \(\mathcal{K}=\mathcal{J}\) -- Tensor fluctuations
* III.1.3 \(\mathcal{K}=\mathcal{J}-1\) -- Vector fluctuations
* III.1.4 \(\mathcal{K}=\mathcal{J}-2\) -- Scalar fluctuations
* III.1.5 Horizon radius dependence & the emergence of hydrodynamics
* III.1.6 Perturbative instabilities - superradiant and Gregory-Laflamme
* III.1.7 Cross Sector Spectrum Comparison
* III.2 Critical points
* III.2.1 Implicit functions and their series expansions
* III.2.2 Determination of the critical point
* III.3 Pole-Skipping
* III.3.1 Gravitational shockwaves
* III.3.2 Near horizon metric fluctuations
* IV Outlook
* A QNM Data
* 1. Tensor sector
* 2. Vector sector
* 3. Scalar sector
* B Numerical methods
* C Null coordinates
* 1. Infalling Eddington-Finkelstein coordinates
* 2. Kruskal coordinates
* D Hydrodynamic dispersion relations and conventions
* E Limits of the Wigner D function
## I Introduction
In this article we review recent results about rotating holographic fluids and add previously unpublished results. The rotating holographic fluid which we study here may be viewed as a model of real-world rotating fluids, with our results aiding the construction of effective (e.g. hydrodynamic) descriptions of such fluids. These investigations were motivated by the recent report of large vorticity in the quark-gluon-plasma generated at RHIC [1].
### Overview
Rotating fluids occur in great abundance in Nature, from eddies in rivers to rotating binary systems of merging black holes (BHs) or neutron stars creating gravitational waves which are measured with increasing precision, requiring a more precise theoretical understanding and modeling [2; 3]. Thus the effective description of rotating systems is of high interest. Hydrodynamics provides such a universal effective description for low energy and long wavelength excitations around an equilibrium state. In this article, we will see that the rotating holographic fluids we study present such equilibrium states allowing a hydrodynamic description.
In 2017, the hyperon polarization measurement by STAR [1] indicated that the quark-gluon-plasma generated in the heavy-ion collisions at RHIC could possess a large vorticity. Since then, there has been an increased interest in rotating (in part strongly coupled and confined) systems [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Rotating systems, specifically rotating BHs, have also been investigated in the context of the AdS/CFT correspondence, or holography [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. The eternal rotating black holes (BHs) we consider here are in thermodynamic equilibrium, as their observable properties, namely temperature and angular momentum, do not change over time. In both rotating fluids and rotating BHs, the rotation introduces nontrivial inertial effects into the dispersion relations of eigenmodes (with respect to an inertial observer in relation to the fluid rest frame). Holography relates rotating fluids to rotating BHs [23; 31], which makes it possible to study rotating BHs and to derive properties of the holographically dual rotating fluids.
In this article, we present the hydrodynamic regime and parts of the non-hydrodynamic regime of a rotating holographic plasma. In particular, we find hydrodynamic behavior in large simply spinning five-dimensional Myers-Perry (MP) Anti-de Sitter (AdS) BHs which are solutions of type IIB Supergravity. These are dual to rotating quantum fluids within strongly coupled \(\mathcal{N}=4\) Super-Yang-Mills theory (SYM) through the AdS/CFT correspondence. Therefore, it is possible to compute metric fluctuations around such rotating BHs and then translate the results into properties of rotating strongly coupled
SYM plasma. In particular, we focus on the computation of the dispersion relations of hydrodynamic and non-hydrodynamic modes.
Several limitations of the hydrodynamic description of the rotating holographic fluid are discussed, among them the explicit calculation of the radius of convergence of the gradient expansion facilitated by the study of so-called _critical points in complex momentum space_. In addition, cases are considered in which the low energy regime is dominated by non-hydrodynamic modes, leading to a breakdown of the hydrodynamic description. We compute the lowest few non-hydrodynamic modes and study their dynamics describing the fluctuations around the rotating holographic fluid, reaching far beyond the standard regime of hydrodynamics (thus we also include fluctuations with frequencies and momenta on the same order or much larger than the fluid temperature). Prospects are explored, pushing the hydrodynamic description beyond its limitations and exploring fluid properties outside the hydrodynamic regime, for example with the so-called _pole-skipping points_ relating chaotic dynamics to the hydrodynamic sound dispersion relation.
### Summary of Fluid Results
The rotating holographic fluid which we study here may be viewed as a model of real-world rotating fluids, aiding the construction of effective (e.g. hydrodynamic) descriptions of such fluids. For convenience, we begin with a quick guide to the main results, which will be discussed and explained further below.
* At any temperature (for any size BHs):
* Hydrodynamic dispersion relations with modified transport coefficients are a good fit already at small to intermediate temperatures (corresponding to a horizon radius \(r_{+}\approx 10^{4}\), and possibly even smaller), see Fig. 8.
* There is a remarkable window of horizon values in the range of \(1000\lesssim r_{+}\lesssim 10^{7}\), in which these holographic rotating fluids display hydrodynamic behavior that is distinct from a boosted fluid, see Sec. III.1.5.
* Multiple _level crossings_ occur between distinct non-hydrodynamic modes, as illustrated in Fig. 5.
* At small temperature (horizon radius \(r_{+}=10\)), the lowest few hydrodynamic and non-hydrodynamic modes are computed for the tensor (Fig. 4), vector (Fig. 6), and scalar (Fig. 7) sectors.
* At _extremality_, holographic fluids display a similarity with rotating charged fluids. Consider a state of fixed energy with just rotational and thermal energy. When all the energy is rotational and the temperature vanishes, the zero momentum eigenmodes appear to coalesce, forming a branch cut along the imaginary axis similar to what is observed in charged holographic fluids [32; 33; 34], see the bottom graphs (\(a/\ell=0.9\), angular momentum parameter \(a\) normalized to the AdS radius \(\ell\)) in Figs. 4, 6, and 7.
* At small temperatures, corresponding to horizon radius \(r_{+}=\ell\) (with the AdS radius \(\ell\)) the fluid becomes unstable at smaller angular momentum, as superradiance occurs below extremality (i.e. below rotational parameter \(a=\ell\)) at \(a/a_{\rm ext}\sim 0.916\), see Fig. 9.
* Radius of convergence of the hydrodynamic expansion in momentum space was computed for \(100\leq r_{+}\leq 10^{6}\), see Fig. 11; the convergence is affected very little by the value of the horizon radius, see Fig. 12.
* At large temperature (for large BHs):
* Fluid is nontrivially rotating, low-energy (hydrodynamic) excitations _see_ it as a fluid boosted with velocity \(a\), see Sec. III.1.1.
* Hydrodynamic dispersion relations with boosted fluid transport coefficients are given by eqs. (54) to (61).
* Eikonal limit (large momentum limit): scaling as expected, see Fig. 5.
* Superradiance instabilities occur only at extremality, \(a=\ell\), while the fluid is stable (according to our numerical checks) against all fluctuations at \(a<\ell\).
The pole-skipping point, which is believed to be related to quantum chaotic dynamics (encoding the quantum butterfly velocity and Lyapunov exponent), is shown to persist under rotation. This may indicate a relation between chaotic dynamics and hydrodynamics persisting under rotation. Pole-skipping frequencies (and momenta) are determined in a rotating fluid (these are points where the numerator and the denominator of an energy-energy Green's function vanish simultaneously, and consequently the pole is "skipped" [35; 36]). Fig. 13 shows that the pole-skipping point lies on the hydrodynamic sound dispersion relation; the associated quantum chaos butterfly velocity and Lyapunov exponent are displayed in Fig. 14.
Most strikingly, we find hydrodynamic behavior in the rotating fluids already at moderate temperatures, as noted before [37]. We holographically compute fluctuations of the hydrodynamic fields (temperature, fluid velocity) and determine their spectrum of eigenmodes. Already at fairly low temperatures, the spectrum of this fluid contains modes which have vanishing frequency in the limit of vanishing momentum eigenvalue, a defining property of hydrodynamic modes. Furthermore, the frequencies of these hydrodynamic mode candidates scale with the (angular) momentum of the fluctuations linearly or quadratically, closely resembling the linear scaling of sound modes with the speed of sound and the quadratic damping of diffusion modes. Notably, this momentum of the fluctuations is now a discrete angular momentum eigenvalue because the rotating fluid enforces an expansion into eigenmodes which resemble spherical harmonics, being labelled by discrete angular momentum eigenvalues \(\mathcal{J}\).
Our analysis here goes beyond [37] in that we present non-hydrodynamic modes along with the hydrodynamic modes. Extending [37] even further, we present results at moderate to low temperatures (corresponding to medium to small BH horizon radius). At such low temperatures, even the lowest energy eigenmodes may not behave hydrodynamically, in that their dispersion relations deviate from hydrodynamic scaling. These results demonstrate how the hydrodynamic scaling of the lowest lying eigenfrequencies with angular momentum emerges as the temperature increases. Furthermore, non-hydrodynamic modes are responding strongly to the coupling of modes in the scalar and vector sectors. This is seen in the level crossing in Fig. 5.
Furthermore, we study large momentum values \(\mathcal{J}\) and large frequencies \(\nu\), beyond the naive regime of validity of hydrodynamics, and find that the hydrodynamic momentum diffusion mode dominates the low energy dynamics in the whole regime of momentum \(j=0,...,10\) for all values of \(a\) which we scanned, see Fig. 6 from which it is obvious that the diffusion mode originating at zero frequency \(\nu=0\) with momentum \(\mathcal{J}=0\) (marked by triangle) always has the smallest imaginary part, i.e. the smallest damping, thus it is the longest lived mode dominating the late time low energy dynamics. In the case of the two sound modes, the behavior is more complicated, as displayed in Fig. 7. However, at least one of the two sound modes is dominating the low energy late time dynamics for all values of \(a\) and \(\mathcal{J}\) which we scanned (very similar to the diffusion mode described above). The other sound mode, at a specific momentum value \(\mathcal{J}\), becomes more damped than the lowest-lying non-hydrodynamic modes, leading to a change in the late-time transport properties of the fluid, as seen also at vanishing rotation. There is a third hydrodynamic mode appearing in Fig. 7 at the triangle marking \(\mathcal{J}=0\), \(\nu=0\), namely the momentum diffusion mode, which mixes with the sound modes at finite values of the horizon radius \(r_{+}\) (moderate to small temperatures). At large values of the momentum, we observe the expected [38; 39] linear scaling of the eigenfrequencies with momentum, see Fig. 3.
One obvious limitation of hydrodynamics when viewed as an expansion into gradients of the hydrodynamic quantities (temperature, fluid velocities) is the radius of convergence of this series expansion. While it is currently impossible to determine the radius of convergence of the hydrodynamic derivative expansion when the microscopic theory is QCD, this task is feasible when the microscopic theory is SYM [40; 41; 42]. Thus, we here review the radius of convergence of the linearized relativistic hydrodynamic expansion around a non-trivially rotating strongly coupled SYM plasma [43]. Our results show that the validity of hydrodynamics is sustained and can even get enhanced in a rapidly rotating plasma, see Fig. 11, and this result holds true for intermediate down to small temperatures, see Fig. 12.
Furthermore, we study the relationship between many-body quantum chaos and energy dynamics in holographic quantum field theory states dual to the simply-spinning MP-AdS5 BH [44]. In the large BH limit we are able to obtain a simple analytic expression for the out-of-time-order-correlator (OTOC) for operator configurations on Hopf circles, and demonstrate that the associated Lyapunov exponent and butterfly velocity are robustly related to the locations of a family of pole-skipping points in the energy response. We demonstrate that the dispersion relations of sound modes in the energy response explicitly pass through our pole-skipping locations.
**Large black hole (BH) limit:** One truly remarkable result is: _Hydrodynamic fluctuations at large temperatures perceive the rotating holographic fluid as if it was a boosted fluid._
On the gravity side, the statement above is verified as follows. In the case of large rotating BHs, the metric simplifies slightly. If one now chooses to probe this slightly simplified rotating background metric only with metric fluctuations which have _small frequency and momentum compared to the horizon radius_, then these fluctuations effectively _see_ a boosted black brane metric.1 This is explicitly seen in the equations of motion obeyed by the metric fluctuations around the rotating background metric, as will be explained in Sec. III.
Footnote 1: We stress, however, that the background metric itself is not that of a boosted black brane.
Previously, in a particular limit (the large BH limit), analytic results for the hydrodynamic dispersion relations and transport coefficients of this holographic rotating plasma as a function of their values in a plasma at rest were given [37]. Such analytic expressions are provided for the two shear viscosities, the longitudinal momentum diffusion coefficient, two speeds of sound, and two sound attenuation coefficients. The shear viscosity to entropy density ratio varies between zero and \(1/(4\pi)\) depending on the direction of the shear. At vanishing rotation, \(a=0\), the longitudinal shear viscosity \(\eta_{\perp,||}\) (along the rotation axis) and the longitudinal momentum diffusion coefficient \(D_{||}\), and the two sound attenuations \(\Gamma_{\pm}\) satisfy the known (Einstein) relations, \(D_{||}\propto\eta_{||}\) and \(\Gamma_{||}\propto\eta_{||}\). At nonzero rotation, \(a\neq 0\), known relations between these coefficients are generalized to include dependence on angular momentum. These previously discussed relations are reviewed here in Sec. III.1.
Now this seems to indicate that the hydrodynamic description of this rotating holographic fluid studied here may be trivial in that it may be equivalent to a boosted fluid. It turns out that this suspicion is _not_ true. In the present article, we analyze to what degree this hydrodynamic behavior of a boosted fluid persists to lower temperatures and especially away from the large black hole limit predominantly considered in [37]. We show that this holographic fluid behaves hydrodynamically, with transport coefficients deviating significantly from those of a boosted fluid, see section III.1.5.
### Gravity Results
With the help of the AdS/CFT correspondence [45] one can study the dissipative dynamics of (confined) systems with a nontrivial amount of angular momentum. This is made possible by the equivalence of the spectrum of quasinormal modes (QNMs) around asymptotically AdS black holes and the poles of retarded Green's functions in the dual field theory [46]. QNMs are the characteristic solutions of Einstein's field equations linearized in metric fluctuations. So, while we report on dispersion relations of hydrodynamic and nonhydrodynamic modes, we simultaneously obtain results for QNMs.
In particular, we consider (4+1)-dimensional rotating asymptotically AdS Myers-Perry black holes (MP BHs), where the horizon induces a temperature and the angular momenta of the BH introduce angular momenta in the dual field theory [23; 31]. Previous work modeled a rotating quark gluon plasma holographically [37; 48; 49; 50; 47].
We provide an overview of the recent progress in the analysis of the linearized field equations of metric fluctuations around (4+1)-dimensional MP BHs, see section II. In the large black hole limit, (52) and (53), these black holes are perturbatively stable against all metric fluctuations [37]. However, near extremal BHs become unstable. This so-called superradiant instability has been known [51; 52; 29; 24]. We report some details at small horizon radius \(r_{+}=\ell\) (with the AdS radius \(\ell\)): the BH becomes unstable at small angular momentum, as superradiance occurs below extremality (i.e. below rotational parameter \(a=\ell\)) at \(a/a_{\rm ext}\sim 0.916\), see Fig. 9.
Reference values for all QNMs in all three channels of fluctuations which we studied for various values of the angular momentum parameter \(a\) of the rotating black hole are recorded in tables in appendix A. Examples of QNMs are visualized in Fig. 4, 6, 7, and shown together for comparison in Fig. 10.
Throughout this article, we point out various similarities between the QNM spectra of the rotating MP BHs studied in the present article and the QNM spectra of charged Reissner-Nordström black branes studied in [34]. Most striking is the occurrence of a set of "purely imaginary QNMs" at nonzero rotation or charge parameter, respectively. These modes at large rotation or charge parameter coalesce and appear to form a branch cut along the negative imaginary frequency axis as the black holes approach extremality, as studied in the lower dimensional AdS4 Reissner-Nordström case [32; 33].
Pole skipping points as well as critical points in the QNM spectra of these MP BHs have been computed and are reported in sections III.2 and III.3, where the significance of both critical and pole skipping points is discussed in detail.
## II The Holographic Setup
In this section we describe the theory which we study, the background metric our work centers on, and the separability of the equations of motion of the metric fluctuations around the background metric. Our discussion of the separability of the field equations follows closely the original work [26].
### Rotating background metric and BH thermodynamics
We are interested in the metric fluctuations around stationary (4+1)-dimensional, asymptotically locally AdS (AlAdS) metrics, which are solutions to the equations of motion obtained from the Einstein-Hilbert action
\[S_{EH}=\frac{1}{16\pi G_{5}}\int\mathrm{d}^{5}x\sqrt{-g}\left(R-2\Lambda \right)+S_{ct}\,. \tag{1}\]
Here, \(\Lambda=-6/\ell^{2}\) denotes the cosmological constant given in terms of the AdS radius, \(G_{5}\) is the five-dimensional Newton's constant, and \(S_{ct}\) is the counterterm action containing the Gibbons-Hawking-York boundary term required for a well-defined variation and other counterterms required to render the boundary action finite. Variation of \(S_{EH}\) leads to the standard source free Einstein equations with cosmological constant
\[R_{\mu\nu}-\frac{1}{2}(R-2\Lambda)g_{\mu\nu}=0\,. \tag{2}\]
Given our focus on the finite temperature rotating states of the dual field theory, we are interested in the most general rotating BH solution with boundary topology \(S^{3}\times\mathbb{R}\) to the Einstein equations.2 The solution is given by the following metric [31; 23]
Footnote 2: For higher dimensional BHs we note that other topologies are possible besides \(S^{3}\times\mathbb{R}^{1,1}\), e.g. black saturns [53] and black rings [54]. We do not consider these more exotic topologies here.
\[ds^{2} =\frac{\left(1+r_{H}^{2}\ell^{-2}\right)}{\rho^{2}r_{H}^{2}} \left(ab\mathrm{d}t_{H}-\frac{b\left(a^{2}+r_{H}^{2}\right)\sin^{2}(\theta_{ H})}{\Xi_{a}}\mathrm{d}\phi_{H}-\frac{a\left(b^{2}+r_{H}^{2}\right)\cos^{2}( \theta_{H})}{\Xi_{b}}\mathrm{d}\psi_{H}\right)^{2}\] \[-\frac{\Delta_{r}}{\rho^{2}}\left(\mathrm{d}t_{H}-\frac{a\sin^{2 }(\theta_{H})}{\Xi_{a}}\mathrm{d}\phi_{H}-\frac{b\cos^{2}(\theta_{H})}{\Xi_{ b}}\mathrm{d}\psi_{H}\right)^{2}+\frac{\rho^{2}}{\Delta_{\theta}}\mathrm{d}\theta_{H}^{2} +\frac{\rho^{2}}{\Delta_{r}}\mathrm{d}r_{H}^{2}\] \[+\frac{\Delta_{\theta}\sin^{2}(\theta_{H})}{\rho^{2}}\left(a \mathrm{d}t_{H}-\frac{a^{2}+r_{H}^{2}}{\Xi_{a}}\mathrm{d}\phi_{H}\right)^{2}+ \frac{\Delta_{\theta}\cos^{2}(\theta_{H})}{\rho^{2}}\left(b\mathrm{d}t_{H}- \frac{b^{2}+r_{H}^{2}}{\Xi_{b}}\mathrm{d}\psi_{H}\right)^{2}\,, \tag{3}\]
with coordinates \((r_{H},t_{H},\theta_{H},\phi_{H},\psi_{H})\) residing in ranges
\[-\infty <t_{H}<\infty\,, \tag{4a}\] \[0 <r_{H}<\infty\,,\] (4b) \[0 <\theta_{H}<\pi/2\,,\] (4c) \[0 <\phi_{H}<2\pi\,,\] (4d) \[0 <\psi_{H}<2\pi\,, \tag{4e}\]
and we define
\[\Delta_{r} =\frac{1}{r_{H}^{2}}(r_{H}^{2}+a^{2})(r_{H}^{2}+b^{2})(\frac{r_{H }^{2}}{\ell^{2}}+1)-2M\,, \tag{5}\] \[\Delta_{\theta} =1-\frac{a^{2}}{\ell^{2}}\cos(\theta_{H})^{2}-\frac{b^{2}}{\ell^{ 2}}\sin(\theta_{H})^{2}\,,\] (6) \[\rho^{2} =r_{H}^{2}+a^{2}\cos(\theta_{H})^{2}+b^{2}\sin(\theta_{H})^{2}\,,\] (7) \[\Xi_{a} =1-\frac{a^{2}}{\ell^{2}}\,,\quad\Xi_{b}=1-\frac{b^{2}}{\ell^{2}}\,. \tag{8}\]
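For concreteness, the outer horizon radius can be found numerically as the largest real root of \(\Delta_{r}=0\). The following minimal Python sketch (an illustration, not part of the original analysis; the parameter values in the example are arbitrary) solves the cubic in \(r_{H}^{2}\) obtained by multiplying \(\Delta_{r}\) by \(r_{H}^{2}\):

```python
import numpy as np

def outer_horizon(M, a, b, ell=1.0):
    """Largest real root r_H^+ of Delta_r = 0 for the MP-AdS metric.

    Multiplying Delta_r by r_H^2 gives a cubic in x = r_H^2:
    (x + a^2)(x + b^2)(x/ell^2 + 1) - 2 M x = 0.
    """
    # polynomial coefficients in x = r_H^2, highest power first
    c3 = 1.0 / ell**2
    c2 = 1.0 + (a**2 + b**2) / ell**2
    c1 = a**2 + b**2 + a**2 * b**2 / ell**2 - 2.0 * M
    c0 = a**2 * b**2
    roots = np.roots([c3, c2, c1, c0])
    real_positive = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
    if not real_positive:
        raise ValueError("no horizon: parameters describe a naked singularity")
    return np.sqrt(max(real_positive))

# example with arbitrary (hypothetical) parameter values
print(outer_horizon(M=10.0, a=0.3, b=0.2))
```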
Although originally discovered in [31], this solution is referred to as the Myers-Perry BH due to the pioneering work of Malcolm Perry and Robert Myers to extend Kerr BHs to higher dimensions [19]3. The solution features two rotational parameters, \((a,b)\), related to the two angular momenta, \((J_{\phi},J_{\psi})\). This can be anticipated from the fact that in (4+1)-dimensions the rotation group is \(SO(4)\) whose double cover is \(SU(2)\times SU(2)\); irreducible representations of the rotation group are thus labeled by two parameters. It is however important to note that the full \(SO(4)\) is not present in the solution, eq. (3), which is only invariant under time translations and the two \(U(1)\) transformations associated with independent rotations in \((\phi_{H},\psi_{H})\).
Footnote 3: It should be noted that their generalized Kerr solutions were computed without a cosmological constant. The generalization of the Kerr solution in (3+1)-dimensions to include AdS-dS boundary conditions was found by Carter [20] and subsequently generalized to all higher dimensions by Gibbons, Lu, Page and Pope [22].
The two angular momenta, \((J_{\phi},J_{\psi})\), are given by [55]
\[J_{\phi}=\frac{4\pi^{2}M}{\kappa^{2}\Xi_{a}^{2}\Xi_{b}}a\,,\qquad J_{\psi}= \frac{4\pi^{2}M}{\kappa^{2}\Xi_{a}\Xi_{b}^{2}}b\,, \tag{9}\]
where \(\kappa^{2}=8\pi G_{5}\). Associated with \((J_{\phi},J_{\psi})\) are the two angular velocities, \((\Omega_{a},\Omega_{b})\), which are non-vanishing throughout the entire spacetime. In particular, at the horizon and at conformal infinity, i.e. the AdS boundary, they take on the values
\[\Omega_{a}|_{hor} =\frac{a-a^{3}/\ell^{2}}{a^{2}+r_{H}^{+2}}\,, \Omega_{b}|_{hor} =\frac{b-b^{3}/\ell^{2}}{b^{2}+r_{H}^{+2}}\,, \tag{10a}\] \[\Omega_{a}|_{bdy} =-a/\ell^{2}\,, \Omega_{b}|_{bdy} =-b/\ell^{2}\,, \tag{10b}\]
where \(r_{H}^{+}\) denotes the radius of the outer horizon defined as the largest real root of the equation \(\Delta_{r}=0\). Defining the relative angular velocities, \((\widetilde{\Omega}_{a},\widetilde{\Omega}_{b})\), by the respective differences [55]:
\[\widetilde{\Omega}_{a}=\Omega_{a}|_{hor}-\Omega_{a}|_{bdy} =\frac{(1+r_{H}^{+2}/\ell^{2})}{(a^{2}+r_{H}^{+2})}a\,, \tag{11a}\] \[\widetilde{\Omega}_{b}=\Omega_{b}|_{hor}-\Omega_{b}|_{bdy} =\frac{(1+r_{H}^{+2}/\ell^{2})}{(b^{2}+r_{H}^{+2})}b\,, \tag{11b}\]
we have the quantities thermodynamically conjugate to \((J_{\phi},J_{\psi})\).
As the metric is independent of the coordinates \((t,\phi,\psi)\), \((\partial_{t},\partial_{\phi},\partial_{\psi})\) are Killing vectors. The combination \(\partial_{t}+\Omega_{a}\,\partial_{\phi}+\Omega_{b}\,\partial_{\psi}\) is tangent to the null generators of the horizon from which one can obtain the Hawking temperature of the horizon (and thus the temperature of the dual field theory) from the surface gravity of the horizon:
\[T=\frac{r_{H}^{+4}\left(a^{2}/\ell^{2}+b^{2}/\ell^{2}+1\right)-a^{2}b^{2}+2r_{ H}^{+6}/\ell^{2}}{2\pi r_{H}^{+}\left(a^{2}+r_{H}^{+2}\right)\left(b^{2}+r_{H}^{ +2}\right)}\,. \tag{12}\]
One can notice that there exist extremal solutions for which the temperature vanishes; an explicit relation between the horizon radius and the rotational parameters at which this occurs will be given in section II.2.
As stated in the beginning of this section, the geometry described by the metric in eq. (3) is AlAdS, i.e. a geometry which does not approach AdS exactly at the boundary but rather a general conformal structure. The first law of BH thermodynamics for AlAdS spacetimes and the definition of conserved holographic charges can be found in [55], along with the analysis for MP BHs in (4+1)-dimensions. The entropy which appears in this first law is given by the Bekenstein-Hawking formula \(S=A/(4G_{5})\), where \(A\) is the area of the event horizon (as given by the integral over the square root of the spatial determinant of the metric evaluated at the event horizon):
\[A=\frac{2\pi^{2}\left(a^{2}+r_{H}^{+2}\right)\left(b^{2}+r_{H}^{+2}\right)}{ \Xi_{a}\Xi_{b}r_{H}^{+}}\,. \tag{13}\]
As usual, the entropy density is obtained from the entropy by dividing out the volume. The relevant volume here is the spatial volume of the boundary metric, \(V_{bdy}=2\pi^{2}\ell^{3}\), and thus the entropy density of the dual field theory is
\[s=\frac{S}{V_{bdy}}=\frac{A/(4G_{5})}{V_{bdy}}=\frac{\ell\left(a^{2}+r_{H}^{+2}\right)\left(b^{2}+r_{H}^{+2}\right)}{4G_{5}\,r_{H}^{+}(\ell^{2}-a^{2})(\ell^{2}-b^{2})}\,. \tag{14}\]
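The thermodynamic quantities above are straightforward to evaluate; a short sketch (not from the source, with arbitrary example values of \(r_{H}^{+},a,b\)) collecting eqs. (12)-(14):

```python
import numpy as np

def thermodynamics(rp, a, b, ell=1.0, G5=1.0):
    """Hawking temperature, horizon area and boundary entropy density,
    eqs. (12)-(14), for the doubly spinning MP-AdS BH (rp = outer horizon radius)."""
    T = (rp**4 * (a**2 / ell**2 + b**2 / ell**2 + 1) - a**2 * b**2
         + 2 * rp**6 / ell**2) / (2 * np.pi * rp * (a**2 + rp**2) * (b**2 + rp**2))
    Xi_a, Xi_b = 1 - a**2 / ell**2, 1 - b**2 / ell**2
    A = 2 * np.pi**2 * (a**2 + rp**2) * (b**2 + rp**2) / (Xi_a * Xi_b * rp)
    V_bdy = 2 * np.pi**2 * ell**3          # spatial volume of the boundary metric
    s = A / (4 * G5) / V_bdy               # entropy density of the dual field theory
    return T, A, s

print(thermodynamics(rp=10.0, a=0.3, b=0.2))
```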
### Linearized metric fluctuation equations and separability
As stated in the introduction, our goal is to provide an overview of the recent progress in the analysis of the field equations in (4+1)-dimensional MP BH. To that end we consider perturbations of the metric in eq. (3) as was done in [24; 26; 37; 43; 44]:
\[g^{p}_{\mu\nu}dx^{\mu}dx^{\nu}=\left(g_{\mu\nu}+\epsilon\ h_{\mu\nu}+O( \epsilon^{2})\right)dx^{\mu}dx^{\nu}\,, \tag{15}\]
where \(g\) is the metric satisfying the Einstein equations (at order \(\epsilon^{0}\)). Expanding the Einstein equations to leading order in \(\epsilon\), the perturbation \(h_{\mu\nu}\), is required to satisfy [56]
\[-\frac{1}{2}\nabla_{\mu}\nabla_{\nu}h-\frac{1}{2}\nabla^{\lambda}\nabla_{ \lambda}h_{\mu\nu}+\nabla^{\lambda}\nabla_{(\mu}h_{\nu)\lambda}=\frac{2\Lambda }{D-2}h_{\mu\nu}\,, \tag{16}\]
where \(h=h_{\mu}{}^{\mu}=h_{\nu\mu}g^{\mu\nu}\), and the covariant derivatives in eq. (16) are constructed entirely from the background metric \(g_{\mu\nu}\). Although the metric in eq. (3) may not be the most complicated solution to the Einstein equations, it is not immediately clear if or when field equations in this geometry are separable. It is typically the case that separability is intimately tied to symmetry, for instance the field equations in a highly symmetric Schwarzschild geometry may be easily separated by considering an expansion in spherical harmonics. Furthermore, for a clever choice of field variables, fields carrying a spin under the little group may be decoupled into invariant sectors. While this may be expected in highly symmetric spacetimes, it is obviously not guaranteed in general. For the (4+1)-dimensional AlAdS MP BH we consider, separability of the field equations of a propagating spin 2 field was demonstrated in [26] for the case of equal angular momenta.4 We will follow this technique of decomposition of the field equations in what follows.
Footnote 4: Separability of the field equations for dimensions larger than six was demonstrated in [57] for equal angular momenta.
The metric in eq. (3) simplifies in two special cases:
\[\text{(i)}\quad J_{\phi}=J_{\psi}\,,\qquad\text{(ii)}\quad J_{\psi}=0\,. \tag{17}\]
In this work, we focus on case (i) for which the MP BH is referred to as simply spinning. Imposing \(J_{\phi}=J_{\psi}\) (and thus \(a=b\)) leads to a symmetry enhancement of the metric crucial to the following analysis. To see this, first transform to a new set of coordinates:
\[\theta =2\theta_{H}\,,\ \phi=\psi_{H}-\phi_{H}\,,\ \psi=2a\ell^{-2}t_{H}+ \psi_{H}+\phi_{H}, \tag{18a}\] \[t =-t_{H}\,,\ r=\sqrt{\frac{a^{2}+r_{H}^{2}}{1-a^{2}\ell^{-2}}}\,, \ M=\mu\left(1-a^{2}\ell^{-2}\right)^{3}\ \, \tag{18b}\]
and we have defined the effective mass parameter \(\mu\). It is important to note that the coordinate ranges are now given by
\[-\infty <t<\infty\,, \tag{19a}\] \[0 <r<\infty\,,\] (19b) \[0 <\theta<\pi\,,\] (19c) \[0 <\phi<2\pi\,,\] (19d) \[0 <\psi<4\pi\,. \tag{19e}\]
Under the transformation of eq. (18) the metric is brought to the form first described in [24; 26]
\[ds^{2}=\frac{dr^{2}}{G(r)}-dt^{2}\left(\frac{r^{2}}{\ell^{2}}+1\right)+\frac{ 1}{4}r^{2}\left((\sigma^{1})^{2}+(\sigma^{2})^{2}+(\sigma^{3})^{2}\right)+ \frac{2\mu\left(\frac{a\sigma^{3}}{2}+dt\right)^{2}}{r^{2}}\,, \tag{20}\]
with the blackening factor given by
\[G(r)=-\frac{2\mu\left(1-a^{2}\ell^{-2}\right)}{r^{2}}+\frac{2a^{2}\mu}{r^{4}} +\frac{r^{2}}{\ell^{2}}+1\,, \tag{21}\]
and we have introduced the one-forms \(\sigma^{a}\), \(a=1,2,3\), defined by
\[\sigma^{1} =d\phi\sin(\theta)\cos(\psi)-d\theta\sin(\psi)\,, \tag{22}\] \[\sigma^{2} =d\theta\cos(\psi)+d\phi\sin(\theta)\sin(\psi)\,,\] (23) \[\sigma^{3} =d\psi+d\phi\cos(\theta)\,. \tag{24}\]
The location of the event horizon in this set of coordinates corresponds to the largest positive real root of the polynomial equation \(G(r_{+})=0\). Expanding this equation one can define a relation between the horizon radius \(r_{+}\) and the effective mass \(\mu\):
\[\mu=\frac{r_{+}^{4}\left(\ell^{2}+r_{+}^{2}\right)}{2\ell^{2}r_{+}^{2}-2a^{2} \left(\ell^{2}+r_{+}^{2}\right)}\,. \tag{25}\]
In terms of the redefined parameters and coordinates 5 the Hawking temperature, in particular, is given by
Footnote 5: The thermodynamic data displayed in previous sections are in the Boyer-Lindquist coordinate system used in [23; 31].
\[T=\frac{r_{+}^{2}\ell^{2}\left(2r_{+}^{2}+\ell^{2}\right)-2a^{2}\left(r_{+}^{ 2}+\ell^{2}\right)^{2}}{2\pi r_{+}^{2}\ell^{3}\sqrt{r_{+}^{2}\ell^{2}-a^{2} \left(r_{+}^{2}+\ell^{2}\right)}}\,, \tag{26}\]
and the entropy density of the dual field theory has the expression
\[s=\frac{r_{+}^{4}}{4G_{5}\ell^{2}\sqrt{r_{+}^{2}\ell^{2}-a^{2}\left(r_{+}^{2}+ \ell^{2}\right)}}\,. \tag{27}\]
The extremal value of the rotational parameter, \(a_{ext}\), at which the temperature vanishes is given by
\[a_{\text{ext}}=\pm\frac{r_{+}^{2}}{1+r_{+}^{2}/\ell^{2}}\sqrt{\frac{1}{2r_{+} ^{2}}+\frac{1}{\ell^{2}}}\,. \tag{28}\]
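As a quick numerical consistency check (a sketch, not part of the original analysis), one can verify that the temperature (26) indeed vanishes when the rotational parameter takes the extremal value (28):

```python
import numpy as np

def temperature(rp, a, ell=1.0):
    """Hawking temperature, eq. (26), of the simply spinning (a = b) MP-AdS BH."""
    num = rp**2 * ell**2 * (2 * rp**2 + ell**2) - 2 * a**2 * (rp**2 + ell**2)**2
    den = 2 * np.pi * rp**2 * ell**3 * np.sqrt(rp**2 * ell**2 - a**2 * (rp**2 + ell**2))
    return num / den

def a_ext(rp, ell=1.0):
    """Extremal rotation parameter, eq. (28) (positive branch)."""
    return rp**2 / (1 + rp**2 / ell**2) * np.sqrt(1 / (2 * rp**2) + 1 / ell**2)

rp = 1.0
print(a_ext(rp))                      # ~ 0.612 for rp = ell = 1
print(temperature(rp, a_ext(rp)))     # ~ 0 up to round-off
```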
For \(a=b\), one can see from eq. (9) that the two angular momenta coincide and thus so do the two associated angular velocities, \(\Omega_{a}=\Omega_{b}=\Omega\). In particular, in the new coordinates we have
\[\Omega|_{hor}=\frac{a}{r_{+}^{2}}\,,\qquad\Omega|_{bdy}=-\frac{a}{\ell^{2}}\,, \tag{29}\]
and the thermodynamically relevant relative angular velocity is given by 6
Footnote 6: The thermodynamic value of the angular velocity is mistakenly given in place of the horizon value on pg. 7 of [37]. The same mistake is made in [44] on pg. 4, including \(\Omega^{\prime}=2\Omega\), as alluded to below eq. (31).
\[\widetilde{\Omega}=\Omega|_{hor}-\Omega|_{bdy}=a\left(\frac{1}{r_{+}^{2}}+ \frac{1}{\ell^{2}}\right)\,. \tag{30}\]
The angular momentum thermodynamically conjugate to \(\widetilde{\Omega}\) has the expression
\[J=\frac{4a\pi^{2}\mu}{\kappa^{2}}\,. \tag{31}\]
Note that the form of the first law now contains identical factors related to the contribution from the rotation, viz. \(\sum_{i}J_{i}\widetilde{\Omega}_{i}=2J\widetilde{\Omega}\). Different conventions have been used regarding the factor of two: some leave it in place while others absorb it into \(J^{\prime}=2J\)[43] or \(\Omega^{\prime}=2\Omega\)[44]. We choose here to leave the factor of two alone.
With the thermodynamic information associated with the metric in these new coordinates recounted, we now turn to the symmetries of the metric. We begin our discussion with the form basis \(\sigma^{a}\) introduced
in eq. (22). These forms are the left-invariant 1-forms of the group \(SU(2)\).7 Dual to the left-invariant 1-forms are the left-invariant vector fields \(\sigma_{a}\) (which can be expanded in a basis \(E_{a}\) of the tangent space): 8
Footnote 7: We recall here some basic results of the theory of Lie groups and their algebras (see for instance [58] or the classic text [59]; in addition, [60] covers explicitly the \(SU(2)\) case). Let \(E^{a}\) (\(E_{a}\)) be a basis of the cotangent (tangent) space at the unit element of a group \(G\); the left-invariant 1-forms on \(G\) are generated by \(E^{a}\). Given a parameterization of a generic element of the group, \(A\), the left-invariant 1-forms (also known as the Maurer-Cartan form) can be found from \(A^{-1}\mathrm{d}A=\sigma^{a}E_{a}\); the right-invariant 1-forms, \(\xi^{a}\), are likewise given by \(\mathrm{d}A\,A^{-1}=\xi^{a}E_{a}\). In the case of \(SU(2)\) of interest here, the basis we choose to work in is the standard one for the algebra \(su(2)\): \(E_{a}=-\frac{1}{2}i\tau_{a}\), where \(\tau_{a}\) are the Pauli matrices. Parameterize \(A\in SU(2)\) as

\[A=\begin{pmatrix}z&-w^{*}\\ w&z^{*}\end{pmatrix}\,,\qquad z=\cos(\theta/2)\,e^{-\frac{i}{2}(\psi+\phi)}\,,\quad w=\sin(\theta/2)\,e^{-\frac{i}{2}(\psi-\phi)}\,,\]

where \((\theta,\psi,\phi)\) range as in eq. (19). In the standard basis \(E_{a}=-\frac{1}{2}i\tau_{a}\) one would get \(\sigma^{1}=-d\phi\sin(\theta)\cos(\psi)+d\theta\sin(\psi)\) and a corresponding opposite sign for \(\sigma_{1}\). Since a constant multiple of a left-invariant field is still a left-invariant field and \(\sigma^{1}\) only appears in the metric as \((\sigma^{1})^{2}\), we redefine these as displayed in the text.

Footnote 8: Note that Refs. [24; 25; 26; 37; 34; 43] make the non-standard choice of labelling the vector fields dual to \(\sigma^{a}\) with a different symbol, \(e\), rather than \(\sigma_{a}\) with the index lowered. In order to clear the notational mine-field surrounding the discussion of separability in MP BHs, we change to the standard notation in this work.

Footnote 9: Naively computing the right-invariant forms and vector fields as described in footnote 7, one would find that \(\xi_{1}\) and \(\xi_{2}\) are interchanged, along with the 1-forms \(\xi^{1}\) and \(\xi^{2}\). We rearrange them as displayed in the text.
The Wigner-D functions, expanded in the coordinate basis as \(D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\psi,\phi,\theta)\), are mutually orthogonal for different \(\mathcal{J}\) when integrated over the volume of \(SU(2)\) defined in eq. (19),
\[\int_{0}^{4\pi}\mathrm{d}\psi\int_{0}^{\pi}\mathrm{d}\theta\sin(\theta)\int_{0}^{2\pi}\mathrm{d}\phi\,D^{\mathcal{J}_{2}\,\ast}_{\mathcal{K}_{2}\mathcal{M}_{2}}(\psi,\phi,\theta)\,D^{\mathcal{J}_{1}}_{\mathcal{K}_{1}\mathcal{M}_{1}}(\psi,\phi,\theta)=\frac{16\pi^{2}}{2\mathcal{J}_{1}+1}\delta_{\mathcal{J}_{1}\mathcal{J}_{2}}\delta_{\mathcal{M}_{1}\mathcal{M}_{2}}\delta_{\mathcal{K}_{1}\mathcal{K}_{2}} \tag{39}\]
and form a complete set of eigenfunctions
\[\frac{1}{\sin(\theta_{1})}\delta(\tilde{\psi}-\tilde{\psi}_{1}) \delta(\phi-\phi_{1})\delta(\theta-\theta_{1})\] \[=\sum_{\mathcal{J}=0,1/2,1,\dots}^{\infty}\sum_{\mathcal{K}=- \mathcal{J}}^{\mathcal{J}}\sum_{\mathcal{M}=-\mathcal{J}}^{\mathcal{J}}\frac {2\mathcal{J}+1}{16\pi^{2}}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\tilde{ \psi},\phi,\theta)D^{\mathcal{J}\ast}_{\mathcal{K}\mathcal{M}}(\tilde{\psi}_{ 1},\phi_{1},\theta_{1})\,. \tag{40}\]
on \(S^{3}\) (see for instance [61]).
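As an illustration, the orthogonality relation (39) can be checked numerically for the lowest nontrivial case \(\mathcal{J}=1/2\). The sketch below (not from the source) uses one standard Euler-angle convention, \(D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\psi,\phi,\theta)=e^{-i\mathcal{K}\psi}d^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\theta)e^{-i\mathcal{M}\phi}\), which may differ from the convention used here by phases that drop out of the norm:

```python
import numpy as np

def D_half(K, M, psi, phi, theta):
    """Wigner-D for J = 1/2 in one standard Euler-angle convention (assumption):
    D^J_KM(psi, phi, theta) = exp(-i K psi) d^J_KM(theta) exp(-i M phi)."""
    i1, i2 = int(1 / 2 - K), int(1 / 2 - M)       # map K, M = +1/2, -1/2 to 0, 1
    d = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                  [np.sin(theta / 2),  np.cos(theta / 2)]])
    return np.exp(-1j * K * psi) * d[i1, i2] * np.exp(-1j * M * phi)

# Riemann-sum version of eq. (39) over the SU(2) volume, ranges as in eq. (19)
n = 60
psi = np.linspace(0, 4 * np.pi, n, endpoint=False)
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
theta = np.linspace(0, np.pi, n, endpoint=False)
dV = (4 * np.pi / n) * (2 * np.pi / n) * (np.pi / n)
P, F, TH = np.meshgrid(psi, phi, theta, indexing="ij")

def overlap(K2, M2, K1, M1):
    integrand = np.conj(D_half(K2, M2, P, F, TH)) * D_half(K1, M1, P, F, TH) * np.sin(TH)
    return integrand.sum() * dV

print(overlap(0.5, 0.5, 0.5, 0.5) / (8 * np.pi**2))   # ~ 1  (= 16 pi^2/(2J+1) for J = 1/2)
print(abs(overlap(0.5, 0.5, -0.5, 0.5)))              # ~ 0
```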
These commutation relations, eq. (35), and eigenvalue equations, eq. (36), are immediately familiar since the Wigner-D functions are precisely the matrix elements of the rotation operator \(\mathscr{D}(R)\) of ordinary, non-relativistic quantum mechanics [62]
\[\mathscr{D}^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}=\langle\mathcal{J}\mathcal{K}|\,e^{-i\,\boldsymbol{\mathcal{J}}\cdot\hat{n}\,\theta/\hbar}\,|\mathcal{J}\mathcal{M}\rangle\,. \tag{41}\]
For this reason, we refer to \(\mathcal{J}\) as the total angular momentum. Also, familiar from ordinary quantum mechanics is the ability to raise and lower the eigenvalue of \(L_{3}\) and \(W_{3}\) by the action of ladder operators which, for the eigenvalue of \(W_{3}\), can be defined as \(W_{\pm}:=W_{1}\pm iW_{2}\). Using (36), we can express the action of \(W_{\pm}\) and \(W_{3}\) on \(D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}\) as
\[W_{+}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =i\sqrt{(\mathcal{J}+\mathcal{K})(\mathcal{J}-\mathcal{K}+1)}\,D^{\mathcal{J}}_{\mathcal{K}-1\,\mathcal{M}} \tag{42}\] \[W_{-}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =-i\sqrt{(\mathcal{J}-\mathcal{K})(\mathcal{J}+\mathcal{K}+1)}\,D^{\mathcal{J}}_{\mathcal{K}+1\,\mathcal{M}}\] (43) \[W_{3}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =\mathcal{K}\,D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}\,. \tag{44}\]
These ladder operators take a particularly useful form when rewritten in terms of the partial derivatives 10, \(\partial_{\pm}=\sigma_{\pm}{}^{a}\partial_{a}\) and \(\partial_{3}=\sigma_{3}{}^{a}\partial_{a}\)
Footnote 10: Note again here \(a=1,2,3\) and \(\partial_{a}=\partial/\partial Y^{a}\) where as before \(Y^{a}\) is the angular set of coordinates \(Y^{a}=(\theta,\phi,\psi)\)
\[\partial_{+}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =\sqrt{(\mathcal{J}+\mathcal{K})(\mathcal{J}-\mathcal{K}+1)}\,D^{\mathcal{J}}_{\mathcal{K}-1\,\mathcal{M}} \tag{45}\] \[\partial_{-}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =-\sqrt{(\mathcal{J}-\mathcal{K})(\mathcal{J}+\mathcal{K}+1)}\,D^{\mathcal{J}}_{\mathcal{K}+1\,\mathcal{M}}\] (46) \[\partial_{3}D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}} =-i\mathcal{K}\,D^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}\,. \tag{47}\]
Finally, given that our goal is to use the expressions detailed above to separate the equations of motion of the metric fluctuations it will be advantageous to rephrase the metric in terms of the \(\pm\) basis where \(\sigma^{\pm}=\frac{1}{2}\left(\sigma^{1}\mp i\sigma^{2}\right)\) and \(\sigma_{\pm}=\sigma_{1}\pm i\sigma_{2}\). In this basis, the metric in eq. (20) now takes the form
\[ds^{2}= -\left(1+\frac{r^{2}}{\ell^{2}}\right)dt^{2}+\frac{dr^{2}}{G(r)}+ \frac{r^{2}}{4}\left(4\sigma^{+}\sigma^{-}+(\sigma^{3})^{2}\right)+\frac{2\mu} {r^{2}}\left(dt+\frac{a}{2}\sigma^{3}\right)^{2}\,. \tag{48}\]
### General ansatz for spin 2 fields
We are now in the position to describe the general ansatz for the fluctuations. As originally discussed in [26], the ansatz consists of two parts: a decomposition of the coordinate dependence of \(h_{\mu\nu}\) in terms of a suitable basis of eigenfunctions, and a compensating choice of frame/basis, i.e. a decomposition of the fluctuation in the invariant basis \(h_{\mu\nu}=h_{ij}\overline{\sigma}^{i}_{\mu}\overline{\sigma}^{j}_{\nu}\). Here we have introduced \(\overline{\sigma}^{a}\) as an extension of the frame basis 11 defined in eq. (22) with \(\overline{\sigma}^{i}=(\mathrm{d}t,\sigma^{+},\sigma^{-},\sigma^{3},\mathrm{d}r)\). However, when constructing the decomposition it is important to note that \(\sigma^{b}\) are charged under the operator \(W_{3}\),
Footnote 11: In these expressions Latin symbols near the beginning of the alphabet, \(a,b,c\) take values \(1,2,3\) over the angular coordinates, Latin symbols near the middle of the alphabet \(i,j,k\) take the values \(t,1,2,3,r\).
\[\mathcal{L}_{W_{3}}\sigma^{\pm}=\pm\sigma^{\pm}\,,\qquad\mathcal{L}_{W_{3}} \sigma^{3}=0\,. \tag{49}\]
Hence in constructing the ansatz it is important to compensate for the additional charge of the basis under \(W_{3}\). This can be written succinctly by introducing the function \(Q\), defined as
\[Q(\overline{\sigma}^{i})=\left\{\begin{array}{cc}0&i=r,t,3\\ 1&i=+\\ -1&i=-\end{array}\right. \tag{50}\]
This auxiliary function \(Q\) gives the charge of the form basis under the operator \(W_{3}\). With it, we write the ansatz for the fluctuations as
\[h_{\mu\nu}=\int d\omega\,e^{-i\omega t}\sum_{\mathcal{J}=0}\sum_{\mathcal{M}=-\mathcal{J}}^{\mathcal{J}}\sum_{\mathcal{K}^{\prime}=-(\mathcal{J}+2)}^{\mathcal{J}+2}h_{ij}(r,\omega,\mathcal{J},\mathcal{M},\mathcal{K}^{\prime})\,\overline{\sigma}^{i}_{\mu}\overline{\sigma}^{j}_{\nu}\,D^{\mathcal{J}}_{\mathcal{K}^{\prime}-Q(\overline{\sigma}^{i})-Q(\overline{\sigma}^{j})\,\mathcal{M}} \tag{51}\]
One can choose a particular mode since the modes of different \((\mathcal{J},\mathcal{K}^{\prime})\) decouple.
Note, we choose to work in the radial gauge \(h_{r\mu}=0\), in order to find solutions to (16) in what follows.
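To make the bookkeeping behind eq. (51) concrete: each component \(h_{ij}\) is attached to the Wigner-D function with label \(\mathcal{K}^{\prime}-Q(\overline{\sigma}^{i})-Q(\overline{\sigma}^{j})\), so grouping the components by their total charge \(Q(\overline{\sigma}^{i})+Q(\overline{\sigma}^{j})\) reproduces which components newly enter the tensor (\(\mathcal{K}=\mathcal{J}\)), vector (\(\mathcal{K}=\mathcal{J}-1\)) and scalar (\(\mathcal{K}=\mathcal{J}-2\)) sectors discussed in Sec. III. A small illustration (not code from the source):

```python
from itertools import combinations_with_replacement

# W_3 charge of the extended frame basis, eq. (50); h_{r mu} = 0 in radial gauge
Q = {"t": 0, "+": +1, "-": -1, "3": 0}

# total charge Q(i) + Q(j) of each metric component h_ij
charges = {"h_" + i + j: Q[i] + Q[j]
           for i, j in combinations_with_replacement("t+3-", 2)}

# charge +2: the tensor component h_++; charge +1: components newly entering the
# vector sector; charge 0: components newly entering the scalar sector;
# negative charges are the parity partners
for q in (2, 1, 0, -1, -2):
    print(q, [name for name, c in charges.items() if c == q])
```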
## III Results
In this section we collect and summarize the salient results from the recent studies of quasinormal modes (QNMs) of metric fluctuations in the MP BH geometry [43; 37; 44]. These QNM results are holographically dual to the dispersion relations of hydrodynamic and nonhydrodynamic modes in the dual fluid. Technical details of the numerical methods developed and used are gathered in the appendices.
### QNMs dual to hydrodynamic and non-hydrodynamic eigenmodes
QNMs are eigenmodes of non-Hermitian operators and are in general complex-valued. In the context of linear perturbations in general relativity, the Hermiticity of the linear operator given in eq. (16) is lost due to the ingoing boundary conditions, as modes may fall into the BH but cannot come out. We provide here an overview of the QNM spectrum of perturbations whose basis consists of eigenfunctions whose eigenvalues under the \(W_{3}\) operator are \(\mathcal{K}=\mathcal{J}\), \(\mathcal{K}=\mathcal{J}-1\), or \(\mathcal{K}=\mathcal{J}-2\), as originally analyzed in [37].
Unique to this set of perturbations is a limiting case whose spectrum is that of a boosted Schwarzschild black brane. Reminiscent of the classification of metric fluctuations under rotations in the Schwarzschild black brane, these "sectors" are referred to as tensor, vector, and scalar sectors respectively. Given that there is a large parameter space one can explore, we limit our discussion to cases where the background spacetime configuration is away from the Hawking-Page phase transition which is dual to the deconfinement transition [63], and we choose the outer horizon radius such that \(T\ell>1/2\). Below we work in units where the AdS radius is set to unity, \(\ell=1\), and the QNM frequency is normalized by the outer horizon radius so that \(\nu=\omega/(2r_{+})\).
#### iii.1.1 Previous QNM results for large BHs
In the present article, QNMs of rotating MP BHs with small as well as large horizon radius are examined. A previous analysis of these QNMs was focused on the large temperature or large BH limit [37]. It is so called because the dual field theory temperature, \(T\), is proportional to the horizon radius, \(r_{+}\), which was taken large. This limit was taken directly on the level of the equations of motion, eq. (16), and is defined by
\[r\to\alpha r\,,\qquad r_{+}\to\alpha r_{+}\,,\qquad\alpha\to\infty\,, \tag{52}\]
with
\[\omega\to\alpha 2\nu r_{+}/\ell\,,\qquad\mathcal{J}\to\alpha jr_{+}/\ell\,, \qquad\alpha\to\infty\,, \tag{53}\]
where only terms leading order in \(\alpha\) are kept in the fluctuation equations. QNM frequencies are reported in terms of the dimensionless frequency, \(\nu\), which is a function of the dimensionless angular momentum parameter, \(j\).
Since \(\omega\) and \(\mathcal{J}\) scale with \(r_{+}\), the fluctuation frequency \(\nu\) and angular momentum \(j\) are necessarily small at any finite values of \(\omega\) and \(\mathcal{J}\) as \(r_{+}\) is taken large. In this limit, instead of decoupling according to \((\mathcal{J},\mathcal{K})\), modes decouple according to the \(\mathcal{K}\) charge of the \(\sigma\) basis, e.g. the \(\mathcal{K}\) charge of \(h_{++}\to 2\) and \(h_{3-}\to-1\). Up to parity, this results in three sectors: tensor, scalar, and vector.12 Below we reproduce figures from [37] that show the dispersion relations of hydrodynamic and low-lying non-hydrodynamic QNMs in the vector and scalar sector in the large BH limit.
Footnote 12: In [37] there was a mistake in obtaining the fluctuation equations for the boosted black brane, which is referred to as case (iii) in [37]. Thus, in [37] it was not recognized that the metric fluctuation equations in the large BH limit (52) and (53) are identical to those around a boosted black brane. Already in [43] it was realized that the large BH limit leads to this identification. We stress, however, that the metric in this limit is _not_ that of a boosted black brane, but that of a nontrivially rotating BH.
Note the notation used in [37] is slightly different: in Fig. 1 and 2, \(a\equiv a/\ell\) is implicit as \(\ell=1\).
Figure 1: Vector sector dispersion relations. Displayed here are the real and imaginary parts of quasinormal frequencies of \(h_{3+}\) and \(h_{t+}\) for large BHs in case (ii) as a function of \(j\). Modes displayed here were found with precision of at least \(d\nu=\nu 10^{-5}\). Missing modes did not converge to sufficient precision and have been filtered out by the numerical routine.
Figure 2: Scalar QNMs of the simply spinning BH in the large BH limit. There are two hydrodynamic modes, which we identify as sound modes; at \(a/\ell=0\) their real parts mirror each other and both have the same imaginary part. At \(a/\ell\neq 0\) these two modes behave differently. Modes displayed here were found with precision of at least \(d\nu=\nu 10^{-5}\). Missing modes did not converge to sufficient precision and have been filtered out by the numerical routine.
Note also that there is a sign flip in the values of \(a\) used in [37], so the real part of the QNMs found there has the opposite sign when compared to the results in this work below.
To further study the role of the angular momentum parameter \(j\), QNMs for large \(j\) were calculated in [37]. In the _eikonal limit_, which is defined by \(j\to\infty\), a general analytic expectation for the behavior of QNM frequencies was known [38; 39]. Reproduced in Fig. 3 is the figure from [37] that shows the large BH limit tensor sector result confirming the known scaling in [38; 39].13 As \(j\) is equivalent to the linear momentum \(k\)[43], the quantity \(c(k)\to c(j)\) defined in [38; 39] satisfies the predicted behavior at large values of \(j\). The numerical results in [37] imply the following large-\(j\) expansion for the \(n\)th mode: \(2\nu=2j+c_{n}e^{-i(\pi/3)}(2j)^{-1/3}+O((2j)^{-1})\), where the conjugate frequency \(-2\nu^{*}=-2j+c_{n}e^{-i(2\pi/3)}(2j)^{-1/3}\) is also included in our analysis, and \(c_{n}\) are real numbers in agreement with [39].14 This observation further confirmed that the angular momentum \(j\) is associated with the linear momentum \(q\) from [64]. As we now know, in the large BH limit the two are related by \(j=q/(2\pi T)\).
Footnote 13: Again, \(a\equiv a/\ell\) as \(\ell=1\) is implicit here.
Footnote 14: In order to relate to [39], the following identifications are needed: \(\omega_{F}\equiv 2\nu\) and \(q_{F}\equiv 2j\) with subscript “F” indicating quantities in [39].
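A minimal sketch of how the coefficient \(c_{n}\) can be extracted from QNM data in this eikonal regime (the data below is synthetic, generated from the expansion itself purely for illustration; it is not data from [37]):

```python
import numpy as np

def eikonal_coefficient(j, nu):
    """Extract c_n from the large-j expansion 2 nu ~ 2 j + c_n exp(-i pi/3) (2 j)^(-1/3).

    j, nu: arrays of dimensionless momenta and QNM frequencies of one mode;
    the result should approach a real constant as j grows if the expansion holds."""
    j, nu = np.asarray(j, dtype=complex), np.asarray(nu, dtype=complex)
    return (2 * nu - 2 * j) * (2 * j)**(1 / 3) * np.exp(1j * np.pi / 3)

# synthetic check: data generated from the expansion itself with c_n = 3.7 (hypothetical)
j = np.linspace(50, 100, 6)
nu = j + 0.5 * 3.7 * np.exp(-1j * np.pi / 3) * (2 * j)**(-1 / 3)
print(eikonal_coefficient(j, nu))   # -> 3.7 for this synthetic data
```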
Summarizing the remaining results from [37]: there, mostly the hydrodynamic behavior of large simply spinning five-dimensional AdS BHs was analyzed in the large BH limit (52) and (53). These are holographically dual to spinning quantum fluids. Due to the spatial anisotropy introduced by the angular momentum, hydrodynamic transport coefficients are split into groups longitudinal or transverse to the angular momentum, and aligned or anti-aligned with it. Analytic expressions are provided for the two shear viscosities, \(\eta_{||}\) and \(\eta_{\perp}\), the longitudinal momentum diffusion coefficient, \(D_{||}\), two speeds of sound, \(v_{\pm}\), and two sound attenuation coefficients, \(\Gamma_{\pm}\), entering the momentum diffusion dispersion relation \(\nu=v_{||}j-i\mathcal{D}_{||}j^{2}\) and the sound dispersion relation \(\nu=v_{s,\pm}\,j-i\Gamma_{s,\pm}\,j^{2}\)[37]:
\[\eta_{\perp}(a) = \eta_{0}\frac{1}{\sqrt{1-a^{2}}}\,, \tag{54}\] \[\eta_{||}(a) = \eta_{0}\sqrt{1-a^{2}}\,,\] (55) \[v_{||}(a) = a\,,\] (56) \[\mathcal{D}_{||}(a) = (2\pi T)\,\mathcal{D}\,(1-a^{2})^{3/2}=\frac{1}{2}(1-a^{2})^{3/2}\,,\] (57) \[v_{\pm}(a) = v_{0}\frac{\sqrt{3}a\pm 1}{1\pm\frac{a}{\sqrt{3}}}\,,\] (58) \[\Gamma_{\pm}(a) = \Gamma_{0}\frac{\left(1-a^{2}\right)^{3/2}}{\left(1\pm\frac{a}{ \sqrt{3}}\right)^{3}}\,, \tag{59}\]
Figure 3: Large \(j\) (eikonal limit) analysis. Shown here are the quantities \(\text{Abs}(c)\) and \(\text{Arg}(c)/(-\pi/3)\). The dashed lines are the expected values according to [38; 39]. For large \(j\), the values at \(a/\ell=0\) (circles) and \(a/\ell=0.5\) (triangles) in the right plot converge to the predicted ones. In the left plot, the same is true for the case \(a/\ell=0\) (circles). But for \(a/\ell\neq 0\) (triangles), the large \(j\) asymptotes change depending on the value of \(a/\ell\). These plots are generated from data computed in the large BH limit.
where for the holographic (\(\mathcal{N}=4\) SYM) fluid the quantities at vanishing angular momentum, \(a=0\), are given by: \(\eta_{0}=N^{2}\pi T_{0}^{3}/8\), conformal speed of sound \(v_{0}=1/\sqrt{3}\), and conformal sound attenuation \(\Gamma_{0}=1/3\). Known relations between these coefficients are generalized to include dependence on angular momentum:
\[\mathcal{D}_{||}(a) = 2\pi T_{0}\frac{\eta_{||}(a)}{\epsilon(a)+P_{\perp}(a)}\,,\quad \text{(Einstein relation)} \tag{60}\] \[\Gamma_{\pm}(a) = \frac{2\eta_{||}(a)}{3(\epsilon(a)+P_{\perp}(a))}\frac{1}{(1\pm a /\sqrt{3})^{3}}\,. \tag{61}\]
The shear viscosity to entropy density ratio varies between zero and \(1/(4\pi)\), depending on the direction of the shear: it ranges from \(\frac{\eta_{\perp}}{s}=\frac{1}{4\pi}\) for shear transverse to the anisotropy of the fluid to \(\frac{\eta_{||}}{s}=\frac{1}{4\pi}(1-a^{2})\) for shear along it, within \(-1<a<1\).
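For reference, the following sketch (not from the source) evaluates the large-BH-limit transport coefficients (54)-(59), together with the \(\eta/s\) ratios quoted above, as functions of the rotation parameter \(a\); the normalizations at \(a=0\) are those stated in the text:

```python
import numpy as np

def transport(a, T0=1.0, N=1.0):
    """Large-BH-limit transport coefficients, eqs. (54)-(59), as functions of
    the rotation parameter a (in units of the AdS radius, ell = 1)."""
    eta0 = N**2 * np.pi * T0**3 / 8          # shear viscosity at a = 0
    v0, Gamma0 = 1 / np.sqrt(3), 1 / 3       # conformal speed of sound / attenuation
    return {
        "eta_perp": eta0 / np.sqrt(1 - a**2),
        "eta_par":  eta0 * np.sqrt(1 - a**2),
        "v_par":    a,
        "D_par":    0.5 * (1 - a**2)**1.5,
        "v_plus":   v0 * (np.sqrt(3) * a + 1) / (1 + a / np.sqrt(3)),
        "v_minus":  v0 * (np.sqrt(3) * a - 1) / (1 - a / np.sqrt(3)),
        "Gamma_plus":  Gamma0 * (1 - a**2)**1.5 / (1 + a / np.sqrt(3))**3,
        "Gamma_minus": Gamma0 * (1 - a**2)**1.5 / (1 - a / np.sqrt(3))**3,
        # shear viscosity to entropy density ratios quoted in the text
        "eta_perp_over_s": 1 / (4 * np.pi),
        "eta_par_over_s":  (1 - a**2) / (4 * np.pi),
    }

for a in (0.0, 0.5, 0.9):
    print(a, transport(a))
```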
As pointed out above, the relations (54) to (61) are those for shear transport, momentum diffusion and sound propagation in a boosted fluid, as discussed in [65; 66]. The holographic fluid considered here is truly rotating (not just a boosted fluid); however, when probed by low-energy (hydrodynamic) fluctuations, the dispersion relations of those fluctuations look like those of a fluid boosted with boost parameter \(a\). This statement gets corrected at any finite horizon radius when the large BH limit is not taken, as discussed in Sec. III.1.5 and Sec. III.2.
These results furthermore demonstrate that large simply spinning five-dimensional MP BHs in the large BH limit are perturbatively stable for all angular momenta which we tested below extremality (in this limit, extremality and the onset of the superradiant instability occur at the same angular momentum, \(a=\ell\)). Away from the large BH limit, this statement is limited by superradiance instabilities, which then occur at values of \(a\) below extremality, as discussed in Sec. III.1.6. Beyond the large BH limit, the QNMs of metric fluctuations around the full rotating BH metric were also considered, evaluated at the large value \(r_{+}=100\); this is referred to as "case (i)" in [37]. Qualitatively and quantitatively, the results from the large BH limit were confirmed by this computation.
One major task in the present article is to go beyond large BHs and examine intermediate to small BHs, which is mainly discussed in section III.1.5.
#### iii.1.2 \(\mathcal{K}=\mathcal{J}\) -- Tensor fluctuations
For \(\mathcal{K}=\mathcal{J}\), the fluctuation \(h_{++}\) decouples from all other fields and hence it is a naturally gauge invariant master variable for this class of fluctuations. In Fig. 4 we display the spectrum for three different values of angular momentum per mass \(a/\ell=0,1/2,9/10\), for total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,100\) (increasing in steps of \(1/2\)), and horizon radius \(r_{+}/\ell=10\).
From Fig. 4 it is immediately clear that there is no mode with frequency \(\nu\) for which \(\lim_{\mathcal{J}\to 0}\nu=0\), and hence there are no "hydrodynamic" modes in the tensor sector. This is quite remarkable as the \(h_{++}\) fluctuation couples to other fluctuations at nonzero \(a\).
At \(a/\ell=0\) the modes are mirror symmetric about the imaginary axis. This is due to the time reversal invariance in the background geometry present when \(a/\ell=0\), but broken by the angular momentum when \(a/\ell\neq 0\), which then separates modes into those rotating with and against the BH/fluid. Indeed, as \(a/\ell\) becomes positive definite and increases towards one, the trajectories of modes as \(\mathcal{J}\) varies move down and further apart for \(\Re(\nu)>0\), but they move up and closer together for \(\Re(\nu)<0\). Furthermore, QNMs from deep down in the \(\Im(\nu)<0\) complex half plane are pushed up vertically towards \(\Im(\nu)=0\), moving up like a "bowl of a spoon" that then joins up with the "handle of the spoon" provided by the \(\mathcal{J}\)-trajectory of each successively higher mode. This behavior as \(a/\ell\) increases resembles that of the purely imaginary QNMs in the Reissner-Nordstrom BH as the charge is pushed towards extremality [34].
To show in more detail how the QNM spectrum changes in the tensor sector, and in particular this movement of QNMs coming up from the deep of the \(\Im(\nu)<0\) complex half plane as \(a/\ell\) changes, we display in the top of Fig. 5 the spectrum for \(a/\ell\in[0.30,0.34]\) in steps of \(\Delta a/\ell=0.001\). We see from the inset a behavior that looks like "level touching", which is quite remarkable as \(\mathcal{J}\) is discrete here: first a spoon is formed with the \(\mathcal{J}\)-trajectory of a lower mode, but as the bowl of the spoon is pushed up with increasing \(a/\ell\), at some point the "neck of the spoon" curls up so much it breaks off and a new spoon is formed with the upper mode. This "spoon formation" continues as \(a/\ell\) is increased to ever higher values until there is a spoon for each of the modes, starting with the highest. A snapshot of the spoon
Figure 4: Scaled tensor modes, \(\nu\), for three values of \(a/\ell\in\{0,1/2,9/10\}\) at \(r_{+}/\ell=10\). _Top:_\(a/\ell=0\). _Middle:_\(a/\ell=1/2\). _Bottom:_\(a/\ell=9/10\). The total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,199/2,100\) increases in steps of \(1/2\). The curves begin at \(\mathcal{J}=0\) represented by an empty triangle (\(\Delta\)), and end at \(\mathcal{J}=100\), represented by a circle with a plus mark (\(\bigoplus\)).
Figure 5: Scaled tensor modes, \(\nu\), for two different ranges of \(a/\ell\). _Top:_\(a/\ell\in[0.3,0.34]\) in steps of \(\Delta a/\ell=0.001\). _Bottom:_\(a/\ell\in[0.47,0.54]\) in steps of \(\Delta a/\ell=0.01\). Both images were computed with \(r_{+}/\ell=10\) and the total angular momentum \(\mathcal{J}=0,1/2,\ldots,60\) increasing in steps of \(1/2\). Colors denote the different values of \(a/\ell\), red corresponds to the minimum value of the \(a/\ell\) in the range and blue corresponds to the maximum value in the range. For each curve triangles (\(\Delta\)) correspond to \(\mathcal{J}=0\) while a circle with a plus mark (\(\bigoplus\)) corresponds to \(\mathcal{J}=60\). The inset of the top image zooms in on the region containing the dovetailing behavior. An animated GIF showing this in real time is included in the supplementary material.
forming process at early and late stages is shown in the bottom of Fig. 5 and in Fig. 4, respectively. An animated GIF showing this process in real time is also provided in the supplementary material.
Another feature to note from Figs. 4 and 5 is that as \(a/\ell\) increases and the spoons form, the \(\mathcal{J}=0\) mode approaches the imaginary axis. This behavior is perhaps not surprising as a branch cut is expected to form when linear differential equations contain essential singularities 15, and this can only happen if modes can come back onto the imaginary axis and coalesce.
Footnote 15: For a general discussion see [67], while specific examples of this in the fluctuations around Reissner-Nordström BHs can be found in [32; 33].
#### iii.1.3 \(\mathcal{K}=\mathcal{J}-1\) -- Vector fluctuations
From the ansatz given in eq. (51) one can see that for the choice \(\mathcal{K}=\mathcal{J}-1\) the modes \(h_{+t}\), \(h_{+3}\), and \(h_{++}\) couple together. This occurs for any choice \(\mathcal{K}<\mathcal{J}\); all modes with charges in the range \([\mathcal{K},\mathcal{J}]\) then couple. In Fig. 6 we display the spectrum for three different values of angular momentum per mass \(a/\ell=0,1/2,9/10\), for total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\cdots 100\) (increasing in steps of \(1/2\)), and horizon radius \(r_{+}/\ell=10\).
Like in the tensor sector, the trajectories of modes are mirror symmetric about the imaginary axis at \(a/\ell=0\), but once \(a/\ell\neq 0\) this symmetry (due to time reversal invariance) is broken by the angular momentum, and the \(\Re(\nu)>0\) trajectories are pushed down relative to the \(\Re(\nu)<0\) ones.
Unlike the tensor sector though, there does exist a mode \(\nu\) for which \(\lim_{\mathcal{J}\to 0}\nu=0\) and hence a potential hydrodynamic mode in this sector. At \(a/\ell=0\), this is the expected shear diffusion mode residing on the imaginary axis. Furthermore, we observe at \(a/\ell=0\) each non-hydrodynamic QNM splits into two trajectories, one for \(\mathcal{K}=\mathcal{J}\) and the other \(\mathcal{K}=\mathcal{J}-1\).
As \(a/\ell\) becomes positive and increases, we see from Fig. 6 again this behavior of QNMs deep down in the \(\Im(\nu)<0\) complex half plane being pushed up and forming spoons with the \(\mathcal{J}\)-trajectories of the vector modes. What is interesting here is that as the QNMs move up for \(a/\ell>0\), the two branches previously joined at \(\mathcal{J}=0\) when \(a/\ell=0\) split open to form spoons. Even more interestingly, not only is there "level touching" seen in the middle of Fig. 6 when \(a/\ell=1/2\), there is also "level crossing" seen in the bottom of Fig. 6 when \(a/\ell=9/10\). As \(a/\ell\) approaches one, we see again the \(\mathcal{J}=0\) modes approaching the imaginary axis as seen in the tensor sector.
At \(a/\ell=0\), the spectrum contains, in addition to the expected vector modes, also an almost exact copy of the tensor modes from Fig. 4. This is perhaps surprising given that the fields \(h_{++}\), \(h_{+t}\), and \(h_{+3}\) all couple together and the QNM frequencies of \(h_{++}\) are expected to be modified by this coupling compared to the tensor case discussed above. However, away from \(a/\ell=0\) one observes strong effects from the coupling of modes: as discussed above, there is a level crossing between modes formerly from the tensor sector and those from the vector sector. Furthermore, additional modes from the tensor sector continue to appear in the vector sector, although they are no longer a copy of the modes found for \(\mathcal{K}=\mathcal{J}\).
#### iii.1.4 \(\mathcal{K}=\mathcal{J}-2\) -- Scalar fluctuations
As discussed in the beginning of section III.1.3, lowering the eigenvalue of the operator \(W_{3}\) couples fields to all higher \(W_{3}\) eigenfunctions. Thus choosing \(\mathcal{K}=\mathcal{J}-2\) introduces four additional fields \(h_{33}\), \(h_{tt}\), \(h_{t3}\), and \(h_{+-}\), which couple to \(h_{+t}\), \(h_{+3}\), and \(h_{++}\). In Fig. 7 we display the spectrum for three different values of angular momentum per mass \(a/\ell=0,1/2,9/10\), for total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,100\) (increasing in steps of \(1/2\)), and horizon radius \(r_{+}/\ell=10\).
Like in the vector sector, there is also a mode \(\nu\) for which \(\lim_{\mathcal{J}\to 0}\nu=0\), and so a potential hydrodynamic mode in the scalar sector. In addition, at \(a/\ell=0\) the spectrum contains an almost exact copy of both the tensor and vector spectra. Again, this is somewhat unexpected given the nontrivial coupling between the fields, from which one would expect more changes in the mode frequencies. As a result, the scalar sector contains three hydrodynamic modes: two are sound modes new in this sector, and one corresponds to the diffusion mode encountered in the vector sector discussed above. For \(a/\ell>0\), we see again the diffusion mode shifts away from the imaginary axis with an increasingly large negative real part for increasing \(\mathcal{J}\).
Figure 6: Scaled vector modes, \(\nu\), for three values of \(a/\ell\in\{0,1/2,9/10\}\) at \(r_{+}/\ell=10\). _Top:_\(a/\ell=0\). _Middle:_\(a/\ell=1/2\). _Bottom:_\(a/\ell=9/10\). The total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,199/2,100\) increases in steps of \(1/2\). The curves begin at \(\mathcal{J}=0\) represented by an empty triangle (\(\Delta\)), and end at \(\mathcal{J}=100\), represented by a circle with a plus mark (\(\bigoplus\)).
Also like in the vector sector, the coupling of the fields in the scalar sector at \(a/\ell=0\) leads to bands of non-hydrodynamic QNMs degenerate at \(\mathcal{J}=0\), but split up for \(\mathcal{J}>0\). While there are only bands of two modes in the vector sector, this is enhanced to bands of three in the scalar sector, one for each of the possibilities: \(\mathcal{K}=\mathcal{J},\mathcal{J}-1,\mathcal{J}-2\). While \(\mathcal{J}>0\) lifts the degeneracy at \(a/\ell=0\), the degeneracy is automatically lifted for \(a/\ell>0\). Indeed, this splitting of the bands at \(\mathcal{J}>0\) when \(a/\ell>0\) happens as QNMs deep down in the \(\Im(\nu)<0\) complex half plane move up and form spoons with the \(\mathcal{J}\)-trajectories of the scalar modes.
Analogous to the vector sector, we see again the "level crossing" behavior in Fig. 7 as the spoons form with increasing \(a/\ell\). As the spoons are locked into place for each of the mode trajectories displayed (starting from the highest), the \(\mathcal{J}=0\) modes again approach the imaginary axis as \(a/\ell\) approaches one.
#### iii.1.5 Horizon radius dependence & the emergence of hydrodynamics
In all previous sections (except section III.1.1), the horizon radius was chosen to be \(r_{+}=10\) (in units of the AdS radius, \(\ell=1\)). In this section, we discuss how different values of the horizon radius influence the QNM spectra. In particular, in the large BH limit, (52) and (53), the dispersion relations of the lowest gapless QNMs display the hydrodynamic behavior expected for \(\mathcal{N}=4\) SYM theory in a boosted fluid state.
In order to study the dispersion relations of the lowest gapless modes, we choose the momentum diffusion sector as an example. Dispersion relations of the lowest gapless QNM in that channel have been computed for various values of the horizon radius \(r_{+}=\{10,100,1000,10^{4},10^{5},10^{6},10^{7}\}\). The momentum of the mode has been chosen in half-integer steps \(\mathcal{J}=0,1/2,1,...,\mathcal{J}_{max}\) up to a maximal value of \(\mathcal{J}=\mathcal{J}_{max}\), which is determined by the relation \(\mathcal{J}_{max}/r_{+}=j_{max}=0.1\).16 These values will be used as data points that will be fitted to the expected form of the hydrodynamic dispersion relation below. It is important to note that with this definition of a fit window for the momentum \(0<j\leq j_{max}\), there are fewer data points to fit at small \(r_{+}\) than at larger \(r_{+}\). This is because \(j=\mathcal{J}/r_{+}\), where \(\mathcal{J}\) takes discrete values. This introduces an increasing systematic error for this fit at decreasing \(r_{+}\).
Footnote 16: This value of \(j\) is well within the radius of convergence indicated by \(R_{c}\) as shown in Sec. III.2.
Now this QNM data is fitted to the dispersion relation:
\[\omega=v\mathcal{J}^{\beta}-iD\mathcal{J}^{\alpha}\,, \tag{62}\]
where we expect a diffusion mode of \(\mathcal{N}=4\) SYM theory at rest (\(a=0\)) to take the values \(\alpha=2\), \(\beta=1\), \(v=0\), \(D=1/(\pi T)=1/r_{+}\). Here, \(D\) is related to the coefficient defined in [64] by \(\mathcal{D}=D/4\). The result of this fit is shown in Fig. 8. This figure shows that in the fit window \(\mathcal{J}/r_{+}\leq j_{max}=0.1\) the expected diffusion scaling \(\alpha=2\) occurs for \(r_{+}\approx 1000\) while the linear velocity term scaling \(\beta=1\) occurs already at much smaller radii \(r_{+}\approx 10\). For larger values of \(a\), these integer scaling values \(\alpha=2\), \(\beta=1\) are reached at larger \(r_{+}\) values.
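A minimal sketch of the fitting procedure for eq. (62) (not the routine used in the original analysis; the data in the example is synthetic, with hypothetical values \(v=1.0\), \(D=0.35\)):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_dispersion(J, omega):
    """Fit complex QNM data to omega = v J^beta - i D J^alpha, eq. (62).

    J: array of (discrete) momenta, omega: complex frequencies of the gapless mode.
    Real and imaginary parts are fitted separately, as a sketch of the procedure."""
    v, beta = curve_fit(lambda J, v, beta: v * J**beta, J, omega.real, p0=(1.0, 1.0))[0]
    D, alpha = curve_fit(lambda J, D, alpha: -D * J**alpha, J, omega.imag, p0=(1.0, 2.0))[0]
    return v, beta, D, alpha

# synthetic example (hypothetical values, not data from the paper)
J = np.arange(0.5, 10.1, 0.5)
omega = 1.0 * J - 1j * 0.35 * J**2
print(fit_dispersion(J, omega))   # -> (1.0, 1.0, 0.35, 2.0)
```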
As pointed out in Appendix D, for a boosted fluid with boost velocity \(a\) one expects \(r_{+}D=(1-a^{2})^{3/2}\) and \(v=2a\)[65; 66; 37]. As seen in Fig. 8, these values are approached at large horizon radius \(r_{+}\approx 10^{6}\) for the diffusion coefficient \(r_{+}D\), but already at small radii \(r_{+}\approx 10\) for the propagation speed \(v\) of this former diffusion mode.
Finally, we point out two distinct features displayed in Fig. 8 as the horizon radius (and thus the temperature in the dual field theory) are increased:
* The temperature \(T\) becomes the largest scale in the system, therefore the hydrodynamic approximation becomes more applicable.
* For fluctuations at a fixed energy \(\omega\), increasing the horizon radius leads to the large BH limit given by eqs. (52) and (53), and such fluctuations become hydrodynamic (\(\omega,j\ll T\)). This occurs as these fluctuations see less and less of the rotating fluid and perceive it rather as a boosted fluid with boost velocity \(a\).
In the case of the strict large rotating BH limit (applying eqs. (52) and (53), keeping only the leading order in scaling coefficient \(\alpha\)), the metric given in eq. (48) simplifies slightly. But it retains the topology \(S^{3}\times\mathbb{R}\), and remains in a rotating state with nonvanishing angular momentum. In particular, it does
Figure 7: Scaled scalar modes, \(\nu\), for three values of \(a/\ell\in\{0,1/2,9/10\}\) at \(r_{+}/\ell=10\). _Top:_\(a/\ell=0\). _Middle:_\(a/\ell=1/2\). _Bottom:_\(a/\ell=9/10\). The total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,199/2,100\) increases in steps of \(1/2\). The curves begin at \(\mathcal{J}=0\) represented by an empty triangle (\(\Delta\)), and end at \(\mathcal{J}=100\), represented by a circle with a plus mark (\(\bigoplus\)).
Figure 8: Horizon radius dependence & emergence of hydrodynamic scaling. The dashed horizontal lines indicate the values of diffusion coefficient \(r_{+}D=r_{+}\mathcal{D}/4\) and diffusion speed \(v\) expected for a fluid boosted with velocity \(a=0,0.5,0.9\) (taken from Table X derived in appendix D). Dashed horizontal lines at \(\alpha=2\) and \(\beta=1\) indicate the expected hydrodynamic dispersion scaling (linear propagation \(\propto\mathcal{J}\), quadratic damping \(\propto-i\mathcal{J}^{2}\)).
_not_ simplify to a boosted black brane, and the dual fluid is _not_ merely a boosted fluid, but a _nontrivially rotating fluid_. If one now chooses to probe this slightly simplified rotating background metric with metric fluctuations having _small frequency and momentum compared to the horizon radius_ (i.e. hydrodynamic fluctuations), then these fluctuations effectively _see_ a boosted black brane metric. This is explicitly seen in the equations of motion obeyed by the metric fluctuations around the rotating background metric in the large BH limit given by eqs. (52) and (53), which we have shown to be identical to those of the metric fluctuations around a boosted black brane with boost parameter \(a\).17
Footnote 17: This had not been recognized in the originally published version of [37], which is to be updated shortly.
**Summary:** While there are gapless QNMs of rotating BHs at any value of the angular momentum parameter \(a\), these modes apparently only start following hydrodynamic dispersion relations with integer powers (e.g. \(\omega=v\mathcal{J}-iD\mathcal{J}^{2}+\mathcal{O}(3)\)) at larger values of the horizon radius \(r_{+}\approx 1000\). This may merely be due to the growing systematic error in the fit window \(0\leq\mathcal{J}/r_{+}\leq 0.1\) at small \(r_{+}\), as discussed above, or due to the discreteness of the momentum in principle. It is remarkable that the mode at \(r_{+}=1000\) scales hydrodynamically, \(\alpha\approx 2\), \(\beta\approx 1\), but the diffusion coefficient has not yet taken the value expected for a boosted fluid; it only takes that value at very large horizon radii \(r_{+}>10^{6}\). _Therefore, there is an interesting window of horizon values, approximately \(1000<r_{+}<10^{7}\), in which these holographic rotating fluids display hydrodynamic behavior which is distinct from a boosted fluid._ In this window, we expect to see how hydrodynamic transport coefficients are changed by rotation (and not only by a boost).18 In Sec. III.2, the convergence radius of the hydrodynamic expansion in momentum space is going to be computed for a range of horizon radii \(100\leq r_{+}\leq 10^{6}\), overlapping with the window in which we expect hydrodynamic scaling with nontrivially modified transport coefficients due to rotation.
Footnote 18: We note that the dispersion relations analyzed in a regime around \(r_{+}\approx 10\) may also be described by a derivative expansion. In this regime, there are fewer numerical data points available for the fit in the window \(0\leq\mathcal{J}/r_{+}\leq 0.1\), so the fit values \(\alpha,\beta,D,v\) displayed for \(r_{+}=10\) in Fig. 8 have a large systematic error. Thus, hydrodynamic scaling \(\alpha=2\), \(\beta=1\) may persist to these small horizon values. Alternatively, in that regime there may be a scaling with non-integer exponents \(\alpha\) and \(\beta\); for examples of such a behavior see [68; 69].
#### iii.1.6 Perturbative instabilities - superradiant and Gregory-Laflamme
It was first found by [24] that the simply spinning five-dimensional AdS BH suffered from a linear superradiant instability. This instability is caused by the presence of the conformal boundary in combination with the Penrose process. It was found in [24] that modes of high enough \(\mathcal{J}\) would induce such an instability. It is still an open question what is the final state of such an unstable system.
Several meta-stable states have been found [70; 71; 72; 73; 27]. Such states have an angular momentum but bear neither \(U(1)\) axial nor time translational symmetry. Instead, what is present is a helical symmetry that is a combination of the time translation and axial rotation. In Figs. 4, 6, and 7, the modes to
Figure 9: \(\Im(\nu)\) vs \(a/a_{\rm ext}\) in the tensor sector for \(a\lesssim a_{\rm ext}\), \(r_{+}=\ell\), and \(\mathcal{J}=5\). The red points indicate unstable modes, while the blue points mark stable modes.
the left of the imaginary axis approach the real axis as \(a\) and \(\mathcal{J}\) are increased. For a large enough but non-extremal \(a\) and \(\mathcal{J}\), these modes cross over into the upper half plane. For example, in the tensor sector, for \(r_{+}/\ell=1\), \(\mathcal{J}=5\), and \(a/a_{ext}=91/100\) there is an unstable mode with a frequency of \(\nu\approx-15.56+i0.00004541\). This behavior of modes can further be seen in Fig. 9. It was shown in [24] that superradiant modes occur when
\[a_{\text{unstable}}=\frac{r_{+}^{2}/\ell}{1+r_{+}^{2}/\ell^{2}}<a<a_{\text{ext }}=\frac{r_{+}^{2}/\ell}{1+r_{+}^{2}/\ell^{2}}\sqrt{1+\frac{\ell^{2}}{2r_{+}^{2 }}}\,. \tag{63}\]
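The analytic window (63) is easily evaluated; a short sketch (not from the source) for a few horizon radii:

```python
import numpy as np

def superradiant_window(rp, ell=1.0):
    """Analytic bounds of eq. (63): onset of superradiance and extremality."""
    a_unst = (rp**2 / ell) / (1 + rp**2 / ell**2)
    a_ext = a_unst * np.sqrt(1 + ell**2 / (2 * rp**2))
    return a_unst, a_ext

for rp in (1.0, 10.0, 100.0):
    a_unst, a_ext = superradiant_window(rp)
    print(rp, a_unst, a_ext, a_unst / a_ext)   # ratio -> 1 as rp grows
```

For \(r_{+}=\ell\) this gives \(a_{\text{unstable}}/a_{\text{ext}}=1/\sqrt{3/2}\approx 0.816\); the onset observed numerically for the particular \(\mathcal{J}=5\) mode in Fig. 9, \(a/a_{\rm ext}\approx 0.916\), lies inside this window, as it should.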
In the regime of large \(r_{+}\) and thus large temperature, \(a_{\text{unstable}}\) and \(a_{\text{ext}}\) would coincide as \(r_{+}\to\infty\). The reason behind such an instability is similar to Cherenkov radiation. The constant time-directed vector field
\[v_{t}:=\partial_{t}-2a\left(\frac{1+r_{+}^{2}/\ell^{2}}{r_{+}^{2}}\right) \partial_{\psi} \tag{64}\]
measures the angular velocity of the horizon up to an overall constant normalization. For values of \(a\) in the superradiant regime, this timelike vector cannot be normalized such that it remains timelike at the AdS boundary. Rather, it becomes spacelike, and perturbations of the BH can be dragged faster than the local speed of light, thus leading to an instability.
#### iii.1.7 Cross Sector Spectrum Comparison
For \(\mathcal{J}\geq 2\) there is non-trivial coupling of tensor fields into the vector-sector equations of motion, and of tensor and vector fields into the scalar sector. Non-trivial coupling means that, despite the enhanced \(U(2)\) angular symmetry, the linearized field equations do not decouple cleanly component by component. It is thus interesting to compare the spectra of the tensor, vector, and scalar sectors with each other. Graphically, this is done in Fig. 10. The first observation is that, for \(a=0\), the tensor spectrum appears in the spectra of the vector and scalar sectors. Similarly, the vector spectrum appears in the scalar spectrum. This reflects the fact that for \(a=0\) the full spherical symmetry of the spacetime is restored, and the tensor, vector, and scalar fields decouple from each other for all \(\mathcal{J}\) and \(\mathcal{K}\). A similar decoupling was seen in the large temperature limit [37] as defined in Sec. III.1.1. The second observation is that the once-degenerate modes of the \(a=0\) case split for \(a\neq 0\), as seen in the second plot of Fig. 10. Because the splitting occurs for \(a\neq 0\) and at finite \(r_{+}\), it is reasonable to conclude that it is caused by the curvature scale associated with the spherical geometry of the BH.
### Critical points
In this section we give an overview of the recent work [43], whose goal was to study the behavior of bounds on the convergence of linearized hydrodynamic expansions when the fluid is in a rotating state. Taking the MP geometry in eq. (20) as a concrete example, such a bound for \(\mathcal{N}=4\) SYM theory in a rotating state was found in [43] by calculating critical points (as introduced in [40]).
#### iii.2.1 Implicit functions and their series expansions
As discussed in [40], a relativistic fluid supports, in general, collective excitations - linearized fluctuations of energy and momentum - around its equilibrium state, and these are referred to as hydrodynamic modes. The frequencies \(\omega\) of these modes are related to the momentum via a dispersion relation which takes the form
\[\omega(\mathbf{q})=\sum_{n=0}^{\infty}d_{n}\,\mathbf{q}^{n/m}\,, \tag{65}\]

where \(\mathbf{q}\) is the wave vector, \(d_{n}\in\mathbb{C}\), and \(m\in\mathbb{N}\). The number of terms in the series is significant, as it indicates the number of terms kept in the derivative expansion. Indeed, the series representation of the
Figure 10: Combined spectra of the sectors considered in sections III.1.2.2-III.1.4 at \(r_{+}/l=10\). The green depicts the tensor sector, red the vector sector, and blue the scalar sector. The modes are plotted for total angular momentum of the fluctuation \(\mathcal{J}=0,1/2,1,\ldots,199/2,100\) increasing in steps of \(1/2\). _Top:_\(a/\ell=0\). _Bottom:_\(a/\ell=1/2\).
dispersion relation \(\omega(\mathbf{q})\) truncates when the hydrodynamics is considered up to a certain finite order, and it is infinite only when hydrodynamic expansion to all orders is considered. The coefficients of the series can naturally be matched to the transport coefficients at each order.
It is important to understand how dispersion relations such as eq. (65) arise. One begins by linearizing around an equilibrium state (\(T_{\mu\nu}+\delta T_{\mu\nu}\)) and making a choice of the hydrodynamic degrees of freedom. The derivative expansion is then contained in the constitutive relations: e.g. by expressing the spatial stress in terms of the energy and momentum density, these relations then contain up to \(k\) derivatives of the energy and momentum density and so correspond to \(k\)-th order hydrodynamics. The resulting linear system, when expressed in terms of Fourier modes proportional to \(\exp(-i\omega t+i\mathbf{q}\cdot x)\), contains non-trivial solutions provided its determinant vanishes. This determinant defines a non-trivial polynomial in \(\omega\) and \(\mathbf{q}\) and thus a complex algebraic curve for general \((\omega,\mathbf{q})\in\mathbb{C}^{2}\), and may hence be regarded as an implicit function with the form \(F(\mathbf{q},\omega)=0\).
Given that we can construct this implicit function, we are now in a position to borrow some basic results from the theory of algebraic curves. Consider an implicit function \(f(x,y)=0\) defining a curve. Points on the curve can be classified by how it intersects with a line \(L\). Following [74], by choosing a point \(p_{0}=(x_{0},y_{0})\) on the curve the line \(L\) can be parametrized by \((x_{0}+\lambda t,y_{0}+\mu t)\). Intersections of the line with the curve are then roots of
\[f(x_{0}+\lambda t,y_{0}+\mu t)=0\,. \tag{66}\]
Taylor expanding this expression around \(t=0\) leads to the following equation for the intersection
\[(\partial_{x}f\lambda+\partial_{y}f\mu)t+\frac{1}{2}(\partial_{x}^{2}f\lambda ^{2}+2\partial_{xy}^{2}f\lambda\mu+\partial_{y}^{2}f\mu^{2})t^{2}+\cdots=0\,, \tag{67}\]
from which we can classify the number of intersections of the line and the curve.
_Single intersections:_ Consider first the term linear in \(t,\ df/dt|_{t=0}\). If either or both of the partial derivatives are non-zero, then every line through \(p_{0}\) has a single intersection with the curve. There is however an exception: it is possible that \(\mu/\lambda\) takes a value such that \(\partial_{x}f\lambda+\partial_{y}f\mu=0\), and this line is the tangent to the curve at the point \(p_{0}\).
_Double intersections:_ Consider now the term quadratic in \(t,\ d^{2}f/dt^{2}|_{t=0}\), supposing that \(\partial_{x}f=\partial_{y}f=0\). Suppose further that not all of the second partial derivatives are zero. Then every line through \(p_{0}\) has at least 2 intersections at \(p_{0}\). As before, there is again an exception: it is possible that \(\mu/\lambda\) takes a value such that \(\partial_{x}^{2}f\lambda^{2}+2\partial_{xy}^{2}f\lambda\mu+\partial_{y}^{2}f\mu^{2}=0\), and these lines are the tangents to the curve at the point \(p_{0}\).

\(r^{th}\) _intersections:_ Consider now the \(r\)-th term in \(t,\ d^{r}f/dt^{r}|_{t=0}\), supposing that all partial derivatives up to and including the \((r-1)\)-th order vanish. Again, suppose further that not all of the \(r\)-th partial derivatives are zero. Then every line through \(p_{0}\) has at least \(r\) intersections at \(p_{0}\). The exceptional case still remains that it is possible that \(\mu/\lambda\) takes a value such that
\[\partial_{x}^{r}f\lambda^{r}+\binom{r}{1}\partial_{x}^{r-1}\partial_{y}f \lambda^{r-1}\mu+\cdots+\binom{r}{r}\partial_{y}^{r}f\mu^{r}=0. \tag{68}\]
Again these lines are the tangents to the curve at the point \(p_{0}\).
In each of these examples \(p_{0}\) is said to be an \(r\)-fold point, or a point of multiplicity \(r\); a point with multiplicity two or more is referred to as singular. It is clear that for a point to be singular it must be the case that \(f(x_{0},y_{0})=\partial_{x}f(x_{0},y_{0})=\partial_{y}f(x_{0},y_{0})=0\). We are most interested in 1-fold points, which are referred to as critical points in [40]. One can prove that the point \(p_{0}\) is an \(r\)-fold point of the curve if and only if all of the \((r-1)\)-th derivatives of \(f\) vanish at \(p_{0}\) but not all of the \(r\)-th derivatives [74].
With this in hand we are now in a position to understand the derivation of dispersion relations we are interested in. If the curve \(f(x,y)=0\) has a non-singular point at \((x_{0},y_{0})\), then either \(\partial_{x}f\) or \(\partial_{y}f\) is non-zero. Without loss of generality we can assume it to be \(\partial_{y}f\), and this allows the use of the implicit function theorem.
_Analytic implicit function theorem:_ Suppose \(f(x_{0},y_{0})=0\) and \(\partial_{y}f(x_{0},y_{0})\neq 0\). Then there exist \(\epsilon>0\) and \(\delta>0\) such that \(\mathbb{D}_{\epsilon}(x_{0})\times\mathbb{D}_{\delta}(y_{0})\) is contained in the domain on which \(f\) is defined. In addition there exists a function \(g:\mathbb{D}_{\epsilon}(x_{0})\rightarrow\mathbb{D}_{\delta}(y_{0})\) such that \(f(x,g(x))=0\). Furthermore, for each \(x\in\mathbb{D}_{\epsilon}(x_{0})\), \(y=g(x)\) is the unique solution in \(\mathbb{D}_{\delta}(y_{0})\) of \(f(x,y)=0\). The mapping \(g(x)\) is analytic in \(\mathbb{D}_{\epsilon}(x_{0})\) and satisfies
\[g^{\prime}(x)=-\frac{\partial_{x}f(x,g(x))}{\partial_{y}f(x,g(x))}\,. \tag{69}\]
Higher order derivatives can be obtained in a similar way, and hence the implicit function theorem guarantees that an expression for \(y(x)\) exists locally when expressed as a power series of \((x-x_{0})\), and it converges in a neighborhood of \(x_{0}\).
Unfortunately this theorem applies only to a non-singular point, and for the hydrodynamic dispersion relations we are interested in, there can in principle be singular or critical points at the origin of the complex frequency and momentum plane. However, as discussed in [74; 75], the analytic implicit function theorem can be extended to points \((x_{0},y_{0})\in\mathbb{C}^{2}\), singular or not. In particular, there exist branches of functions \((x(t),y(t))\) parameterized by \(t\) that are analytic in a neighborhood of \(t=0\) such that \((x(0),y(0))=(x_{0},y_{0})\) is a root, viz. \(f(x_{0},y_{0})=0\). Furthermore, for every other point \((x,y)\) there is a unique branch in a suitable neighborhood of \((x_{0},y_{0})\) for which \(x=x(t)\) and \(y=y(t)\). The proof of this is typically attributed to Puiseux and arises from the proof that the field of Puiseux series is algebraically closed. The construction of such a Puiseux series solution to \(f(x,y)=0\), as well as the pertinent proof, can be found in any text on algebraic curves, including [74; 75]19, and we refer interested readers to them for further details.
Footnote 19: A particularly easy to follow description of the construction of such a Puiseux series solution is given in [40].
The most important result of all this, as was pointed out in [40], is that for a branch given in the neighborhood of \((x_{0},y_{0})\), the series is convergent with a radius of convergence determined by the location of the next closest critical point. We thus have a method of determining the radius of convergence of a hydrodynamic series without having to calculate a large number of coefficients of the series expansion, which may be computationally expensive or difficult.
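To make these statements concrete, consider the toy spectral curve \(P(q,\omega)=\omega^{2}+i\omega-q^{2}\) (an assumed two-mode example, not the MP spectral curve). Its gapless branch is diffusive at small \(q\), and the radius of convergence of that expansion is set by the critical point where the two branches collide. A minimal sympy sketch:

```python
import sympy as sp

q, w = sp.symbols('q omega')

# Toy spectral curve with one gapless (diffusive) and one gapped branch
P = w**2 + sp.I*w - q**2

# Critical points: P = 0 and dP/domega = 0, cf. eq. (70)
print(sp.solve([P, sp.diff(P, w)], [q, w], dict=True))    # q = +/- 1/2, omega = -I/2

# Gapless branch and its small-q expansion, cf. eq. (65)
w_hydro = -sp.I*(1 - sp.sqrt(1 - 4*q**2))/2
print(sp.series(w_hydro, q, 0, 8))                        # -I*q**2 - I*q**4 - 2*I*q**6 + ...

# The series converges for |q| < 1/2, the distance from the origin to the critical point.
```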
#### iii.2.2 Determination of the critical point
With the tools discussed above in hand we can proceed to compute the locations of the critical points of the linearized hydrodynamic expansion of a rotating fluid. For this purpose, we focus on the fluid behavior of the \(\mathcal{N}=4\) SYM theory in a rotating state dual to the MP BH given by eq. (20), as was done in [43]. From the previous discussion it is clear we must search for 1-fold points, which are solutions to the equations
\[P(j,\nu)=0\,,\qquad\partial_{\nu}P(j,\nu)=0\,, \tag{70}\]
where \(P\) is the implicit function encoding the dispersion relations \(\nu(j)\), and \(j=\mathcal{J}/r_{+}\) here. To obtain \(P\) we recall the arrangement of the field equations in a generalized eigenvalue problem as discussed in section III.1. The differential operator \(M\) is a function of the frequency and momentum, encoding the QNM spectrum. While in section III.1 we directly solve the generalized eigenvalue problem, an alternative method of obtaining the QNMs is to evaluate the determinant of this operator, \(P(\nu,j)=\det(M(\nu,j))\), leading to an implicit function in terms of \((j,\nu)\), whose roots are precisely the QNM spectrum. Crucially, this is precisely the implicit function which encodes the dispersion relations of both hydrodynamic and non-hydrodynamic modes of collective excitations about the equilibrium of the CFT dual to this BH geometry.
In Fig. 11 we present the distance from the origin (recall that hydrodynamic dispersion relations, constructed as Puiseux series, are roots of the implicit function around the origin of the complex frequency-momentum plane) to the nearest critical point of the implicit function \(P\) encoding the dispersion relations for the MP BH in eq. (20). In the figure we display the nearest critical point for each of the three sectors discussed in section III.1 as hollow shapes. Interestingly, as already discussed in section III.1, sectors with successively lower values of \(\mathcal{K}\) (for instance the scalar sector, defined by \(\mathcal{K}=\mathcal{J}-2\)) contain the fields of all higher sectors. As a result, when constructing the implicit function \(P\) one finds numerically that the critical points obtained in, say, the tensor sector also appear in the scalar
sector. An organizational strategy is then needed to understand the appearance of the modes. A simple one is to organize by the importance of contributions in a large \(r_{+}\) expansion. In the strict large BH limit, where we consider only the leading term as \(r_{+}\to\infty\), the behavior of the fluctuation equations is that of a boosted planar black brane, as shown in [37; 43] 20. In this limit the sectors decouple, hence the vector sector does not interact with the tensor sector, nor does the scalar sector interact with the vector and tensor sectors. Furthermore, the discrete momentum \(j=\mathcal{J}/r_{+}\) becomes a continuous variable. Hence the hollow shapes in Fig. 11 are those obtained in this limit. A consequence of this limit is that one can interpret these critical points, as in [40], as radii of convergence of the dispersion relations obtained from the implicit function representing the hydrodynamic expansion. It is for this reason that we label the vertical axis of this plot as \(R_{c}\), the convergence radius of the corresponding sector 21.
Footnote 20: An example of the relation between the Wigner d functions and Fourier transformations in the large BH limit is discussed in appendix E.
Footnote 21: It should be noted that in this large BH limit there are no hydrodynamic modes in the tensor sector. However these play a role as one moves away from the large BH limit.
It was already shown in [37] that the QNM spectrum behaves, roughly speaking, hydrodynamically for \(r_{+}/\ell\approx O(10^{2})\). Therefore one can think of the large BH limit, in a sense, as a hydrodynamic limit of the rotating MP BH solution. In [43] it was shown that the QNMs, which are also critical points, already behave hydrodynamically for \(r_{+}/\ell\approx O(10^{2})\). This is displayed in Fig. 12, which shows the ratio of the nearest critical point to the origin for a finite horizon radius to that of the nearest mode in the strict \(r_{+}\to\infty\) limit. It is interesting to note that differences (on the percent level) in the radius of convergence only begin to appear for \(r_{+}/\ell\approx O(10^{2})\).
Interestingly, the effect which more severely limits the radius of convergence of the hydrodynamic series away from the large BH limit is the interaction of modes of different values of \(\mathcal{K}\). While the tensor sector (\(\mathcal{K}=\mathcal{J}-2\)) does not contain any hydrodynamic modes, the implicit function encoding the dispersion relations does contain critical points, and these points are closer to the origin in the complex momentum plane. This is reflected in the radius of convergence displayed as the solid shapes in Fig. 11. The resulting radius of convergence is still non-zero and finite, but its behavior has changed dramatically as a result of the appearance of the inter-sector couplings implied by the Einstein equations.
Figure 11: The convergence radius, \(R_{c}\), of the hydrodynamic expansion as a function of the angular momentum parameter \(a/\ell\). Hollow shapes indicate the value of the distance from the origin (\(j=0\)) to the nearest critical point \(j=j_{c}\) in the strict large BH limit, while solid shapes indicate this value in the full MP BH with horizon radius \(r_{+}/\ell=100\). The sector with the smallest absolute value determines \(R_{c}\). Adapted from [43].
### Pole-Skipping
In this section, we give an overview of the pole-skipping phenomenon in the MP background described in section II and discussed in recent work [44]. While the critical points discussed in the previous section can be shown to arise as the result of mode collisions in the complex frequency-momentum plane, pole-skipping modes arise when the residue of a Green's function "collides" with the pole, and as a result the would-be pole is "skipped". The study of pole-skipping has garnered much attention in recent years due to the discovery in [76]. The authors of [76] found that in the gravitational shockwave calculations [77; 78], the scrambling rate of the dual strongly-coupled quantum field theory is directly related to the hydrodynamic sound modes of the theory. This realization quickly led to the development of effective descriptions of quantum chaos, and the general prediction of pole-skipping in the energy-energy correlation functions as a "smoking gun" for the hydrodynamic origin of the chaotic mode [35; 36]. When this occurs, it provides a potentially simpler method of obtaining information to characterize the chaotic dynamics (maximal Lyapunov exponents and butterfly velocities) than the computation of OTOCs, a standard candle to diagnose operator growth in chaotic quantum field theories. Although much is known about these quantities in simple settings, less is known about their behavior in non-trivial states of chaotic quantum field theories, such as the rotating state dual to the MP BH we consider. Earlier works have considered the rotating BTZ BH dual to a (1+1)-dimensional CFT with a chemical potential for rotation [79; 80; 81], and the field theory dual to a (3+1)-dimensional rotating Kerr-AdS BH [82].
#### iii.3.1 Gravitational shockwaves
In [44] we focused on computations of OTOCs in the field theory dual to eq. (20) of the form
\[\langle\hat{W}(t_{2},\Psi_{2},\Phi_{2},\Theta_{2})\hat{V}(t_{1},\Psi_{1},\Phi_ {1},\Theta_{1})\hat{W}(t_{2},\Psi_{2},\Phi_{2},\Theta_{2})\hat{V}(t_{1},\Psi_ {1},\Phi_{1},\Theta_{1})\rangle\,, \tag{71}\]
where capital letters denote boundary coordinates, the time separation of the operator insertions is large compared to the inverse temperature, \(t_{2}-t_{1}\gg\beta\), and the angled brackets denote the trace with respect to the
Figure 12: The deviation of the convergence radius, \(R_{c}\), of the hydrodynamic expansion as a function of the horizon radius normalized to its value in the large BH limit. The various points represent different values of the angular momentum parameter \(a/\ell\). The vertical width of the bands displays the “strength" of rotational effects on the radius of convergence. The plot markers are the same as those used in Fig. 11. Adapted from [43].
thermal density matrix \(e^{-\beta H}/Z\) in the dual field theory. The gravitational process dual to this OTOC is the scattering amplitude between a particle of momentum \(p_{1}^{U}\) travelling along the \(V=0\) horizon of the BH (corresponding to quanta created by the \(\hat{V}\) operator) and a particle of momentum \(p_{2}^{V}\) travelling along the \(U=0\) horizon (corresponding to quanta created by the \(\hat{W}\) operator). The OTOC is then given by an integral, over momenta and angular coordinates, of \(e^{i\delta}\) (a two-two scattering amplitude) weighted by bulk-boundary wave functions, which describe the distribution of quanta along the horizon [78].
The computation is made feasible by working in the eikonal approximation [83] which consists of linearizing the gravitational Lagrangian and treating the two particles (created by \(\hat{V}\) and \(\hat{W}\)) as fixed energy momentum sources following their classical trajectories. The scattering amplitude for this process, \(e^{i\delta}\), is given by evaluating the Einstein-Hilbert action, expanded to quadratic order, on the classical solutions sourced by the \(\hat{V}\) and \(\hat{W}\) operators [77; 83; 78]
\[\delta=\frac{1}{2}\int d^{5}x\sqrt{-g}\left(\delta g_{UU}\mathscr{D}^{2}\delta g_{VV}+\delta g_{UU}T^{UU}+\delta g_{VV}T^{VV}\right)\,, \tag{72}\]
where \(\mathscr{D}^{2}\) is related to the inverse of the graviton propagator and, as noted in [78], the first term is equal in magnitude but opposite in sign to the other terms in eq. (72).
Before discussing the extraction of the OTOC from the gravitational amplitude we begin by computing the linearized backreaction of the two particles following (near) geodesic trajectories on the geometry 22. These solutions are examples of the well known shockwave geometries [84; 85; 86]. The trajectory of the particle of momentum \(p_{1}^{U}\) approximates the null geodesic given by \((U=U(\tau),V=0,\tilde{\psi}_{1},\phi_{1},\theta_{1})\). The only non-zero component of the stress tensor of this particle is given by
Footnote 22: This calculation is most easily done in Kruskal coordinates. These are detailed in the original paper [44] but are included in appendix C for convenience.
\[T^{UU}=\frac{1}{\sqrt{-g}}p_{1}^{U}\delta(V)\delta(\tilde{\psi}-\tilde{\psi}_{ 1})\delta(\phi-\phi_{1})\delta(\theta-\theta_{1})\,, \tag{73}\]
while the particle of momentum \(p_{2}^{V}\) follows the null-geodesic \((U=0,V=V(\tau),\tilde{\psi}_{2},\phi_{2},\theta_{2})\), with the stress tensor
\[T^{VV}=\frac{1}{\sqrt{-g}}p_{2}^{V}\delta(U)\delta(\tilde{\psi}-\tilde{\psi}_{ 2})\delta(\phi-\phi_{2})\delta(\theta-\theta_{2}). \tag{74}\]
The only non-zero component of the stress tensor sourced by the \(\hat{V}\) quanta is \(T_{VV}=A(0)^{2}T^{UU}/4\). The ansatz for the gravitational backreaction of this source takes the form of a shockwave across the \(V=0\) horizon, \(U\to U+f_{1}(\tilde{\psi},\phi,\theta)\), which to leading order can be written as
\[\delta g_{VV}=-A(0)f_{1}(\tilde{\psi},\phi,\theta)\delta(V). \tag{75}\]
Inserting the ansatz (75) into Einstein's equations, one finds that the \(VV\)-component is the only non-trivial equation, which reduces to the following partial differential equation for the angular profile \(f_{1}(\tilde{\psi},\phi,\theta)\)
\[\mathcal{D}f_{1}(\tilde{\psi},\phi,\theta)=(\Box+\lambda_{1}+\lambda_{2}\partial_{\tilde{\psi}}+\lambda_{3}\partial_{\tilde{\psi}}^{2})f_{1}=\frac{16\pi G_{N}}{\sin(\theta_{1})}\frac{K}{Lr_{+}^{2}}p_{1}^{U}\delta(\tilde{\psi}-\tilde{\psi}_{1})\delta(\phi-\phi_{1})\delta(\theta-\theta_{1}), \tag{76}\]
with \(\Box\) given by 23
Footnote 23: It turns out that this operator is one quarter of the Laplacian on the unit 3-sphere written in terms of the Hopf fibration.
\[\Box=\cot(\theta)\partial_{\theta}+\partial_{\theta}^{2}+\csc^{2}(\theta) \partial_{\tilde{\psi}}^{2}+\csc^{2}(\theta)\partial_{\phi}^{2}-2\cot(\theta )\csc(\theta)\partial_{\tilde{\psi}}\partial_{\phi}, \tag{77}\]
where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are constants given by the expressions
\[\lambda_{1}=\frac{(2K^{2}+L^{2}r_{+}^{2})(2a^{2}(r_{+}^{2}+L^{2})^{2}-\Delta)} {4L^{4}r_{+}^{2}K^{2}},\ \ \ \ \lambda_{2}=-\frac{2a(r_{+}^{2}+L^{2})K}{L^{3}r_{+}^{2}},\ \ \ \ \lambda_{3}=-\frac{a^{2}(r_{+}^{2}+L^{2})}{L^{2}r_{+}^{2}}. \tag{78}\]
In eq. (76) and eq. (78) we have made use of two quantities, \(\Delta\) and \(K\), to shorten the expressions; these constants are defined as

\[\Delta=L^{2}r_{+}^{2}(L^{2}+2r_{+}^{2}),\ \ \ \ \ \ \ \ K=\sqrt{L^{2}r_{+}^{2}-a^{2}(L^{2}+r_{+}^{2})}\,. \tag{79}\]
Repeating these steps we can now consider the \(W\) quanta travelling on the \(U=0\) horizon. The calculation goes through as above, only now the geometry is shifted across the \(U=0\) horizon \(V\to V+f_{2}(\tilde{\psi},\phi,\theta)\). To linear order in \(f_{2}\) this leads to the requirement that \(f_{2}(\tilde{\psi},\phi,\theta)\) satisfy
\[\tilde{\cal D}f_{2}(\tilde{\psi},\phi,\theta)=\frac{16\pi G_{N}}{\sin(\theta_{2 })}\frac{K}{Lr_{+}^{2}}p_{2}^{V}\delta(\tilde{\psi}-\tilde{\psi}_{2})\delta( \phi-\phi_{2})\delta(\theta-\theta_{2}), \tag{80}\]
where \(\tilde{\cal D}\) is the same differential operator given in (76) but with the replacements \(\partial_{\tilde{\psi}}\to-\partial_{\tilde{\psi}}\) and \(\partial_{\phi}\to-\partial_{\phi}\).
As was first demonstrated in [44], we can find a general analytic solution for the angular profile of the shockwave. Introducing \(f(\tilde{\psi},\phi,\theta,\theta^{\prime})\) as the Green's function of the operator \(\mathcal{D}\) on \(S^{3}\) with normalised delta-function source
\[{\cal D}f(\tilde{\psi},\phi,\theta,\theta^{\prime})=-\frac{1}{2\sin(\theta^{ \prime})}\delta(\tilde{\psi})\delta(\phi)\delta(\theta-\theta^{\prime}) \tag{81}\]
we can expand the delta functions in terms of Wigner D-functions using the completeness relation given in eq.(40). Using the fact that the Wigner D-functions satisfy
\[\Box D^{\cal J}_{{\cal K}{\cal M}}+{\cal J}({\cal J}+1)D^{\cal J}_{{\cal K}{ \cal M}}=0\,, \tag{82}\]
we can solve eq. (81) for the normalised shockwave profile
\[f(\tilde{\psi},\phi,\theta,\theta^{\prime})=\sum_{{\cal J}=0,1/2,1,\dots}^{ \infty}\sum_{{\cal K}=-{\cal J}}^{{\cal J}}\sum_{{\cal M}=-{\cal J}}^{{\cal J }}\frac{2{\cal J}+1}{16\pi^{2}}\frac{d^{\cal J}_{{\cal K}{\cal M}}(\theta^{ \prime})d^{\cal J}_{{\cal K}{\cal M}}(\theta)}{{\cal J}({\cal J}+1)-\lambda_{1 }-i\lambda_{2}{\cal K}+\lambda_{3}{\cal K}^{2}}e^{i{\cal K}\tilde{\psi}+i{ \cal M}\phi}\,, \tag{83}\]
where we have made use of the decomposition of the Wigner D-functions in terms of the Wigner d-functions, \(D^{\cal J}_{{\cal K}{\cal M}}(\tilde{\psi},\phi,\theta)=d^{\cal J}_{{\cal K}{\cal M}}(\theta)e^{i{\cal K}\tilde{\psi}+i{\cal M}\phi}\). Finally, by using the general Green's function, we compute the eikonal phase24 from the action in eq. (72)
Footnote 24: Note here we have used rotational symmetry in the \(\phi,\tilde{\psi}\) directions and that \(\tilde{f}(\tilde{\psi}_{1}-\tilde{\psi}_{2},\phi_{1}-\phi_{2},\theta_{1},\theta _{2})=f(\tilde{\psi}_{2}-\tilde{\psi}_{1},\phi_{2}-\phi_{1},\theta_{2},\theta _{1})\) with \(\tilde{f}\) the solution to the analogous equation to eq. (81) with \({\cal D}\to\tilde{\cal D}\).
\[\delta=\frac{16\pi G_{N}K}{Lr_{+}^{2}}A(0)p_{1}^{U}p_{2}^{V}f(\psi_{2}-\psi_{1},\phi_{2}-\phi_{1},\theta_{2},\theta_{1})\,. \tag{84}\]
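As a quick consistency check of eq. (82), which underlies the mode sum in eq. (83), one can act with the operator \(\Box\) of eq. (77) on a low-spin Wigner D-function. The sympy sketch below (using \(\mathcal{J}=\mathcal{K}=\mathcal{M}=1/2\), for which \(d^{1/2}_{1/2,1/2}(\theta)=\cos(\theta/2)\)) verifies that the eigenvalue is \(-\mathcal{J}(\mathcal{J}+1)=-3/4\):

```python
import sympy as sp

theta, psi, phi, x = sp.symbols('theta psi phi x', positive=True)

def box(f):
    """The operator of eq. (77), with cot and csc written out in terms of sin and cos."""
    ct = sp.cos(theta)/sp.sin(theta)          # cot(theta)
    cs = 1/sp.sin(theta)                      # csc(theta)
    return (ct*sp.diff(f, theta) + sp.diff(f, theta, 2)
            + cs**2*sp.diff(f, psi, 2) + cs**2*sp.diff(f, phi, 2)
            - 2*ct*cs*sp.diff(f, psi, phi))

# Wigner D-function for J = K = M = 1/2 in the conventions of the text
D = sp.cos(theta/2)*sp.exp(sp.I*(psi + phi)/2)
J = sp.Rational(1, 2)

residual = box(D) + J*(J + 1)*D
# Substitute theta = 2x so all trig functions share the same argument, then simplify
print(sp.simplify(sp.expand_trig(residual.subs(theta, 2*x))))   # -> 0, i.e. eq. (82) holds
```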
Unfortunately, for general configurations \((\psi,\phi,\theta,\theta^{\prime})\) we have not encountered a simple way to evaluate the sum in eq. (83). However, there do exist configurations for which one can obtain an explicit closed form of the OTOC in the large BH limit. Making use of the existence of radially in-falling geodesics which satisfy \(\dot{\theta}=\dot{\phi}=0\)[87] we can consider configurations for which the operators \(\hat{V}\) and \(\hat{W}\) lie on a Hopf circle (and hence are only separated in the direction of the fibre coordinate \(\psi\), parallel to the rotation). Such operator configurations with \(\Theta_{1}=\Theta_{2}=\Theta\) and \(\Phi_{1}=\Phi_{2}=\Phi\) can be mapped by an isometry to the North pole, i.e. without loss of generality we consider configurations for which \(\Theta=\Phi=0\). The OTOC then takes the form
\[H(t,\Psi)=\langle W(t_{2},\Psi_{2},0,0)V(t_{1},\Psi_{1},0,0)W(t_{2},\Psi_{2},0,0)V(t_{1},\Psi_{1},0,0)\rangle\,, \tag{85}\]
where \(t=t_{2}-t_{1}\) and \(\Psi=\Psi_{2}-\Psi_{1}\).
To obtain the closed form expression of the OTOC we work in the large BH limit \(r_{+}\gg\ell\) where the bulk to boundary wave-functions describing the distribution of quanta on the horizon will be sharply peaked in the angular coordinates on the horizon around \(\tilde{\psi}_{2}-\tilde{\psi}_{1}=\Psi-\overline{\Omega}t\), \(\theta_{1}\approx\theta_{2}\approx 0\), \(\phi_{1}\approx\phi_{2}\approx 0\). Expanding the standard expression from [78] for the OTOC to leading order in \(G_{N}\), and using that the bulk-to-boundary wave-functions are sharply peaked around 25\(p_{1}^{U}p_{2}^{V}\sim e^{2\pi Tt}\), the OTOC then takes the form
Footnote 25: we note that in the large BH limit the expressions for the temperature and angular velocity reduce to
\[2\pi T=\frac{2r_{+}}{\ell^{2}}\sqrt{1-a^{2}/\ell^{2}},\hskip 28.452756pt \overline{\Omega}=-2a/\ell^{2}. \tag{86}\]
\[H(t,\Psi)\approx 1+c\,G_{N}\,e^{2\pi Tt}f(\Psi-\overline{\Omega}t)\,, \tag{87}\]

where \(c\) is a constant and \(f(\tilde{\psi})\) is the solution to equation (81) with \(\theta=\theta^{\prime}=\phi=0\). As was shown in the appendix of [44], in the large BH limit the expression (83) can be approximated by an integral, leading to the expression
\[f(\tilde{\psi})=\int_{0}^{\infty}d\mathcal{J}\int_{-\mathcal{J}}^{\mathcal{J}}d \mathcal{K}\frac{\mathcal{J}}{4\pi^{2}}\frac{e^{i\mathcal{K}\tilde{\psi}}}{ \mathcal{J}^{2}-\lambda_{1}-i\lambda_{2}\mathcal{K}+\lambda_{3}\mathcal{K}^{2}} \tag{88}\]
for the shock-wave profile on Hopf circles. This integral can be computed exactly by contour integration (see [44] for details)
\[f(\tilde{\psi})=-\frac{1}{4\pi|\tilde{\psi}|}\begin{cases}e^{-k_{+}\tilde{\psi }},&\tilde{\psi}>0\\ e^{k_{-}\tilde{\psi}},&\tilde{\psi}<0\end{cases} \tag{89}\]
where
\[k_{\pm}=\frac{r_{+}}{\ell\sqrt{1-a^{2}/\ell^{2}}}\left(\sqrt{\frac{3}{2}}\mp \frac{a}{\ell}\right). \tag{90}\]
In co-rotating coordinates, the OTOC is therefore exponentially growing in time with an exponent \(2\pi Tt\) and exponentially decaying in the \(\tilde{\psi}\) direction, governed by \(k_{\pm}\). Hence we arrive at one of the main results of [44] and of this section: in terms of the fixed boundary coordinates \((t,\Psi)\), the OTOC takes the functional form
\[H(t,\Psi)\sim 1-\frac{cG_{N}}{4\pi|\Psi-\overline{\Omega}t|}\begin{cases}\exp\left(2\pi T_{+}\bigg{(}t-\frac{\ell\Psi}{2v_{B}^{+}}\bigg{)}\right),&\Psi-\overline{\Omega}t>0,\\ \exp\left(2\pi T_{-}\bigg{(}t-\frac{\ell\Psi}{2v_{B}^{-}}\bigg{)}\right),&\Psi-\overline{\Omega}t<0\end{cases} \tag{91}\]
where \(\ell\Psi/2\) corresponds to the spatial distance between the operators \(\hat{V},\hat{W}\) and the exponents \(T_{\pm},v_{B}^{\pm}\) are given by
\[2\pi T_{\pm} =\frac{2r_{+}}{\ell^{2}\sqrt{1-a^{2}/\ell^{2}}}\left(1\mp\sqrt{ \frac{3}{2}}\frac{a}{\ell}\right), \tag{92}\] \[\frac{2\pi T_{\pm}}{v_{B}^{\pm}} =\frac{2r_{+}}{\ell^{2}\sqrt{1-a^{2}/\ell^{2}}}\left(\sqrt{\frac{ 3}{2}}\mp\frac{a}{\ell}\right). \tag{93}\]
A quick inspection of eq. (92) and eq. (93) reveals that these expressions can be obtained by applying a boost of velocity \(v=-\overline{\Omega}\ell/2=a/\ell\) to the expression \(H_{4}\sim\exp(2\pi T_{0}(t-\ell\Psi/2v_{B}^{(0)}))\) for the OTOC of the AdS\({}_{5}\)-Schwarzschild black brane. Performing a Lorentz transformation of this form gives
\[2\pi T_{\pm}=2\pi T_{0}\gamma_{v}\bigg{(}1\mp\frac{v}{v_{B}^{(0)}}\bigg{)}, \qquad\quad\frac{2\pi T_{\pm}}{v_{B}^{\pm}}=2\pi T_{0}\gamma_{v}\bigg{(}\frac {1}{v_{B}^{(0)}}\mp v\bigg{)}, \tag{94}\]
which are equivalent to (92) and (93) upon using the expressions \(2\pi T_{0}=2r_{+}/\ell^{2}\) and \(v_{B}^{(0)}=\sqrt{2/3}\), with \(v_{B}^{(0)}\) the butterfly velocity of the static Schwarzschild-AdS\({}_{5}\) black brane.
Before demonstrating that this butterfly effect also has a hydrodynamic origin, as discussed in the introduction to this section, we note that the Lyapunov exponent \(\lambda_{L}\) associated to the OTOC on Hopf circles can be extracted from \(H(t,0)\) in eq. (91) (see for instance [80]). Doing so gives \(\lambda_{L}=2\pi T_{+}=\text{Im}(\omega_{+})\) (for \(a>0\)) and \(\lambda_{L}=2\pi T_{-}=\text{Im}(\omega_{-})\) (for \(a<0\)). We therefore have that
\[\lambda_{L}=2\pi T\left(1-\sqrt{\frac{3}{2}}\frac{|a|}{\ell}\right)=2\pi T \left(1-|v|/v_{B}^{(0)}\right) \tag{95}\]
which, as discussed in [44], saturates a velocity-dependent generalization of the Maldacena/Shenker/Stanford chaos bound [88] originally proposed in [80]. Furthermore, this naturally saturates the generalized bound for the positive operators \(\theta_{a}Q_{a}\) proposed in [80], given by
\[\frac{|\theta_{a}[Q_{a}]^{\mu}\partial_{\mu}H|}{1-H}\leq 2\pi\,. \tag{96}\]
where \(H\) is the OTOC, \(\theta_{a}\) are chemical potentials and \(Q_{a}\) are the corresponding generators. In our case the operators are given by \(\theta_{a}Q_{a}=\beta(\tilde{H}+\overline{\Omega}J)\) (with \(\beta\) the inverse temperature, \(\tilde{H}\) the Hamiltonian, \(J\) the angular momentum and \(\overline{\Omega}\) the angular velocity) and in this case the generalized bound given above in equation (96) reduces to
\[\frac{\left|\partial_{t}H(t,\psi)+\overline{\Omega}\partial_{\psi}H(t,\psi) \right|}{1-H(t,\psi)}\leq 2\pi T\,. \tag{97}\]
Using the definition of the OTOC given in equation (87), one finds that the bound is saturated.26
Footnote 26: It was already pointed out in [82] that rotating systems in 1+1 dimensions (e.g. [81]) have been noticed to satisfy the generalized MSS bound of [80].
#### iii.3.2 Near horizon metric fluctuations
As demonstrated in [44] and discussed in the introduction of this section, we now turn to the hydrodynamic origin of chaos in our holographic system, and we give an overview of the calculation done to extract information about the retarded Green's function, \(G^{R}_{T^{00}T^{00}}\), from the study of infalling metric perturbations. We will see that this exhibits the characteristic features of pole-skipping as identified in [35; 36; 76]. To be concrete, at a special value of the frequency, \(\omega=i2\pi T\), the linearized Einstein equations admit an additional infalling mode when the angular profile of the metric component \(\delta g_{vv}\) at the horizon is a solution to the (sourceless) shockwave equation (76) that governs the form of the OTOC.
Unlike in the previous section, we now work in infalling Eddington-Finkelstein coordinates 27 (see appendix C) \((v,r,\tilde{\psi},\phi,\theta)\). We then study linearized perturbations of the metric of the form
Footnote 27: Note we still choose the co-rotating coordinate defined in eq. (101).
\[\delta g_{\mu\nu}(v,r,\tilde{\psi},\phi,\theta)=e^{i(k\tilde{\psi}-\omega v)} \delta g_{\mu\nu}(r,\phi,\theta) \tag{98}\]
Note that we have not yet employed the full machinery of the Wigner D-functions as developed in section II. We initially refrain from this in order to demonstrate the equivalence between the linearized Einstein equations near the horizon and the shockwave computation. Inserting this ansatz into the linearized Einstein equations gives rise to a coupled set of PDEs for \(\delta g_{\mu\nu}(r,\phi,\theta)\). In particular, as in previous examples of pole-skipping, we find that at \(\omega=i2\pi T\) the \(vv\) component of the Einstein equations at \(r=r_{+}\) reduces to a decoupled equation for the angular profile of \(\delta g_{vv}\) evaluated at the horizon. Furthermore, all other metric perturbations decouple from this component of the Einstein equations, which takes the form
\[\left\{\left[\cot(\theta)\partial_{\theta}+\partial_{\theta}^{2} -\csc^{2}(\theta)k^{2}+\csc^{2}(\theta)\partial_{\phi}^{2}-2i\cot(\theta)\csc (\theta)k\partial_{\phi}\right]\right.\] \[+\lambda_{1}+i\lambda_{2}k-\lambda_{3}k^{2}\bigg{\}}\delta g_{vv} (r_{+},\phi,\theta)=0\,, \tag{99}\]
where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are the same constants defined in equation (78). As advertised, eq. (99) is precisely the source-less version of the shockwave equation governing the form of the OTOC, eq. (76), after a Fourier transform with respect to the \(\tilde{\psi}\) coordinate. The key point is that the metric perturbation \(\delta g_{vv}(r_{+},\phi,\theta)\) must either have a specific angular profile on the horizon (i.e. be a non-trivial solution to eq. (99)) or it must be zero at the horizon. As such, for metric perturbations with a particular angular profile related to the shockwave equation we expect an extra infalling mode, and the pole-skipping phenomenon.
We can go further by recalling that the Wigner D-functions are related to the Wigner d-functions as \(\mathcal{D}^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\tilde{\psi},\theta,\phi)=d^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\theta)e^{i\mathcal{K}\tilde{\psi}+i\mathcal{M}\phi}\). With this in mind we can parameterize \(\delta g_{vv}(r_{+},\phi,\theta)=e^{i\mathcal{M}\phi}d^{\mathcal{J}}_{\mathcal{K}\mathcal{M}}(\theta)\); inserting this into eq. (99) leads to the statement that at pole-skipping points the quantum numbers \(\mathcal{K}\) and \(\mathcal{J}\) are related 28 by
Footnote 28: It is important to understand that for integer and half-integer \(\mathcal{J}\) with \(|\mathcal{M}|,|\mathcal{K}|\leq\mathcal{J}\) there is a unique solution to eq. (82) that is regular on \(S^{3}\). In order to consider pole-skipping, it is necessary to relax the condition that \(\mathcal{M},\mathcal{K},\mathcal{J}\) be integers and consider them to be arbitrary complex parameters. This corresponds to considering angular profiles of the metric components that are not necessarily regular on the three-sphere.
\[\lambda_{1}+i\lambda_{2}\mathcal{K}-\lambda_{3}\mathcal{K}^{2}=\mathcal{J}( \mathcal{J}+1). \tag{100}\]
It is worth pointing out that as a result of the equivalence between the horizon equation and shockwave equation, this relationship corresponds exactly to the locations of the poles in the shockwave profile given in eq. (83).
To compare to eq. (91) we take the large BH limit \(r_{+}/\ell\gg 1\) while keeping \(a/\ell\sim\mathcal{O}(1)\). In this limit the leading order behavior of the constants \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) is
\[\lambda_{1}=\frac{r_{+}^{2}}{2\ell^{4}}(2a^{2}-3\ell^{2}),\qquad \quad\lambda_{2}=-\frac{2ar_{+}}{\ell^{2}}\sqrt{1-\frac{a^{2}}{\ell^{2}}} \qquad\quad\lambda_{3}=-\frac{a^{2}}{\ell^{2}}.\]
As we discussed in section III.1, sectors in the MP BH are determined by a choice of the quantum number \(\mathcal{K}\). Here we consider the class of pole-skipping points where the momentum is parallel to the rotation direction, i.e. \(\mathcal{K}\) takes its maximal value (\(\mathcal{K}=\mathcal{J}\)). Further, we recall that in the large BH limit we also choose the momentum to scale with the horizon radius as \(\mathcal{J}\sim r_{+}/\ell\gg 1\). Equipped with this scaling of the momentum in the large BH limit, we find that eq. (100) reduces to the following quadratic equation for \(\mathcal{J}\)
\[\left(1-\frac{a^{2}}{\ell^{2}}\right)\mathcal{J}^{2}+\frac{2iar_ {+}}{\ell^{2}}\sqrt{1-\frac{a^{2}}{\ell^{2}}}\mathcal{J}+\frac{r_{+}^{2}}{2 \ell^{4}}\left(3\ell^{2}-2a^{2}\right)=0\,, \tag{101}\]
whose solutions are given by
\[\mathcal{J}=\pm\frac{ir_{+}}{\ell\sqrt{1-a^{2}/\ell^{2}}}\bigg{(} \sqrt{\frac{3}{2}}\mp\frac{a}{\ell}\bigg{)}=\pm ik_{\pm}. \tag{102}\]
Finally, we see these modes exhibit pole-skipping at a frequency \(\omega=i2\pi T\) and at precisely the same wave-vectors \(k_{\pm}\) that appear in the exponential form of the OTOC when the operators lie on a Hopf circles (see eq. (91)).
So far we have been working in terms of co-rotating coordinates, hence the boundary response is characterized by coordinates \((t,\tilde{\psi},\theta,\phi)\). If, however, we transform these results to the standard coordinates \((t,\psi,\theta,\phi)\) for the three-sphere, then we find that the wave-vectors \(k_{\pm}\) at which pole-skipping occurs are unchanged, while the frequencies are modified to
\[\omega_{\pm}=i(2\pi T\pm k_{\pm}\overline{\Omega})=\frac{2ir_{+} }{\ell^{2}\sqrt{1-a^{2}/\ell^{2}}}\bigg{(}1\mp\sqrt{\frac{3}{2}}\frac{a}{ \ell}\bigg{)}. \tag{103}\]
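The algebra leading from eq. (101) to eq. (102) and eq. (103) can be checked symbolically. A short sympy sketch (setting \(\ell=1\); the spot-check values \(a=1/3\), \(r_{+}=7\) are arbitrary):

```python
import sympy as sp

a, rp = sp.symbols('a r_p', positive=True)
J = sp.symbols('J')
s = sp.sqrt(1 - a**2)                                   # sqrt(1 - a^2/l^2) with l = 1

# Large-BH-limit pole-skipping condition, eq. (101)
eq = (1 - a**2)*J**2 + 2*sp.I*a*rp*s*J + sp.Rational(1, 2)*rp**2*(3 - 2*a**2)
sols = sp.solve(eq, J)

# Expected wave-vectors, eq. (90): k_pm = rp/s * (sqrt(3/2) -/+ a)
k_plus = rp/s*(sp.sqrt(sp.Rational(3, 2)) - a)
k_minus = rp/s*(sp.sqrt(sp.Rational(3, 2)) + a)

# Numerical spot-check of eq. (102) at a = 1/3, r_+ = 7
vals = {a: sp.Rational(1, 3), rp: 7}
print(sorted((complex(sol.subs(vals)) for sol in sols), key=lambda z: z.imag))
print(complex((-sp.I*k_minus).subs(vals)), complex((sp.I*k_plus).subs(vals)))

# Frequency in non-rotating coordinates, eq. (103), using 2*pi*T = 2*rp*sqrt(1-a^2)
# and Omega_bar = -2a from eq. (86) (l = 1)
w_plus = sp.I*(2*rp*s + k_plus*(-2*a))
print(sp.simplify(w_plus - 2*sp.I*rp/s*(1 - sp.sqrt(sp.Rational(3, 2))*a)))   # -> 0
```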
Before closing this section it is worth discussing one of the main conclusions of this section in more detail. Namely, we have seen that the pole-skipping locations uncovered in the near horizon analysis of the Einstein equations match precisely the poles of the summand in the expression for the angular profile of the shockwave geometry (compare eq. (100) and eq. (83)). Furthermore, in the large BH limit, where we can obtain a closed form expression for the OTOC, the pole-skipping points match precisely the wave vector and exponential growth rate of the OTOC (compare eq. (103) and eq. (102) to eq. (92) and eq. (93)). As already remarked, the form of the OTOC in this limit can be obtained directly from the equivalent calculation in a Schwarzschild black brane by performing a Lorentz transformation. To be concrete, for a non-rotating Schwarzschild AdS black brane geometry it is the pole-skipping points in the scalar channel which precisely match the wave vector and exponential growth rate of the OTOC; these are given by [78]
\[\mathfrak{w}=i,\quad\mathfrak{q}=\pm\sqrt{\frac{3}{2}}i\,. \tag{104}\]
Therefore, we should expect to see explicitly that the equations of motion for the fluctuations in the scalar channel of the MP BH reduce to those of a boosted Schwarzschild black brane in the large BH limit, and hence we should be able to verify numerically, and directly, that the mode given by eq. (104), when boosted appropriately, is contained in the spectrum of fluctuations.
To verify these observations we turn to the scalar channel as discussed in section III.1 and take the large BH limit defined in eq. (52) and eq. (53), keeping only the leading term in \((r_{+}/\ell)\). Completing this procedure one finds that the fluctuations \((h_{33},h_{tt},h_{t3},h_{+-},h_{+t},h_{+3},h_{++})\) decouple into three sectors: a scalar channel \((h_{33},h_{tt},h_{t3},h_{+-})\), a vector channel \((h_{+t},h_{+3})\) and a tensor channel \((h_{++})\). Again focusing on the scalar channel, one finds 7 equations, which can be arranged as four dynamical equations and
three constraints. Transforming to the coordinate \(u=(r_{+}/r)^{2}\) (and setting \(\ell=1\) for simplicity), an analysis of residual gauge symmetry leads to a master variable \(Z_{0}(u)\) which can be constructed from the fields \((h_{33},h_{tt},h_{t3},h_{+-})\) as
\[Z_{0}(u) =(2\left(j^{2}\left(a^{2}-z^{4}-1\right)-a^{2}\nu^{2}\left(z^{4}+1 \right)-2aj\nu z^{4}+\nu^{2}\right)h_{+-}(u)\] \[+\left(a^{2}-1\right)\left(j^{2}h_{tt}(u)-4j\nu h_{t3}(u)+4\nu^{ 2}h_{33}(u)\right))\,. \tag{105}\]
In terms of this master variable the scalar channel can be reduced to a single equation
\[0 =\frac{\left(a^{2}\left(4u^{2}-u\nu^{2}-4\right)-2aj\nu-u\left(j^ {2}+4u\right)+4\right)}{a^{2}-1}Z_{0}(u)+ \tag{106}\] \[\qquad\frac{f(u)}{u}\left(3u^{2}-5\right)Z_{0}^{\prime}(u)+\frac {f(u)^{2}}{u^{2}}Z_{0}^{\prime\prime}(u),\]
At this point, as was shown originally in [44], we can demonstrate that the explicit dependence on the rotation parameter \(a\) can be removed by the following Lorentz boost
\[\mathfrak{q} =\frac{a\nu+j}{\sqrt{1-a^{2}}}\,,\qquad\mathfrak{q}=\frac{k}{2 \pi T}, \tag{107}\] \[\mathfrak{w} =\frac{\nu+aj}{\sqrt{1-a^{2}}}\,.\qquad\mathfrak{w}=\frac{\omega }{2\pi T}\,\,\,,\]
after which the resulting \(a\)-independent equation is equivalent to the equation for the master field governing (scalar sector) metric perturbations around a Schwarzschild AdS black brane [46]. Hence, (106) is simply the master equation 29 for scalar fluctuations around a boosted black brane, where the momentum of the fluctuations is aligned with the boost direction.
Figure 13: _Chaos in the sound dispersion._ The complex-valued dispersion relation \(\nu(j)\) for the rotating BH in the large BH limit. Displayed is the imaginary part of the frequency parameter \(\Im(\nu)\) of the perturbation as a function of the imaginary part of its angular momentum \(\Im(j)\). Each color (and different symbol) represents a different value for the angular momentum per mass \(a/\ell\) in incremental steps of \(3/10\). The black line indicates the trajectory of the pole-skipping location as the angular momentum per mass, \(a/\ell\), varies. The cross, \(\mathbf{x}\), highlights the point where one of the pole-skipping points associated to this dispersion relation ceases to be located in the upper half plane, corresponding to \(a/\ell=\sqrt{\frac{2}{3}}\). Adapted from [44].
Having confirmed explicitly that the fluctuation equations in the large BH limit reduce to those of a boosted Schwarzschild black brane, it is then a simple matter to apply eq. (107) to eq. (104) to confirm that it reproduces the pole-skipping points in eq. (103) and eq. (102). Furthermore, in [44] pole-skipping at these locations was verified explicitly by numerically 30 solving for the dispersion relation of the (sound) QNM in the scalar sector of perturbations for each value of \(a/\ell\). These numerical results are illustrated in Fig. 13. The pole-skipping locations are indicated with black symbols in Fig. 13 for increasing values of rotation \(a/\ell\). As noted in [44], the rotation parameter \(a/\ell\) affects both the frequency and wave-vector of the pole-skipping points 31. At the special value \(a/\ell=\sqrt{2/3}=v_{B}^{(0)}\), i.e. when the boost speed equals the conformal butterfly velocity, the pole-skipping point associated with \((\omega,\mathcal{K})=(\omega_{+},ik_{+})\) crosses over into the lower half of the complex frequency plane (highlighted in the figure with a cross).
Footnote 30: This was done using two independent methods from two independently constructed routines, first, from a static Schwarzschild black brane whose modes were then boosted and second, directly from the large BH limit of the rotating BH (for further details see the Appendix of [44]).
Footnote 31: This is in contrast to static planar BHs for which only the wave-vector changes as one varies microscopic parameters at a fixed temperature (e.g. as in the general results of [35, 36] and the Reissner-Nordstrom-AdS geometry of [89])
The fact that the pole-skipping location \((\omega,\mathcal{K})=(\omega_{+},ik_{+})\) enters the lower half complex frequency plane when the boost associated with rotation exceeds \(v_{B}^{(0)}\) has an interesting implication for the associated butterfly velocity. Extracting a butterfly velocity from the pole-skipping locations (or equivalently through the OTOC profile in eq. (91)) via \(\omega_{\pm}/k_{\pm}=v_{B}^{\pm}\), we obtain
\[v_{B}^{\pm}=\frac{v_{B}^{(0)}\mp\frac{a}{\ell}}{1\mp v_{B}^{(0)}\frac{a}{\ell }}\,, \tag{108}\]
which, as expected, is nothing but the relativistic addition of velocities obtained by applying a boost with speed \(a/\ell\) parallel to \(\pm v_{B}^{(0)}\)32. From this formula one can note that the butterfly velocity \(v_{B}^{+}\) vanishes at \(a/\ell=v_{B}^{(0)}\), precisely the value at which the pole-skipping location \((\omega,\mathcal{K})=(\omega_{+},ik_{+})\) passes into the lower half complex frequency plane. This is displayed in Fig. 14 with a cross denoting the crossing
Figure 14: _Butterfly velocity in a rotating plasma._ The butterfly velocities \(v_{B}^{\pm}\) given in (108) are displayed as a function of the rotation parameter \(a/\ell\). The colored dashed lines indicate the conformal value \(v_{B}^{(0)}=\pm\sqrt{2/3}\) whilst the black dashed lines indicate the speed of light. Notice that the“upstream” butterfly velocity \(v_{B}^{+}\) crosses zero at exactly the same value of rotation \(a/\ell=\sqrt{\frac{2}{3}}\) for which the pole-skipping point plotted in Fig. 13 ceases to be located in the upper half complex frequency plane. Adapted from [44].
location. This phenomenon is quite natural: a perturbation traveling upstream against a uniform fluid flow in a conformal theory will appear to an observer at rest to sit still exactly when the perturbation has the speed \(|a/\ell|\). Thus, a perturbation moving at the conformal butterfly speed \(v_{B}^{(0)}\) appears to sit still when the fluid streams with velocity \(|a/\ell|=v_{B}^{(0)}\). When the fluid flows faster, \(|a/\ell|>v_{B}^{(0)}\), this perturbation is dragged downstream. Although natural, this phenomenon could not be observed in previous studies of OTOCs and pole-skipping in rotating BHs. For instance, in the BTZ BH case [79; 81] the butterfly velocity is equal to the speed of light, while in the Kerr-AdS case [82] the authors were restricted to the slowly rotating limit, in which the rotation speed is parametrically smaller than the corresponding conformal butterfly velocity.
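A short sympy check of the statements above, assuming \(v_{B}^{(0)}=\sqrt{2/3}\) and boost velocity \(v=a/\ell\): eq. (108) coincides with the ratio of eq. (92) to eq. (93), and \(v_{B}^{+}\) vanishes exactly at \(a/\ell=\sqrt{2/3}\).

```python
import sympy as sp

a = sp.symbols('a', positive=True)               # a/l, the boost velocity
vB0 = sp.sqrt(sp.Rational(2, 3))                 # conformal butterfly velocity v_B^(0)

vBp = (vB0 - a)/(1 - vB0*a)                      # eq. (108), upper sign
vBm = (vB0 + a)/(1 + vB0*a)                      # eq. (108), lower sign

# Ratio of eq. (92) to eq. (93): v_B^pm = (2*pi*T_pm) / (2*pi*T_pm / v_B^pm)
ratio_p = (1 - sp.sqrt(sp.Rational(3, 2))*a)/(sp.sqrt(sp.Rational(3, 2)) - a)
ratio_m = (1 + sp.sqrt(sp.Rational(3, 2))*a)/(sp.sqrt(sp.Rational(3, 2)) + a)

print(sp.simplify(vBp - ratio_p), sp.simplify(vBm - ratio_m))   # -> 0 0
print(sp.solve(sp.Eq(vBp, 0), a))                               # -> [sqrt(6)/3] = sqrt(2/3)
```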
## IV Outlook
A summary of the fluid results was already given in the introduction, Sec. I.2, and of the gravity results in Sec. I.3. Here, we focus on the outlook:
1. The general case with unequal angular momenta \(J_{\phi}\neq J_{\psi}\) (\(a\neq b\)) or the singly spinning configurations \(J_{\phi}\neq 0,J_{\psi}=0\) and \(J_{\phi}=0,J_{\psi}\neq 0\) could be explored. (a) The singly spinning case contains an axis of rotation, so large temperature expansions around such a state would be interesting. (b) Both cases involve solving 2D partial differential equations in order to find the QNMs.
2. Explore the intermediate horizon radius regime around \(10^{3}<r_{+}<10^{7}\) in which nontrivial hydrodynamic transport is expected (which is not simply that of a boosted fluid, as argued in Sec. III.1.5).
3. What is the physical relevance of the level crossings between non-hydrodynamic modes (see Fig. 5)? Are these crossing points related to the level crossings and mode collisions determining critical points at complexified momentum [90; 40]?
4. Only a subset of fluctuations was explored in this article, namely the ones that reduce in the large BH limit to the fluctuations with linear momentum aligned with the anisotropy created in the fluid due to its rotation. More general fluctuations should be considered, which would for example allow one to compute the perpendicular diffusion coefficient \(\mathcal{D}_{\perp}\).
5. The presence of a linear instability in global AdS Kerr raises the question: what is the dual field-theoretic interpretation of this instability?
6. An implementation of rotation, polarization and spin in hydrodynamic codes is desirable for various applications, ranging from modelling heavy-ion collisions [91], to astrophysical settings with relevance to BH mergers or neutron star mergers [2; 3], to fluids in the context of condensed matter physics, e.g. [92; 93; 94; 95]. The holographic fluid presented in this article can serve as a testing ground for such hydrodynamic codes.
7. It is possible to systematically construct a version of hydrodynamics around a rotating equilibrium state. This state breaks rotation invariance and thus involves a new vector, the unit vector pointing in the symmetry direction, similar to anisotropic hydrodynamics [96; 97; 98]. The fluid/gravity correspondence [99] can be used to systematically derive and test such a description. It can be used to extend the rotating black hole solutions (20) into metric solutions in which the temperature and angular momentum depend on the transverse coordinates and time. This leads to a gradient expansion in the dual field theory, locally correcting the rotating equilibrium state by gradients of temperature and angular momentum, likely modifying the hydrodynamic dispersion relations significantly.
It is worthwhile pointing out the logical possibility that the quark-gluon plasma generated in heavy-ion collisions does _not_ carry any significant vorticity or angular momentum. Note that a recent numerical study of a holographic fluid created in the collision of localized, lumpy shocks (modeling a granular nuclear structure) finds that most of the angular momentum after the collision resides in the periphery, while only a small fraction of the total angular momentum is present in the region representing the holographic QGP [100]. If the same should be true for the QCD QGP generated at RHIC and LHC, then the hyperon polarization cannot be explained by arguments involving any significant amount of vorticity or angular momentum in the QGP. In that case, the hyperon polarization data may be considered an open question lacking theoretical understanding.
## Acknowledgements
C.C., M.A. and M.K. acknowledge the excellent discussions and collaborations with Jorge Noronha, Enrico Speranza, Mike Blake and Anthony Thompson, on which part of this work is based. This research was supported in part by the U.S. Department of Energy under Grant No. DE-SC0012447 and by the National Science Foundation under Grant No. NSF PHY-1748958. C.C. was supported in part, by the Netherlands Organisation for Scientific Research (NWO) under the VICI grant VI.C.202.104. M.K. thanks the Kavli Institute for Theoretical Physics, Santa Barbara for hospitality while part of this work was completed.
## Appendix A QNM Data
For convenience we include the six modes closest to the line \(\Im(\nu)=0\) in the three sectors discussed in section III.1 for \(a/\ell=0,\,1/2,\,9/10\). In each table we fix a value of \(\mathcal{J}=0,1/2,1\) and \(r_{+}=10\); the remaining data can be found in the supplementary material.
### Tensor sector
| \(n\) | \(\Re(\nu)\) (\(a/\ell=0\)) | \(\Im(\nu)\) (\(a/\ell=0\)) | \(\Re(\nu)\) (\(a/\ell=1/2\)) | \(\Im(\nu)\) (\(a/\ell=1/2\)) | \(\Re(\nu)\) (\(a/\ell=9/10\)) | \(\Im(\nu)\) (\(a/\ell=9/10\)) |
|---|---|---|---|---|---|---|
| 1 | 1.57792 | -1.36943 | 1.83839 | -1.73479 | -0.214354 | -0.529707 |
| 2 | -1.57792 | -1.36943 | -1.63407 | -1.73893 | -0.223526 | -1.00114 |
| 3 | 2.60488 | -2.37904 | -0.471547 | -1.94296 | -0.231857 | -1.4879 |
| 4 | -2.60488 | -2.37904 | -2.90868 | -2.98463 | -0.239585 | -1.98766 |
| 5 | -3.61756 | -3.38252 | 3.08986 | -2.98592 | -0.246234 | -2.49843 |
| 6 | 3.61756 | -3.38252 | -4.14395 | -4.21689 | 2.84375 | -2.73946 |

Table 1: \(\mathcal{J}=0\) tensor modes. \(n\) labels the \(n\)th closest mode to the \(\Im(\nu)=0\) axis.
### Vector sector
### Scalar sector
## Appendix B Numerical methods

The QNM equations of motion define an eigenvalue problem that is polynomial in \(\omega\), \(\sum_{n=0}^{R}\omega^{n}M_{n}\Psi_{0}=0\). Introducing the auxiliary fields \(\Psi_{i}=\omega^{i}\Psi_{0}\), \(i=1,\ldots,R-1\), this equation becomes a set of linear equations
\[\omega\left(\sum_{n=1}^{R}M_{n}\Psi_{n-1}\right)=-M_{0}\Psi_{0}\,,\qquad\qquad \Psi_{i}=\omega\Psi_{i-1}\,,\quad i=1,\ldots R-1\,, \tag{108}\]
which is now a linear generalized eigenvalue problem. We work in null coordinates (see Appendix C) to avoid non-regular behavior of the fields at the horizon.
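The rewriting above is the usual companion-matrix linearization of a polynomial eigenvalue problem. As an illustration (with small random matrices standing in for the discretized operators \(M_{n}\), and numpy/scipy in place of Mathematica), the quadratic case \(R=2\) looks as follows:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4                                            # size of the discretized operators (toy value)
M0, M1, M2 = (rng.standard_normal((n, n)) for _ in range(3))

# Quadratic problem (M0 + w*M1 + w^2*M2) psi = 0, linearized with y = (psi, w*psi):
I = np.eye(n)
Z = np.zeros((n, n))
A = np.block([[Z, I], [-M0, -M1]])
B = np.block([[I, Z], [Z, M2]])

w, y = eig(A, B)                                 # linear generalized eigenvalue problem A y = w B y

# Each eigenvalue makes the original quadratic pencil singular:
for wi in w[np.isfinite(w)]:
    print(abs(np.linalg.det(M0 + wi*M1 + wi**2*M2)))   # ~ 0 up to round-off
```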
To proceed, we discretize the resulting equations of motion by decomposing the fields in a truncated Chebyshev basis on a pseudo-spectral grid. Fields in the equations of motion are replaced with vectors of their values at the grid points, the coefficient functions multiplying the fields become diagonal matrices whose diagonal elements are the coefficients evaluated at the grid points, and the partial derivatives turn into pseudo-spectral differentiation matrices. Appropriate boundary conditions also have to be imposed, whereby the dynamical equations are required to be regular at the horizon (the null coordinates ensure this for us) and not sourced at the conformal boundary. Finally we solve the resulting generalized eigenvalue problem defined on the pseudo-spectral grid using the **Eigenvalues** function in Mathematica, which gives the frequencies of the QNMs.
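The actual computations in this work were performed in Mathematica, but the structure of the method is easy to illustrate with a self-contained toy problem in Python: build the Chebyshev differentiation matrix on a Gauss-Lobatto grid, impose boundary conditions, and solve the resulting generalized eigenvalue problem. The eigenproblem below (\(u''=\lambda u\) with \(u(\pm 1)=0\), exact eigenvalues \(\lambda_{n}=-(n\pi/2)^{2}\)) is only a stand-in for the MP fluctuation equations.

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto grid on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi*np.arange(N + 1)/N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0])*(-1.0)**np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0/c)/(dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = D @ D

# Toy problem u'' = lam*u with u(+-1) = 0, written as A u = lam B u.  The boundary rows of B
# are zeroed, so the boundary conditions appear as spurious infinite eigenvalues to discard.
A = D2.copy()
B = np.eye(N + 1)
for i in (0, N):
    A[i, :] = 0.0
    A[i, i] = 1.0
    B[i, :] = 0.0

lam, vec = eig(A, B)
lam = np.sort(lam[np.isfinite(lam)].real)[::-1]
print(lam[:4])                                   # spectrally accurate approximations ...
print([-(n*np.pi/2)**2 for n in range(1, 5)])    # ... of the exact values -(n*pi/2)^2
```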
To ensure convergence, a comparison over several grid resolutions should be performed to filter out spurious modes introduced into the spectrum by the discretization scheme. Physical modes move only slightly when the number of grid points is changed, unlike spurious modes. Moreover, the equations of motion contain additional unused equations, and these function as constraints that have to be satisfied. By using the **Eigensystem** function in Mathematica, eigenvalues and the corresponding eigenvectors can be obtained in one go. The eigenvectors represent the values of the fields at each of the grid points for a given QNM frequency. Inserting the eigenvector and eigenvalue back into the constraint equations gives a measure of the success of the method. We use the following conservative measure:
Let \(S_{i}^{j}\), \(i=1,\ldots,c\), represent the \(c\) constraint equations at each grid point \(j\). Using a Gauss-Lobatto grid of \(N\) points, we compute (recall there are \(m\) fields)
\[\varsigma=\frac{1}{mN}\sum_{i=1}^{c}\sum_{j=1}^{N}|S_{i}^{j}|\,. \tag{109}\]
\(\varsigma\) measures by how much the constraint equations, summed over all constraints, differ from zero on average over all grid points. For the tensor sector, there is no constraint, and we can verify that the QNM problem is solved at least to the level of \(10^{-15}\) by directly putting the solution back into the equation. For the vector sector we keep only modes for which \(\varsigma<10^{-12}\). The scalar sector is more challenging. There, we are able to find solutions for which \(\varsigma\lesssim 10^{-4}\) for \(r_{+}=10\), \(\mathcal{J}=25\), \(a/\ell=1/2\) and \(N=20\). At smaller \(a\) or \(\mathcal{J}\), or near the large BH limit, the convergence improves significantly.33 Work to improve this is underway.
Footnote 33: Two independent codes were constructed for the purpose of computing QNMs in this work. One was initially created for the study [37] and did not feature a verification of the constraint violation. The second was created independently by another author of the present work during the preparation of [43, 44], and it was later improved to measure constraint violation. In this work, primary results were obtained using the first code and then independently verified using the second code.
_Critical points_-- Unfortunately, obtaining roots to eq. (70) is not possible analytically, and numerical methods are required. As discussed in [43], the relevant fields are first discretized in a truncated Chebyshev pseudo-spectral representation of \(N\)th order on a Gauss-Lobatto grid of size \(N\). This means that the differential operator \(M\) is now a dense, \(nN\times nN\) matrix. To make the numerical computation feasible, the matrix \(M\) is next \(LU\)-decomposed, and the spectral curve is then given by
\[P(\nu,j)=\det(M(\nu,j))=\det(LU)=\det(L)\det(U)=\det(U)\,, \tag{100}\]
as \(\det(L)=1\) by construction. This is now computationally much easier to do as \(U\) is an upper triangular matrix whose determinant is simply the product of the diagonal entries. To extract the critical points, a two-dimensional Newton-Raphson root finder can be used to search for the value of \((j,\nu)\) satisfying the two conditions in eq. (70). Doing so requires a definition of the derivative of the numerical determinant \(P\), and this can be done e.g. with finite differences:
\[\frac{\partial P}{\partial\nu}=\frac{\det(U(\nu+\delta,j))-\det(U(\nu,j))}{ \delta}\,, \tag{101}\]
where \(\delta\) is a small number. For the results presented in [43] and discussed here, \(\delta=10^{-15}\).
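As a concrete illustration of this procedure, here is a minimal sketch: `detP` evaluates the spectral curve through an LU factorization as in eq. (100), and `newton_2d` performs a two-dimensional Newton-Raphson search with forward-difference derivatives as in eq. (101). The callable `M` (returning the dense \(nN\times nN\) matrix) is an assumed input, and the pair of conditions is taken here, purely for illustration, to be \(P=0\) together with \(\partial P/\partial\nu=0\); the actual conditions are those of eq. (70).

```python
import numpy as np
from scipy.linalg import lu_factor

def detP(M, nu, j):
    """Spectral curve P(nu, j) = det M(nu, j), evaluated via an LU factorization."""
    lu, piv = lu_factor(M(nu, j))
    sign = (-1.0) ** np.count_nonzero(piv != np.arange(len(piv)))   # row-permutation sign
    # det(U) is the product of the diagonal of U; for very large matrices one would
    # track this on a log scale to avoid over/underflow.
    return sign * np.prod(np.diag(lu))

def newton_2d(F, z0, tol=1e-10, max_iter=50, step=1e-7):
    """Two-dimensional Newton-Raphson with a forward-difference Jacobian.
    F maps (nu, j) to a length-2 residual vector; z0 = (nu0, j0) is the initial guess."""
    z = np.array(z0, dtype=complex)
    for _ in range(max_iter):
        f = np.asarray(F(*z), dtype=complex)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((2, 2), dtype=complex)
        for k in range(2):
            dz = np.zeros(2, dtype=complex); dz[k] = step
            J[:, k] = (np.asarray(F(*(z + dz)), dtype=complex) - f) / step
        z = z - np.linalg.solve(J, f)
    return z

# Hypothetical usage (the operator M must be supplied by the discretization above):
#   M = lambda nu, j: assemble_operator(nu, j)          # dense nN x nN matrix
#   F = lambda nu, j: np.array([detP(M, nu, j),
#                               (detP(M, nu + 1e-7, j) - detP(M, nu, j)) / 1e-7])
#   nu_c, j_c = newton_2d(F, z0=(0.5 - 0.3j, 1.0))
```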
## Appendix C Null coordinates
### Infalling Eddington-Finkelstein coordinates
The calculation of the QNM displayed in section III.1 can be done in any coordinate system. However, a particularly useful coordinate system to choose is one based on a null slicing of the spacetime in which the coordinate induced singularity at the outer horizon radius is removed. The infalling Eddington-Finkelstein like coordinates used in the calculations are defined as follows
\[d\hat{t} =dt+\frac{\sqrt{\beta(r)}}{G(r)}dr\,, \tag{102}\] \[d\hat{\psi} =d\psi+\frac{2h(r)\sqrt{\beta(r)}}{G(r)}dr\,,\] (103) \[d\hat{r} =dr\,, \tag{104}\]
where
\[\beta(r) =1-\frac{a^{2}r_{+}^{4}\left(r_{+}^{2}+L^{2}\right)}{r^{4}\left(a ^{2}r_{+}^{2}+a^{2}L^{2}-r_{+}^{2}L^{2}\right)}\,, \tag{105}\] \[h(r) =\frac{ar_{+}^{4}\left(r_{+}^{2}+L^{2}\right)}{a^{2}\left(r_{+}^ {2}+L^{2}\right)\left(r_{+}^{4}-r^{4}\right)+r^{4}r_{+}^{2}L^{2}}\,. \tag{106}\]
For convenience, we will suppress the hat on \(r\) unless necessary. Making this transformation of coordinates induces a frame transformation
\[\hat{\sigma}^{3}=\sigma^{3}+\frac{2h(r)\sqrt{\beta(r)}}{G(r)}dr\,. \tag{107}\]
Expressed in terms of the null coordinates and the new frame, the metric given in eq. (20) now takes the form
\[ds^{2} =-\frac{G(r)}{\beta(r)}d\hat{t}^{2}+\frac{d\hat{t}dr}{\sqrt{\beta (r)}}+\frac{r^{2}}{4}\left((\sigma^{1})^{2}+(\sigma^{2})^{2}\right) \tag{108}\] \[+\frac{r^{2}}{4}\beta(r)\left(\hat{\sigma}^{3}-2h(r)d\hat{t} \right)^{2}\]
From eq. (102) one can construct the dual vectors as follows
\[\frac{\partial}{\partial\hat{t}} =\frac{\partial}{\partial t} \tag{109}\] \[\frac{\partial}{\partial\hat{\psi}} =\frac{\partial}{\partial\psi}\] \[\frac{\partial}{\partial\hat{r}} =\frac{\partial}{\partial r}-\frac{\sqrt{\beta(r)}}{G(r)}\left( \frac{\partial}{\partial t}+2h(r)\frac{\partial}{\partial\psi}\right)\]
Hence the notation \(d\hat{t}\) and \(d\hat{\psi}\) is justified because their associated vectors' Lie brackets still vanish. Furthermore, the commutation relations in eq. (35) are preserved.
### Kruskal coordinates
A further set of null coordinates, the analogue of Kruskal coordinates, is necessary for the study of the gravitational shockwaves in section III.3. We review the transformation to this coordinate system; the full details can be found in [44]. We begin by moving to the co-rotating angular coordinate \(\tilde{\psi}\) and the associated one-form \(\tilde{\sigma}_{3}\), defined by
\[\tilde{\psi}=\psi-\overline{\Omega}t,\qquad\qquad\qquad\tilde{ \sigma}^{3}=d\tilde{\psi}+\cos(\theta)d\phi, \tag{102}\]
with \(\overline{\Omega}=-2\widetilde{\Omega}\) for \(\widetilde{\Omega}\) defined in eq. (30). In the co-rotating coordinates \((t,r,\tilde{\psi},\phi,\theta)\), the \((t,r)\) part of the metric takes the form
\[ds^{2}_{(t,r)}=-F(r)dt^{2}+\frac{dr^{2}}{G(r)}, \tag{103}\]
where \(F(r)\) is given as
\[F(r)=1+\frac{r^{2}}{L^{2}}-\frac{r_{+}^{2}}{r^{2}}\bigg{(}1+\frac {r_{+}^{2}}{L^{2}}\bigg{)}-\frac{a^{2}\Sigma^{2}(r^{4}-r_{+}^{4})}{L^{4}r^{2} r_{+}^{4}}\,, \tag{104}\]
which vanishes at \(r=r_{+}\) as expected in co-rotating coordinates. In eq. (104) we have introduced a new parameter \(\Sigma\) defined as
\[\Sigma=L^{2}+r_{+}^{2}\,. \tag{105}\]
As is familiar, we can introduce the infalling \((v)\) and outgoing \((u)\) Eddington-Finkelstein coordinates by
\[dv =dt+\frac{dr}{\sqrt{F(r)G(r)}}, \tag{106a}\] \[du =dt-\frac{dr}{\sqrt{F(r)G(r)}}. \tag{106b}\]
Finally, define the Kruskal-like coordinates, \(U\) and \(V\)
\[U=-e^{-\alpha u},\qquad\qquad\qquad V=e^{\alpha v}, \tag{107}\]
where \(\alpha=2\pi T\), for which the metric is then given by
\[ds^{2}=A(UV)dUdV+B(UV)(UdV-VdU)\tilde{\sigma}^{3}+\frac{r^{2}}{4 }((\sigma^{1})^{2}+(\sigma^{2})^{2}+(\tilde{\sigma}^{3})^{2})+\frac{\mu a^{2} }{2r^{2}}(\tilde{\sigma}^{3})^{2}\,. \tag{108}\]
The function \(r=r(UV)\) is defined implicitly through (107) and
\[A(UV)=\frac{F(r)}{\alpha^{2}UV},\qquad B(UV)=\frac{a}{2\alpha UV }\frac{\Sigma(r_{+}^{4}-r^{4})}{L^{2}r^{2}r_{+}^{2}}. \tag{109}\]
The metric (108) is manifestly smooth at both the \(U=0\) and \(V=0\) horizons at which \(r(0)=r_{+}\).
## Appendix D Hydrodynamic dispersion relations and conventions
In this appendix, the hydrodynamic dispersion relation conventions, expected values for transport coefficients and expected scaling powers are presented in the example of the momentum diffusion channel. At vanishing rotation, in the momentum diffusion channel the lowest of the eigenmodes is the hydrodynamic diffusion mode with the dispersion
\[\omega=-i\mathcal{D}q^{2}\,, \tag{110}\]
where the frequency \(\omega\) and momentum \(q\) are chosen to agree with the definitions from [46]. In this case, \(\mathcal{N}=4\) SYM theory has been shown to have \(\mathcal{D}=1/(4\pi T)\)[101, 64]. This diffusion coefficient is furthermore related to the shear viscosity over entropy ratio as follows (via an Einstein relation): \(\mathcal{D}=\eta/(\epsilon+P)=\frac{1}{T}\frac{\eta}{s}=\frac{1}{T}\frac{1}{4\pi}\), where we used that \(\mathcal{N}=4\) SYM has \(\eta/s=1/(4\pi)\) and that \(\epsilon+P=sT\).
In this article, we consider a different frequency \(\nu=\omega/(2\pi T\ell)\) and momentum \(j=q/(2\pi T\ell)\). For the rest of this appendix, we assume \(\ell=1\), as is done in the main text unless stated otherwise. We also consider rotating states, which modify the dispersion of the diffusion mode, so for the longitudinal diffusion mode we write
\[\nu=v_{||}j-i\mathcal{D}_{||}j^{2}+\mathcal{O}(j^{3})\,, \tag{101}\]
with the longitudinal speed \(v_{||}\) of this former diffusion mode and its modified diffusion coefficient \(\mathcal{D}_{||}\). The temperature at vanishing angular momentum \(a=0\) takes the value \(T=r_{+}/\pi\). Therefore, at \(a=0\), we expect
\[\mathcal{D}_{||}(a=0)=2\pi T\mathcal{D}=\frac{1}{2}\,. \tag{102}\]
Let us rewrite the dispersion relation in different frequency and momentum variables:
\[\omega= -i\mathcal{D}q^{2}\,, \tag{103}\] \[\Leftrightarrow 2\pi T\nu= -i\mathcal{D}(2\pi Tj)^{2}\,,\] (104) \[\Leftrightarrow\nu= -i\underbrace{\mathcal{D}(2\pi T)}_{\mathcal{D}_{||}}j^{2}\,. \tag{105}\]
Now, let us write the dispersion relation in terms of the variables \(\omega\) and \(\mathcal{J}\), which appear in the ansatz (51) for the metric fluctuations:
\[\omega= -iD\mathcal{J}^{2}\,, \tag{106}\] \[\Leftrightarrow\omega= -iD(\pi Tj)^{2}\,,\] (107) \[\Leftrightarrow\omega= -iD(\pi T)^{2}(\frac{q}{2\pi T})^{2}=-i\underbrace{\frac{D}{4}}_ {\mathcal{D}}q^{2}\,, \tag{108}\]
such that we expect \(D=1/(\pi T)=1/r_{+}\).
Moving on to the case of \(a\neq 0\), we write
\[\omega= v\mathcal{J}-iD\mathcal{J}^{2}\,, \tag{109}\] \[\Leftrightarrow\omega= -iD(\pi Tj)^{2}\,,\] (110) \[\Leftrightarrow\omega= -iD(\pi T)^{2}(\frac{q}{2\pi T})^{2}=-i\underbrace{\frac{D}{4}}_ {\mathcal{D}}q^{2}\,, \tag{111}\]
where \(D\) and \(v\) now depend on the angular momentum parameter \(a\). In the large BH limit \(r_{+}\to\infty\) (more precisely (52) and (53)), we expect \(D(a)=2\frac{\mathcal{D}_{||}}{\pi T^{2}}=\frac{1}{r_{+}}(1-a^{2})^{3/2}\) and \(v=2v_{||}=2a\)[37], which are the values expected for a fluid boosted with boost parameter \(a\)[65, 66]. Thus, for example, the values given in Table 10 are expected in this limit. This form (109) in terms of \(\omega\) and \(\mathcal{J}\) and allowing general exponents, \(\omega=v\mathcal{J}^{\beta}-iD\mathcal{J}^{\alpha}\), was used to fit \(D\), \(v\), \(\alpha\) and \(\beta\) and the result of that fit is displayed in Fig. 8.
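For concreteness, the following is a minimal sketch of such a fit on synthetic placeholder data (generated from the boosted-fluid values quoted above); it is not the fitting code used for Fig. 8.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, J):
    v, D, alpha, beta = params
    return v * J**beta - 1j * D * J**alpha          # omega = v J^beta - i D J^alpha

def residual(params, J, omega):
    r = model(params, J) - omega
    return np.concatenate([r.real, r.imag])         # least_squares needs real residuals

# Synthetic placeholder data mimicking a boosted diffusion mode with a = 0.5, r_+ = 10.
a, r_plus = 0.5, 10.0
J = np.linspace(0.05, 0.5, 20)
omega_data = 2 * a * J - 1j * (1 - a**2) ** 1.5 / r_plus * J**2

fit = least_squares(residual, x0=[1.0, 0.05, 2.0, 1.0], args=(J, omega_data))
v, D, alpha, beta = fit.x
print(f"v = {v:.3f}, D = {D:.4f}, alpha = {alpha:.3f}, beta = {beta:.3f}")
# For these placeholders one expects v = 2a = 1, D = (1 - a^2)^{3/2}/r_+, alpha = 2, beta = 1.
```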
## Appendix E Limits of the Wigner D function
An important consideration which has not yet been discussed is the relation of the Wigner d functions to Fourier transformations. To begin, we consider the decomposition of the Wigner-\(D\) functions in terms of the Wigner-\(d\) functions 34.
Footnote 34: In this appendix we drop the notation \((\mathcal{J},\mathcal{K},\mathcal{M})\) in favor of \((J,m,k)\)
\[D^{J}_{\ \ mk}(\theta,\phi,\psi)=e^{-i(k\phi+m\psi)}d^{J}_{\ \ mk}(\theta) \tag{112}\]
The explicit form of the Wigner-\(d\) functions can be written using the Wigner formula [102; 103],
\[{d^{J}}_{mk}(\theta)=\sum_{n}(-1)^{n}w_{n}^{Jmk}\cos(\theta/2)^{2J+k-m-2n}(-\sin( \theta/2))^{m-k-2n} \tag{104}\]
where the coefficients \(w\) can be expressed as a complicated fraction of factorials of the indices. Although this expression is complicated, an important observation is that, for each value of the indices, one can reduce this formula by repeated application of the power-reduction trigonometric identities to a combination of \(\cos(\nu\theta)\) (\(\sin(\nu\theta)\)) terms, such that,
\[{d^{J}}_{mk}(\theta)=\sum_{\nu=\nu_{min}}^{\nu_{max}=J}t_{\nu}^{Jmk}f(\nu \theta)\,, \tag{105}\]
where,
\[f=\begin{cases}\cos,&m-k\ \ \text{even}\\ \sin,&m-k\ \ \text{odd}\end{cases} \tag{106}\]
Hence we see that the expansion of the \(d\) function is in terms of \(\cos\) if \(J-k\) is even and \(\sin\) if \(J-k\) is odd. Recalling the large BH limit studied in [37; 43; 44] and defined in eq. (52) and eq. (53) (repeated here for ease of presentation)
\[J=jr_{+}/L,\quad\omega=2\nu r_{+}/L,\quad r\rightarrow\alpha r\quad r_{+} \rightarrow\alpha r_{+} \tag{107}\]
taken around \(\alpha\rightarrow\infty\), along with the fact that the sectors are distinguished by the choice of \(m\) in terms of \(J\), we have \(J\approx m\) in the large BH limit. In the actual calculation all \(k\) dependence of the field equations drops out, and hence we are left with a freedom. Here we fix this freedom by assuming \(k\) even and \(J\in\mathbb{Z}\), and hence,
\[{d^{J}}_{mk}(\theta)=\sum_{\mu=0}^{\mu_{max}=J}t_{\mu}^{Jmk}\cos(\mu\theta) \tag{108}\]
From orthogonality,
\[t_{\mu}^{Jmk}=\frac{1}{2\pi(1+\delta_{\mu 0})}\int_{0}^{4\pi}\mathrm{d} \theta d_{mk}^{J}(\theta)f(\mu\theta),\qquad\delta_{\mu 0}=\begin{cases}1&\text{for }\mu=0\\ 0&\text{else}\end{cases} \tag{109}\]
one sees that \(t_{\mu}^{Jmk}=t_{-\mu}^{Jmk}\) and hence one can rewrite eq. (108) as,
\[{d^{J}}_{mk}(\theta)=\frac{1}{2}\sum_{\mu=-J}^{J}t_{\mu}^{Jmk}e^{I\mu\theta} \tag{110}\]
Therefore, the Wigner \(d\) functions can be written as Fourier sine and cosine series [103].
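The following is a small numerical sketch of eqs. (108)-(109): the Wigner small-\(d\) function is implemented directly from the standard factorial (Wigner) formula, whose sign conventions vary between references, the coefficients \(t_{\mu}\) are extracted by quadrature of the orthogonality relation, and the finite cosine series is checked against the original function for an example with \(m-k\) even.

```python
import numpy as np
from math import factorial, sqrt

def wigner_d(J, m, k, theta):
    """Wigner small-d function d^J_{mk}(theta) from the standard factorial formula."""
    pref = sqrt(factorial(J + m) * factorial(J - m) * factorial(J + k) * factorial(J - k))
    out = 0.0
    for s in range(max(0, k - m), min(J + k, J - m) + 1):
        denom = (factorial(J + k - s) * factorial(s)
                 * factorial(m - k + s) * factorial(J - m - s))
        out = out + ((-1) ** (m - k + s) / denom
                     * np.cos(theta / 2) ** (2 * J + k - m - 2 * s)
                     * np.sin(theta / 2) ** (m - k + 2 * s))
    return pref * out

# Example with m - k even, so the expansion is a pure cosine series as in eq. (108).
J, m, k = 4, 2, 0
theta = np.linspace(0.0, 4 * np.pi, 4001)
d_vals = wigner_d(J, m, k, theta)

# Coefficients from the orthogonality relation (109), by numerical quadrature.
t = np.array([np.trapz(d_vals * np.cos(mu * theta), theta) / (2 * np.pi * (1 + (mu == 0)))
              for mu in range(J + 1)])

# Reconstruction check: the finite cosine series of degree J reproduces d^J_{mk}.
recon = sum(t[mu] * np.cos(mu * theta) for mu in range(J + 1))
print(np.max(np.abs(recon - d_vals)))   # limited only by the quadrature error
```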
Now, let us diverge from [103] and investigate briefly what happens to this Fourier representation in the large BH limit, keeping in mind that in this limit \(J\approx m\). We have the following expression,
\[{d^{\frac{j\alpha r_{+}}{L}}}_{\frac{j\alpha r_{+}}{L}k}(\theta)=\frac{1}{2} \sum_{\mu=-J}^{\frac{j\alpha r_{+}}{L}}\frac{\Delta\mu}{2\pi}\frac{2\pi}{ \Delta\mu}t_{\mu}^{\frac{j\alpha r_{+}}{L}\frac{j\alpha r_{+}}{L}k}e^{I\mu\theta} \tag{111}\]
where we have cleverly inserted a factor of 1 in the summand. If we formally extend the range of \(\theta\) to \((-\infty,\infty)\) and define,
\[T_{k}(\mu)=\lim_{\Delta\mu\to 0}\lim_{\alpha\to\infty}\frac{2\pi}{\Delta\mu}t_{\mu}^{\frac{j\alpha r_{+}}{L}\frac{j\alpha r_{+}}{L}k} \tag{102}\]
while using the definition from calculus that,
\[\lim_{\Delta\mu\to 0}\lim_{\alpha\to\infty}\sum_{\mu=-\frac{j\alpha r_{+}}{L}}^{\frac{j\alpha r_{+}}{L}}\frac{\Delta\mu}{2\pi}=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}\mu \tag{103}\]
we find,
\[\lim_{\Delta\mu\to 0}\lim_{\alpha\to\infty}{d^{\frac{j\alpha r_{+}}{L}}}_{\frac{j\alpha r_{+}}{L}k}(\theta)=\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathrm{d}\mu\,T_{k}(\mu)e^{I\mu\theta} \tag{104}\]
and hence the Wigner \(d\) function can be interpreted as a Fourier transformation. In these equations \(\theta\) should now be thought of as a coordinate along the real line \(\mathbb{R}\).
|
2302.05870 | Three-dimensional exponential sums under constant perturbation | In this paper, by generalizing Bombieri and Iwaniec's double large sieve
inequality, we obtain an estimation on a type of three-dimensional exponential
sums with constant perturbation. As an application of our estimation, we obtain
an asymptotic formula for a sum involving the Mangoldt function and the
integral part function. This type of sum was originally studied by
Bordelles-Dai-Heyman-Pan-Shparlinski. | Jiamin Li, Jing Ma | 2023-02-12T07:59:11Z | http://arxiv.org/abs/2302.05870v1 | # Three-dimensional exponential sums under constant perturbation
###### Abstract.
By generalizing Bombieri and Iwaniec's double large sieve inequality, we obtain an estimate for three-dimensional exponential sums with constant perturbation. As an application of this estimate, we show that for any \(\epsilon>0\),
\[\sum_{n\leqslant x}\Lambda\biggl{(}\biggl{[}\frac{x}{n}\biggr{]}\biggr{)}=x \sum_{d\geqslant 1}\frac{\Lambda(d)}{d(d+1)}+O_{\epsilon}\bigl{(}x^{\frac{17}{36}+ \epsilon}\bigr{)}\]
as \(x\to\infty\). This improves a recent result of Liu-Wu-Yang, which requires \(9/19\) in place of \(17/36\).
Key words and phrases:Exponential sum, Constant perturbation, Double large sieve inequality
## 1. Introduction
The exponential sums of type
\[\sum_{n_{1}}\cdots\sum_{n_{j}}e(xn_{1}^{\alpha_{1}}\cdots n_{j}^{\alpha_{j}}), \tag{1.1}\]
where \(n_{1},\ldots,n_{j}\) range over integers from an interval, have been widely studied and applied in analytic number theory. In this paper, we consider the following three-dimensional exponential sums
\[S_{\delta}(H,M,N):=\sum_{h\sim H}\sum_{m\sim M}\sum_{n\sim N}a(h,m)b(n)e \biggl{(}X\frac{M^{\beta}N^{\gamma}}{H^{\alpha}}\frac{h^{\alpha}}{m^{\beta}n^{ \gamma}+\delta}\biggr{)}, \tag{1.2}\]
where \(a\sim A\) indicates \(A<a\leqslant 2A\), \(e(t)=e^{2\pi it}\), \(H,M,N\) are positive integers, \(X>1\) is a real number, \(a(h,m)\) and \(b(n)\) are complex numbers such that \(|a(h,m)|,|b(n)|\leqslant 1\), and \(\alpha,\beta,\gamma\in\mathbb{R}\) are fixed positive numbers. Note that \(\delta>0\) is the constant perturbation we are concerned with.
Reviewing the history, the aforementioned exponential sum (1.2) with \(\delta=0\) has been widely studied and applied. Applying Bombieri and Iwaniec's double large sieve inequality [1], Fouvry and Iwaniec [4, Theorem 3] proved that for \(\alpha\beta(\beta+1)\gamma\neq 0\),
\[S_{0}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\biggl{(}\biggl{(}\frac{X}{HMN^{ 2}}\biggr{)}^{\frac{1}{4}}+\frac{1}{N^{\frac{3}{10}}}+\frac{1}{(HM)^{\frac{1 }{4}}}+\frac{N^{\frac{1}{10}}}{X^{\frac{1}{4}}}\biggr{)}. \tag{1.3}\]
Later, Sargos and Wu [14, Theorem 7] proved that for \(\alpha(\alpha-1)(\alpha-2)\beta\gamma\neq 0\),
\[S_{0}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\biggl{(}\biggl{(}\frac{X^{4}}{H ^{4}M^{4}N^{11}}\biggr{)}^{\frac{1}{26}}+\biggl{(}\frac{X}{HMN^{2}}\biggr{)}^{ \frac{1}{4}}+\frac{1}{N^{\frac{7}{18}}}+\frac{1}{(HM)^{\frac{1}{4}}}+\frac{1 }{X^{\frac{1}{2}}}\biggr{)}.\]
Robert and Sargos [13, Theorem 1] proved that
\[S_{0}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\bigg{(}\bigg{(}\frac{X}{HMN^{2}} \bigg{)}^{\frac{1}{4}}+\frac{1}{(HM)^{\frac{1}{4}}}+\frac{1}{N^{\frac{1}{2}}}+ \frac{1}{X^{\frac{1}{2}}}\bigg{)}. \tag{1.4}\]
In [7], Liu, Wu and Yang studied the exponential sums of type (1.2) with constant perturbation \(\delta\) and used them to prove an asymptotic formula for a sum involving the Mangoldt function. In Proposition 2.2, we generalize Bombieri and Iwaniec's double large sieve inequality by generalizing one of the constant sequences to a sequence of functions. This generalization, combined with the three lemmas we obtain on Diophantine problems, finally enables us to prove the following theorem.
**Theorem 1.1**.: _Let \(H,M,N\) be positive integers, \(X>1\) be a real number, \(a(h,m)\), \(b(n)\) be complex numbers such that \(|a(h,m)|,|b(n)|\leqslant 1\), and \(\alpha,\beta,\gamma>0\) be fixed. Let \(\delta>0\) be a constant perturbation. If \(X\leqslant(8\delta)^{-1}KM^{\beta}N^{\gamma}\) for some \(K\geqslant 1\), then_
\[S_{\delta}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\bigg{(}\bigg{(}\frac{KX}{ HMN^{2}}\bigg{)}^{\frac{1}{4}}+\Big{(}\frac{K^{2}}{HM}\bigg{)}^{\frac{1}{4}}+ \Big{(}\frac{K}{N}\Big{)}^{\frac{1}{2}}+\frac{K}{X^{\frac{1}{2}}}\bigg{)}. \tag{1.5}\]
Note that [13, Theorem 1] implies (1.5) for \(\delta=0\).
As an application of Theorem 1.1, we consider the asymptotic behaviour of the following sum
\[S_{\Lambda}(x):=\sum_{n\leqslant x}\Lambda\bigg{(}\bigg{[}\frac{x}{n}\bigg{]} \bigg{)}\]
as \(x\to\infty\), where \(\Lambda\) is the Mangoldt function. This type of sum is a generalization of the Dirichlet divisor problem, and its study was initiated in Bordelles-Dai-Heyman-Pan-Shparlinski [3]. Many authors have studied this type of sum for different arithmetic functions; see [6, 8, 9, 10, 12, 15, 17, 18, 19, 20]. For the Mangoldt function, Bordelles-Dai-Heyman-Pan-Shparlinski's result [3] implies
\[S_{\Lambda}(x)=x\sum_{d\geqslant 1}\frac{\Lambda(d)}{d(d+1)}+O_{\epsilon}(x^{ 2/3+\epsilon}). \tag{1.6}\]
Wu [18] reduced the exponent in (1.6) to \(1/2\). With the help of the Vaughan identity and the method of one-dimensional exponential sums, Ma and Wu [11] broke the \(1/2\)-barrier and reduced the exponent in (1.6) to \(35/71\). Later, using a result of Baker on two-dimensional exponential sums, Bordelles [2] sharpened the exponent in (1.6) to \(97/203\). Recently, using the method of multiple exponential sums, Liu-Wu-Yang [7] improved the exponent in (1.6) to \(9/19\). Applying the estimate for three-dimensional exponential sums with constant perturbation obtained in Theorem 1.1, we are able to reduce the exponent in (1.6) to \(17/36\).
**Theorem 1.2**.: _For any \(\epsilon>0\), we have_
\[\sum_{n\leqslant x}\Lambda\bigg{(}\bigg{[}\frac{x}{n}\bigg{]}\bigg{)}=x\sum_{d \geqslant 1}\frac{\Lambda(d)}{d(d+1)}+O_{\epsilon}\big{(}x^{\frac{17}{36}+ \epsilon}\big{)}\qquad(x\to\infty).\]
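As a quick numerical illustration of the shape of this formula (it cannot, of course, test the exponent \(17/36\)), the following sketch compares the left-hand side with the main term for a moderate value of \(x\), truncating the series at \(d\leqslant x\).

```python
import numpy as np

def mangoldt(N):
    """Lambda(n) for n = 0, ..., N: log p if n = p^k is a prime power, else 0."""
    lam = np.zeros(N + 1)
    is_prime = np.ones(N + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, N + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            q = p
            while q <= N:
                lam[q] = np.log(p)
                q *= p
    return lam

x = 10**6
lam = mangoldt(x)
n = np.arange(1, x + 1)
lhs = lam[x // n].sum()                          # sum_{n <= x} Lambda([x/n])
main = x * np.sum(lam[1:] / (n * (n + 1.0)))     # x * sum_{d <= x} Lambda(d)/(d(d+1))
print(lhs, main, lhs - main)                     # the difference should be o(x)
```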
## 2. A generalization of Bombieri and Iwaniec's double large sieve inequality
The exponential sums (1.1) can be regraded as a special case of bilinear forms
\[\mathscr{B}(\mathscr{X},\mathscr{Y}):=\sum_{x_{r}\in\mathscr{X}}\sum_{y_{s}\in \mathscr{Y}}a(r)b(s)e(x_{r}y_{s}),\]
where \(\mathscr{X}\), \(\mathscr{Y}\) are two sets of real numbers with \(|x_{r}|\leqslant X\), \(|y_{s}|\leqslant Y\), and \(a(r)\) and \(b(s)\) are complex numbers for \(x_{r}\in\mathscr{X}\), \(y_{s}\in\mathscr{Y}\).
To estimate the exponential sums with perturbation, we need to consider the following more general bilinear forms
\[\mathscr{B}(a,b;\mathscr{X},\mathscr{Y}):=\underset{\varphi\in\mathscr{X}}{ \sum}\underset{y\in\mathscr{Y}}{\sum}a(\varphi)b(y)e(\varphi(y)y),\]
where \(a(\varphi),b(y)\in\mathbb{C}\) for \(\varphi\in\mathscr{X}\) and \(y\in\mathscr{Y}\) respectively; \(\mathscr{Y}\) is a set of real numbers with \(|y|\leqslant Y\) for each \(y\in\mathscr{Y}\), \(\mathscr{X}\) is a set of real-valued functions \(\varphi\in C([-Y,Y],\mathbb{R})\) with \(|\varphi|\leqslant X\). Here we have used the notation
\[|\varphi-\varphi^{*}|:=\underset{y_{1},y_{2}\in[-Y,Y]}{\sup}\,|\varphi(y_{1})- \varphi^{*}(y_{2})|,\]
in particular, \(|\varphi-c|:=\underset{y\in[-Y,Y]}{\sup}\,|\varphi(y)-c|\) for constant \(c\in\mathbb{R}\) and \(|\varphi|:=\underset{y\in[-Y,Y]}{\sup}\,|\varphi(y)|\).
The following lemma is a direct corollary of [1, Lemma 2.3].
**Lemma 2.1**.: _Let \(\mathscr{Y}\) and \(b(y)\) be as above. Then we have_
\[\int_{-T}^{T}\bigg{|}\sum_{y\in\mathscr{Y}}b(y)e(yt)\bigg{|}^{2}dt\leqslant(2 T+\eta^{-1})\underset{\begin{subarray}{c}y\in\mathscr{Y}\\ |y-y^{*}|\leqslant\eta\end{subarray}}{\sum}|b(y)b(y^{*})|\]
_for any positive numbers \(\eta\) and \(T\)._
Using this lemma, we will generalize Bombieri and Iwaniec's double large sieve inequality and get the following proposition, which will be applied to prove Theorem 1.1.
**Proposition 2.2**.: _Let \(\mathscr{X}\), \(\mathscr{Y}\), \(X\), \(Y\), \(a(\varphi)\), \(b(y)\) be as above, and \(K\geqslant 1\). If_
\[\underset{y_{1},y_{2}\in[-Y,Y]}{\sup}\,|\varphi(y_{1})-\varphi(y_{2})|<\frac{ K}{4Y} \tag{2.1}\]
_for any \(\varphi\in\mathscr{X}\), then_
\[|\mathscr{B}(a,b;\mathscr{X},\mathscr{Y})|^{2}\ll(1+KXY)\mathscr{B}(b; \mathscr{X})\mathscr{B}(a;\mathscr{Y}), \tag{2.2}\]
_where_
\[\mathscr{B}(b;\mathscr{X}):=\underset{\begin{subarray}{c}y\in\mathscr{Y}\\ |y-y^{*}|\leqslant X^{-1}\end{subarray}}{\sum}|b(y)b(y^{*})|,\qquad\mathscr{B} (a;\mathscr{Y}):=\underset{\begin{subarray}{c}\varphi\in\mathscr{X}\\ |\varphi-\varphi^{*}|\leqslant KY^{-1}\end{subarray}}{\sum}|a(\varphi)a( \varphi^{*})|.\]
Proof.: Put \(\epsilon:=(4Y)^{-1}\), \(T:=X+\epsilon\), and
\[\omega(y):=\frac{\pi y}{\sin(2\pi\epsilon y)},\qquad b^{*}(y):=b(y)\omega(y)\]
for \(y\in\mathscr{Y}\). Then
\[|\omega(y)|\leqslant\pi Y \tag{2.3}\]
for \(|y|\leqslant Y\). For \(|t|\leqslant T\) and \(\varphi\in\mathscr{X}\), if there is a \(y\in\mathscr{Y}\) such that \(|\varphi(y)-t|\leqslant\epsilon\), then (2.1) implies \(|\varphi-t|\leqslant K/(2Y)\). Thus, using
\[e(\varphi(y)y)=\frac{\pi y}{\sin(2\pi\epsilon y)}\int_{\varphi(y)-\epsilon}^{ \varphi(y)+\epsilon}e(ty)dt,\]
we get
\[\begin{split}|\mathscr{B}(a,b;\mathscr{X},\mathscr{Y})|& =\left|\sum_{\varphi\in\mathscr{X}}\!a(\varphi)\!\!\sum_{y\in \mathscr{Y}}\!b^{*}(y)\int_{\varphi(y)-\epsilon}^{\varphi(y)+\epsilon}e(ty)dt \right|\\ &=\left|\int_{-T}^{T}\sum_{\begin{subarray}{c}\varphi\in\mathscr{ X}\\ \exists\,y\in\mathscr{Y}\text{ such that }|\varphi(y)-t|\leqslant\epsilon \end{subarray}}a(\varphi)\sum_{\begin{subarray}{c}y\in\mathscr{Y}\\ |\varphi(y)-t|\leqslant\epsilon\end{subarray}}b^{*}(y)e(ty)dt\right|\\ &\leqslant\int_{-T}^{T}\sum_{\begin{subarray}{c}\varphi\in \mathscr{X}\\ |\varphi-t|\leqslant\frac{K}{2Y}\end{subarray}}|a(\varphi)|\sum_{\begin{subarray} {c}y\in\mathscr{Y}\\ |\varphi(y)-t|\leqslant\epsilon\end{subarray}}b^{*}(y)e(ty)\Bigg{|}dt\\ &\leqslant\left(\int_{-T}^{T}\Big{(}\sum_{\begin{subarray}{c} \varphi\in\mathscr{X}\\ |\varphi-t|\leqslant\frac{K}{2Y}\end{subarray}}|a(\varphi)|\Big{)}^{2}dt \right)^{\frac{1}{2}}\Bigg{(}\int_{-T}^{T}\Big{|}\sum_{y\in\mathscr{Y}_{0}}b^ {*}(y)e(ty)\Big{|}^{2}dt\Bigg{)}^{\frac{1}{2}}\end{split} \tag{2.4}\]
by Cauchy's inequality, where \(\mathscr{Y}^{\prime}\) runs over all the subsets of \(\mathscr{Y}\) and \(\mathscr{Y}_{0}\) is a subset of \(\mathscr{Y}\) such that the absolute value in the second factor is maximal. For the first factor, we have
\[\begin{split}\int_{-T}^{T}\bigg{(}\sum_{\begin{subarray}{c} \varphi\in\mathscr{X}\\ |\varphi-t|\leqslant\frac{K}{2Y}\end{subarray}}|a(\varphi)|\bigg{)}^{2}dt& \leqslant\int_{-T}^{T}\sum_{\begin{subarray}{c}\varphi\in \mathscr{X}\\ |\varphi-\varphi^{*}|\leqslant\frac{K}{Y}\end{subarray}}\sum_{\varphi^{*}\in \mathscr{X}}|a(\varphi)a(\varphi^{*})|dt\\ &\leqslant 4K\epsilon\sum_{\begin{subarray}{c}\varphi\in \mathscr{X}\\ |\varphi-\varphi^{*}|\leqslant\frac{K}{Y}\end{subarray}}\sum_{\varphi^{*}\in \mathscr{X}}|a(\varphi)a(\varphi^{*})|.\end{split}\]
For the second factor, using Lemma 2.1 and (2.3) we get
\[\int_{-T}^{T}\Big{|}\sum_{y\in\mathscr{Y}_{0}}b^{*}(y)e(ty)\Big{|}^{2}dt\leqslant(2T+\eta^{-1})(\pi Y)^{2}\sum_{\begin{subarray}{c}y,y^{*}\in\mathscr{Y}\\ |y-y^{*}|\leqslant\eta\end{subarray}}|b(y)b(y^{*})|\]
for any \(\eta>0\). Taking \(\eta=X^{-1}\) and inserting these two estimates into (2.4) yields (2.2), which completes the proof.
## 3. Proof of Theorem 1.1
To prove Theorem 1.1, we will use Proposition 2.2 and the following lemmas on Diophantine problems.
### Some diophantine problems
The first one is a direct corollary of [4, Lemma 1].
**Lemma 3.1**.: _Let \(\alpha,\beta\in\mathbb{R}^{*}\) be fixed. For \(H\), \(M\geqslant 1\) and \(X>1\), define_
\[\mathscr{B}_{1}:=\Big{|}\Big{\{}(h_{1},h_{2},m_{1},m_{2})\ \Big{|}\ h_{i}\sim H,m_{i} \sim M,i=1,2,\Big{|}\frac{h_{1}^{\alpha}m_{1}^{\beta}}{H^{\alpha}M^{\beta}}- \frac{h_{2}^{\alpha}m_{2}^{\beta}}{H^{\alpha}M^{\beta}}\Big{|}\leqslant\frac{ 1}{X}\Big{\}}\Big{|}.\]
_Then_
\[\mathscr{B}_{1}\ll_{\epsilon}(HM)^{2+\epsilon}\bigg{(}\frac{1}{HM}+\frac{1}{X }\bigg{)}.\]
The second one is [13, Theorem 2] which will be used in the proof of our Lemma 3.3.
**Lemma 3.2**.: _Let \(\beta\in\mathbb{R}^{*}\), \(\beta\neq 1\), be fixed. For \(N\geqslant 2\) and \(X>0\), define_
\[\mathscr{B}_{0}:=\Big{|}\Big{\{}(n_{1},n_{2},n_{3},n_{4})\ \Big{|}\ n_{i}\sim N,i=1,2,3,4,\ \Big{|}\frac{n_{1}^{\beta}}{N^{\beta}}+\frac{n_{2}^{\beta}}{N^{\beta}}-\frac{ n_{3}^{\beta}}{N^{\beta}}-\frac{n_{4}^{\beta}}{N^{\beta}}\Big{|}\leqslant\frac{1}{X} \Big{\}}\Big{|}.\]
_Then_
\[\mathscr{B}_{0}\ll_{\epsilon}N^{4+\epsilon}\bigg{(}\frac{1}{N^{2}}+\frac{1}{X }\bigg{)}.\]
Define
\[\varphi_{n_{s},n_{t}}(m):=\frac{N^{\gamma}}{n_{s}^{\gamma}+\mu(m)}-\frac{N^{ \gamma}}{n_{t}^{\gamma}+\mu(m)},\qquad\psi_{n}(m):=\frac{N^{\gamma}}{n^{ \gamma}+\nu(m)}, \tag{3.1}\]
and write
\[|\varphi_{n_{1},n_{2}}-\varphi_{n_{3},n_{4}}|:=\sup_{m_{1},m_{2}\sim M}| \varphi_{n_{1},n_{2}}(m_{1})-\varphi_{n_{3},n_{4}}(m_{2})|,\]
\[|\psi_{n_{1}}-\psi_{n_{2}}|:=\sup_{m_{1},m_{2}\sim M}|\psi_{n_{1}}(m_{1})-\psi _{n_{1}}(m_{2})|,\]
where \(0<\mu(m)<U\leqslant 1\) and \(0<\nu(m)<V\leqslant 1\) for \(m\sim M\).
Applying Lemma 3.2 we get the following lemma.
**Lemma 3.3**.: _Let \(\gamma>0\) be fixed, \(M,N,X\geqslant 1\) and \(\varphi_{n_{s},n_{t}}\) be defined in (3.1). Define_
\[\mathscr{B}_{2}:=|\{(n_{1},n_{2},n_{3},n_{4})\ |\ n_{i}\sim N,i=1,2,3,4,| \varphi_{n_{1},n_{2}}-\varphi_{n_{3},n_{4}}|\leqslant X^{-1}\}|.\]
_If \(X\leqslant U^{-1}N^{\gamma}\), then_
\[\mathscr{B}_{2}\ll_{\epsilon}N^{4+\epsilon}\bigg{(}\frac{1}{N^{2}}+\frac{1}{X }\bigg{)}.\]
Proof.: The condition in \(\mathscr{B}_{2}\) is equivalent to
\[\sup_{m_{i}\sim M,n_{i}\sim N}|S_{1}Q_{1}-S_{2}Q_{2}|\leqslant\frac{1}{X}, \tag{3.2}\]
where
\[S_{1}:=S(n_{1},n_{2}):=\frac{N^{\gamma}}{n_{1}^{\gamma}}-\frac{N^{\gamma}}{n_{ 2}^{\gamma}},\qquad Q_{1}:=Q(n_{1},n_{2},m_{1}):=\frac{n_{1}^{\gamma}n_{2}^{ \gamma}}{(n_{1}^{\gamma}+\mu(m_{1}))(n_{2}^{\gamma}+\mu(m_{1}))},\]
\[S_{2}:=S(n_{3},n_{4}):=\frac{N^{\gamma}}{n_{3}^{\gamma}}-\frac{N^{\gamma}}{n_{4}^{ \gamma}},\qquad Q_{2}:=Q(n_{3},n_{4},m_{2}):=\frac{n_{3}^{\gamma}n_{4}^{\gamma}}{ (n_{3}^{\gamma}+\mu(m_{2}))(n_{4}^{\gamma}+\mu(m_{2}))}\]
temporarily in this proof. It is easy to see that
\[0<1/4\leqslant(1+UN^{-\gamma})^{-2}\leqslant Q_{i}<1 \tag{3.3}\]
for \(i=1,2\) and
\[\sup_{m_{1}\sim M}|S_{1}Q_{1}|=\sup_{m_{1}\sim M}\frac{|n_{2}^{\gamma}-n_{1}^ {\gamma}|N^{\gamma}}{(n_{1}^{\gamma}+\mu(m_{1}))(n_{2}^{\gamma}+\mu(m_{1}))} \leqslant 1. \tag{3.4}\]
Using the simple identity \(S_{1}-S_{2}=(S_{1}Q_{1}-S_{2}Q_{2})Q_{2}^{-1}+S_{1}Q_{1}(Q_{1}^{-1}-Q_{2}^{-1})\) and noticing that \(Q_{1},Q_{2}>0\), we get
\[\sup_{n_{i}\sim N}|S_{1}-S_{2}|\leqslant\sup|S_{1}Q_{1}-S_{2}Q_{2}|(\inf Q_{2} )^{-1}+\sup|S_{1}Q_{1}|\big{(}(\inf Q_{1})^{-1}-(\sup Q_{2})^{-1}\big{)}.\]
Thus, if \(n_{i}\sim N\), \(i=1,2,3,4\), satisfy the condition in \(\mathscr{B}_{2}\), then (3.2), (3.3), (3.4) and \(X\leqslant U^{-1}N^{\gamma}\) imply
\[\sup_{n_{i}\sim N}|S_{1}-S_{2}|\leqslant X^{-1}(1+UN^{-\gamma})^{2}+\big{(}(1 +UN^{-\gamma})^{2}-1\big{)}\leqslant 7X^{-1},\]
which is a weaker condition than (3.2). Finally, Lemma 3.2 completes the proof.
**Lemma 3.4**.: _Let \(\gamma>0\) be fixed, \(M,N,X\geqslant 1\) and \(\psi_{n}\) be defined in (3.1). Define_
\[\mathscr{B}_{3}:=\big{|}\{(n_{1},n_{2})\ |\ n_{i}\sim N,i=1,2,|\psi_{n_{1}}-\psi_{n_{2}}|\leqslant X^{-1}\}\big{|}.\]
_If \(X\leqslant V^{-1}N^{\gamma}\), then_
\[\mathscr{B}_{3}\ll N^{2}\bigg{(}\frac{1}{N}+\frac{1}{X}\bigg{)}.\]
Proof.: For \(i=1,2\), write
\[S_{i}:=S(n_{i}):=\frac{N^{\gamma}}{n_{i}^{\gamma}},\qquad Q_{i}:=Q(n_{i},m_{i}) :=\frac{n_{i}^{\gamma}}{n_{i}^{\gamma}+\nu(m_{i})}\]
temporarily in this proof. Then the condition in \(\mathscr{B}_{3}\) is equivalent to
\[\sup_{m_{i}\sim M,n_{i}\sim N}|S_{1}Q_{1}-S_{2}Q_{2}|\leqslant X^{-1}. \tag{3.5}\]
An argument similar to the one used in the proof of Lemma 3.3 shows that \(|S_{1}-S_{2}|\leqslant 3/X\), if (3.5) is satisfied. Thus our problem reduces to the following diophantine problem
\[\sum_{\begin{subarray}{c}n_{1},n_{2}\sim N\\ |(n_{1}/N)^{-\gamma}-(n_{2}/N)^{-\gamma}|\leqslant 3/X\end{subarray}}1.\]
Using the Lagrange mean value theorem, we have
\[2^{-\gamma-1}\gamma\frac{|n_{1}-n_{2}|}{N}\leqslant\Big{|}\Big{(}\frac{n_{1}} {N}\Big{)}^{-\gamma}-\Big{(}\frac{n_{2}}{N}\Big{)}^{-\gamma}\Big{|}\leqslant \gamma\frac{|n_{1}-n_{2}|}{N}.\]
Thus, there are \(O(1+N/X)\) eligible \(n_{2}\) for each fixed \(n_{1}\), which completes the proof.
### Proof of Theorem 1.1
We write the exponential sum in (1.2) as
\[S_{\delta}:=S_{\delta}(H,M,N)=\sum_{h\sim H}\sum_{m\sim M}a(h,m)\sum_{n\sim N}b(n )e\left(X\frac{H^{-\alpha}M^{\beta}}{h^{-\alpha}m^{\beta}}\frac{N^{\gamma}}{n^{ \gamma}+\delta m^{-\beta}}\right). \tag{3.6}\]
Firstly, suppose \(X\gg HM\). By Cauchy's inequality, we have
\[\begin{split}|S_{\delta}|^{2}&\leqslant HM\sum_{h \sim H}\sum_{m\sim M}\left|\sum_{n\sim N}b(n)e\left(X\frac{H^{-\alpha}M^{\beta }}{h^{-\alpha}m^{\beta}}\frac{N^{\gamma}}{n^{\gamma}+\delta m^{-\beta}}\right) \right|^{2}\\ &\leqslant HM\sum_{h\sim H}\sum_{m\sim M}\sum_{n_{1}\sim N}\sum_{n _{2}\sim N}b(n_{1})\overline{b(n_{2})}e\bigg{(}X\frac{H^{-\alpha}M^{\beta}}{h^ {-\alpha}m^{\beta}}\varphi_{n_{1},n_{2}}(m)\bigg{)},\end{split} \tag{3.7}\]
where \(\varphi_{n_{1},n_{2}}(m)\) is defined in (3.1) with \(\mu(m)=\delta m^{-\beta}.\) Set
\[\mathscr{X}=\left\{\varphi_{n_{1},n_{2}}\mid n_{1},n_{2}\sim N\right\},\qquad \mathscr{Y}=\left\{X\frac{H^{-\alpha}M^{\beta}}{h^{-\alpha}m^{\beta}}\ \bigg{|}\ h\sim H,m\sim M\right\}.\]
It is easy to see that \(\sup\mathscr{X}\leqslant 1\) and \(\sup\mathscr{Y}\leqslant 2^{\alpha}X\). Note that for \(m_{1},m_{2}\sim M\),
\[\begin{split}|\varphi_{n_{1},n_{2}}(m_{1})-\varphi_{n_{1},n_{2}} (m_{2})|&=\left|\frac{N^{\gamma}}{n_{1}^{\gamma}+\mu(m_{1})}- \frac{N^{\gamma}}{n_{2}^{\gamma}+\mu(m_{1})}-\frac{N^{\gamma}}{n_{1}^{\gamma}+ \mu(m_{2})}+\frac{N^{\gamma}}{n_{2}^{\gamma}+\mu(m_{2})}\right|\\ &\leqslant\left|\frac{N^{\gamma}}{n_{1}^{\gamma}+\mu(m_{1})}- \frac{N^{\gamma}}{n_{1}^{\gamma}+\mu(m_{2})}\right|+\left|\frac{N^{\gamma}}{n_ {2}^{\gamma}+\mu(m_{1})}-\frac{N^{\gamma}}{n_{2}^{\gamma}+\mu(m_{2})}\right| \\ &\leqslant\frac{N^{\gamma}|\mu(m_{2})-\mu(m_{1})|}{n_{1}^{2\gamma }}+\frac{N^{\gamma}|\mu(m_{2})-\mu(m_{1})|}{n_{2}^{2\gamma}}\\ &<\frac{2\delta M^{-\beta}}{N^{\gamma}}\leqslant\frac{K}{4X}, \end{split}\]
since \(X\leqslant(8\delta)^{-1}KM^{\beta}N^{\gamma}\). So we can apply Proposition 2.2 to (3.7) to get
\[|S_{\delta}|^{2}\ll HMK^{\frac{1}{2}}X^{\frac{1}{2}}\mathscr{B}_{1}^{\frac{1} {2}}\mathscr{B}_{2}^{\frac{1}{2}}, \tag{3.8}\]
where \(\mathscr{B}_{1}\) and \(\mathscr{B}_{2}\) are as in Lemma 3.1 and Lemma 3.3. Applying Lemma 3.1 and Lemma 3.3 with \(U=\delta M^{-\beta}\), we get
\[\mathscr{B}_{1}\ll_{\epsilon}(HM)^{2+\epsilon}\frac{1}{HM},\qquad\mathscr{B} _{2}\ll_{\epsilon}N^{4+\epsilon}\left(\frac{1}{N^{2}}+\frac{K}{X}\right),\]
since \(X\gg HM\). Inserting this into (3.8), we get
\[S_{\delta}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\bigg{(}\Big{(}\frac{KX}{ HMN^{2}}\Big{)}^{\frac{1}{4}}+\Big{(}\frac{K^{2}}{HM}\Big{)}^{\frac{1}{4}} \bigg{)} \tag{3.9}\]
which implies (1.5) for \(X\gg HM\).
Now, we suppose \(X\ll HM\). We will directly apply the generalized double large sieve inequality we get in Proposition 2.2 to (3.6). Set
\[\mathscr{X}=\{\psi_{n}\mid n\sim N\},\qquad\mathscr{Y}=\left\{X\frac{H^{- \alpha}M^{\beta}}{h^{-\alpha}m^{\beta}}\ \bigg{|}\ h\sim H,m\sim M\right\},\]
where \(\psi_{n}\) is defined in (3.1) with \(\nu(m)=\delta m^{-\beta}.\) It is easy to see that \(\sup\mathscr{X}\leqslant 1\) and \(\sup\mathscr{Y}\leqslant 2^{\alpha}X\). Note that for \(m_{1},m_{2}\sim M\),
\[|\psi_{n}(m_{1})-\psi_{n}(m_{2})|\leqslant\frac{N^{\gamma}|\nu(m_{2})-\nu(m_{1} )|}{n^{2\gamma}}<\frac{2\delta M^{-\beta}}{N^{\gamma}}\leqslant\frac{K}{4X},\]
since \(X\leqslant(8\delta)^{-1}KM^{\beta}N^{\gamma}\). Then (2.1) is satisfied so that we can apply Proposition 2.2 to (3.6) to get
\[S_{\delta}\ll K^{\frac{1}{2}}X^{\frac{1}{2}}\mathscr{B}_{1}^{\frac{1}{2}} \mathscr{B}_{3}^{\frac{1}{2}}, \tag{3.10}\]
where \(\mathscr{B}_{1}\) and \(\mathscr{B}_{3}\) are as in Lemma 3.1 and Lemma 3.4. Applying Lemma 3.1 and Lemma 3.4 with \(V=\delta M^{-\beta}\), we get
\[\mathscr{B}_{1}\ll_{\epsilon}(HM)^{2+\epsilon}\frac{1}{X},\qquad\mathscr{B}_{ 3}\ll N^{2}\left(\frac{1}{N}+\frac{K}{X}\right),\]
since \(X\ll HM\). Therefore,
\[S_{\delta}(H,M,N)\ll_{\epsilon}(HMN)^{1+\epsilon}\bigg{(}\Big{(}\frac{K}{N} \Big{)}^{\frac{1}{2}}+\frac{K}{X^{\frac{1}{2}}}\bigg{)}. \tag{3.11}\]
Combining (3.9) and (3.11), we get Theorem 1.1.
## 4. Proof of Theorem 1.2
To prove Theorem 1.2, we need to cite some useful lemmas and then prove a key inequality in Proposition 4.5.
### Preliminary lemmas
In this subsection, we shall cite three lemmas, which will be needed in the proof of Proposition 4.5. The first one is a direct corollary of [7, Proposition 3.1].
**Lemma 4.1**.: _Let \(\alpha,\beta,\gamma>0\) and \(\delta\in\mathbb{R}\) be some constants. For \(X>0\) and \(H,M,N\geqslant 1\), define_
\[S_{\delta}=S_{\delta}(H,M,N):=\sum_{h\sim H}\sum_{m\sim M}\sum_{n\sim N}a_{h,m}b_{n}e\bigg{(}X\frac{M^{\beta}N^{\gamma}}{H^{\alpha}}\frac{h^{\alpha}}{m^{\beta}n^{\gamma}+\delta}\bigg{)},\]
_where \(a_{h,m},b_{n}\in\mathbb{C}\) such that \(|a_{h,m}|,|b_{n}|\leqslant 1\). For any \(\epsilon>0\), we have_
\[S_{\delta}\ll((X^{\kappa}H^{2+\kappa}M^{2+\kappa}N^{1+\kappa+\lambda})^{1/(2 +2\kappa)}+HMN^{1/2}+(HM)^{1/2}N+X^{-1/2}HMN)X^{\epsilon} \tag{4.1}\]
_uniformly for \(H\leqslant M^{\beta-1}N^{\gamma}\) and \(0\leqslant\delta\leqslant\epsilon^{-1}\), where \((\kappa,\lambda)\) is an exponent pair and the implied constant depends on \((\alpha,\beta,\gamma,\epsilon)\) only._
The second one is due to Vaughan ([16, (3)]).
**Lemma 4.2**.: _There are six real arithmetical functions \(\alpha_{k}(n)\) verifying \(|\alpha_{k}(n)|\ll_{\epsilon}n^{\epsilon}\) for \(n>1\)\((1\leqslant k\leqslant 6)\) such that, for all \(D>100\) and any arithmetical function \(g\), we have_
\[\sum_{D<d\leqslant 2D}\Lambda(d)g(d)=S_{1}+S_{2}+S_{3}+S_{4}, \tag{4.2}\]
_where_
\[S_{1} :=\sum_{m\leqslant D^{1/3}}\alpha_{1}(m)\sum_{D<mn\leqslant 2D}g(mn),\] \[S_{2} :=\sum_{m\leqslant D^{1/3}}\alpha_{2}(m)\sum_{D<mn\leqslant 2D}g(mn)\log n,\] \[S_{3} :=\sum_{\begin{subarray}{c}D^{1/3}<m,n\leqslant D^{2/3}\\ D<mn\leqslant 2D\end{subarray}}\alpha_{3}(m)\alpha_{4}(n)g(mn),\] \[S_{4} :=\sum_{\begin{subarray}{c}D^{1/3}<m,n\leqslant D^{2/3}\\ D<mn\leqslant 2D\end{subarray}}\alpha_{5}(m)\alpha_{6}(n)g(mn).\]
_The sums \(S_{1}\) and \(S_{2}\) are called as type I, \(S_{3}\) and \(S_{4}\) are called as type II._
The third one is due to Vaaler ( [5, Theorem A.6]).
**Lemma 4.3**.: _Let \(\psi(t)=\{t\}-\frac{1}{2}\), where \(\{t\}\) means the fractional part of the real number \(t\). For \(x\geqslant 1\) and \(H\geqslant 1\), we have_
\[\psi(x)=-\sum_{1\leqslant|h|\leqslant H}\Phi\bigg{(}\frac{h}{H+1}\bigg{)}\frac {e(hx)}{2\pi ih}+R_{H}(x), \tag{4.3}\]
_where \(\Phi(t):=\pi t(1-|t|)\cot(\pi t)+|t|\), and the error term \(R_{H}(x)\) satisfies_
\[|R_{H}(x)|\leqslant\frac{1}{2H+2}\sum_{0\leqslant|h|\leqslant H}\bigg{(}1- \frac{|h|}{H+1}\bigg{)}e(hx). \tag{4.4}\]
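A small numerical sketch of (4.3)-(4.4) is given below: the truncated sum is written as a real sine series (pairing \(h\) and \(-h\)), and the remainder is compared against the nonnegative Fejér-type bound on the right-hand side of (4.4) at a few sample points.

```python
import numpy as np

def psi(x):
    return x - np.floor(x) - 0.5

def vaaler_sum(x, H):
    """-sum_{1 <= |h| <= H} Phi(h/(H+1)) e(hx)/(2 pi i h), written as a real sine series."""
    h = np.arange(1, H + 1)
    t = h / (H + 1)
    Phi = np.pi * t * (1 - t) / np.tan(np.pi * t) + t
    return -np.sum(Phi * np.sin(2 * np.pi * h * x) / (np.pi * h))

def fejer_bound(x, H):
    """Right-hand side of (4.4): a nonnegative Fejer-kernel average."""
    h = np.arange(1, H + 1)
    return (1 + 2 * np.sum((1 - h / (H + 1)) * np.cos(2 * np.pi * h * x))) / (2 * H + 2)

H = 50
for x in np.linspace(0.013, 0.987, 7):
    R = psi(x) - vaaler_sum(x, H)                 # remainder R_H(x) in (4.3)
    print(f"x = {x:.3f}   |R_H| = {abs(R):.2e}   bound = {fejer_bound(x, H):.2e}")
```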
### Two key inequalities
Denote
\[\mathfrak{S}_{\delta}(x,D):=\sum_{D<d\leqslant 2D}\Lambda(d)\psi\bigg{(}\frac{x }{d+\delta}\bigg{)}, \tag{4.5}\]
where \(\delta\) is a non-negative constant. For \(\mathfrak{S}_{\delta}(x,D)\), [7, Proposition 4.1] gives the following bound which will be used in the proof of Theorem 1.2 in the case that \(D\) is small.
**Lemma 4.4**.: _Let \(\delta\geqslant 0\) be a constant. Then_
\[\mathfrak{S}_{\delta}(x,D)\ll_{\epsilon}(x^{2}D^{7})^{1/12}x^{\epsilon} \tag{4.6}\]
_uniformly for \(x\geqslant 3\) and \(x^{6/13}\leqslant D\leqslant x^{2/3}\)._
We still need another bound of \(\mathfrak{S}_{\delta}(x,D)\) in the proof of Theorem 1.2 in the case that \(D\) is big.
**Proposition 4.5**.: _Let \(\delta\geqslant 0\) be a constant. Then_
\[\mathfrak{S}_{\delta}(x,D)\ll_{\epsilon}(D^{17/19}+x^{1/6}D^{329/570})x^{\epsilon} \tag{4.7}\]
_uniformly for \(x\geqslant 3\) and \(x^{11/21}\leqslant D\leqslant x^{3/4}\)._
Proof.: We apply the Vaughan identity (4.2) with \(g(d)=\psi\bigg{(}\dfrac{x}{d+\delta}\bigg{)}\) to write
\[\mathfrak{S}_{\delta}(x,D)=\mathfrak{S}_{\delta,1}+\mathfrak{S}_{\delta,2}+ \mathfrak{S}_{\delta,3}+\mathfrak{S}_{\delta,4}, \tag{4.8}\]
where
\[\mathfrak{S}_{\delta,1} :=\sum_{m\leqslant D^{1/3}}\alpha_{1}(m)\sum_{D<mn\leqslant 2D}\psi\bigg{(}\dfrac{x}{mn+\delta}\bigg{)},\] \[\mathfrak{S}_{\delta,2} :=\sum_{m\leqslant D^{1/3}}\alpha_{2}(m)\sum_{D<mn\leqslant 2D}\psi\bigg{(}\dfrac{x}{mn+\delta}\bigg{)}\log n,\] \[\mathfrak{S}_{\delta,3} :=\sum_{\begin{subarray}{c}D^{1/3}<m,n\leqslant D^{2/3}\\ D<mn\leqslant 2D\end{subarray}}\alpha_{3}(m)\alpha_{4}(n)\psi\bigg{(}\dfrac{x}{mn+\delta}\bigg{)},\] \[\mathfrak{S}_{\delta,4} :=\sum_{\begin{subarray}{c}D^{1/3}<m,n\leqslant D^{2/3}\\ D<mn\leqslant 2D\end{subarray}}\alpha_{5}(m)\alpha_{6}(n)\psi\bigg{(}\dfrac{x}{mn+\delta}\bigg{)}.\]
A. Estimation of \(\mathfrak{S}_{\delta,3}\) and \(\mathfrak{S}_{\delta,4}\).
Using Lemma 4.3 and splitting the interval of summation into dyadic intervals, we can write
\[\begin{split}\mathfrak{S}_{\delta,3}=&-\dfrac{1}{2 \pi i}\sum_{H^{\prime}}\sum_{M}\sum_{N}(\mathfrak{S}_{\delta,3}^{\flat}(H^{ \prime},M,N)+\overline{\mathfrak{S}_{\delta,3}^{\flat}(H^{\prime},M,N)})\\ &+\sum_{M}\sum_{N}\mathfrak{S}_{\delta,3}^{\dagger}(M,N)\end{split} \tag{4.9}\]
with
\[\mathfrak{S}_{\delta,3}^{\flat}(H^{\prime},M,N):=\dfrac{1}{H^{\prime}}\sum_{h \sim H^{\prime}}a_{h}\sum_{\begin{subarray}{c}m\sim M\\ D<mn\leqslant 2D\end{subarray}}\hskip-14.226378pt\sum_{\begin{subarray}{c}n\sim N \\ D<mn\leqslant 2D\end{subarray}}\hskip-14.226378pt\alpha_{3}(m)\alpha_{4}(n)e \bigg{(}\dfrac{hx}{mn+\delta}\bigg{)},\]
\[\mathfrak{S}_{\delta,3}^{\dagger}(M,N):=\sum_{\begin{subarray}{c}m\sim M\\ D<mn\leqslant 2D\end{subarray}}\hskip-14.226378pt\sum_{\begin{subarray}{c}n\sim N \\ D<mn\leqslant 2D\end{subarray}}\hskip-14.226378pt\alpha_{3}(m)\alpha_{4}(n)R_{H} \bigg{(}\dfrac{x}{mn+\delta}\bigg{)},\]
where \(1\leqslant H^{\prime}\leqslant H\leqslant D^{2/19}\), \(a_{h}:=\frac{H^{\prime}}{h}\Phi(\frac{h}{H+1})\ll 1\), \(MN\asymp D\), and \(D^{1/3}\leqslant M\leqslant D^{1/2}\leqslant N\leqslant D^{2/3}\) in view of the symmetry of the variables \(m\) and \(n\).
Firstly, we bound \(\mathfrak{S}_{\delta,3}^{\flat}(H^{\prime},M,N)\). We can remove the multiplicative condition \(D<mn\leqslant 2D\) at the cost of a factor \(\log x\) to get
\[\mathfrak{S}_{\delta,3}^{\flat}(H^{\prime},M,N)\ll\dfrac{x^{\epsilon}}{H^{ \prime}}\sum_{h\sim H^{\prime}}\sum_{m\sim M}\sum_{n\sim N}a_{h}\frac{\alpha_ {3}(m)}{M^{\epsilon}}\frac{\alpha_{4}(n)}{N^{\epsilon}}e\bigg{(}\dfrac{hx}{mn +\delta}\bigg{)}. \tag{4.10}\]
Let \(K=D^{7/380}\) and \(1/3<\theta<1/2\) be a parameter to be chosen later. For the case \(D^{\theta}\leqslant M\leqslant D^{1/2}\leqslant N\leqslant D^{1-\theta}\), note that \(H\leqslant KD^{13/380}\) and \(x^{11/21}\leqslant D\) imply that
\(xH^{\prime}/MN=o(DK)\), then we can apply Theorem 1.1 with \(\alpha=\beta=\gamma=1\), \(K=D^{7/380}\), \((X,H,M,N)=(xH^{\prime}/MN,H^{\prime},M,N)\) to get
\[\mathfrak{S}^{\flat}_{\delta,3}(H^{\prime},M,N) \ll\big{(}x^{\frac{1}{4}}K^{\frac{1}{4}}M^{\frac{1}{2}}N^{\frac{ 1}{4}}+H^{\prime-\frac{1}{4}}K^{\frac{1}{2}}M^{\frac{3}{4}}N+K^{\frac{1}{2}}MN ^{\frac{1}{2}}+x^{-\frac{1}{2}}KH^{\prime-\frac{1}{2}}M^{\frac{3}{2}}N^{\frac{ 3}{2}}\big{)}x^{\epsilon} \tag{4.11}\] \[\ll\big{(}x^{1/4}D^{3/8}K^{1/4}+D^{1-\theta/4}K^{1/2}+D^{3/4}K^{1/ 2}+x^{-1/2}D^{3/2}K\big{)}x^{\epsilon},\]
where we have used \(H^{\prime}\geqslant 1\), \(MN\asymp D\) and \(D^{\theta}\leqslant M\leqslant D^{1/2}\leqslant N\leqslant D^{1-\theta}\). For the other case \(D^{1/3}\leqslant M\leqslant D^{\theta}\leqslant D^{1-\theta}\leqslant N\leqslant D ^{2/3}\), apply Lemma 4.1 with \(\alpha=\beta=\gamma=1\), \((X,H,M,N)=(xH^{\prime}/MN,H^{\prime},M,N)\) and \((\kappa,\lambda)=(1/2,1/2)\) to (4.10) to get
\[\begin{split}\mathfrak{S}^{\flat}_{\delta,3}(H^{\prime},M,N)& \ll\big{(}x^{1/6}M^{2/3}N^{1/2}+MN^{1/2}+M^{1/2}N+x^{-1/2}D^{3/2 }\big{)}x^{\epsilon}\\ &\ll\big{(}x^{1/6}D^{1/2+\theta/6}+D^{5/6}\big{)}x^{\epsilon}, \end{split} \tag{4.12}\]
where we have used \(M\leqslant D^{\theta}\), \(N\leqslant D^{2/3}\), \(H^{\prime}\geqslant 1\), \(D\leqslant x^{3/4}\). Summarize (4.11) and (4.12), we get
\[\mathfrak{S}^{\flat}_{\delta,3}(H^{\prime},M,N)\ll\big{(}x^{\frac{1}{4}}D^{ \frac{3}{8}}K^{\frac{1}{4}}+D^{1-\frac{\theta}{4}}K^{\frac{1}{2}}+x^{\frac{1} {6}}D^{\frac{1}{2}+\frac{\theta}{6}}+D^{\frac{8}{9}}\big{)}x^{\epsilon}, \tag{4.13}\]
where we have used \(1\leqslant x^{11/21}\leqslant D\leqslant x^{3/4}\).
Secondly, we bound \(\mathfrak{S}^{\dagger}_{\delta,3}(M,N)\). Using (4.4), we have
\[\begin{split}\mathfrak{S}^{\dagger}_{\delta,3}(M,N)& \ll\frac{x^{\epsilon}}{H}\bigg{(}MN+\sum_{1\leqslant|h|\leqslant H }\sum_{m\sim M}\sum_{n\sim N}\bigg{(}1-\frac{|h|}{H+1}\bigg{)}e\bigg{(}\frac{ hx}{mn+\delta}\bigg{)}\bigg{)}\\ &\ll\frac{x^{\epsilon}}{H}\Big{(}MN+\max_{1\leqslant H^{\prime} \leqslant H}\big{|}\tilde{\mathfrak{S}}^{\dagger}_{\delta,3}(H^{\prime},M,N) \big{|}\Big{)},\end{split}\]
where
\[\tilde{\mathfrak{S}}^{\dagger}_{\delta,3}(H^{\prime},M,N):=\sum_{h\sim H^{ \prime}}\sum_{m\sim M}\sum_{n\sim N}\bigg{(}1-\frac{|h|}{H+1}\bigg{)}e\bigg{(} \frac{hx}{mn+\delta}\bigg{)}.\]
Clearly we can bound \(\tilde{\mathfrak{S}}^{\dagger}_{\delta,3}(H^{\prime},M,N)\) in the same way as \(\mathfrak{S}^{\flat}_{\delta,3}(H^{\prime},M,N)\) and get
\[\mathfrak{S}^{\dagger}_{\delta,3}(M,N)\ll\big{(}DH^{-1}+x^{\frac{1}{4}}D^{ \frac{3}{8}}K^{\frac{1}{4}}+D^{1-\frac{\theta}{4}}K^{\frac{1}{2}}+x^{\frac{1} {6}}D^{\frac{1}{2}+\frac{\theta}{6}}+D^{\frac{8}{9}}\big{)}x^{\epsilon}. \tag{4.14}\]
Inserting (4.13) and (4.14) into (4.9) gives
\[\mathfrak{S}_{\delta,3}\ll\big{(}DH^{-1}+x^{\frac{1}{4}}D^{\frac{3}{8}}K^{ \frac{1}{4}}+D^{1-\frac{\theta}{4}}K^{\frac{1}{2}}+x^{\frac{1}{6}}D^{\frac{1}{2 }+\frac{\theta}{6}}+D^{\frac{8}{9}}\big{)}x^{\epsilon}.\]
Taking \(H=D^{2/19}\), \(K=D^{7/380}\) and \(\theta=44/95\) gives
\[\mathfrak{S}_{\delta,3}\ll\big{(}D^{17/19}+x^{1/6}D^{329/570}\big{)}x^{\epsilon}, \tag{4.15}\]
where we have used \(1\leqslant x^{11/21}\leqslant D\leqslant x^{3/4}\).
Clearly the same estimate also holds for \(\mathfrak{S}_{\delta,4}\).
B. Estimation of \(\mathfrak{S}_{\delta,1}\) and \(\mathfrak{S}_{\delta,2}\).
A simple partial summation allows us to treat \(\mathfrak{S}_{\delta,1}\) and \(\mathfrak{S}_{\delta,2}\) in the same way, so we shall bound only \(\mathfrak{S}_{\delta,1}\). Lemma 4.3 allows us to write
\[\mathfrak{S}_{\delta,1}=-\frac{1}{2\pi\mathbb{i}}\big{(}\mathfrak{S}^{\flat}_{ \delta,1}+\overline{\mathfrak{S}^{\flat}_{\delta,1}}\big{)}+\mathfrak{S}^{ \dagger}_{\delta,1}, \tag{4.16}\]
where
\[\mathfrak{S}_{\delta,1}^{\flat} :=\sum_{m\leqslant D^{1/3}}\alpha_{1}(m)\sum_{D<mn\leqslant 2D} \sum_{1\leqslant l\leqslant L}\frac{1}{l}\Phi\Big{(}\frac{l}{L+1}\Big{)} \mathrm{e}\Big{(}\frac{lx}{mn+\delta}\Big{)},\] \[\mathfrak{S}_{\delta,1}^{\dagger} :=\sum_{m\leqslant D^{1/3}}\alpha_{1}(m)\sum_{D<mn\leqslant 2D}R_{L }\Big{(}\frac{x}{mn+\delta}\Big{)},\]
and \(L\geqslant 1\). Applying the exponent pair \((\kappa^{\prime},\lambda^{\prime})\) to the sum over \(n\), we can derive
\[\begin{split}\mathfrak{S}_{\delta,1}^{\flat}& \ll x^{\varepsilon}\sum_{m\leqslant D^{1/3}}\sum_{1\leqslant l \leqslant L}\frac{1}{l}\bigg{(}\Big{(}\frac{lx}{(D^{2}/m)}\Big{)}^{\kappa^{ \prime}}\Big{(}\frac{D}{m}\Big{)}^{\lambda^{\prime}}+\frac{(D^{2}/m)}{lx} \bigg{)}\\ &\ll x^{\varepsilon}\sum_{m\leqslant D^{1/3}}\big{(}x^{\kappa^{ \prime}}D^{-2\kappa^{\prime}+\lambda^{\prime}}m^{\kappa^{\prime}-\lambda^{ \prime}}L^{\kappa^{\prime}}+x^{-1}D^{2}m^{-1}\big{)}\\ &\ll\big{(}x^{\kappa^{\prime}}D^{(-5\kappa^{\prime}+2\lambda^{ \prime}+1)/3}L^{\kappa^{\prime}}+x^{-1}D^{2}\big{)}x^{\varepsilon}\end{split} \tag{4.17}\]
for all \(L\geqslant 1\). To bound \(\mathfrak{S}_{\delta,1}^{\dagger}\), apply (4.4) we get
\[\big{|}\mathfrak{S}_{\delta,1}^{\dagger}\big{|}\ll x^{\varepsilon}\sum_{m \leqslant D^{1/3}}\sum_{D<mn\leqslant 2D}\Big{|}R_{L}\Big{(}\frac{x}{mn+ \delta}\Big{)}\Big{|}\ll\big{(}DL^{-1}+\big{|}\widetilde{\mathfrak{S}}_{ \delta,1}^{\dagger}\big{|}\big{)}x^{\varepsilon},\]
where
\[\widetilde{\mathfrak{S}}_{\delta,1}^{\dagger}:=\frac{1}{L}\sum_{m\leqslant D^{ 1/3}}\sum_{D<mn\leqslant 2D}\sum_{1\leqslant|l|\leqslant L}\Big{(}1- \frac{|l|}{L+1}\Big{)}\mathrm{e}\Big{(}\frac{lx}{mn+\delta}\Big{)}.\]
Clearly we can treat \(\widetilde{\mathfrak{S}}_{\delta,1}^{\dagger}\) in the same way as \(\mathfrak{S}_{\delta,1}^{\flat}\) and obtain
\[\mathfrak{S}_{\delta,1}^{\dagger}\ll\big{(}DL^{-1}+x^{\kappa^{\prime}}D^{(-5 \kappa^{\prime}+2\lambda^{\prime}+1)/3}L^{\kappa^{\prime}}+x^{-1}D^{2}\big{)}x ^{\varepsilon} \tag{4.18}\]
for \(L\geqslant 1\). Inserting (4.17) and (4.18) into (4.16), it follows that
\[\mathfrak{S}_{\delta,1}\ll\big{(}DL^{-1}+x^{\kappa^{\prime}}D^{(-5\kappa^{ \prime}+2\lambda^{\prime}+1)/3}L^{\kappa^{\prime}}+x^{-1}D^{2}\big{)}x^{\varepsilon}\]
for \(L\geqslant 1\). Optimising \(L\) over \([1,\infty)\), we get
\[\mathfrak{S}_{\delta,1}\ll\big{(}(x^{3\kappa^{\prime}}D^{-2\kappa^{\prime}+2 \lambda^{\prime}+1})^{1/(3\kappa^{\prime}+3)}+x^{\kappa^{\prime}}D^{(-5\kappa ^{\prime}+2\lambda^{\prime}+1)/3}+x^{-1}D^{2}\big{)}x^{\varepsilon}\]
The same estimate also holds for \(\mathfrak{S}_{\delta,2}\). Taking \((\kappa^{\prime},\lambda^{\prime})=(\frac{1}{2},\frac{1}{2})\) implies
\[\mathfrak{S}_{\delta,j}\ll\big{(}x^{1/3}D^{2/9}+x^{1/2}D^{-1/6}+x^{-1}D^{2} \big{)}x^{\epsilon}\ll D^{17/19} \tag{4.19}\]
for \(j=1,2\), since \(1\leqslant x^{11/21}\leqslant D\leqslant x^{3/4}\).
Inserting (4.19) and (4.15) into (4.8) completes the proof.
### Proof of Theorem 1.2
Let \(E\in[x^{8/17},x^{1/2}]\) be a parameter to be chosen later. Write
\[\sum_{n\leqslant x}\Lambda\bigg{(}\bigg{[}\frac{x}{n}\bigg{]}\bigg{)}:=S_{1}(x )+S_{2}(x) \tag{4.20}\]
with
\[S_{1}(x):=\sum_{n\leqslant E}\Lambda\bigg{(}\bigg{[}\frac{x}{n}\bigg{]}\bigg{)},\qquad S_{2}(x):=\sum_{E<n\leqslant x}\Lambda\bigg{(}\bigg{[}\frac{x}{n} \bigg{]}\bigg{)}.\]
**Acknowledgements**. This work is supported in part by the National Natural Science Foundation of China (Grant No. 11771252).
|
2305.10564 | Counterfactually Comparing Abstaining Classifiers | Abstaining classifiers have the option to abstain from making predictions on
inputs that they are unsure about. These classifiers are becoming increasingly
popular in high-stakes decision-making problems, as they can withhold uncertain
predictions to improve their reliability and safety. When evaluating black-box
abstaining classifier(s), however, we lack a principled approach that accounts
for what the classifier would have predicted on its abstentions. These missing
predictions matter when they can eventually be utilized, either directly or as
a backup option in a failure mode. In this paper, we introduce a novel approach
and perspective to the problem of evaluating and comparing abstaining
classifiers by treating abstentions as missing data. Our evaluation approach is
centered around defining the counterfactual score of an abstaining classifier,
defined as the expected performance of the classifier had it not been allowed
to abstain. We specify the conditions under which the counterfactual score is
identifiable: if the abstentions are stochastic, and if the evaluation data is
independent of the training data (ensuring that the predictions are missing at
random), then the score is identifiable. Note that, if abstentions are
deterministic, then the score is unidentifiable because the classifier can
perform arbitrarily poorly on its abstentions. Leveraging tools from
observational causal inference, we then develop nonparametric and doubly robust
methods to efficiently estimate this quantity under identification. Our
approach is examined in both simulated and real data experiments. | Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas | 2023-05-17T20:46:57Z | http://arxiv.org/abs/2305.10564v2 | # Counterfactually Comparing Abstaining Classifiers
###### Abstract
Abstaining classifiers have the option to abstain from making predictions on inputs that they are unsure about. These classifiers are becoming increasingly popular in high-stakes decision-making problems, as they can withhold uncertain predictions to improve their reliability and safety. When _evaluating_ black-box abstaining classifier(s), however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions. These missing predictions are crucial when, e.g., a radiologist is unsure of their diagnosis or when a driver is inattentive in a self-driving car. In this paper, we introduce a novel approach and perspective to the problem of evaluating and comparing abstaining classifiers by treating abstentions as _missing data_. Our evaluation approach is centered around defining the _counterfactual score_ of an abstaining classifier, defined as the expected performance of the classifier had it not been allowed to abstain. We specify the conditions under which the counterfactual score is identifiable: if the abstentions are stochastic, and if the evaluation data is independent of the training data (ensuring that the predictions are _missing at random_), then the score is identifiable. Note that, if abstentions are deterministic, then the score is unidentifiable because the classifier can perform arbitrarily poorly on its abstentions. Leveraging tools from observational causal inference, we then develop nonparametric and doubly robust methods to efficiently estimate this quantity under identification. Our approach is examined in both simulated and real data experiments.
## 1 Introduction
Abstaining classifiers (Chow, 1957; El-Yaniv and Wiener, 2010), also known as selective classifiers or classifiers with a reject option, are classifiers that have the option to abstain from making predictions on certain inputs. As their use continues to grow in safety-critical applications, such as medical imaging and autonomous driving, it is natural to ask how a practitioner should _evaluate and compare_ the predictive performance of abstaining classifiers under black-box access to their decisions.
In this paper, we introduce the _counterfactual score_ as a new evaluation metric for black-box abstaining classifiers. The counterfactual score is defined as the expected score of an abstaining classifier's predictions, _had it not been allowed to abstain_. This score is of intrinsic importance when the potential predictions on abstaining inputs are relevant. We proceed with an illustrative example:
**Example 1.1** (Free-trial ML APIs).: Suppose that we compare different image classification APIs. Each API has two versions: a free version that abstains, and a paid one that does not. Before paying for the full service, the user can query the free version for up to \(n\) sample predictions on a user-provided dataset, although it may choose to reject any input that it deems as requiring the paid service. Given two such APIs, how can the practitioner determine which of the two paid (non-abstaining) versions would be better on the population data source, given their abstaining predictions on a sample?
Example 1.1 exhibits why a user of a black-box abstaining classifier would be interested in its counterfactual score. Although this is a hypothetical example, we can imagine variants of popular application settings in which the counterfactual score is relevant. Specifically, these are settings in which the hidden prediction of an abstaining classifier is meaningful for future uses, or it may be utilized as a backup option in a failure mode. We include three additional examples in Appendix A.1.
To formally define, identify, and estimate the counterfactual score, we cast the evaluation problem in Rubin (1976)'s missing data framework and treat abstutions as _missing predictions_. This novel viewpoint directly yields nonparametric methods for estimating the counterfactual score of an abstaining classifier, drawing upon methods for causal inference in observational studies (Rubin, 1974; Robins et al., 1994; van der Vaart, 2000), and represents an interesting yet previously unutilized theoretical connection between selective classification, model evaluation, and causal inference.
The identifiability of the counterfactual score is guaranteed under two standard assumptions: the missing at random (MAR) condition, which is satisfied as long as the evaluation data is independent of the classifier (or its training data), and the positivity condition, both of which are provably unavoidable. We later discuss each condition in detail, including when the positivity condition is met and how a policy-level approach may be necessary for safety-critical applications.
The counterfactual score can be viewed as an alternative to the selective score (mean score on nonabstentions) and the coverage (1 minus the abstention rate) (El-Yaniv and Wiener, 2010), which are the main existing metrics for evaluating black-box abstaining classifiers. As a two-dimensional metric, comparison on the basis of these is non-trivial. A common approach is to assume a fixed cost for each abstention (Chow, 1970), but this is not always satisfactory since determining how to weigh abstentions and errors against one another is a nontrivial question. Thus, in settings such as Example 1.1, the notion of counterfactual score becomes necessary. Importantly, selective scores are not representative of the counterfactual performance, except in the (unrealistic) case wherein predictions are missing completely at random (MCAR).1 Figure 1 gives an overview of scenarios where different metrics may be appropriate to compare abstaining classifiers.
Figure 1: A schematic flowchart of comparing abstaining classifiers. In a black-box setting where the evaluator does not have access to the training algorithms or the resources to train them, the task can be viewed as a nontrivial missing data problem. This work proposes the counterfactual score as an evaluation metric.
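As a small simulated illustration of this point (all quantities below are synthetic): when abstentions are more likely on difficult inputs, the selective score overstates the counterfactual score, while an inverse-propensity-weighted average using the (here known) abstention probabilities recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

difficulty = rng.uniform(0, 1, n)                       # observed covariate
S = rng.binomial(1, 1 - 0.6 * difficulty)               # score the classifier would get if it predicted

# Stochastic, MAR abstention: harder inputs are more likely to be abstained on.
p_predict = np.clip(1 - 0.9 * difficulty, 0.05, 1.0)    # P(R = 1 | difficulty), bounded away from 0
R = rng.binomial(1, p_predict)

counterfactual = S.mean()                               # target: E[S]
selective = S[R == 1].mean()                            # selective score: E[S | R = 1]
ipw = np.mean(R * S / p_predict)                        # inverse-propensity-weighted estimate
print(f"counterfactual = {counterfactual:.3f}, selective = {selective:.3f}, IPW = {ipw:.3f}")
# The selective score is biased upward here; the IPW estimate matches the counterfactual score.
```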
The counterfactual score also offers practical benefits when comparing abstaining classifiers. Counterfactual scores are comparable even if abstaining classifiers are tuned to different abstention rates or selective scores. Moreover, compared to evaluation methods using the selective score-coverage curve (equivalent to re-training the classifier several times at different score/coverage levels), estimating the counterfactual score does not require re-training the classifier. Instead, we only require estimating a pair of nuisance functions that can be learned using the observed predictions (nonabstentions) in the evaluation set. Let us further note that the setup is applicable generally to any form of prediction that can be scored, including regression and structured prediction. In this paper, we restrict our attention to classification for concreteness, given that it is the most well-studied abstention framework.
Summary of Contributions.We first formalize the problem of comparing abstaining classifiers as a missing data problem and introduce the counterfactual score as a metric for abstaining classifier comparison. Next, we discuss how the counterfactual score can be identified under the MAR and positivity conditions. Then, we develop efficient nonparametric estimators for the counterfactual scores and their differences, namely doubly robust confidence intervals. We analyze our approach in simulated and real-data experiments. Table 1 summarizes the methods developed in our paper.
Related Work on Abstaining Classifiers.The _training_ of abstaining classifiers has seen significant interest in the literature. We refer to Hendrickx et al. (2021); Zhang et al. (2023) for recent surveys.
For _evaluation_, aside from using some combination of selective score and coverage, the most pertinent reference is the work of Condessa et al. (2017), who propose the metric of 'classifier quality' that is a somewhat inverse version of our counterfactual accuracy. This metric is the sum of the prediction accuracy when the classifier predicts, and prediction _inaccuracy_ when it abstains, the idea being that if a classifier is not abstaining needlessly, then it must hold that the underlying predictions on points it abstains on are very poor. While this view is relevant to the training of abstention rules, it is at odds with black-box settings where the underlying predictions may still be executed even when the method abstains, motivating the counterfactual score.
Related Work on Missing Data, Causal Inference, and Doubly Robust Estimation.Our main approach to the estimability of counterfactual scores is driven by a reduction to an inference problem under missing data (or censoring) (Rubin, 1976; Little and Rubin, 2019). A missing data problem can be viewed equivalently as a causal inference problem in an observational study (Rubin, 1976; Pearl, 2000; Shpitser et al., 2015; Ding and Li, 2018), and there exist well-established theories and methods for both identifying the counterfactual quantity of interest and efficiently estimating the identified
\begin{table}
\begin{tabular}{c c c} \hline \hline & **Evaluation** & **Comparison** \\ \hline Classifier(s) & \((f,\pi)\) & \((f^{\mathsf{A}},\pi^{\mathsf{A}})\) \& \((f^{\mathsf{B}},\pi^{\mathsf{B}})\) \\ Target & \(\psi=\mathbb{E}\left[S\right]\) & \(\Delta^{\mathsf{AB}}=\mathbb{E}\left[S^{\mathsf{A}}-S^{\mathsf{B}}\right]\) \\ \hline Identification & MAR \& Positivity \\ Estimation & Doubly Robust CI \\ Optimality & Nonparametrically Efficient \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of problem formulations and proposed approaches for evaluation and comparison of abstaining classifiers. Our approaches avoid parametric assumptions and allow for black-box classifiers.
target functional. For _identification_, unlike in some observational settings for estimating treatment effects, our setup straightforwardly satisfies the standard assumption of consistency (non-interference). The MAR assumption is satisfied as long as independent evaluation data is used, and the positivity assumption translates to abstentions being stochastic. We discuss various implications of these conditions in Sections 2.2 and 5. For _estimation_, efficient methods for estimating targets such as the average treatment effect (ATE) have long been studied under semiparametric and nonparametric settings. Doubly robust (DR) estimators, in particular, are known to achieve the asymptotic minimax lower bound on the mean squared error. For details, we refer the reader to Bickel et al. (1993); Robins et al. (1994); van der Vaart (2000, 2002); van der Laan and Robins (2003); Bang and Robins (2005); Tsiatis (2006); Chernozhukov et al. (2018); Kennedy (2022). Unlike the standard ATE estimation setup in observational studies, our setup contrasts two separate causal estimands, the counterfactual scores of the two competing classifiers, that operate under their distinct missingness mechanisms.
## 2 Definition and Identification of the Counterfactual Score
We formulate the problem of evaluating and comparing abstaining classifiers under the missing data framework (Rubin, 1976). We follow the standard approach of defining the target parameter (§2.1), identifying it with observable quantities (§2.2), and estimating the identified parameter using data (§3). In each step, we first consider evaluating one abstaining classifier and then extend to comparing two abstaining classifiers. In the following, \(\mathcal{X}\) denotes the input space and \(\mathcal{Y}=\{1,\ldots,C\}\) is the set of possible classes, while \(\Delta^{C-1}\) denotes the probability simplex over \(\mathcal{Y}\).
Abstaining Classifiers.We define an abstaining classifier as a pair of functions \((f,\pi)\), representing its _base classifier_\(f:\mathcal{X}\to\Delta^{C-1}\) and _abstention mechanism_\(\pi:\mathcal{X}\to[0,1]\), respectively. Given a query \(X\), the classifier first forms a preliminary (probabilistic) prediction \(f(X)\). Then, potentially using the output \(f(X)\), the classifier determines \(\pi(X)\), i.e., the abstention probability. Using \(\pi(X)\), the classifier then makes the binary abstention decision \(R\mid\pi(X)\sim\mathsf{Ber}(\pi(X))\), so that if \(R=1\) ("rejection"), the classifier abstains on the query, and if \(R=0\), it reveals its prediction \(f(X)\). In some cases, we will explicitly define the source of randomness \(\xi\) (independent of the data) in deciding \(R\), such that \(R=\mathsf{r}(\pi(X),\xi)\) for a deterministic function \(\mathsf{r}\).2 Neither \(f\) nor \(\pi\) is assumed to be known to the evaluator, modeling the black-box access typically available to practitioners.
Footnote 2: Specifically, let \(\xi\sim\mathsf{Unif}[0,1]\) and \(R=\mathbf{1}\left(\xi\leq\pi(X)\right)\). Then, \(R\) is a function of only \(\pi(X)\) and \(\xi\).
Scoring Rules (Higher Scores Are Better).We measure the quality of a prediction \(f(x)\) for a label \(y\) via a positively oriented _scoring rule_\(\mathsf{s}:\Delta^{C-1}\times\mathcal{Y}\to\mathbb{R}\). One simple scoring rule is classification accuracy, i.e., \(\mathsf{s}(f(x),y)=\mathbf{1}\left(\operatorname*{argmax}_{c\in\mathcal{Y}}f( x)_{c}=y\right)\), but a plethora of scores exist in the literature, such as the Brier (1950) score, defined as \(\mathsf{s}(f(x),y)=1-\sum_{c\in\mathcal{Y}}(f(x)_{c}-\mathbf{1}\left(y=c\right) )^{2}\).
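For concreteness, the two scoring rules above translate directly into code. The following is a minimal sketch (the function names are ours, not the paper's); it assumes `f_x` is the probability vector \(f(x)\) over the \(C\) classes and `y` is the observed class index.

```python
import numpy as np

def accuracy_score(f_x, y):
    # s(f(x), y) = 1{argmax_c f(x)_c = y}
    return float(np.argmax(f_x) == y)

def brier_score(f_x, y, num_classes):
    # Positively oriented Brier score: s(f(x), y) = 1 - sum_c (f(x)_c - 1{y = c})^2
    onehot = np.eye(num_classes)[y]
    return 1.0 - float(np.sum((np.asarray(f_x) - onehot) ** 2))
```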
The Evaluation Setup.For each labeled data point \((X,Y)\) in an evaluation set, we observe the abstention decision \(R=\mathsf{r}(\pi(X),\xi)\) for some independent source of randomness \(\xi\) used by the abstaining classifier. Then, its prediction \(f(X)\) is observed by the evaluator if and only if \(R=0\). Let \(S:=\mathsf{s}(f(X),Y)\) denote the score of the prediction \(f\) on the query \(X\), irrespective of \(R\). Because \(S\) is not observable when \(R=1\), we refer to \(S\) as the _potential score_ that _would have been seen_ had the classifier not abstained. (See Appendix A.2 for equivalent formulations that explicitly invoke Rubin (1974)'s potential outcomes model.) Since our evaluation is based only on the score \(S\), we can
suppress the role of \(Y\) and assume that \(S\) is observed directly when \(R=0\). Similarly, we can suppress the role of \(\xi\), which is independent of the data. We let \(\mathbb{P}\) denote the law of \(Z:=(X,R,S)\).
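To make the observation model concrete, the sketch below generates the censored triples \((X_{i},R_{i},S_{i})\) that the evaluator actually sees: the potential score is computed for every point but masked whenever the classifier abstains. The particular `f`, `pi`, and `score` callables are user-supplied placeholders; nothing here is specific to the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(X, Y, f, pi, score):
    """Return (X_i, R_i, S_i) triples with S_i hidden (None) whenever R_i = 1."""
    records = []
    for x, y in zip(X, Y):
        fx = f(x)                             # preliminary prediction f(X)
        r = int(rng.random() < pi(x))         # R | pi(X) ~ Ber(pi(X))
        s = score(fx, y) if r == 0 else None  # potential score, observed iff R = 0
        records.append((x, r, s))
    return records
```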
### Definition of the Counterfactual Score
We propose to assess an abstaining classifier \((f,\pi)\) through its _(expected) counterfactual score_:
\[\psi:=\mathbb{E}\left[S\right], \tag{1}\]
where the expectation is taken w.r.t. \(\mathbb{P}\). In words, \(\psi\) refers to the expected score of the abstaining classifier had it not been given the option to abstain. The counterfactual score captures the performance of an abstaining classifier via the score of its base classifier, making it suitable for cases where the evaluator is interested in the predictions without using an abstention mechanism.
Note that \(\psi\) does _not_ in general equal the _selective score_, i.e., \(\mathbb{E}\left[S\mid R=0\right]\). For example, when a classifier abstains from making predictions on its "weak points," i.e., inputs on which the classifier performs poorly, the counterfactual score will be lower than the selective score. Also see Appendix A.3 for a direct comparison with Condessa et al. (2017)'s score.
Comparison.Counterfactual scores may also be used to compare two abstaining classifiers, \((f^{\mathsf{A}},\pi^{\mathsf{A}})\) and \((f^{\mathsf{B}},\pi^{\mathsf{B}})\), in the form of their _counterfactual score difference_: \(\Delta^{\mathsf{AB}}:=\psi^{\mathsf{A}}-\psi^{\mathsf{B}}=\mathbb{E}[S^{ \mathsf{A}}-S^{\mathsf{B}}]\). Here, the expectation is now taken over the joint law of \(Z^{\mathsf{AB}}:=(X,R^{\mathsf{A}},S^{\mathsf{A}},R^{\mathsf{B}},S^{\mathsf{ B}})\).
### Identifiability of the Counterfactual Score
Having defined the target parameters \(\psi\) and \(\Delta^{\mathsf{AB}}\), we now discuss the assumptions under which these quantities become identifiable using only the observed random variables. In other words, these assumptions establish when the counterfactual quantity equals a statistical quantity. As in standard settings of counterfactual inference under missing data, the identifiability of counterfactual scores in this setting depends on two standard conditions: (i) the missing at random condition and (ii) positivity.
The _missing at random (MAR)_ condition, also known as the _ignorability_ or _no unmeasured confounding_ condition, requires that the score \(S\) is conditionally independent of the abstention decision \(R\) given \(X\), meaning that there are no unobserved confounders \(U\) that affect both the abstention decision \(R\) as well as the score \(S\). Note that \(S\) is the _potential_ score of what the classifier would get had it not abstained -- it is only observed when \(R=0\). We formally state the MAR condition as follows:
**Assumption 2.1** (Scores are missing at random).: \(S\perp\!\!\!\perp R\mid X\)_._
In standard ML evaluation scenarios, where the evaluation set is independent of the training set for the classifier, Assumption 2.1 is always met. We formalize this sufficient condition for MAR in the following proposition. Let \(\mathcal{D}_{\mathrm{train}}\) denote the collection of any training data used to learn the abstaining classifier \((f,\pi)\) and, as before, \((X,Y)\) denote an (i.i.d.) data point in the evaluation set.
**Proposition 2.2** (Independent evaluation data ensures MAR).: _If \((X,Y)\perp\!\!\!\perp\mathcal{D}_{\mathrm{train}}\), then \(S\perp\!\!\!\perp R\mid X\)._
This result is intuitive: given an independent test input \(X\), the score \(S=\mathsf{s}(f(X),Y)\) is a deterministic function of the test label \(Y\), and the abstention decision \(R\) of a classifier cannot depend on \(Y\) simply because the classifier has no access to it. A short proof is given in Appendix B.1 for completeness. In Appendix C, we also include causal graphs that visually illustrate how the MAR condition is met.
If the evaluation data is not independent of \(\mathcal{D}_{\mathrm{train}}\), then the classifier already has information about the data on which it is tested, so generally speaking, no evaluation score will accurately reflect its
generalization performance. Although the independence between the training and evaluation data is expected in standard ML applications, it may not be guaranteed when, e.g., using a publicly available dataset that is used during the training of the classifier. These issues can be prevented by ensuring that the evaluation set is held out (e.g., a hospital can use its own patient data to evaluate APIs).
The second condition, the _positivity_ condition, is more substantial in our setting:
**Assumption 2.3** (Positivity).: There exists \(\epsilon>0\) such that \(\pi(X)=\mathbb{P}\left(R=1\mid X\right)\leq 1-\epsilon\).
Assumption 2.3 says that, for each input \(X\), there has to be at least a small probability that the classifier will _not_ abstain (\(R=0\)). Indeed, if the classifier deterministically abstains from making predictions on a specific input that has nonzero marginal density, then we have no hope of estimating an expectation over all possible values that \(X\) can take. When it comes to evaluating abstaining classifiers on safety-critical applications, we argue that this condition may need to be enforced at a policy level -- we elaborate on this point in Section 5 and Appendix D. In practice, the exact value of \(\epsilon\) is problem-dependent, and in Appendix F.4, we include additional experiments illustrating how our methods retain validity as long as the abstention rate is capped at \(1-\epsilon\) for some \(\epsilon>0\).
Another justification for the positivity condition is that stochastically abstaining classifiers can achieve better performances than their deterministic counterparts. Kalai and Kanade (2021) illustrate how stochastic abstentions can improve the out-of-distribution (OOD) performance w.r.t. the Chow (1970) score (i.e., \(\operatorname{selective}\operatorname{score}+\alpha\cdot\operatorname{ coverage}\)). Schreuder and Chzhen (2021) also introduce randomness in their abstaining classifiers, which leverage abstentions as a means to improve their accuracy while satisfying a fairness constraint. The role of random abstentions in these examples mirrors the role of randomization in the fairness literature (Barocas et al., 2019), where the optimal randomized fair predictors are known to outperform their deterministic counterparts (Agarwal and Deshpande, 2022; Grgic-Hlaca et al., 2017). Given the effectiveness of randomized classifiers for fairness, it would not be surprising if a fair abstaining classifier was randomized (in its decisions and abstentions).
With MAR and positivity in hand, we can show that the counterfactual score is indeed identifiable. Define \(\mu_{0}(x):=\mathbb{E}\left[S\mid R=0,X=x\right]\) as the regression function for the score under \(R=0\).
**Proposition 2.4** (Identification).: _Under Assumptions 2.1 and 2.3, \(\psi\) is identified as \(\mathbb{E}_{X}\left[\mu_{0}(X)\right]\)._
The proof, included in Appendix B.2, follows a standard argument in causal inference. The identification of the target parameter \(\psi\) using \(\mu_{0}\) implies that we can estimate \(\psi\), the expectation of a potential outcome, using only the observed inputs and scores. Specifically, the task of estimating \(\psi\) consistently reduces to the problem of estimating the regression function \(\mu_{0}\), which only involves predictions that the classifier did not abstain from making. We note that, as in standard causal inference, the task of identification, which concerns _what_ to estimate, is largely orthogonal to the task of estimation, which concerns _how_ to estimate the quantity. We discuss the latter problem in Section 3.
Comparison.For the comparison task, given \(\Delta^{\mathsf{AB}}=\psi^{\mathsf{A}}-\psi^{\mathsf{B}}\), it immediately follows that if the MAR and positivity assumptions hold for each of \((X,R^{\mathsf{A}},S^{\mathsf{A}})\) and \((X,R^{\mathsf{B}},S^{\mathsf{B}})\), then \(\Delta^{\mathsf{AB}}\) is also identified, and it can be consistently estimated via \(\Delta^{\mathsf{AB}}=\mathbb{E}_{X}[\mu_{0}^{\mathsf{A}}(X)-\mu_{0}^{\mathsf{B }}(X)]\), where \(\mu_{0}^{\bullet}(x):=\mathbb{E}\left[S^{\bullet}|R^{\bullet}=0,X=x\right]\) for \(\bullet\in\{\mathsf{A},\mathsf{B}\}\). A sufficient condition for the identification of \(\Delta^{\mathsf{AB}}\) is that (i) the evaluation data is independent of the training data for either classifier (MAR) and (ii) each classifier has at least a small chance of not abstaining on each input (positivity).
## 3 Nonparametric and Doubly Robust Estimation of the Counterfactual Score
Having identified the counterfactual scores, we now focus on the problem of consistently estimating them. We estimate these quantities without resorting to parametric assumptions about the underlying black-box abstention mechanisms. Instead, we reduce the problem to that of functional estimation and leverage techniques from nonparametric statistics. See Kennedy (2022) for a recent review.
### Estimating the Counterfactual Score
Task.Let \(\{(X_{i},R_{i},S_{i})\}_{i=1}^{n}\sim\mathbb{P}\) denote an i.i.d. evaluation set of size \(n\). As before, we assume that we are given access to the censored version of this sample, i.e., that we observe \(S_{i}\) if and only if \(R_{i}=0\). Using the observables, we seek to form an estimate \(\hat{\psi}\) of the counterfactual score \(\psi=\mathbb{E}[S]\).
Doubly Robust Estimation.Under identification (Ppn. 2.4), we can estimate \(\psi\) by estimating the regression function \(\mu_{0}(X)\) on the data \(\{(X_{i},S_{i}):R_{i}=0\}\). However, the naive "plug-in" estimate suffers from an inflated bias due to structure present in the abstention patterns. (See §A.4 for details.) We instead develop a doubly robust (DR) estimator (Robins et al., 1994; Bang and Robins, 2005), which is known to consistently estimate \(\psi\) at the optimal _nonparametric efficiency rates_, meaning that no other estimator based on the \(n\) observations can asymptotically achieve a smaller mean squared error (van der Vaart, 2002). The derivation below is relatively standard, so we keep it brief.
Formally, the DR estimator is defined using the (uncentered) _efficient influence function (EIF)_ for the identified target functional \(\psi(\mathbb{P})=\mathbb{E}_{\mathbb{P}}[\mu_{0}(X)]\): \(\mathsf{IF}(x,r,s):=\mu_{0}(x)+\frac{1-r}{1-\pi(x)}\left(s-\mu_{0}(x)\right)\) (\(0/0:=0\)). Here, \(\pi\) and \(\mu_{0}\) are the "nuisance" functions, representing the abstention mechanism and the score regression function under \(R=0\), respectively. The EIF can be computed as long as \(s\) is available when \(r=0\). An intuition for the EIF is that it is the first-order "distributional Taylor approximation" (Fisher and Kennedy, 2021) of the target functional, such that its bias is second-order.
Given that \(\pi\) and \(\mu_{0}\) are unknown, we define an estimate of the EIF, denoted as \(\hat{\mathsf{IF}}\), by plugging in estimates \(\hat{\pi}\) for \(\pi\) and \(\hat{\mu}_{0}\) for \(\mu_{0}\). Then, the DR estimator is simply the empirical mean of the estimated EIF:
\[\hat{\psi}_{\mathsf{dr}}=\frac{1}{n}\sum_{i=1}^{n}\hat{\mathsf{IF}}(X_{i},R_{i},S_{i})=\frac{1}{n}\sum_{i=1}^{n}\left[\hat{\mu}_{0}(X_{i})+\frac{1-R_{i}}{1-\hat{\pi}(X_{i})}\left(S_{i}-\hat{\mu}_{0}(X_{i})\right)\right]. \tag{2}\]
This estimator is well-defined because \(S_{i}\) is available precisely when \(R_{i}=0\). Note that the first term is the (biased) plug-in estimator, and the second term represents the first-order correction term, which involves inverse probability weighting (IPW) (Horvitz and Thompson, 1952; Rosenbaum, 1995). In our experiments, we show how the DR estimator improves upon both the plug-in estimator, in terms of the bias, and the IPW-based estimator, which we formally define in §A.4, in terms of the variance.
The "double robustness" of \(\hat{\psi}_{\mathsf{dr}}\) translates to the following useful property: \(\hat{\psi}_{\mathsf{dr}}\) retains the parametric rate of convergence, \(O_{\mathbb{P}}(1/\sqrt{n})\), even when the estimators \(\hat{\mu}_{0}\) and \(\hat{\pi}\) themselves converge at slower rates. This allows us to use nonparametric function estimators to estimate \(\mu_{0}\) and \(\pi\), such as stacking ensembles (Breiman, 1996) like the super learner (van der Laan et al., 2007) and regularized estimators like the Lasso (Tibshirani, 1996; Belloni et al., 2014). Even for nonparametric models whose rates of convergence are not fully understood, such as random forests (Breiman, 2001) and deep neural networks (LeCun et al., 2015), we can empirically demonstrate valid coverage and efficiency (§4).
In practice, the nuisance functions can be estimated via _cross-fitting_(Robins et al., 2008; Zheng and van der Laan, 2011; Chernozhukov et al., 2018), which is a \(K\)-fold generalization of sample
splitting. First, randomly split the data into \(K\) folds; then, fit \(\hat{\pi}\) and \(\hat{\mu}_{0}\) on \(K-1\) folds and use them to estimate the EIF on the remaining "evaluation" fold; repeat the process \(K\) times with each fold being the evaluation fold; finally, average the EIFs across all data points. The key benefit of using cross-fitting is to avoid any complexity restrictions on individual nuisance functions without sacrificing sample efficiency. In the following, we let \(\hat{\psi}_{\mathsf{dr}}\) be the estimator (2) obtained via cross-fitting.
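As an illustration of the estimator (2) with cross-fitting, the sketch below fits the nuisance functions on held-out folds and averages the estimated EIF. The choice of random forests (via scikit-learn), the clipping constant, and the helper name `dr_eif` are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dr_eif(X, R, S, K=5, seed=0):
    """Cross-fitted estimates of the (uncentered) EIF for psi = E[S]."""
    X, R = np.asarray(X), np.asarray(R)
    S = np.asarray([0.0 if s is None else s for s in S])  # placeholders where R = 1 (never used)
    eif = np.zeros(len(X))
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        # Abstention mechanism pi(x) = P(R = 1 | X = x), fit on the training folds.
        pi_hat = RandomForestClassifier(random_state=seed).fit(X[train], R[train])
        # Score regression mu_0(x) = E[S | R = 0, X = x], fit on non-abstained points only.
        obs = train[R[train] == 0]
        mu0_hat = RandomForestRegressor(random_state=seed).fit(X[obs], S[obs])
        p = pi_hat.predict_proba(X[test])[:, 1]  # assumes both R = 0 and R = 1 occur in each fold
        p = np.clip(p, 0.0, 0.99)                # guard the 1 / (1 - pi_hat) weight
        m = mu0_hat.predict(X[test])
        eif[test] = m + (1 - R[test]) / (1 - p) * (S[test] - m)
    return eif

# psi_hat_dr = dr_eif(X, R, S).mean()
```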
Now we are ready to present our first result, which states the asymptotic validity and efficiency of the DR estimator for \(\psi\) under identification and the DR condition.
**Theorem 3.1** (DR estimation of the counterfactual score for an abstaining classifier).: _Suppose that Assumptions 2.1 and 2.3 hold. Also, suppose that_
\[\|\hat{\pi}-\pi\|_{L_{2}(\mathbb{P})}\,\|\hat{\mu}_{0}-\mu_{0}\|_{L_{2}(\mathbb{ P})}=o_{\mathbb{P}}(1/\sqrt{n}) \tag{3}\]
_and that \(\|\hat{\mathsf{IF}}-\mathsf{IF}\|_{L_{2}(\mathbb{P})}=o_{\mathbb{P}}(1)\). Then,_
\[\sqrt{n}\left(\hat{\psi}_{\mathsf{dr}}-\psi\right)\rightsquigarrow\mathcal{N} \left(0,\mathsf{Var}_{\mathbb{P}}\left[\mathsf{IF}\right]\right),\]
_where \(\mathsf{Var}_{\mathbb{P}}\left[\mathsf{IF}\right]\) matches the nonparametric efficiency bound._
The proof adapts a standard argument in causal inference, as found in, e.g., van der Vaart (2002); Kennedy (2022), to the abstaining classifier evaluation setup. We include a proof sketch in Appendix B.3. Theorem 3.1 tells us that, under the identification and the DR condition (3), we can construct a closed-form asymptotic confidence interval (CI) at level \(\alpha\in(0,1)\) as follows:
\[C_{n,\alpha}=\left(\hat{\psi}_{\mathsf{dr}}\pm z_{\alpha/2}\sqrt{n^{-1} \mathsf{Var}_{\mathbb{P}_{n}}\left[\mathsf{IF}\right]}\right), \tag{4}\]
where \(z_{\alpha/2}=\Phi^{-1}(1-\frac{\alpha}{2})\) is the \((1-\frac{\alpha}{2})\)-quantile of a standard normal (e.g., \(1.96\) for \(\alpha=0.05\)). In §E, we describe a version that is also valid under continuous monitoring (e.g., as more data is collected).
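Given the cross-fitted EIF values, the interval (4) is immediate; the sketch below assumes the hypothetical `dr_eif` helper from above and uses the empirical variance of the estimated EIF.

```python
import numpy as np
from scipy.stats import norm

def dr_confidence_interval(eif, alpha=0.05):
    """Closed-form asymptotic CI (4): psi_hat_dr +/- z_{alpha/2} * sqrt(Var_hat[IF] / n)."""
    n = len(eif)
    psi_hat = float(np.mean(eif))
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(np.var(eif, ddof=1) / n)
    return psi_hat - half_width, psi_hat + half_width
```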
### Estimating Counterfactual Score Differences
Task.Next, we return to the problem of comparing two abstaining classifiers, \((f^{\mathsf{A}},\pi^{\mathsf{A}})\) and \((f^{\mathsf{B}},\pi^{\mathsf{B}})\), that each makes a decision to make a prediction on each input \(X_{i}\) or abstain from doing so. That is, \(R^{\bullet}_{i}\mid\pi^{\bullet}(X_{i})\sim\mathsf{Ber}(\pi^{\bullet}(X_{i}))\), and we observe \(S^{\bullet}_{i}=\mathsf{s}(f^{\bullet}(X_{i}),Y_{i})\) if and only if \(R^{\bullet}_{i}=0\), for \(\bullet\in\{\mathsf{A},\mathsf{B}\}\). Recall that the target here is the score difference \(\Delta^{\mathsf{AB}}=\psi^{\mathsf{A}}-\psi^{\mathsf{B}}=\mathbb{E}\left[S^{ \mathsf{A}}-S^{\mathsf{B}}\right]\).
Doubly Robust Difference Estimation.If the parameters \(\psi^{\mathsf{A}}\) and \(\psi^{\mathsf{B}}\) are each identified according to Ppn. 2.4, then we can estimate \(\Delta^{\mathsf{AB}}\) as \(\hat{\Delta}^{\mathsf{AB}}=\hat{\psi}^{\mathsf{A}}-\hat{\psi}^{\mathsf{B}},\) for individual estimates \(\hat{\psi}^{\mathsf{A}}\) and \(\hat{\psi}^{\mathsf{B}}\). The resulting EIF is simply the difference in the EIF for \(\mathsf{A}\) and \(\mathsf{B}\): \(\mathsf{IF}^{\mathsf{AB}}(x,r^{\mathsf{A}},r^{\mathsf{B}},s^{\mathsf{A}},s^{ \mathsf{B}})=\mathsf{IF}^{\mathsf{A}}(x,r^{\mathsf{A}},s^{\mathsf{A}})-\mathsf{ IF}^{\mathsf{B}}(x,r^{\mathsf{B}},s^{\mathsf{B}})\), where \(\mathsf{IF}^{\mathsf{A}}\) and \(\mathsf{IF}^{\mathsf{B}}\) denote the EIF of the respective classifier. Thus, we arrive at an analogous theorem that involves estimating the nuisance functions of each abstaining classifier and utilizing \(\mathsf{IF}^{\mathsf{AB}}\) to obtain the limiting distribution of \(\hat{\Delta}^{\mathsf{AB}}_{\mathsf{dr}}=\hat{\psi}^{\mathsf{A}}_{\mathsf{dr} }-\hat{\psi}^{\mathsf{B}}_{\mathsf{dr}}\).
**Theorem 3.2** (DR estimation of the counterfactual score difference).: _Suppose that Assumptions 2.1 and 2.3 hold for both \((X_{i},R^{\mathsf{A}}_{i},S^{\mathsf{A}}_{i})\) and \((X_{i},R^{\mathsf{B}}_{i},S^{\mathsf{B}}_{i})\). Also, suppose that_
\[\|\hat{\pi}^{\mathsf{A}}-\pi^{\mathsf{A}}\|_{L_{2}(\mathbb{P})}\|\hat{\mu}^{ \mathsf{A}}_{0}-\mu^{\mathsf{A}}_{0}\|_{L_{2}(\mathbb{P})}+\|\hat{\pi}^{\mathsf{ B}}-\pi^{\mathsf{B}}\|_{L_{2}(\mathbb{P})}\|\hat{\mu}^{\mathsf{B}}_{0}-\mu^{ \mathsf{B}}_{0}\|_{L_{2}(\mathbb{P})}=o_{\mathbb{P}}(1/\sqrt{n}) \tag{5}\]
_and that \(\|\hat{\mathsf{IF}}^{\mathsf{AB}}-\mathsf{IF}^{\mathsf{AB}}\|_{L_{2}(\mathbb{P})}=o_{\mathbb{P}}(1)\). Then,_
\[\sqrt{n}\left(\hat{\Delta}^{\mathsf{AB}}_{\mathsf{dr}}-\Delta^{\mathsf{AB}}\right)\rightsquigarrow\mathcal{N}\left(0,\mathsf{Var}_{\mathbb{P}}\left[\mathsf{IF}^{\mathsf{AB}}\right]\right),\]
_where \(\mathsf{Var}_{\mathbb{P}}[\mathsf{IF}^{\mathsf{AB}}]\) matches the nonparametric efficiency bound._
A proof is given in Appendix B.4. As with evaluation, Theorem 3.2 yields a closed-form asymptotic CI of the form (4) using the analogous estimate of EIF under MAR, positivity, and DR (5). Inverting this CI in the standard manner further yields a hypothesis test for \(H_{0}:\psi^{\mathsf{A}}=\psi^{\mathsf{B}}\) vs. \(H_{1}:\psi^{\mathsf{A}}\neq\psi^{\mathsf{B}}\).
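The comparison case reuses the same pieces: estimate the EIF separately for A and B on the same evaluation inputs, subtract pointwise, and invert the resulting CI to test \(H_{0}:\psi^{\mathsf{A}}=\psi^{\mathsf{B}}\). A sketch, assuming the hypothetical `dr_eif` and `dr_confidence_interval` helpers from above:

```python
def compare_abstaining_classifiers(eif_a, eif_b, alpha=0.05):
    """DR CI for Delta^AB and the induced level-alpha test of H0: Delta^AB = 0."""
    eif_diff = eif_a - eif_b            # IF^AB = IF^A - IF^B, evaluated pointwise
    lo, hi = dr_confidence_interval(eif_diff, alpha=alpha)
    reject_h0 = not (lo <= 0.0 <= hi)   # reject iff 0 lies outside the CI
    return (lo, hi), reject_h0
```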
## 4 Experiments
In our experiments, we first present results on simulated data to examine the validity of our proposed inference methods (CIs and hypothesis tests). We then study three scenarios on the CIFAR-100 dataset that illustrate the practical use of our approach to real data settings.
### Simulated Experiments: Abstentions Near the Decision Boundary
Setup (MAR but Not MCAR).We first consider comparing two abstaining classifiers according to their accuracy, on a simulated binary classification dataset with 2-dimensional inputs. Given \(n=2,000\) i.i.d. inputs \(\{X_{i}\}_{i=1}^{n}\sim\mathsf{Unif}([0,1]^{2})\), each label \(Y_{i}\) is decided using a linear boundary, \(f_{*}(x_{1},x_{2})=\mathbf{1}\left(x_{1}+x_{2}\geq 1\right)\), along with a 15% i.i.d. label noise. Importantly, each classifier abstains near its decision boundary, such that its predictions and scores are _MAR but not MCAR_ (because abstentions depend on the inputs). As a result, while the counterfactual score of \(\mathsf{A}\) (\(\psi^{\mathsf{A}}=0.86\)) is much higher than \(\mathsf{B}\) (\(\psi^{\mathsf{B}}=0.74\)), their selective scores are more similar (\(\mathsf{S}\mathsf{e}^{\mathsf{A}}=0.86\), \(\mathsf{S}\mathsf{e}^{\mathsf{B}}=0.81\)) and coverage is lower for \(\mathsf{A}\) (\(\mathsf{Cov}^{\mathsf{A}}=0.55\)) than for \(\mathsf{B}\) (\(\mathsf{Cov}^{\mathsf{B}}=0.62\)). Another point to note here is that, even though both the outcome model and classifier \(\mathsf{A}\) are linear, both the abstention mechanism3\(\pi^{\mathsf{A}}\) and the selective score function4\(\mu_{0}^{\mathsf{A}}\) are _non_linear functions of the inputs (similarly for \(\pi^{\mathsf{B}}\) and \(\mu_{0}^{\mathsf{B}}\)). More generally, if a classifier abstains near its decision boundary, then both \(\pi\) and \(\mu_{0}\) could be at least as complex as the base classifier \(f\) itself. Further details of the setup, including a plot of the data, predictions, and abstentions, are provided in Appendix F.1.
Footnote 3: The abstention mechanism \(\pi(x)=\mathbb{P}(R=1\mid X=x)\) here separates the region below _and_ above the decision boundary from the region near the boundary. Thus \(\pi\) is nonlinear even when the boundary is linear.
Footnote 4: Given any input \(X\) for which the classifier did not abstain (\(R=0\)) and its output \(Y\), the score \(S=\mathsf{s}(f(X),Y)\) is nonlinear if either \(\mathsf{s}(\cdot,y)\) or \(f\) is nonlinear. Thus, even for linear \(f\), nonlinear scores like the Brier score automatically make the selective score function \(\mu_{0}(x)=\mathbb{E}[S\mid R=0,X=x]\) nonlinear.
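The data-generating process described above can be sketched as follows; the abstention band, the exact abstention probabilities, and the form of classifier A are our guesses for illustration (the paper defers such details to its appendix), but the key features are preserved: a linear ground-truth boundary, label noise, and input-dependent (MAR, not MCAR) stochastic abstentions concentrated near the boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_classifier_a(n=2000, noise=0.15):
    X = rng.uniform(0.0, 1.0, size=(n, 2))
    y_clean = (X[:, 0] + X[:, 1] >= 1.0).astype(int)   # f_*(x1, x2) = 1{x1 + x2 >= 1}
    flip = rng.random(n) < noise                        # 15% i.i.d. label noise
    Y = np.where(flip, 1 - y_clean, y_clean)
    # Classifier A predicts with the true linear boundary and abstains (stochastically)
    # mostly near that boundary, so abstentions depend on X: MAR but not MCAR.
    margin = np.abs(X[:, 0] + X[:, 1] - 1.0)
    pi_a = np.clip(0.9 * np.exp(-8.0 * margin), 0.0, 0.8)  # positivity: pi_a <= 0.8 < 1
    R_a = (rng.random(n) < pi_a).astype(int)
    S_a = (y_clean == Y).astype(float)   # potential accuracy score, masked when R_a = 1
    return X, Y, R_a, S_a
```

Classifier B would be generated analogously from a mis-specified boundary; only the pairs with \(R=0\) expose their scores to the estimators.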
Miscoverage Rates and Widths.As our first experiment, we compare the miscoverage rates and widths of the 95% DR CIs (Theorem 3.2) against two baseline estimators: the plug-in and the IPW (Rosenbaum, 1995) estimators (§A.4). For each method, the miscoverage rate of the CI \(C_{n}\) is approximated via \(\mathbb{P}(\Delta^{\mathsf{AB}}\notin C_{n})\approx m^{-1}\sum_{j=1}^{m}\mathbf{1}(\Delta^{\mathsf{AB}}\notin C_{n}^{(j)})\), where \(m\) is the number of simulations over repeatedly sampled data. If the CI is valid, then this rate should approximately be \(0.05\). The miscoverage rate and the width of a CI, respectively, capture its bias and variance components. For the nuisance functions, we try linear predictors (L2-regularized linear/logistic regression for \(\hat{\mu}_{0}\)/\(\hat{\pi}\)), random forests, and super learners with \(k\)-NN, kernel SVM, and random forests.
We present our results in Table 2. First, using either the random forest or the super learner, the DR CIs consistently achieve the intended coverage level of \(0.95\), over \(m=1,000\) repeated simulations
(standard error \(0.01\)). This validates the asymptotic normality result of Theorem 3.2. Note that the version with linear estimators does not achieve the intended coverage level: this is expected as neither \(\hat{\pi}\) nor \(\hat{\mu}_{0}\) can consistently estimate \(\pi\) or \(\mu_{0}\), which are nonlinear functions, and thus violates condition (5).
Second, when considering both the miscoverage rate and CI width, the DR estimator outperforms both the plug-in and IPW estimators. The plug-in estimator, despite having small CI width, has a very high miscoverage rate (\(0.91\) with the super learner), meaning that it is biased even when flexible nuisance learners are used. On the other hand, the IPW estimator has double the width of the DR estimator (\(0.12\) to \(0.06\), with the super learner), meaning that it is not as efficient. Also, while the IPW estimator achieves the desired coverage level with the super learner (\(0.03\)), it fails with the random forest (\(0.14\)), which tends to make overconfident predictions of the abstention pattern and biases the resulting estimate. In contrast, the DR estimator retains its intended coverage level of \(0.05\) with the random forest, suggesting that it is amenable to overconfident nuisance learners.
Power Analysis.We further conduct a power analysis of the statistical test for \(H_{0}:\Delta^{\text{AB}}=0\) vs. \(H_{1}:\Delta^{\text{AB}}\neq 0\) by inverting the DR CI. The results confirm that the power reaches 1 as either the sample size (\(n\)) or the absolute difference (\(|\Delta^{\text{AB}}|\)) increases. This experiment is included in App. F.2.
### Comparing Abstaining Classifiers on CIFAR-100
To illustrate a real data use case, we compare abstaining classifiers on the CIFAR-100 image classification dataset (Krizhevsky, 2009). Observe that abstaining classifiers can behave differently not just when their base classifiers are different but also when their abstention mechanisms are different. In fact, two abstaining classifiers can have a similarly effective base classifier but substantially different abstention mechanisms (e.g., one more confident than the other). In such a case, the counterfactual score difference between the two classifiers is zero, but their selective scores and coverages are different. We examine such scenarios by comparing image classifiers that use the same pre-trained representation model but have different output layers and abstention mechanisms.
We start with the 512-dimensional final-layer representations of a VGG-16 convolutional neural network (Simonyan and Zisserman, 2015), pre-trained5 on the CIFAR-100 training set, and compare
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Nuisance Function Estimators** & **Plug-in** & **IPW** & **DR** \\ \hline Linear/Logistic Regression & 1.00 \(\pm\) 0.00 & 0.76 \(\pm\) 0.01 & 1.00 \(\pm\) 0.00 \\ & (0.00) & (0.09) & (0.04) \\ \hline Random Forest & 0.64 \(\pm\) 0.02 & 0.14 \(\pm\) 0.01 & **0.05 \(\pm\) 0.01** \\ & (0.02) & (0.13) & **(0.07)** \\ \hline Super Learner & 0.91 \(\pm\) 0.01 & **0.03 \(\pm\) 0.01** & **0.05 \(\pm\) 0.01** \\ & (0.01) & **(0.12)** & **(0.06)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Miscoverage rates (and widths) of 95% CIs using three estimation approaches and three nuisance function (\(\pi\) and \(\mu_{0}\)) estimators in a simulated experiment. Mean and standard error computed over \(m=1,000\) runs are shown; those within 2 standard errors of the intended level (\(0.05\)) are bold-faced. The sample size is \(n=2,000\) in each run. The mean widths of CIs are shown in parentheses. DR estimation with either a random forest or a super learner achieves control over the miscoverage rate, and the DR-based CI is twice as tight as the IPW-based CI in terms of their width.
different output layers and abstention mechanisms on the validation set. Given a real data setup, we cannot in general verify whether a statistical test or a CI is valid, but in this experiment, we still have access to the base model of each abstaining classifier. This means that (a) if we compare abstaining classifiers that share the base classifier but differ in their abstention patterns, then we actually know that their counterfactual scores are exactly the same (\(\Delta^{\mathsf{AB}}=0\)); (b) if we compare abstaining classifiers with different base classifiers, then we can compute their counterfactual scores accurately up to an i.i.d. sampling error. This estimate is denoted by \(\bar{\Delta}^{\mathsf{AB}}:=n^{-1}\sum_{i=1}^{n}[\mathsf{s}(f^{\mathsf{A}}(X_{i}),Y_{i})-\mathsf{s}(f^{\mathsf{B}}(X_{i}),Y_{i})]\).
For all comparisons, we use the DR estimator, where the nuisance functions \(\hat{\pi}\) and \(\hat{\mu}_{0}\) for both classifiers are each an L2-regularized linear layer learned on top of the pre-trained VGG-16 features. The use of pre-trained representations for learning the nuisance functions is motivated by Shi et al. (2019), who demonstrated the effectiveness of the approach in causal inference contexts. We also use the Brier score in this experiment. We defer other experiment details to Appendix F.3.
In scenario I, we compare two abstaining classifiers that use the same softmax output layer but use a different threshold for abstentions. Specifically, both classifiers use the softmax response (SR) thresholding (Geifman and El-Yaniv, 2017), i.e., abstain if \(\max_{c\in\mathcal{Y}}f(X)_{c}<\tau\) for a threshold \(\tau>0\), but A uses a more conservative threshold (\(\tau=0.8\)) than B (\(\tau=0.5\)). As a result, while their counterfactual scores are identical (\(\Delta^{\mathsf{AB}}=0\)), A has a higher selective score (\(+0.06\)) and a lower coverage (\(-0.20\)) than B. This is also a deterministic abstention mechanism, potentially challenging the premises of our setup. As shown in Table 3, we see that the 95% DR CI is \((-0.005,0.018)\) (\(n=5,000\)), confirming that there is no difference in counterfactual scores.6
Footnote 6: The reason why the DR CI correctly estimates the true \(\Delta^{\mathsf{AB}}=0\), despite the fact that positivity is violated in this case, is because the two classifiers happen to abstain on similar examples (B abstains whenever A does) _and_ their scores on their abstentions happen to be relatively similar (\(0.604\) for A; \(0.576\) for B).
Scenario II is similar to scenario I, except that the abstention mechanisms are now stochastic: A uses the SR as the probability of making a prediction, i.e., \(\pi^{\mathsf{A}}(x)=1-\max_{c\in\mathcal{Y}}f(x)_{c}\), while B uses the Gini impurity as the probability of abstention, i.e., \(\pi^{\mathsf{B}}(x)=1-\sum_{c\in\mathcal{Y}}f(x)_{c}^{2}\), both clipped to \((0.2,0.8)\). This results in a higher coverage for A and a higher selective score for B because the Gini impurity is typically smaller than the SR. The 95% DR CI is \((-0.014,0.008)\), confirming that there is once again no difference in counterfactual scores. These two scenarios correspond to case (a).
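The two stochastic abstention mechanisms in scenario II follow directly from their definitions; in this sketch, `probs` stands for the softmax output \(f(x)\) of the shared base classifier, and the clipping range matches the one quoted above.

```python
import numpy as np

def abstain_prob_sr(probs, low=0.2, high=0.8):
    # Classifier A: abstention probability 1 - max_c f(x)_c (one minus the softmax response).
    return float(np.clip(1.0 - np.max(probs), low, high))

def abstain_prob_gini(probs, low=0.2, high=0.8):
    # Classifier B: abstention probability 1 - sum_c f(x)_c^2 (the Gini impurity).
    return float(np.clip(1.0 - np.sum(np.square(probs)), low, high))
```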
In scenario III, we now examine a case where there _is_ a difference in counterfactual scores between the two abstaining classifiers (case (b)). Specifically, we compare the pre-trained VGG-16 model's output layers (512-512-100) with the single softmax output layer that we considered in earlier scenarios. It turns out that the original model's multi-layer output model achieves a worse Brier score (\(0.758\)) than one with a single output layer (\(0.787\)), likely because the probability predictions of the multi-layer model are too confident (miscalibrated). When using the same abstention mechanism
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Scenarios** & \(\bar{\Delta}^{\mathsf{AB}}\) & **95\% DR CI** & **Reject \(H_{0}\)?** \\ \hline I & \(0.000\) & (-0.005, 0.018) & No \\ II & \(0.000\) & (-0.014, 0.008) & No \\ III & \(-0.029\) & (-0.051, -0.028) & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 3: The 95% DR CIs and their corresponding hypothesis tests for \(H_{0}:\Delta^{\mathsf{AB}}=0\) at significance level \(\alpha=0.05\), for three different comparison scenarios on (half of) the CIFAR-100 test set (\(n=5,000\)). The three scenarios compare different abstention mechanisms or predictors, as detailed in text; all comparisons use the Brier score. \(\bar{\Delta}^{\mathsf{AB}}\) is the empirical counterfactual score difference without any abstentions. The result of each statistical test agrees with whether \(\bar{\Delta}^{\mathsf{AB}}\) is \(0\).
(stochastic abstentions using SR, as in II), the overconfident original model correspondingly achieves a higher coverage (and worse selective Brier score) than the single-output-layer model. The Monte Carlo estimate of the true counterfactual score difference is given by \(\bar{\Delta}^{\mathsf{AB}}=-0.029\), and the 95% DR CI falls entirely negative with \((-0.051,-0.028)\), rejecting the null of \(\Delta^{\mathsf{AB}}=0\) at \(\alpha=0.05\).
## 5 Limitations and Discussion
This paper lays the groundwork for addressing the challenging problem of comparing black-box abstaining classifiers in a counterfactual sense. Our solution casts the problem in Rubin (1976)'s missing data framework, where we treat abstentions as missing-at-random predictions of the classifier(s). This allows us to leverage nonparametrically efficient, doubly-robust tools from causal inference.
There are important future directions and limitations stemming from our framework. On a conceptual level, the largest challenge arises from the positivity condition, which requires the classifiers to deploy a non-deterministic abstention mechanism. As mentioned in §2.2, the counterfactual score is unidentifiable without this assumption. We argue that, especially in auditing scenarios, this issue calls for a policy-level treatment, in which vendors must supply evaluators with a classifier that can abstain but must have at least an \(\epsilon>0\) chance of nonabstention, a level that can be mutually agreed upon by both parties. This achieves a middle ground where the vendors are not required to fully reveal their proprietary classifiers. In §D, we discuss this policy-level treatment in further detail.
An alternative is to explore existing techniques that aim to address positivity violations directly (Petersen et al., 2012; Ju et al., 2019; Leger et al., 2022). Due to the unidentifiability result, these techniques have their own limitations. For example, we may consider applying sample trimming (Crump et al., 2009) and making inferences on a subpopulation, although the conclusions would not hold for the entire input domain (even when relevant). Exploring these options is left as future work.
### Acknowledgements
The authors thank Edward H. Kennedy and anonymous reviewers for their helpful comments and suggestions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC).
|
2307.05076 | Incentive Engineering for Concurrent Games | We consider the problem of incentivising desirable behaviours in multi-agent
systems by way of taxation schemes. Our study employs the concurrent games
model: in this model, each agent is primarily motivated to seek the
satisfaction of a goal, expressed as a Linear Temporal Logic (LTL) formula;
secondarily, agents seek to minimise costs, where costs are imposed based on
the actions taken by agents in different states of the game. In this setting,
we consider an external principal who can influence agents' preferences by
imposing taxes (additional costs) on the actions chosen by agents in different
states. The principal imposes taxation schemes to motivate agents to choose a
course of action that will lead to the satisfaction of their goal, also
expressed as an LTL formula. However, taxation schemes are limited in their
ability to influence agents' preferences: an agent will always prefer to
satisfy its goal rather than otherwise, no matter what the costs. The
fundamental question that we study is whether the principal can impose a
taxation scheme such that, in the resulting game, the principal's goal is
satisfied in at least one or all runs of the game that could arise by agents
choosing to follow game-theoretic equilibrium strategies. We consider two
different types of taxation schemes: in a static scheme, the same tax is
imposed on a state-action profile pair in all circumstances, while in a dynamic
scheme, the principal can choose to vary taxes depending on the circumstances.
We investigate the main game-theoretic properties of this model as well as the
computational complexity of the relevant decision problems. | David Hyland, Julian Gutierrez, Michael Wooldridge | 2023-07-11T07:19:24Z | http://arxiv.org/abs/2307.05076v1 | # Incentive Engineering for Concurrent Games
###### Abstract
We consider the problem of incentivising desirable behaviours in multi-agent systems by way of taxation schemes. Our study employs the concurrent games model: in this model, each agent is primarily motivated to seek the satisfaction of a goal, expressed as a Linear Temporal Logic (LTL) formula; secondarily, agents seek to minimise costs, where costs are imposed based on the actions taken by agents in different states of the game. In this setting, we consider an external principal who can influence agents' preferences by imposing taxes (additional costs) on the actions chosen by agents in different states. The principal imposes taxation schemes to motivate agents to choose a course of action that will lead to the satisfaction of their goal, also expressed as an LTL formula. However, taxation schemes are limited in their ability to influence agents' preferences: an agent will always prefer to satisfy its goal rather than otherwise, no matter what the costs. The fundamental question that we study is whether the principal can impose a taxation scheme such that, in the resulting game, the principal's goal is satisfied in at least one or all runs of the game that could arise by agents choosing to follow game-theoretic equilibrium strategies. We consider two different types of taxation schemes: in a _static_ scheme, the same tax is imposed on a state-action profile pair in all circumstances, while in a _dynamic_ scheme, the principal can choose to vary taxes depending on the circumstances. We investigate the main game-theoretic properties of this model as well as the computational complexity of the relevant decision problems.
## 1 Introduction
_Rational verification_ is the problem of establishing which temporal logic properties will be satisfied by a multi-agent system, under the assumption that agents in the system choose strategies that form a game-theoretic equilibrium [13, 42, 18]. Thus, rational verification enables us to verify which desirable and undesirable behaviours could arise in a system through individually rational choices. This article, however, expands beyond verification and studies methods for incentivising outcomes with favourable properties while mitigating undesirable consequences. One prominent example is the implementation of Pigovian taxes, which effectively discourage agents from engaging in activities that generate negative externalities. These taxes have been extensively explored in various domains, including sustainability and AI for social good, with applications such as reducing carbon emissions, road congestion, and river pollution [25, 23, 33].
We take as our starting point the work of [41], who considered the possibility of influencing one-shot Boolean games by introducing taxation schemes, which impose additional costs onto a game at the level of individual actions. In the model of preferences considered in [41], agents are primarily motivated to achieve a goal expressed as a (propositional) logical formula, and only secondarily motivated to minimise costs. This logical component limits the possibility to influence agent preferences: an agent can never be motivated by a taxation scheme away from achieving its goal. In related work, Wooldridge et al. defined the following implementation problem: given a game \(G\) and an objective \(\Upsilon\), expressed as a propositional logic formula, does there exists a taxation scheme \(\tau\) that could be imposed upon \(G\) such that, in the resulting game \(G^{\tau}\), the objective \(\Upsilon\) will be satisfied in at least one Nash equilibrium [41].
We develop these ideas by applying models of finite-state automata to introduce and motivate the use of history-dependent incentives in the context of _concurrent games_[3]. In a concurrent game, play continues for an infinite number of rounds, where at each round, each agent simultaneously chooses an action to perform. Preferences in such a multiplayer game are defined by associating with each agent \(i\) a Linear Temporal Logic (LTL) goal \(\gamma_{i}\), which agent \(i\) desires to see satisfied. In this work, we also assume that actions incur costs, and that agents seek to minimise their limit-average costs.
Since, in contrast to the model of [41], play in our games continues for an infinite number of rounds, we find there are two natural variations of taxation schemes for concurrent games. In a _static_ taxation scheme, we impose a fixed cost on state-action profiles so that the same state-action profile will always incur the same tax, no matter when it is performed. In a _dynamic_ taxation scheme, the same state-action profile may incur different taxes in different circumstances: it is history-dependent. We first show that dynamic taxation schemes are strictly more powerful than static taxation schemes, making them a more appropriate model of incentives in the context of concurrent games, characterise the conditions under which an LTL objective \(\Upsilon\) can be implemented in a game using dynamic taxation schemes, and begin to investigate the computational complexity of the corresponding decision problems.
## 2 Preliminaries
Where \(S\) is a set, we denote the powerset of \(S\) by \(\mathbf{2}^{S}\). We use various propositional languages to express properties of the systems we consider. In these languages, we will let \(\Phi\) be a finite and non-empty vocabulary of Boolean variables, with typical elements \(p,q,\ldots\). Where \(a\) is a finite word and \(b\) is also a word (either finite or infinite), we denote the word obtained by concatenating \(a\) and \(b\) by \(ab\). Where \(a\) is a finite word, we denote by \(a^{\omega}\) the infinite repetition of \(a\). Finally, we use \(\mathbb{R}_{n}^{+}\) for the set of \(n\)-tuples of non-negative real numbers.
**Concurrent Game Arenas:** We work with concurrent game structures, which in this work we will refer to as _arenas_ (to distinguish them from the game structures that we introduce later in this section) [3]. Formally a _concurrent game arena_ is given by a structure
\[\mathcal{A}=(\mathcal{S},\mathcal{N},Ac_{1},\ldots,Ac_{n},\mathcal{T}, \mathcal{C},\mathcal{L},s_{0}),\]
where: \(\mathcal{S}\) is a finite and non-empty set of _arena states_; \(\mathcal{N}=\{1,\ldots,n\}\) is the set of _agents_ - for any \(i\in\mathcal{N}\), we let \(-i=\mathcal{N}\setminus\{i\}\) denote the set of _all agents excluding_\(i\); for each \(i\in\mathcal{N}\), \(Ac_{i}\) is the finite and non-empty set of unique actions available to agent \(i\) - we let \(Ac=\bigcup_{i\in\mathcal{N}}Ac_{i}\) denote the set of all _actions_ available to all players in the game and \(\vec{Ac}=Ac_{1}\times\cdots\times Ac_{n}\) denote the set of all _action profiles_; \(\mathcal{T}:\mathcal{S}\times Ac_{1}\times\cdots\times Ac_{n}\to\mathcal{S}\) is the _state transformer function_ which prescribes how the state of the arena is updated for each possible action profile - we refer to a pair \((s,\vec{\alpha})\), consisting of a state \(s\in\mathcal{S}\) and an action profile \(\vec{\alpha}\in\vec{Ac}\) as a _state-action profile_; \(\mathcal{C}:\mathcal{S}\times Ac_{1}\times\cdots\times Ac_{n}\to\mathbb{R}_{+} ^{n}\) is the _cost function_ - given a state-action profile \((s,\vec{\alpha})\) and an agent \(i\in\mathcal{N}\), we write \(\mathcal{C}_{i}(s,\vec{\alpha})\) for the \(i\)-th component of \(\mathcal{C}(s,\vec{\alpha})\), which corresponds to the cost that agent \(i\) incurs when \(\vec{\alpha}\) is executed at \(s\); \(\mathcal{L}:\mathcal{S}\to\mathbf{2}^{\Phi}\) is a _labelling function_ that specifies which propositional variables are true in each state \(s\in\mathcal{S}\); and \(s_{0}\in\mathcal{S}\) is the _initial state_ of the arena. In what follows, it is useful to define for every agent \(i\in\mathcal{N}\) the value \(c_{i}^{*}\) to be the maximum cost that \(i\) could incur through the execution of a state-action profile: \(c_{i}^{*}=\max\{\mathcal{C}_{i}(s,\vec{\alpha})\mid s\in\mathcal{S},\vec{ \alpha}\in\vec{Ac}\}\).
**Runs:** Games are played in an arena as follows. The arena begins in its initial state \(s_{0}\), and each agent \(i\in\mathcal{N}\) selects an action \(\alpha_{i}\in Ac_{i}\) to perform; the actions so selected define an action profile, \(\vec{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\in Ac_{1}\times\cdots\times Ac_{n}\). The arena then transitions to a new state \(s_{1}=\mathcal{T}(s_{0},\alpha_{1},\ldots,\alpha_{n})\). Each agent then selects another action \(\alpha_{i}^{\prime}\in Ac_{i}\), and the arena again transitions to a new state \(s_{2}=\mathcal{T}(s_{1},\bar{\alpha}^{\prime})\). In this way, we trace out an infinite interleaved sequence of states and action profiles, referred to as a _run_, \(\rho:s_{0}\stackrel{{\bar{\alpha}_{0}}}{{\longrightarrow}}s_{1}\stackrel{{\bar{\alpha}_{1}}}{{\longrightarrow}}s_{2}\stackrel{{\bar{\alpha}_{2}}}{{\longrightarrow}}\cdots\).
Where \(\rho\) is a run and \(k\in\mathbb{N}\), we write \(s(\rho,k)\) to denote the state indexed by \(k\) in \(\rho\), so \(s(\rho,0)\) is the first state in \(\rho\), \(s(\rho,1)\) is the second, and so on. In the same way, we denote the \(k\)-th action profile played in a run \(\rho\) by \(\bar{\alpha}(\rho,k-1)\) and to single out an individual agent \(i\)'s \(k\)-th action, we write \(\alpha_{i}(\rho,k-1)\).
Above, we defined the cost function \(\mathcal{C}\) with respect to individual state-action pairs. In what follows, we find it useful to lift the cost function from individual state-action pairs to sequences of state-action pairs and runs. Since runs are infinite, simply taking the sum of costs is not appropriate: instead, we consider the cost of a run to be the _average_ cost incurred by an agent \(i\) over the run; more precisely, we define the average cost incurred by agent \(i\) over the first \(t\) steps of the run \(\rho\) as \(\mathcal{C}_{i}(\rho,0:t)=\frac{1}{t+1}\sum_{j=0}^{t}\mathcal{C}_{i}(\rho,j)\) for \(t\geq 1\), whereby \(\mathcal{C}_{i}(\rho,j)\) we mean \(\mathcal{C}_{i}(s(\rho,j),\bar{\alpha}(\rho,j))\). Then, we define the cost incurred by an agent \(i\) over the run \(\rho\), denoted \(\mathcal{C}_{i}(\rho)\), as the _inferior limit of means_: \(\mathcal{C}_{i}(\rho)=\liminf_{t\to\infty}\mathcal{C}_{i}(\rho,0:\,t)\). It can be shown that the value \(\mathcal{C}_{i}(\rho)\) always converges because the sequence of averages \(\mathcal{C}_{i}(\rho,0:t)\) is Cauchy.
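As a small illustration of these definitions, the sketch below computes the running averages \(\mathcal{C}_{i}(\rho,0{:}t)\) along a finite prefix of a run; \(\mathcal{C}_{i}(\rho)\) is the inferior limit of this sequence, which a finite prefix can only approximate.

```python
def running_average_costs(step_costs):
    """Given per-step costs C_i(rho, 0), C_i(rho, 1), ... for one agent along a run prefix,
    return the sequence of averages C_i(rho, 0:t) = (1 / (t + 1)) * sum_{j <= t} C_i(rho, j)."""
    averages, total = [], 0.0
    for t, cost in enumerate(step_costs):
        total += cost
        averages.append(total / (t + 1))
    return averages

# Example: a run alternating costs 0 and 4 has running averages approaching 2.
# running_average_costs([0, 4] * 10)[-1]  ->  2.0
```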
**Linear Temporal Logic:** We use the language of Linear Temporal Logic (LTL) to express properties of runs [34, 12]. Formally, the syntax of LTL is defined wrt. a set \(\Phi\) of Boolean variables by the following grammar:
\[\varphi::=\top\mid p\mid\neg\varphi\mid\varphi\lor\varphi\mid\mathbf{X} \varphi\mid\varphi\,\mathbf{U}\varphi \tag{1}\]
where \(p\in\Phi\). Other usual logic connectives ("\(\perp\)", "\(\wedge\)", "\(\to\)", "\(\leftrightarrow\)") are defined in terms of \(\neg\) and \(\lor\) in the conventional way. Given a set of variables \(\Phi\), let \(LTL(\Phi)\) be the set of LTL formulae over \(\Phi\); where the variable set \(\Phi\) is clear from the context, we simply write \(LTL\). We interpret formulae of LTL with respect to pairs \((\rho,t)\), where \(\rho\) is a run, and \(t\in\mathbb{N}\) is a temporal index into \(\rho\). Any given LTL formula may be true at no time point or at multiple time points on a run; for example, a formula \(\mathbf{X}q\) will be true at a time point \(t\in\mathbb{N}\) on a run \(\rho\) if \(q\) is true on a run \(\rho\) at time \(t+1\). We will write \((\rho,t)\models\varphi\) to mean that \(\varphi\in LTL\) is true at time \(t\in\mathbb{N}\) on run \(\rho\). The rules defining when formulae are true (i.e., the semantics of LTL) are defined as follows:
\[\begin{array}{lcl}
(\rho,t)\models\top & & \\
(\rho,t)\models p & \text{iff} & p\in\mathcal{L}(s(\rho,t))\quad\text{(where $p\in\Phi$)}\\
(\rho,t)\models\neg\varphi & \text{iff} & \text{it is not the case that $(\rho,t)\models\varphi$}\\
(\rho,t)\models\varphi\lor\psi & \text{iff} & (\rho,t)\models\varphi\text{ or }(\rho,t)\models\psi\\
(\rho,t)\models\mathbf{X}\varphi & \text{iff} & (\rho,t+1)\models\varphi\\
(\rho,t)\models\varphi\,\mathbf{U}\,\psi & \text{iff} & \text{for some $t^{\prime}\geq t$: $(\rho,t^{\prime})\models\psi$}\\
& & \text{and for all $t\leq t^{\prime\prime}<t^{\prime}$: $(\rho,t^{\prime\prime})\models\varphi$}
\end{array}\]
We write \(\rho\models\varphi\) as a shorthand for \((\rho,0)\models\varphi\), in which case we say that \(\rho\)_satisfies_\(\varphi\). A formula \(\varphi\) is _satisfiable_ if there is some run satisfying \(\varphi\). Checking satisfiability for LTL formulae is known to be PSpace-complete [39], while the synthesis problem for LTL is 2ExpTime-complete [35]. In addition to the LTL tense operators \(\mathbf{X}\) ("in the next state...") and \(\mathbf{U}\) ("...until..."), we make use of the two derived operators \(\mathbf{F}\) ("eventually...") and \(\mathbf{G}\) ("always..."), which are defined as follows [12]: \(\mathbf{F}\varphi=\top\,\mathbf{U}\,\varphi\) and \(\mathbf{G}\varphi=\neg\mathbf{F}\neg\varphi\).
**Strategies:** We model strategies for agents as finite-state machines with output. Formally, a strategy \(\sigma_{i}\) for agent \(i\in\mathcal{N}\) is given by a structure \(\sigma_{i}=(Q_{i},next_{i},do_{i},q_{i}^{0})\), where \(Q_{i}\) is a finite set of machine
states, \(next_{i}:Q_{i}\times Ac_{1}\times\cdots\times Ac_{n}\to Q_{i}\) is the machine's state transformer function, \(do_{i}:Q_{i}\to Ac_{i}\) is the machine's action selection function, and \(q_{i}^{0}\in Q_{i}\) is the machine's initial state. A collection of strategies, one for each agent \(i\in\mathcal{N}\), is a _strategy profile_: \(\vec{\sigma}=(\sigma_{1},\ldots,\sigma_{n})\). A strategy profile \(\vec{\sigma}\) enacted in an arena \(\mathcal{A}\) will generate a unique run, which we denote by \(\rho(\vec{\sigma},\mathcal{A})\); the formal definition is standard, and we will omit it here [18]. Where \(\mathcal{A}\) is clear from the context, we will simply write \(\rho(\vec{\sigma})\). For each agent \(i\in\mathcal{N}\), we write \(\Sigma_{i}\) for the set of all possible strategies for the agent and \(\Sigma=\Sigma_{1}\times\cdots\times\Sigma_{n}\) for the set of all possible strategy profiles for all players.
For a set of distinct agents \(A\subseteq\mathcal{N}\), we write \(\Sigma_{A}=\prod_{i\in A}\Sigma_{i}\) for the set of partial strategy profiles available to the group \(A\) and \(\Sigma_{-A}=\prod_{j\in\mathcal{N}\setminus A}\Sigma_{j}\) for the set of partial strategy profiles available to the set of all agents excluding those in \(A\). Where \(\vec{\sigma}=(\sigma_{1},\ldots,\sigma_{i},\ldots,\sigma_{n})\) is a strategy profile and \(\sigma_{i}^{\prime}\) is a strategy for agent \(i\), we denote the strategy profile obtained by replacing the \(i\)-th component of \(\vec{\sigma}\) with \(\sigma_{i}^{\prime}\) by \((\vec{\sigma}_{-i},\sigma_{i}^{\prime})\). Similarly, given a strategy profile \(\vec{\sigma}\) and a set of agents \(A\subseteq\mathcal{N}\), we write \(\vec{\sigma}_{A}=(\sigma_{i})_{i\in A}\) to denote a partial strategy profile for the agents in \(A\) and if \(\vec{\sigma}_{A}^{\prime}\in\Sigma_{A}\) is another partial strategy profile for \(A\), we write \((\vec{\sigma}_{-A},\vec{\sigma}_{A}^{\prime})\) for the strategy profile obtained by replacing \(\vec{\sigma}_{A}\) in \(\vec{\sigma}\) with \(\vec{\sigma}_{A}^{\prime}\).
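To make the machine model concrete, the following sketch represents each strategy as a finite-state machine \((Q_{i},next_{i},do_{i},q_{i}^{0})\) and traces the unique run \(\rho(\vec{\sigma},\mathcal{A})\) induced by a strategy profile; all concrete names (`Strategy`, `generate_run_prefix`) are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Strategy:
    states: frozenset      # Q_i
    next_state: Callable   # next_i : Q_i x (Ac_1 x ... x Ac_n) -> Q_i
    do: Callable           # do_i : Q_i -> Ac_i
    initial: object        # q_i^0

def generate_run_prefix(transition, s0, strategies, length):
    """Trace the first `length` state / action-profile pairs of rho(sigma, A)."""
    q = [sigma.initial for sigma in strategies]   # one machine state per agent
    s, prefix = s0, []
    for _ in range(length):
        joint = tuple(sigma.do(qi) for sigma, qi in zip(strategies, q))
        prefix.append((s, joint))
        s = transition(s, joint)                  # T(s, alpha_1, ..., alpha_n)
        q = [sigma.next_state(qi, joint) for sigma, qi in zip(strategies, q)]
    return prefix
```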
**Games, Utilities, and Preferences:** We obtain a _concurrent game_ from an arena \(\mathcal{A}\) by associating with each agent \(i\) a goal \(\gamma_{i}\), represented as an LTL formula. Formally, a concurrent game \(\mathcal{G}\) is given by a structure
\[\mathcal{G}=(\mathcal{S},\mathcal{N},Ac_{1},\ldots,Ac_{n},\mathcal{T}, \mathcal{C},\mathcal{L},s_{0},\gamma_{1},\ldots,\gamma_{n}),\]
where \((\mathcal{S},\mathcal{N},Ac_{1},\ldots,Ac_{n},\mathcal{T},\mathcal{C}, \mathcal{L},s_{0})\) is a concurrent game arena, and \(\gamma_{i}\) is the LTL goal of agent \(i\), for each \(i\in\mathcal{N}\). Runs in a concurrent game \(\mathcal{G}\) are defined over the game's arena \(\mathcal{A}\), and hence we use the notations \(\rho(\vec{\sigma},\mathcal{G})\) and \(\rho(\vec{\sigma},\mathcal{A})\) interchangeably. When the game or arena is clear from the context, we omit the \(\mathcal{G}\) and simply write \(\rho(\vec{\sigma})\). Given a strategy profile \(\vec{\sigma}\), the generated run \(\rho(\vec{\sigma})\) will satisfy the goals of some agents and not satisfy the goals of others, that is, there will be a set \(W(\vec{\sigma})=\{i\in\mathcal{N}:\rho(\vec{\sigma})\models\gamma_{i}\}\) of _winners_ and a set \(L(\vec{\sigma})=\mathcal{N}\setminus W(\vec{\sigma})\) of _losers_.
We are now ready to define preferences for agents. Our basic idea is that, as in [41], agents' preferences are structured: they first desire to accomplish their goal, and secondarily desire to minimise their costs. To capture this idea, it is convenient to define preferences via utility functions \(u_{i}\) over runs, where \(i\)'s utility for a run \(\rho\) is
\[u_{i}(\rho)=\left\{\begin{array}{ll}1+c_{i}^{*}-\mathcal{C}_{i}(\rho)&\text {if }\rho\models\gamma_{i}\\ -\mathcal{C}_{i}(\rho)&\text{otherwise.}\end{array}\right.\]
Defined in this way, if an agent \(i\) achieves their goal, their utility lies in the range \([1,c_{i}^{*}+1]\) (depending on the cost they incur), whereas if they do not achieve their goal, their utility lies within \([-c_{i}^{*},0]\). Preference relations \(\succeq_{i}\) over runs are then defined in the obvious way: \(\rho_{1}\succeq_{i}\rho_{2}\) if and only if \(u_{i}(\rho_{1})\geq u_{i}(\rho_{2})\), with indifference relations \(\sim_{i}\) and strict preference relations \(\succ_{i}\) defined as usual.
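A direct transcription of this utility function into Python is given below; it is only a sketch, with the run cost and goal satisfaction assumed to have been computed elsewhere, and \(c_{i}^{*}\) passed in as the cost bound used in the text.

```python
def utility(goal_satisfied: bool, run_cost: float, c_star: float) -> float:
    """u_i(rho): goal satisfaction dominates; cost only matters within each case."""
    if goal_satisfied:
        return 1.0 + c_star - run_cost   # lies in [1, c_star + 1] when 0 <= run_cost <= c_star
    return -run_cost                      # lies in [-c_star, 0]

# Any run satisfying the goal is preferred to any run that does not, whatever the costs:
assert utility(True, run_cost=5.0, c_star=5.0) > utility(False, run_cost=0.0, c_star=5.0)
```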
**Nash equilibrium:** A strategy profile \(\vec{\sigma}\) is a (pure strategy) Nash equilibrium if there is no agent \(i\) and strategy \(\sigma_{i}^{\prime}\) such that \(\rho(\vec{\sigma}_{-i},\sigma_{i}^{\prime})\succ_{i}\rho(\vec{\sigma})\). If such a strategy \(\sigma_{i}^{\prime}\) exists for a given agent \(i\), we say that \(\sigma_{i}^{\prime}\) is a _beneficial deviation_ for \(i\) from \(\vec{\sigma}\). Given a game \(\mathcal{G}\), let \(\textsc{ne}(\mathcal{G})\) denote its set of Nash equilibria. In general, Nash equilibria in this model of concurrent games may require agents to play infinite memory strategies [9], but we do not consider these in this study 1. Where \(\varphi\) is an LTL formula, we find it useful to define \(\textsc{ne}_{\varphi}(\mathcal{G})\) to be the set of Nash equilibrium strategy profiles that result in \(\varphi\) being satisfied: \(\textsc{ne}_{\varphi}(\mathcal{G})=\{\vec{\sigma}\in\textsc{ne}(\mathcal{G}) \mid\rho(\vec{\sigma})\models\varphi\}\). It is sometimes useful to consider a concurrent game that is modified so that no costs are incurred in it. We call such a game a _cost-free game_. Where \(\mathcal{G}\) is a game, let \(\mathcal{G^{\mathbf{0}}}\) denote
the game that is the same as \(\mathcal{G}\) except that the cost function \(\mathcal{C}^{\mathbf{0}}\) of \(\mathcal{G}^{\mathbf{0}}\) is such that \(\mathcal{C}^{\mathbf{0}}_{i}(s,\vec{\alpha})=0\) for all \(i\in\mathcal{N}\), \(s\in\mathcal{S}\), and \(\vec{\alpha}\in\vec{Acc}\). Given this, the following is readily established (cf., [18]):
**Theorem 1**.: _Given a game \(\mathcal{G}\), the problem of checking whether \(\textsc{ne}(\mathcal{G}^{\mathbf{0}})\neq\emptyset\) is 2ExpTime-complete._
The notion of Nash equilibrium is closely related to the concept of beneficial deviations. Given how preferences are defined in this study, it will be useful to introduce terminology that captures the potential deviations that agents may have [20]. Firstly, given a game \(\mathcal{G}\), we say that a strategy profile \(\vec{\sigma}^{1}\in\Sigma\) is _distinguishable_ from another strategy profile \(\vec{\sigma}^{2}\in\Sigma\) if \(\rho(\vec{\sigma}^{1},\mathcal{G})\neq\rho(\vec{\sigma}^{2},\mathcal{G})\). Then, for an agent \(i\), a strategy profile \(\vec{\sigma}\), and an alternative strategy \(\sigma^{\prime}_{i}\neq\sigma_{i}\), we say that \(\sigma^{\prime}_{i}\) is an _initial deviation_ for agent \(i\) from strategy profile \(\vec{\sigma}\), written \(\vec{\sigma}\to_{i}(\vec{\sigma}_{-i},\sigma^{\prime}_{i})\), if we have \(i\in W(\vec{\sigma})\Rightarrow i\in W(\vec{\sigma}_{-i},\sigma^{\prime}_{i})\) and strategy profile \(\vec{\sigma}\) is distinguishable from \((\vec{\sigma}_{-i},\sigma^{\prime}_{i})\).
## 3 Taxation Schemes
We now introduce a model of incentives for concurrent games. For incentives to work, they clearly must appeal to an agent's preferences \(\succeq_{i}\). As we saw above, incentives for our games are defined with respect to both goals and costs: an agent's primary desire is to see their goal achieved - the desire to minimise costs is strictly secondary to this. We will assume that we cannot change agents' goals: they are assumed to be fixed and immutable. It follows that any incentives we offer an agent to alter their behaviour must appeal to the costs incurred by that agent. Our basic model of incentives assumes that we can alter the cost structure of a game by imposing _taxes_, which depend on the collective actions that agents choose in different states. Taxes may increase an agent's costs, influencing their preferences and rational choices.
Formally, we model static taxation schemes as functions \(\tau:\mathcal{S}\times\vec{Acc}\to\mathbb{R}^{n}_{+}\). A static taxation scheme \(\tau\) imposed on a game \(\mathcal{G}=(\mathcal{S},\mathcal{N},Ac_{1},\ldots,Ac_{n},\mathcal{T}, \mathcal{C},\mathcal{L},s_{0},\gamma_{1},\ldots,\gamma_{n})\) will result in a new game, which we denote by
\[\mathcal{G}^{\tau}=(\mathcal{S},\mathcal{N},Ac_{1},\ldots,Ac_{n},\mathcal{T},\mathcal{C}^{\tau},\mathcal{L},s_{0},\gamma_{1},\ldots,\gamma_{n}),\]
which is the same as \(\mathcal{G}\) except that the cost function \(\mathcal{C}^{\tau}\) of \(\mathcal{G}^{\tau}\) is defined as \(\mathcal{C}^{\tau}(s,\vec{\alpha})=\mathcal{C}(s,\vec{\alpha})+\tau(s,\vec{ \alpha})\). Similarly, we write \(\mathcal{A}^{\tau}\) to denote the arena with modified cost function \(\mathcal{C}^{\tau}\) associated with \(\mathcal{G}^{\tau}\) and \(u^{\tau}_{i}(\rho)\)
Figure 1: (a) Illustration of the lexicographic quantitative and qualitative preferences of agents. (b) A concurrent game where two robots are situated in a grid world and are programmed to 1) never crash into another robot and 2) to secondarily minimise their limit-average costs. The arrows indicate how a run may be decomposed into a non-repeating and an infinitely-repeating component.
to denote the utility function of agent \(i\) over run \(\rho\) with the modified cost function \(\mathcal{C}^{\tau}\). Given \(\mathcal{G}\) and a taxation scheme \(\tau\), we write \(\rho_{1}\succeq_{i}^{\tau}\rho_{2}\) iff \(u_{i}^{\tau}(\rho_{1})\geq u_{i}^{\tau}(\rho_{2})\). The indifference relations \(\sim_{i}^{\tau}\) and strict preference relations \(\succ_{i}^{\tau}\) are defined analogously.
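The transformation from \(\mathcal{C}\) to \(\mathcal{C}^{\tau}\) is purely pointwise, as the following Python sketch makes explicit (the representation of states, action profiles, and cost vectors is an assumption of ours, not part of the formal model).

```python
from typing import Callable, Tuple

State = str
JointAction = Tuple[str, ...]
CostFn = Callable[[State, JointAction], Tuple[float, ...]]   # returns one cost per agent

def taxed_cost(cost: CostFn, tax: CostFn) -> CostFn:
    """Cost function of the taxed game G^tau: C^tau(s, alpha) = C(s, alpha) + tau(s, alpha)."""
    def new_cost(s: State, alpha: JointAction) -> Tuple[float, ...]:
        return tuple(c + t for c, t in zip(cost(s, alpha), tax(s, alpha)))
    return new_cost
```

Because taxes are non-negative, the taxed cost is never below the original cost, which is why incentives can only work by making some behaviours more expensive relative to others.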
The model of static taxation schemes has the advantage of simplicity, but it is naturally limited in the range of behaviours it can incentivise--particularly with respect to behaviours \(\Upsilon\) expressed as LTL formulae. To overcome this limitation, we therefore introduce a _dynamic_ model of taxation schemes. This model essentially allows a designer to impose taxation schemes that can choose to tax the same action in different amounts, depending on the history of the run to date. A very natural model for dynamic taxation schemes is to describe them using a finite state machine with output--the same approach that we used to model strategies for individual agents. Formally, a _dynamic taxation scheme_\(T\) is defined by a tuple \(T=(Q_{T},next_{T},do_{T},q_{T}^{0})\) where \(Q_{T}\) is a finite set of taxation machine states, \(next_{T}:Q_{T}\times Ac_{1}\times\cdots\times Ac_{n}\to Q_{T}\) is the transition function of the machine, \(q_{T}^{0}\in Q_{T}\) is the initial state, and \(do_{T}:Q_{T}\to(\mathcal{S}\times\tilde{A}c\to\mathbb{R}_{+}^{n})\) is the output function of the machine. With this, let \(\mathcal{T}\) be the set of all dynamic taxation schemes for a game \(\mathcal{G}\). As a run unfolds, we think of the taxation machine being executed alongside the strategies. At each time step, the machine outputs a static taxation scheme, which is applied at that time step only, with \(do_{T}(q_{T}^{0})\) being the initial taxation scheme imposed.
When we impose dynamic taxation schemes, we no longer have a simple transformation \(\mathcal{G}^{\tau}\) on games as we did with static taxation schemes \(\tau\). Instead, we define the effect of a taxation scheme with respect to a run \(\rho\). Formally, given a run \(\rho\) of a game \(\mathcal{G}\), a dynamic taxation scheme \(T\) induces an infinite sequence of static taxation schemes, which we denote by \(t(\rho,T)\). We can think of this sequence as a function \(t(\rho,T):\mathbb{N}\to(\mathcal{S}\times\tilde{A}c\to\mathbb{R}_{+}^{n})\). We denote the cost of the run \(\rho\) in the presence of a dynamic taxation scheme \(T\) by \(\mathcal{C}^{T}(\rho)\):
\[\mathcal{C}^{T}(\rho)=\liminf_{u\to\infty}\frac{1}{u}\sum_{v=0}^{u}\Big[\mathcal{C}(\rho,v)+\underbrace{t(\rho,T)(v)(s(\rho,v),\vec{\alpha}(\rho,v))}_{(*)}\Big]\]
The expression \((*)\) denotes the vector of taxes incurred by the agents as a consequence of performing the action profile which they chose at time step \(v\) on the run \(\rho\). The cost \(\mathcal{C}^{T}_{i}(\rho)\) to agent \(i\) of the run \(\rho\) under \(T\) is then given by the \(i\)-th component of \(\mathcal{C}^{T}(\rho)\).
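In practice the liminf can only be estimated from a finite prefix of a run; the following sketch computes the taxed average cost of such a prefix, with the dynamic scheme abstracted as a function from time steps to per-step tax vectors (all names are illustrative).

```python
from typing import Callable, List, Tuple

State = str
JointAction = Tuple[str, ...]
CostVec = Tuple[float, ...]

def taxed_average_cost(prefix: List[Tuple[State, JointAction]],
                       base_cost: Callable[[State, JointAction], CostVec],
                       tax_at: Callable[[int, State, JointAction], CostVec],
                       n_agents: int) -> CostVec:
    """Average of C(rho, v) + t(rho, T)(v)(s(rho, v), alpha(rho, v)) over a finite prefix;
    for long enough prefixes of an eventually periodic run this approximates C^T(rho)."""
    totals = [0.0] * n_agents
    for v, (s, alpha) in enumerate(prefix):
        c, t = base_cost(s, alpha), tax_at(v, s, alpha)
        for i in range(n_agents):
            totals[i] += c[i] + t[i]
    return tuple(x / max(len(prefix), 1) for x in totals)
```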
**Example 1**.: _Two robots are situated in a grid world (Figure 1b), where atomic propositions represent events where a robot picks up an apple (label \(a_{ij}\) represents agent \(i\) picking up apple \(j\)), has delivered an apple to the basket (label \(b_{i}\) represents agent \(i\) delivering an apple to the basket), or where the robots have crashed into each other (label \(c\)). Additionally, suppose that both robots are programmed with LTL goals \(\gamma_{1}=\gamma_{2}=\mathbf{G}\neg c\). In this way, the robots are not pre-programmed to perform specific tasks, and it is therefore the duty of the principal to design taxes that motivate the robots to perform a desired function, e.g., pick apples and deliver them to the basket quickly. Because the game is initially costless, there is an infinite number of Nash equilibria that could arise from this scenario and it is by no means obvious that the robots will choose one in which they perform the desired function. Hence, the principal may attempt to design a taxation scheme to eliminate those that do not achieve their objective, thus motivating the robots to collect apples and deliver them to the basket. Clearly, using dynamic taxation schemes affords the principal more control over how the robots should accomplish this than static taxation schemes._
## 4 Nash Implementation
We consider the scenario in which a principal, who is external to the game, has a particular goal that they wish to see satisfied within the game; in a general economic setting, the goal might be intended
to capture some principle of social welfare, for example. In our setting, the goal is specified as an LTL formula \(\Upsilon\), and will typically represent a desirable system/global behaviour. The principal has the power to influence the game by choosing a taxation scheme and imposing it upon the game. Then, given a game \(\mathcal{G}\) and a goal \(\Upsilon\), our primary question is whether it is possible to design a taxation scheme \(T\) such that, assuming the agents, individually and independently, act rationally (by choosing strategies \(\vec{\sigma}\) that collectively form a Nash equilibrium in the modified game), the goal \(\Upsilon\) will be satisfied in the run \(\rho(\vec{\sigma})\) that results from executing the strategies \(\vec{\sigma}\). In this section, we will explore two ways of interpreting this problem.
**E-Nash Implementation:** A goal \(\Upsilon\) is _E-Nash implemented_ by a taxation scheme \(T\) in \(\mathcal{G}\) if there is a Nash equilibrium strategy profile \(\vec{\sigma}\) of the game \(\mathcal{G}^{T}\) such that \(\rho(\vec{\sigma})\models\Upsilon\). The notion of E-Nash implementation is thus analogous to the E-Nash concept in rational verification [15, 16]. Observe that, if the answer to this question is "yes" then this implies that the game \(\mathcal{G}^{T}\) has at least one Nash equilibrium. Let us define the set \(\textsc{eni}(\mathcal{G},\Upsilon)\) to be the set of taxation schemes \(T\) that E-Nash implements \(\Upsilon\) in \(\mathcal{G}\):
\[\textsc{eni}(\mathcal{G},\Upsilon)=\{T\in\mathcal{T}\mid\textsc{ne}_{\Upsilon }(\mathcal{G}^{T})\neq\emptyset\}\.\]
The obvious decision problem is then as follows:
E-Nash Implementation:
_Given_: Game \(\mathcal{G}\), LTL goal \(\Upsilon\).
_Question_: Is it the case that \(\textsc{eni}(\mathcal{G},\Upsilon)\neq\emptyset\)?
This decision problem proves to be closely related to the E-Nash problem [15, 16], and the following result establishes its complexity:
**Theorem 2**.: E-Nash Implementation _is 2ExpTime-complete, even when \(\mathcal{T}\) is restricted to static taxation schemes._
Proof.: For membership, we can check whether \(\Upsilon\) is satisfied on any Nash equilibrium of the cost-free concurrent game \(\mathcal{G}^{\emptyset}\) obtained from \(\mathcal{G}\) by effectively removing its cost function using a static taxation scheme which makes all costs uniform for all agents. This then becomes the E-Nash problem, known to be 2ExpTime-complete. The answer will be "yes" iff \(\Upsilon\) is satisfied on some Nash equilibrium of \(\mathcal{G}^{\emptyset}\); and if the answer is "yes", then observing that \(\textsc{ne}(\mathcal{G}^{T})\subseteq\textsc{ne}(\mathcal{G}^{\emptyset})\) for all taxation schemes \(T\in\mathcal{T}\)[41], the given LTL goal \(\Upsilon\) can be E-Nash implemented in \(\mathcal{G}\). For hardness, we can reduce the problem of checking whether a cost-free concurrent game \(G\) has a Nash equilibrium (Theorem 1). Simply ask whether \(\Upsilon=\top\) can be E-Nash implemented in \(\mathcal{G}^{\emptyset}\).
For the second part of the result, observe that the reduction above only involves removing the costs from the game and checking the answer to E-Nash, which can be done using a simple static taxation scheme. Hardness follows in a similar manner.
**A-Nash Implementation:** The universal counterpart of E-Nash implementation is _A-Nash Implementation_. We say that \(\Upsilon\) is _A-Nash implemented_ by \(T\) in \(\mathcal{G}\) if we have both 1) \(\Upsilon\) is E-Nash implemented by \(T\) in game \(\mathcal{G}\); and 2) \(\textsc{ne}(\mathcal{G}^{T})=\textsc{ne}_{\Upsilon}(\mathcal{G}^{T})\). We thus define \(\textsc{ani}(\mathcal{G},\Upsilon)\) as follows:
\[\textsc{ani}(\mathcal{G},\Upsilon)=\{T\in\mathcal{T}\mid\textsc{ne}(\mathcal{G}^{T})=\textsc{ne}_{\Upsilon}(\mathcal{G}^{T})\neq\emptyset\}\]
The decision problem is then:
A-Nash Implementation:
_Given_: Game \(\mathcal{G}\), LTL goal \(\Upsilon\).
_Question_: Is it the case that \(\textsc{ani}(\mathcal{G},\Upsilon)\neq\emptyset\)?
The following result shows that, unlike the case of E-Nash implementation, dynamic taxation schemes are _strictly more powerful_ than static taxation schemes for A-Nash implementation. It can be verified that the game in Figure 2(a), the taxation scheme in Figure 2(b), and the principal's goal being \(\Upsilon=\mathbf{G}(p\leftrightarrow q)\) are witnesses to this result (see Appendix for the full proof):
**Proposition 1**.: _There exists a game \(\mathcal{G}\) and an LTL goal \(\Upsilon\) such that \(\textsc{ani}(\mathcal{G},\Upsilon)\neq\emptyset\), but not if \(\mathcal{T}\) is restricted to static taxation schemes._
Before proceeding with the A-Nash Implementation problem, we will need to introduce some additional terminology and concepts, beginning first with deviation graphs, paths, and cycles. A _deviation graph_ is a directed graph \(\Gamma=(\mathcal{V},E)\), where \(\mathcal{V}\subseteq\Sigma\) is a set of nodes which represent strategy profiles in \(\Sigma\) and \(E\subseteq\{(\vec{\sigma},\vec{\sigma}^{\prime})\in\mathcal{V}\times\mathcal{ V}\mid\vec{\sigma}\to_{i}\vec{\sigma}^{\prime}\text{ for some }i\in\mathcal{N}\}\) is a set of directed edges between strategy profiles that represent initial deviations. Additionally, we say that a dynamic taxation scheme \(T\)_induces_ a deviation graph \(\Gamma=(\mathcal{V},E)\) if for every \((\vec{\sigma},\vec{\sigma}^{\prime})\in\mathcal{V}\times\mathcal{V}\), it holds that \(\vec{\sigma}^{\prime}\succ_{i}^{T}\vec{\sigma}\) for some \(i\in\mathcal{N}\) if and only if \((\vec{\sigma},\vec{\sigma}^{\prime})\in E\). In other words, if the edges in a deviation graph precisely capture all of the beneficial deviations between its nodes under \(T\), then the deviation graph is said to be induced by \(T\).2 Then, a _deviation path_ is simply any path \(P=(\vec{\sigma}^{1},\ldots,\vec{\sigma}^{m})\) within a deviation graph \(\Gamma\) where \((\vec{\sigma}^{j},\vec{\sigma}^{j+1})\in E\) for all \(j\in\{1,\ldots,m-1\}\).
Footnote 2: This definition implies that a taxation scheme may induce many possible deviation graphs in general, depending on the nodes selected to be part of the graph.
Because the principal is only able to observe the actions taken by the agents and not their strategies directly, any taxation scheme that changes the cost of some strategy profile \(\vec{\sigma}\) will also change the cost of all strategy profiles that are indistinguishable from \(\vec{\sigma}\) by the same amount. This naturally suggests that we modify the concept of a deviation path to take indistinguishability into account. To this end, we say that a sequence of runs \(P_{o}=(\rho^{1},\rho^{2},\ldots,\rho^{m})\) is an _observed deviation path_ in a deviation graph \(\Gamma=(\mathcal{V},E)\) if there exists an _underlying tuple_\((\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots,\vec{\sigma}^{m})\) such that for all \(j\in\{1,\ldots,m\}\), it holds that 1) \(\rho^{j}=\rho(\vec{\sigma}^{j})\), and 2) if \(j<m\), then \((\vec{\sigma}^{j},\vec{\sigma}^{j+1^{\prime}})\in E\) for some \(\vec{\sigma}^{j+1^{\prime}}\) such that \(\rho(\vec{\sigma}^{j+1^{\prime}})=\rho(\vec{\sigma}^{j+1})\). Then, a _deviation cycle_ is a deviation path \((\vec{\sigma}^{1},\ldots,\vec{\sigma}^{m})\) where \(\rho(\vec{\sigma}^{1})=\rho(\vec{\sigma}^{m})\). A deviation path \(P=(\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots,\vec{\sigma}^{m})\) is said to _involve_ an agent \(i\) if \(\vec{\sigma}^{j}\to_{i}\vec{\sigma}^{j+1}\) for some \(j\in\{1,\ldots,m-1\}\) and similarly, an observed deviation path \(P_{o}\) in a deviation graph involves agent \(i\) if the analogous property holds for all of its underlying sets. Given a game \(\mathcal{G}\) and a set of strategy profiles \(X\), a taxation scheme \(T\)_eliminates
Figure 2: (a): A two-agent concurrent game \(\mathcal{G}\) with action sets \(Ac_{1}=\{a,b\}\) and \(Ac_{2}=\{c,d\}\) and goals \(\gamma_{1}=\gamma_{2}=\mathbf{GF}p\), where we let \(\vec{\alpha}_{1}=(a,c),\vec{\alpha}_{2}=(a,d),\vec{\alpha}_{3}=(b,c),\vec{ \alpha}_{4}=(b,d)\). Cost vectors associated with sets denote that all action profiles within the set are assigned those costs. (b): A dynamic taxation scheme that could be imposed on the agents in the game from (a). Labels below the states represent a static taxation scheme that applies a uniform tax for all agents and all action profiles.
\(X\) if \(\textsc{ne}(\mathcal{G}^{T})\cap X=\emptyset\). Finally, a set of strategy profiles \(X\) is said to be _eliminable_ if there exists a taxation scheme that eliminates it. With this, we can characterise the conditions under which a finite set of strategy profiles is eliminable:
**Proposition 2**.: _Let \(\mathcal{G}\) be a game and \(X\subset\Sigma\) be a finite set of strategy profiles in \(\mathcal{G}\). Then, \(X\) is eliminable if and only if there exists a finite deviation graph \(\Gamma=(\mathcal{V},E)\) that satisfies the following properties: 1) For every \(\vec{\sigma}\in X\), there is some \(\vec{\sigma}^{\prime}\in\mathcal{V}\) such that \((\vec{\sigma},\vec{\sigma}^{\prime})\in E\); and 2) Every deviation cycle in \(\Gamma\) involves at least two agents._
Proof Sketch.: The forward direction follows by observing that if all deviation graphs fail to satisfy at least one of the two properties, then every deviation graph will either fail to eliminate some \(\vec{\sigma}\in X\) if induced, or will not be inducible by any dynamic taxation scheme. The backward direction can be established by constructing a dynamic taxation scheme \(T^{\Gamma}\) that induces a deviation graph \(\Gamma\) satisfying the two properties. Using these properties, it follows that \(T^{\Gamma}\) eliminates \(X\).
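To make the two conditions concrete, the sketch below checks them on an explicitly given finite deviation graph. It assumes each edge is attributed to the single agent whose strategy changes along that initial deviation, so condition (2) reduces to requiring that no cycle can be formed from one agent's deviations alone; this is an illustrative check, not the construction of the taxation scheme \(T^{\Gamma}\) used in the proof.

```python
from collections import defaultdict
from typing import Dict, Hashable, List, Set, Tuple

Node = Hashable                     # a strategy profile
Edge = Tuple[Node, Node, int]       # (profile, profile after deviation, deviating agent)

def _has_cycle(nodes: Set[Node], adj: Dict[Node, List[Node]]) -> bool:
    """Standard DFS cycle detection on a directed graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in nodes}
    def dfs(v: Node) -> bool:
        colour[v] = GREY
        for w in adj.get(v, []):
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False
    return any(colour[v] == WHITE and dfs(v) for v in nodes)

def elimination_conditions_hold(X: Set[Node], nodes: Set[Node], edges: List[Edge]) -> bool:
    """Check the two conditions of Proposition 2:
    (1) every profile in X has an outgoing initial deviation in the graph, and
    (2) every deviation cycle involves at least two agents, i.e. no single
        agent's own edges form a cycle on their own."""
    all_nodes = set(nodes) | {u for (u, _, _) in edges} | {v for (_, v, _) in edges}
    if not set(X) <= {u for (u, _, _) in edges}:
        return False                                   # condition (1) fails
    per_agent: Dict[int, Dict[Node, List[Node]]] = defaultdict(lambda: defaultdict(list))
    for (u, v, i) in edges:
        per_agent[i][u].append(v)
    return all(not _has_cycle(all_nodes, adj) for adj in per_agent.values())
```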
To conclude our study of dynamic taxation schemes, we present a characterisation of the A-Nash implementation problem.3
Footnote 3: Note that, in general, Proposition 2 cannot be directly applied to Theorem 3, because it is assumed that the set to be eliminated is finite, whereas \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\) is generally infinite. However, this can be reconciled if some restriction is placed on the agents’ strategies so that \(\Sigma\) is finite, which is the case in many game-theoretic situations of interest, e.g., in games with memoryless, or even bounded memory, strategies – both used to model bounded rationality.
**Theorem 3**.: _Let \(\mathcal{G}\) be a game and \(\Upsilon\) be an LTL formula. Then \(\textsc{ani}(\mathcal{G},\Upsilon)\neq\emptyset\) if and only if the following conditions hold:_
1. \(\textsc{eni}(\mathcal{G},\Upsilon)\neq\emptyset\)_;_
2. \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\) _is eliminable._
Proof.: For the forward direction, it follows from the definition of the problem that if \(\textsc{eni}(\mathcal{G},\Upsilon)=\emptyset\), then \(\textsc{ani}(\mathcal{G},\Upsilon)=\emptyset\). Moreover, it is also clear that if \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\) is not eliminable, then it is impossible to design a (dynamic) taxation scheme such that only good equilibria remain in the game and hence, \(\textsc{ani}(\mathcal{G},\Upsilon)=\emptyset\).
For the backward direction, suppose that the two conditions hold and let \(T\) be a taxation scheme that only affects the limiting-average costs incurred by agents under strategy profiles in \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\), and eliminates this set. Such a taxation scheme is guaranteed to exist by the assumption that condition \((2)\) holds and because it is known that no good equilibrium is indistinguishable from a bad one. Now consider a static taxation scheme \(\tau\) such that \(c_{i}(s,\vec{\alpha})+\tau_{i}(s,\vec{\alpha})=\hat{c}\) for all \(i\in\mathcal{N}\), \((s,\vec{\alpha})\in\mathcal{S}\times\vec{A}c\), and some \(\hat{c}\geq\max_{i\in\mathcal{N}}c_{i}^{*}\). Combining \(\tau\) with \(T\) gives us a taxation scheme \(T^{*}\) such that for each state \(q\in Q_{T^{*}}=Q_{T}\) and \((s,\vec{\alpha})\in\mathcal{S}\times\vec{A}c\), we have \(do_{T^{*}}(q)(s,\vec{\alpha})=do_{T}(q)(s,\vec{\alpha})+\tau(s,\vec{\alpha})\). Now, because \(T\) eliminates \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\), and \(\textsc{ne}(\mathcal{G}^{\tau})=\textsc{ne}(\mathcal{G}^{\emptyset})\), it follows that \(T^{*}\) eliminates \(\textsc{ne}_{\neg\Upsilon}(\mathcal{G}^{\emptyset})\). Finally, note that because the satisfaction of an LTL formula on a given run is solely dependent on the run's trace, it follows that all good equilibria, i.e., strategy profiles in \(\textsc{ne}_{\Upsilon}(\mathcal{G}^{\emptyset})\), are distinguishable from all bad equilibria, so we have \(\textsc{ne}_{\Upsilon}(\mathcal{G}^{\emptyset})\cap\textsc{ne}(\mathcal{G}^{T^{*}})\neq\emptyset\).
It is straightforward to see that A-Nash Implementation is 2EXPTIME-hard via a simple reduction from the problem of checking whether a Nash equilibrium exists in a concurrent game - simply ask if the formula \(\top\) can be A-Nash implemented in \(\mathcal{G}^{\emptyset}\). However, it is an open question whether a matching upper bound exists and we conjecture that it does not. This problem is difficult primarily for two reasons. Firstly, it is well documented that Nash equilibria may require infinite memory in games with
lexicographic \(\omega\)-regular and mean-payoff objectives [9], and the complexity of deciding whether a Nash equilibrium even exists in games with our model of preferences has yet to be settled [16]. Secondly, Theorem 3 and Proposition 2 suggest that unless the strategy space is restricted to a finite set, a taxation scheme that A-Nash implements a formula may require reasoning over an infinite deviation graph, and hence require infinite memory. Nevertheless, our characterisation under such restrictions provides the first step towards understanding this problem in the more general setting.
## 5 Related Work and Conclusions
This work was motivated by [41], and based on that work, presents four main contributions: the introduction of _static and dynamic_ taxation schemes as an extension to concurrent games expanding the model in (one-shot) Boolean games [41, 19, 20]; a study of the _complexity_ of some of the most relevant computational decision problems building on previous work in rational verification [15, 18, 16]; evidence (formal proof) of the strict _advantage of dynamic taxation schemes_ over static ones, which illustrates the role of not just observability but also _memory_ to a principal's ability to (dis)incentivise certain outcomes [14, 21]; and a full characterisation of the _eliminability of sets of strategy profiles_ under dynamic taxation schemes and the A-Nash implementation problem.
The incentive design problem has been studied in many different settings, and [36] group existing approaches broadly into those from the economics, control theory, and machine learning communities. However, more recent works in this area adopt multi-disciplinary methods such as automated mechanism design [31, 28, 38, 4], which typically focus on the problem of constructing incentive-compatible mechanisms to optimise a particular objective such as social welfare. Other approaches in this area reduce mechanism design to a program synthesis problem [30] or a satisfiability problem for quantitative strategy logic formulae [26, 29]. The notion of dynamic incentives has also been investigated in (multi-agent) learning settings [8, 27, 37, 43, 11]. These works focus solely on adaptively modifying the rewards for quantitative reward-maximising agents. In contrast, our model of agent utilities more naturally captures fundamental constraints on the principal's ability to (dis)incentivise certain outcomes due to the lexicographic nature of agents' preferences [5].
Another area closely related to incentives is that of norm design [24]. Norms are often modelled as the encouragement or prohibition of actions that agents may choose to take by a regulatory agent. The most closely related works in this area are those of [22, 32, 2], who study the problem of synthesising dynamic norms in different classes of concurrent games to satisfy temporal logic specifications. Whereas norms in these frameworks have the ability to _disable_ actions at runtime, our model confers only the power to _incentivise_ behaviours upon the principal. Finally, other studies model norms with violation penalties, but differ from our work in how incentives, preferences, and strategies are modelled [7, 6, 10].
In summary, a principal's ability to align self-interested decision-makers' interests with higher-order goals presents an important research challenge for promoting cooperation in multi-agent systems. The present study highlights the challenges associated with incentive design in the presence of constraints on the kinds of behaviours that can be elicited, makes progress on the theoretical aspects of this endeavour through an analysis of taxation schemes, and suggests several avenues for further work. Promising directions include extensions of the game model to probabilistic/stochastic or learning settings, finding optimal complexity upper bounds for the A-Nash implementation problems, and consideration of different formal models of incentives. We expect that this and such further investigations will positively contribute to our ability to develop game-theoretically aware incentives in multi-agent systems. |
2305.04511 | Deadlock-Free Collision Avoidance for Nonholonomic Robots | We present a method for deadlock-free and collision-free navigation in a
multi-robot system with nonholonomic robots. The problem is solved by quadratic
programming and is applicable to most wheeled mobile robots with linear
kinematic constraints. We introduce masked velocity and Masked Cooperative
Collision Avoidance (MCCA) algorithm to encourage a fully decentralized
deadlock avoidance behavior. To verify the method, we provide a detailed
implementation and introduce heading oscillation avoidance for
differential-drive robots. To the best of our knowledge, it is the first method
to give very promising and stable results for deadlock avoidance even in
situations with a large number of robots and narrow passages. | Ruochen Zheng, Siyu Li | 2023-05-08T07:10:18Z | http://arxiv.org/abs/2305.04511v1 | # Deadlock-Free Collision Avoidance for Nonholonomic Robots
###### Abstract
We present a method for deadlock-free and collision-free navigation in a multi-robot system with nonholonomic robots. The problem is solved by quadratic programming and is applicable to most wheeled mobile robots with linear kinematic constraints. We introduce masked velocity and Masked Cooperative Collision Avoidance (MCCA) algorithm to encourage a fully decentralized deadlock avoidance behavior. To verify the method, we provide a detailed implementation and introduce heading oscillation avoidance for differential-drive robots. To the best of our knowledge, it is the first method to give very promising and stable results for deadlock avoidance even in situations with a large number of robots and narrow passages.
## I Introduction
Collision and deadlock avoidance in a multi-robot system is a fundamental problem to solve in order to put such systems into application. While a centralized system can simultaneously determine the navigation details for all the robots, the curse of dimensionality prevents such a system from producing a satisfactory solution within an acceptable time, even for a moderate number of robots. As a result, we believe that it is more promising to search for a decentralized navigation strategy in which each robot fully utilizes its accessible information to make decisions for its own benefit [1].
Many works with a decentralized reciprocal navigation strategy use a velocity obstacle to represent a time-space region of velocities to avoid. First conceptualized in [2], velocity obstacles make it easy to represent velocity constraints mathematically. A promising framework, Optimal Reciprocal Collision Avoidance (ORCA) [3], is shown to be able to solve collision avoidance geometrically and optimally. For each robot, a linear programming (LP) problem minimizes the deviation of the chosen velocity from the preferred velocity, with permitted velocities constructed as ORCA half-planes. The maximum distance to violated half-planes is minimized if there is no valid solution. While the reciprocal decision-making mechanism is able to guarantee collision avoidance, it is usually not able to avoid or resolve deadlock. The situation is worse with dense obstacles present in the environment.
Inspired by velocity obstacle and ORCA framework, we propose a quadratic programming (QP) formulation to solve collision avoidance problems for nonholonomic robots with linear kinematic constraints. The formulation is a generalized form with its holonomic case similar to the original ORCA problem. The formulation can be applied to most controllable wheeled mobile robots in the real world with linear kinematic models.
Moreover, this paper introduces masked velocity and the Masked Cooperative Collision Avoidance (MCCA) half-plane to encourage deadlock-free navigation. A temporary and local cooperative behavior is motivated by escalating the priority of certain robots and using the masked velocity as the deadlock-free velocity intention that a robot updates and tries to approach. The propagation of masked velocity among robots will gradually influence the real velocity of other robots and continuously encourage deadlock-free navigation.
We provide an application of the QP formulation and the MCCA method using differential-drive robots. It is shown to be able to solve the collision avoidance problem efficiently, and the experiments confirm that masked velocity, MCCA half-planes and robot priority determination combined are able to resolve deadlock reliably in various scenarios with a low level of local cooperation among robots. The method also applies to robots with low acceleration capability that are difficult to maneuver. Besides, the formulation provides robots with better mobility in narrow passages as there is no need to enlarge the radius of robots significantly.
The rest of this paper is organized as follows. A brief summary of related work is provided in Section II. We present the general QP formulation for solving nonholonomic collision avoidance problems in Section III. In Section IV, we introduce masked velocity and MCCA half-planes and give the algorithms for robot prioritization and deadlock-free navigation. We combine the QP formulation and MCCA to give a working example in Section V for differential-drive robots and report simulation results in Section VI.
## II Related Work
We briefly review related work in velocity obstacles and optimal reciprocal collision avoidance.
As an early effort, [2] conceptualizes the Velocity Obstacle (VO), which is defined for circular robots and static obstacles. VO makes it easy to represent the velocity constraints mathematically. Later in [4], the Reciprocal Velocity Obstacle (RVO) is proposed as an extension of the original VO that solves the oscillation problem induced by VO. Basically, RVO keeps the velocity obstacle stable under velocity changes, so that the original velocity will not be chosen again later. It also proves that RVO is free from inducing collisions. Another effort to improve VO is [5], which defines FVO, allowing a robot to avoid others but only within a pre-determined time interval. The resulting VO is a truncated cone. FVO makes the solution space much larger compared to the original VO, as it is only necessary to avoid collision for a short time interval.
ORCA [3] utilizes the advantages of both RVO and FVO to solve a linear programming (LP) problem for each robot in the system to determine its optimized velocity. Robots are assumed to be holonomic. For each robot, an LP problem minimizes the deviation of the chosen velocity from the preferred velocity (given by a global planner), with permitted velocities constructed from ORCA half-planes.1 Given the unique structure of the problem, the LP can be easily solved with an incremental insertion of constraints and is guaranteed to be globally optimal. The two most frequently mentioned challenges of the original ORCA are discussed below.
Footnote 1: As suggested in the original ORCA, it is ideal to set \(\mathbf{v}^{\text{opt}}=\mathbf{v}^{\text{current}}\), which applies to this paper unless specified otherwise.
The first challenge is brought by the holonomic assumption. Most robots in real-world applications are nonholonomic. The solved velocity may not be reached directly and instantly, leaving a gap between the target trace and the realized trace, which could result in collision. A workaround for incorporating the kinematics of differential-drive robots is provided in [6], which adopts the kinematic adaption approach described in [7] and introduces an effective center with fully controlled velocity at the cost of a larger disc radius. However, it uses a significantly larger effective radius and does not explore the cases where the translational distance \(D\) is much smaller. A similar approach (NH-ORCA) is presented in [8], where the radius of a nonholonomic robot is enlarged by a tracking error \(\epsilon\) between the perfect holonomic motions and the nonholonomic motions. Some recent efforts for nonholonomic collision avoidance considering RVO can be found in [9] and [10].
The second challenge is the occurrence of deadlock and oscillation in a multi-robot system, where two or more robots are stuck together and unable to resolve the situation for a long time period. While a centralized decision-making mechanism (see for example [11], [12] and [13]) may eventually solve the problem, it is too time-consuming and unstable to be applied to situations with even a moderate number of robots; moreover, the evaluation and simulation of such a mechanism in a discrete time-space is questionable. Some decentralized efforts can be found as well, such as [14] and [15]. However, these mechanisms are mostly not able to deal with narrow passages. Ideally, the level of global coordination among robots should be minimal yet effective in avoiding deadlock and oscillation across various scenarios in real time, and we pursue a deadlock-free method in this direction.
## III Quadratic programming formulation
Compared to the LP formulation in ORCA, we would like a collision avoidance formulation to have the following key features.
* It is able to directly optimize the translational velocity \(\mathbf{v}=(v_{x},v_{y})\) to be close to \(\mathbf{v}^{\text{pref}}\).
* If it is not possible to meet all the half-plane constraints, the violation should be minimized.
* Control inputs and nonholonomic kinematic constraints can be incorporated directly.
A corresponding quadratic programming problem can be formulated by
\[\min\quad\alpha_{1}\left\|\mathbf{v}-\mathbf{v}^{\text{pref}}\right\|_{2}^{2}+\alpha_{2}\sum_{i\in\mathcal{O}}\delta_{i}^{2}+\alpha_{3}\sum_{j\in\mathcal{R}}\delta_{j}^{2} \tag{1}\]
\[\text{s.t.}\quad\det\left(\mathbf{B}\right)\leq\delta_{\zeta},\quad\zeta\in\mathcal{O}\cup\mathcal{R},\quad\delta_{\zeta}\geq 0 \tag{1a}\]
\[\dot{\mathbf{q}}=\mathbf{S}(\mathbf{q})\cdot\mathbf{r} \tag{1b}\]
\[\begin{bmatrix}\mathbf{F}&\mathbf{0}\\ \mathbf{0}&\mathbf{G}\end{bmatrix}\begin{bmatrix}\dot{\mathbf{q}}\\ \mathbf{r}\end{bmatrix}=0. \tag{1c}\]
Constraint (1a) describes the half-planes of permitted velocities for the current robot. Each of the half-planes is induced by a robot \(\zeta\in\mathcal{R}\) or a static obstacle \(\zeta\in\mathcal{O}\). Fig. 1 explains the relationship between a half-plane and its permitted velocities. Normally, the collision avoidance constraint would require \(\det\left(\mathbf{B}\right)\leq 0\), which could be difficult to achieve in crowded situations. We introduce an auxiliary variable \(\delta_{\zeta}\geq 0\) for each of the constraints in (1a) and penalize positive auxiliary variables heavily in the objective function, such that the collision avoidance constraints are always met with a minimized velocity violation.
Constraint (1b) and (1c) incorporate nonholonomic kinematic constraints. Nonholonomic constraints limit possible velocities of the robot or directions of motion [16]. Consider a robot with motion states described in the generalized coordinates \(\mathbf{q}=[q_{1},\cdots,q_{n}]^{T}\). \(\dot{\mathbf{q}}=[\dot{q}_{1},\cdots,\dot{q}_{n}]^{T}\) is the
corresponding velocity vector. We assume that the robot has \(m\) independent motion control motors or other mechanisms, written as \(\mathbf{r}=[r_{1},\cdots,r_{m}]^{T}\). For example, the left wheel velocity and right wheel velocity are the two independent motion control variables for a differential-drive robot. The deduction of velocity from the control vector (control inputs) is called forward state estimation or direct kinematic model, which holds for each time instance. In our formulation (1b), it is assumed that the relationship is linear and \(\mathbf{S}(\mathbf{q})=[\mathbf{s}_{1}(\mathbf{q}),\cdots,\mathbf{s}_{m}(\mathbf{q})]\) is a matrix of reachable motion vectors. A linear combination of these vectors by the control vector results in the velocity in the generalized coordinates. The linear assumption is true for most controllable wheeled mobile robots with different kinematic models, such as differential-drive, tricycle, Ackermann-drive, and robot with a trailer [16]. Therefore, the proposed formulation is applicable to most mobile robots in real-world applications. Constraint (1c) represents customized linear constraints for the velocity vector or the control inputs vector where \(\mathbf{F}\) and \(\mathbf{G}\) are the matrices for parameters. The need for customized constraints may come from dynamic and acceleration limitation, navigation strategy, regulation, etc., which increases the versatility of the formulation.
The first part of the objective function is the squared distance between \(\mathbf{v}=(v_{x},v_{y})\) and \(\mathbf{v}^{\text{pref}}\). \(v_{x}\) and \(v_{y}\) are part of the velocity vector \(\mathbf{\dot{q}}\). \(\mathbf{v}^{\text{pref}}\) is given by a global planner. The second part is the penalty for half-plane violation. The QP objective should minimize the first part on the condition that the violation of the half-planes is zero or minimal. Thus, we let \(\alpha_{2},\alpha_{3}\gg\alpha_{1}>0\) in practice. The proposed formulation is a well-defined convex quadratic programming (CQP) problem, which can be solved efficiently with QP solvers in less than one millisecond. In comparison, the general method proposed in [17] requires extensive numeric computation and may fail to find a feasible solution.
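As a rough illustration, formulation (1) without the kinematic constraints (i.e., treating the robot as holonomic) can be posed directly to an off-the-shelf QP solver. The sketch below uses cvxpy and assumes the half-plane data \((\mathbf{p}^{\zeta},\mathbf{d}^{\zeta})\) has already been constructed as in Fig. 1; the kinematic equalities (1b)-(1c) would simply be appended as additional affine constraints, and any QP solver could be substituted.

```python
import numpy as np
import cvxpy as cp

def solve_halfplane_qp(v_pref, points, directions, alpha1=1.0, alpha2=1e4):
    """Slack-variable QP: min alpha1*||v - v_pref||^2 + alpha2*sum(delta_i^2)
    s.t. det([v - p_i, d_i]) <= delta_i and delta_i >= 0 for every half-plane."""
    m = len(points)
    v = cp.Variable(2)
    delta = cp.Variable(m, nonneg=True)
    constraints = []
    for i, ((px, py), (dx, dy)) in enumerate(zip(points, directions)):
        # det of the 2x2 matrix [v - p_i | d_i]; this is linear in v.
        constraints.append((v[0] - px) * dy - (v[1] - py) * dx <= delta[i])
    objective = alpha1 * cp.sum_squares(v - v_pref) + alpha2 * cp.sum_squares(delta)
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return v.value, delta.value

# One half-plane through the origin with direction (0, 1): permitted velocities
# satisfy v_x <= 0, so the preferred velocity (1, 0) is pulled back to roughly (0, 0).
v_star, slack = solve_halfplane_qp(np.array([1.0, 0.0]),
                                   points=[(0.0, 0.0)],
                                   directions=[(0.0, 1.0)])
```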
If kinematics constraints (1b) and (1c) are removed from the formulation, it would solve collision avoidance for holonomic robots; Moreover, if ORCA half-planes are applied to Constraint (1a), we say that the holonomic QP formulation tries to approach \(\mathbf{v}^{\text{pref}}\) with ORCA half-planes2, which is similar to the linear formulation in ORCA.
Footnote 2: To generate half-planes in ORCA, while a robot has a reciprocal permitted half-plane with another robot following shared responsibility principle, the half-plane between the robot and a static obstacle is different and the robot should take full responsibility for collision avoidance.
## IV Masked cooperative collision avoidance
Another challenge is that reciprocal collision avoidance behavior, or shared responsibility, can result in deadlock or oscillation in crowded situations or near narrow passages. Deadlock avoidance is a global objective that will fail if no robot is willing to divert from its original path. Therefore, some robots must compromise for the greater good. Although centralized decision-making is natural in such situations, it could easily result in a time-consuming solving process. In this section, we demonstrate how masked velocity, MCCA half-planes and robot priority determination are combined to avoid and resolve possible deadlocks quickly with a fully decentralized decision-making and velocity planning process.
### _Masked Velocity and MCCA_
We define the masked velocity as the deadlock-free intention that a robot updates and tries to approach, which is also public to all other robots in the system. The updating of this intention is influenced by whether a robot is avoiding, or being avoided by, other robots. Assume that all the robots in \(\mathcal{R}\) can be labeled with either high or normal priority, where \(A\in\overline{\mathcal{R}}\) and \(B\in\underline{\mathcal{R}}\) represent a head robot with high priority (to be avoided) and a normal robot with normal priority (to avoid), respectively. A robot \(A\in\overline{\mathcal{R}}\) is able to update its masked velocity \(\mathbf{v}_{A}^{\overline{\text{m}}}\) without considering the masked velocity of other robots, while a robot \(B\in\underline{\mathcal{R}}\) has to take the masked velocity of other robots into consideration to solve its masked velocity \(\mathbf{v}_{B}^{\overline{\text{m}}}\). Let
\[\mathbf{v}_{A}^{\overline{\text{m}}}=\operatorname*{argmin}_{\mathbf{v}^{\text{m}}\in ORCA ^{\tau}_{A|\mathcal{O}}}\left\|\mathbf{v}^{\text{m}}-\mathbf{v}_{A}^{\text{pref}} \right\|, \tag{2}\]
where \(ORCA^{\tau}_{A|\mathcal{O}}=\bigcap\limits_{o\in\mathcal{O}}ORCA^{\tau}_{A|o}\). And let
\[\mathbf{v}_{B}^{\overline{\text{m}}}=\operatorname*{argmin}_{\mathbf{v}^{\text{m}}\in MCCA ^{\tau}_{B}}\left\|\mathbf{v}^{\text{m}}-\mathbf{v}_{B}^{\text{pref}}\right\|, \tag{3}\]
where \(MCCA^{\tau}_{B}=ORCA^{\tau}_{B|\mathcal{O}}\cap\bigcap\limits_{C\in\mathcal{R }}MCCA^{\tau}_{B|C}\). Note that masked velocity is not bounded by \(D(0,v_{B}^{\text{max}})\) as the original ORCA does. A larger masked velocity would result in more efficient deadlock resolving behavior.
Head masked velocity \(\mathbf{v}_{A}^{\overline{\text{m}}}\) in Equation (2) is solved by a holonomic QP (as mentioned in Section III) to approach \(\mathbf{v}^{\text{pref}}\) with ORCA half-planes for static obstacles \(ORCA^{\tau}_{A|\mathcal{O}}\), which describes the intention that a head robot \(A\in\overline{\mathcal{R}}\) tries to approach its current \(\mathbf{v}^{\text{pref}}\) regardless of all other robots.
Normal masked velocity \(\mathbf{v}_{B}^{\overline{\text{m}}}\) in Equation (3) is solved by a holonomic QP to approach \(\mathbf{v}^{\text{pref}}\) with MCCA half-planes \(MCCA^{\tau}_{B}\) considering static obstacles and all other robots, which describes the intention that a normal robot \(B\in\underline{\mathcal{R}}\) could only hope to approach its preferred velocity on condition that it avoids all the static obstacles \(ORCA^{\tau}_{B|\mathcal{O}}\) and the masked velocities of all other robots.
Masked velocity is solved by holonomic QP for two reasons. First, masked velocity is not the real velocity to derive control inputs thus not constrained by kinematic constraints. Instead, it is used for masked velocity propagation and influence the real velocity of other robots. Second, without considering the kinematic constraints, the masked velocity solved is often larger and usually takes more time steps to achieve, which strengthens the deadlock-free intention of all the other robots in both duration and magnitude that a normal robot has to consider.
The masked velocity of a normal robot \(B\) is influenced by other robots in both \(\overline{\mathcal{R}}\) and \(\underline{\mathcal{R}}\). In Equation (4), we propose the following Masked Cooperative Collision Avoidance (MCCA)
half-plane to represent the permitted masked velocity half-plane for \(B\) against another robot \(C\):
\[MCCA^{\tau}_{B|C}=\left\{\mathbf{v}^{\underline{\mathsf{m}}}\ \middle|\ \left(\mathbf{v}^{\underline{\mathsf{m}}}-(\mathbf{v}^{\mathsf{opt}}_{B}+\mathbf{u}_{\underline{\mathsf{m}}})\right)\cdot\mathbf{n}_{\underline{\mathsf{m}}}\geq 0\right\},\quad C\in\mathcal{R}. \tag{4}\]
Fig. 2 demonstrates the concept of Masked Velocity Obstacle (MVO) and MCCA half-plane for normal robots. For each control cycle, a robot will update its masked velocity and MCCA half-planes against all other robots. To derive the real collision avoidance velocity of this robot, we adjust the general QP objective function to
\[\text{min}\ \alpha_{1}\left\|\mathbf{v}-\mathbf{v}^{\mathsf{pred}}\right\|_{2}^{2}+ \alpha_{2}\sum_{i\in\mathcal{O}}\delta_{i}^{2}+\alpha_{3}\sum_{j\in\mathcal{R} }\delta_{j}^{2}+\alpha_{4}\sum_{k\in\mathcal{M}}\delta_{k}^{2}, \tag{5}\]
where \(\mathcal{M}\) is the set of MCCA half-planes against all other robots, \(\mathcal{O}\) and \(\mathcal{R}\) are the set of ORCA half-planes against obstacles and all other robots, respectively. As mentioned earlier, \(\mathcal{M}=\emptyset\) for a head robot since it is not required for the robot to avoid the masked velocity of other robots. Thus the objective in Equation (1) is still true for head robots. For each normal robot, it is required that \(\alpha_{2},\alpha_{3}\gg\alpha_{4}\gg\alpha_{1}\) so that the most prioritized target is to remain clear from static obstacles and other robots, then it is necessary to avoid the masked velocity of all other robots to carry out the intention to avoid deadlock, and finally it could catch up with its preferred velocity. At each control cycle, the masked velocity of all other robots may influence the real velocity of the current robot via MCCA half-planes, and when the control cycle is ended for this robot with updated masked velocity and real velocity, the masked velocity and real velocity of other robots nearby can be influenced in their future control cycles. Consequently, the propagation of masked velocity or deadlock-free intention among robots is possible in both time and space, as shown in Fig. 3.
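Equation (4) is a single linear constraint on the masked velocity, which the following small sketch makes explicit; computing \(\mathbf{u}_{\underline{\mathsf{m}}}\) and \(\mathbf{n}_{\underline{\mathsf{m}}}\) from the MVO geometry is assumed to be done elsewhere, and the numbers in the example are purely illustrative.

```python
import numpy as np

def mcca_halfplane(v_opt_B, u_m, n_m):
    """MCCA half-plane of Eq. (4): its boundary passes through v_opt_B + u_m
    (full responsibility, i.e. u_m is not halved) with inward normal n_m."""
    anchor = np.asarray(v_opt_B, dtype=float) + np.asarray(u_m, dtype=float)
    n = np.asarray(n_m, dtype=float)

    def permitted(v_masked) -> bool:
        return float(np.dot(np.asarray(v_masked, dtype=float) - anchor, n)) >= 0.0

    return anchor, n, permitted

# Current optimization velocity (0.5, 0), adjustment u_m = (-0.2, 0), normal along -x:
# the permitted masked velocities are those with v_x <= 0.3.
anchor, n, permitted = mcca_halfplane([0.5, 0.0], [-0.2, 0.0], [-1.0, 0.0])
assert permitted([0.1, 0.3]) and not permitted([0.5, 0.0])
```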
The \(\mathbf{u}_{\underline{\mathsf{m}}}\) term is not divided by \(2\) in Equation (4), to make sure that robot \(B\) with normal priority is fully responsible for avoiding the masked velocity of a head robot. Moreover, this encourages the forward and backward propagation of masked velocities among robots with normal priority. Suppose instead that the coefficient of \(\mathbf{u}_{\underline{\mathsf{m}}}\) were \(0<\beta<1\). Each time the masked velocity of head robot \(A\) is propagated to a group of normal robots \(\mathcal{L}\) not yet influenced by the MVOs of earlier propagated normal robots, the translation of the MCCA half-planes of the robots in \(\mathcal{L}\) would be reduced by a factor of \(\beta\), so the forward propagation would diminish too quickly. Conversely, the backward propagation would be affected as well: to avoid the head robot, the earlier propagated normal robots may need more space, which \(\mathcal{L}\) fails to provide because the masked velocities of the robots in \(\mathcal{L}\) are hardly influenced.
### _Robot Priority Determination_
We adopt a fully decentralized method for each robot to determine its priority autonomously at each time step before masked velocity can be applied. Fig. 4 presents the algorithm for robot priority determination. Since the masked velocity of a head robot only tries to avoid ORCA half-planes of obstacles, the conflict of masked velocity between two head robots already in \(\overline{\mathcal{R}}\) cannot be resolved and may result in deadlock. Therefore the conflict should be resolved with priority determination in the first place to ensure that there are no deadlocks among head robots.
In the algorithm, even if \(A_{i}\) is in \(\underline{\mathcal{R}}\), if \(A_{i}\) is to become a head robot, it has to make sure that its masked velocity solved as a head robot is not in conflict with any head robot in \(\overline{\mathcal{R}}\). Thus the term \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{i}}\) in Fig. 4 applies to all robots in \(\mathcal{R}\) for priority determination. The MVO cone is used to detect conflict between the two robots \(A_{i}\in\mathcal{R}\) and \(A_{j}\in\overline{\mathcal{R}}\) for any time instance from now, such that the two robots are guaranteed to be deadlock-free in masked velocity space if the masked velocity \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{i}}\) of \(A_{i}\) is not in \(MVO^{\infty}_{A_{i}|A_{j}}\). Further, even if \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{i}}\) is in \(MVO^{\infty}_{A_{i}|A_{j}}\), if the projection of the masked velocity \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{i}}\) onto the masked velocity \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{j}}\) is in the same direction as \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{j}}\), or equivalently \(\mathbf{v}^{\overline{\mathsf{m}}}_{A_{i}}\cdot\mathbf{v}^{\overline{\mathsf{m}}}_{A_{j}}\geq 0\), the deadlock induced by
Fig. 3: Propagation of masked velocity in a deadlock situation. The masked velocity of head robot \(A\) has propagated to normal robot \(B\), and normal robot \(C\) is being propagated at this time instance. When normal robot \(D\) is propagated in the near future, \(MCCA^{\tau}_{C|D}\) will move rightward, which will enlarge the masked velocity of \(C\). We can describe the process as the deadlock avoidance intention of a head robot propagating forward to normal robots from the near to the distant and the latter making room for the head robot and early propagated normal robots (propagating backward).
\(A_{i}\) and \(A_{j}\) is unlikely because the robot behind can follow the one in front. On condition that \(\mathbf{v}_{A_{i}}^{\overline{\mathbf{m}}}\) is in \(MVO_{A_{i}|A_{j}}^{\infty}\) and \(\mathbf{v}_{A_{i}}^{\overline{\mathbf{m}}}\cdot\mathbf{v}_{A_{j}}^{\overline{\mathbf{m}}}<0\), if robot \(A_{i}\) has a larger cumulative importance \(S_{A_{i}}\) than any such \(A_{j}\), the algorithm ensures that the deadlock potential is eliminated by \(A_{i}\) replacing any such \(A_{j}\) in \(\overline{\mathcal{R}}\). The introduction of the cumulative importance \(S_{A_{i}}\) prevents frequent switching between high and normal priority for each robot and maintains the consistency of deadlock resolving.
### _Solving Process_
By combining the QP formulation and the concept of masked velocity, we summarize the solving process of masked cooperative collision avoidance in Fig. 5. The decision-making process of MCCA is fully decentralized where the state of each robot is available to all.
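The per-robot control cycle can be summarized in code as follows. This is only a structural sketch of the steps described above: the geometry and QP routines are passed in as callables and are placeholders for the constructions of Sections III and IV, not actual library functions.

```python
def mcca_control_cycle(robot, all_robots, obstacles, *,
                       determine_priority,      # Fig. 4: head vs. normal priority
                       orca_obstacle_planes,    # ORCA half-planes against static obstacles
                       mcca_plane_against,      # Eq. (4) half-plane against one other robot
                       holonomic_qp,            # holonomic QP of Eqs. (2)/(3)
                       nonholonomic_qp):        # nonholonomic QP with objective (5)
    """One fully decentralized MCCA control cycle for a single robot."""
    # 1. Priority determination.
    robot.is_head = determine_priority(robot, all_robots)

    # 2. Masked velocity: the deadlock-free intention, solved without kinematic constraints.
    planes = orca_obstacle_planes(robot, obstacles)
    if not robot.is_head:
        # Normal robots additionally respect the MCCA half-planes of every other robot.
        planes = planes + [mcca_plane_against(robot, other)
                           for other in all_robots if other is not robot]
    robot.v_masked = holonomic_qp(robot.v_pref, planes)

    # 3. Real velocity / control inputs: ORCA half-planes are the heavily weighted terms,
    #    MCCA half-planes the softer terms, in the adjusted objective (5).
    robot.controls = nonholonomic_qp(robot, all_robots, obstacles)

    # 4. The updated masked velocity and state are published, so that the deadlock-free
    #    intention can propagate to neighbouring robots in later cycles.
    return robot.controls
```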
## V An example with differential-drive robots
Given the features introduced above, we present in this section a detailed implementation for differential-drive robots. Fig. 6 provides the basic kinematic model for differential-drive robots. The wheel axis center is not fully controllable for this type of robots and we adopt the effective center and radius as in [7]. However, it is only necessary to ensure \(D>0\) for the sole purpose of kinematic adaption. Instead of choosing the value delicately and resulting in a significantly larger disc radius, we assume \(D/R\approx 0.1\) or less, which is much smaller than the recommended value (\(D=R\)) in [6].
\begin{table}
\begin{tabular}{|c|c|} \hline
**Variable** & **Description** \\ \hline
\(v_{l}\) & Left wheel linear velocity \\ \hline
\(v_{r}\) & Right wheel linear velocity \\ \hline
\(\mathbf{v}=(v_{x},v_{y})\) & Translational velocity vector of \(\mathbf{c}_{e}\) \\ \hline
\(\delta_{\zeta}\) & Auxiliary variable for half-plane \(\zeta\) \\ \hline
\(\omega\) & Angular velocity of the robot \\ \hline
\(\delta^{\omega}\) & Auxiliary variable for angular velocity \\ \hline
**Parameter** & **Description** \\ \hline
\(v_{l}^{\mathrm{t}}\) & Current left wheel linear velocity \\ \hline
\(v_{r}^{\mathrm{t}}\) & Current right wheel linear velocity \\ \hline
\(\theta\) & Current heading orientation \\ \hline
\(\mathbf{v}^{\mathrm{pref}}\) & Preferred velocity of the robot \\ \hline
\(v^{\mathrm{max}}\) & Maximum linear velocity of wheels \\ \hline
\(a^{\mathrm{max}}\) & Maximum linear acceleration of wheels \\ \hline
\(\Delta t\) & Time step \\ \hline
\(\mathbf{v}_{H}^{\mathrm{opt}}\) & Holonomic optimization velocity \\ \hline
\(\theta_{H}^{\mathrm{opt}}\) & Holonomic optimization heading \\ \hline
\(\mu\) & Angular control level \\ \hline
\end{tabular}
\end{table} TABLE I: Variables and parameters for the example
Fig. 4: Algorithm for robot priority determination. \(\eta\) is a preset integer representing the initial tabu steps for a newly determined normal robot conflicting with other head robots. The robot’s chance to raise priority is postponed until the remaining tabu steps \(T_{A_{i}}\) is zero. \(S_{A_{i}}\) is the number of control cycles where \(A_{i}\) is of high priority by far, which is an accumulative indicator to label the importance of \(A_{i}\). Both \(T_{A_{i}}\) and \(S_{A_{i}}\) are reset to zero once the robot reaches its current goal.
Fig. 5: Algorithm for Masked Cooperative Collision Avoidance.
Fig. 6: Differential-drive kinematics. Left wheel velocity and right wheel velocity are controlled with separate motors. \(r\) is the wheel radius. \(L\) is the distance between the wheels. Velocity of the wheel axis center \(\mathbf{c}_{a}\) is always perpendicular to the axis thus not fully controllable. The effective center \(\mathbf{c}_{e}\) converts the robot into a disc with radius \(R+D\) and \(\mathbf{v}=(v_{x},v_{y})\). Instantaneous Center of Rotation (ICR) is introduced for the convenience of calculating angular velocity \(\omega\)[16]. We define robot heading as the direction of \(\mathbf{c}_{e}-\mathbf{c}_{a}\) as in the figure.
### _The Nonholonomic QP Formulation for Real Velocity_
Equation (6) presents the differential-drive QP formulation for solving the real velocity and Table I summarizes the variables and parameters it mentions. Its objective function has been adjusted according to Equation (5). In addition to the basic differential-drive kinematic constraints, a special type of angular control constraint (6g) is added as a customized constraint (1c) to solve a problem unique to differential-drive robots, which is introduced in the next subsection.
Constraint (6a) includes the MCCA half-planes for all robots (\(\mathcal{M}=\emptyset\) for head robots) and the ORCA half-planes for all robots and obstacles. Constraints (6b) to (6f) describe the kinematic constraints for differential-drive robots. \(\mathbf{v}=(v_{x},v_{y})\) is the velocity vector of the effective center, which lies at distance \(D>0\) from the axis center. Constraints (6b), (6c) and (6d) establish the direct kinematics where \(v_{\text{l}}\) and \(v_{\text{r}}\) are the control inputs. Constraints (6e) and (6f) set the maximum velocity and acceleration for the control inputs. For each robot, \(a^{\text{max}}\Delta t\) is the largest possible change of wheel linear velocity between two consecutive control cycles with time step \(\Delta t\).
\[\begin{aligned}
\min\quad & \alpha_{1}\left\|\mathbf{v}-\mathbf{v}^{\text{pref}}\right\|_{2}^{2}+\alpha_{2}\sum_{i\in\mathcal{O}}\delta_{i}^{2}+\alpha_{3}\sum_{j\in\mathcal{R}}\delta_{j}^{2}+\alpha_{4}\sum_{k\in\mathcal{M}}\delta_{k}^{2}+\alpha_{5}(\delta^{\omega})^{2} && \text{(6)}\\
\text{s.t.}\quad & \det\begin{pmatrix}v_{x}-p_{x}^{\zeta}&d_{x}^{\zeta}\\ v_{y}-p_{y}^{\zeta}&d_{y}^{\zeta}\end{pmatrix}\leq\delta_{\zeta},\quad\zeta\in\mathcal{O}\cup\mathcal{R}\cup\mathcal{M},\ \delta_{\zeta}\geq 0 && \text{(6a)}\\
& v_{x}=\left(\frac{\cos\theta}{2}+\frac{D\sin\theta}{L}\right)v_{l}+\left(\frac{\cos\theta}{2}-\frac{D\sin\theta}{L}\right)v_{r} && \text{(6b)}\\
& v_{y}=\left(\frac{\sin\theta}{2}-\frac{D\cos\theta}{L}\right)v_{l}+\left(\frac{\sin\theta}{2}+\frac{D\cos\theta}{L}\right)v_{r} && \text{(6c)}\\
& \omega=\frac{v_{r}-v_{l}}{L} && \text{(6d)}\\
& \left|v_{l}\right|,\ \left|v_{r}\right|\leq v^{\text{max}} && \text{(6e)}\\
& \left|v_{l}-v_{l}^{\mathrm{t}}\right|,\ \left|v_{r}-v_{r}^{\mathrm{t}}\right|\leq a^{\text{max}}\Delta t && \text{(6f)}\\
& \int_{0}^{T}\left|\omega\right|-\frac{2a^{\text{max}}\cdot t}{L}\,dt\leq\theta^{\prime}+\delta^{\omega},\quad\delta^{\omega}\geq 0, && \text{(6g)}\\
& T=\frac{\theta^{\prime}L}{2a^{\text{max}}},\quad\theta^{\prime}=\begin{cases}\frac{\angle(\theta,\theta_{H}^{\text{opt}})}{\mu}&\text{if }\mathbf{v}_{\omega}\cdot\mathbf{v}_{H}^{\text{opt}}>0\\ \angle(\theta,\theta_{H}^{\text{opt}})&\text{otherwise}\end{cases}
\end{aligned}\]
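For concreteness, the direct kinematics of constraints (6b)–(6d) can be evaluated as in the following Python sketch. It only illustrates the equations above; the function name and the numeric values are ours and are not part of the method.

```python
import math

def effective_center_velocity(v_l, v_r, theta, D, L):
    """Direct kinematics of constraints (6b)-(6d): wheel speeds -> (v_x, v_y, omega)."""
    v_x = (math.cos(theta) / 2 + D * math.sin(theta) / L) * v_l \
        + (math.cos(theta) / 2 - D * math.sin(theta) / L) * v_r
    v_y = (math.sin(theta) / 2 - D * math.cos(theta) / L) * v_l \
        + (math.sin(theta) / 2 + D * math.cos(theta) / L) * v_r
    omega = (v_r - v_l) / L
    return v_x, v_y, omega

# Hypothetical values: wheel speeds 1.0 m/s and 0.5 m/s, heading 30 degrees,
# effective-center offset D = 0.015 m, wheel separation L = 0.5 m.
print(effective_center_velocity(1.0, 0.5, math.radians(30), D=0.015, L=0.5))
```

Note that for \(v_{l}=v_{r}\) the sketch gives \(\omega=0\) and a velocity aligned with the heading, as expected from (6b)–(6d).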
### _Angular Control for Heading Oscillation Avoidance_
We discovered a heading oscillation issue when \(D\) is relatively small and acceleration is taken into consideration. The phenomenon is demonstrated in Fig. 7. The cause of heading oscillation is twofold. First, consider a differential-drive robot in an empty environment, so that \(\mathbf{v}_{H}^{\text{opt}}=\mathbf{v}^{\text{pref}}\) and remains unchanged. The angle between \(\widetilde{\mathbf{v}}\) and \(\mathbf{v}\) in Fig. 7 is \(\Delta\alpha=\arctan\frac{\omega D}{\left|\mathbf{v}\right|}\). A small \(D\) leads to an angle so small that, when the angle between \(\widetilde{\mathbf{v}}\) and \(\mathbf{v}_{H}^{\text{opt}}\) approaches zero, the robot may still have a large angular velocity and overshoot before the control inputs can decelerate \(\omega\) to zero, which results in heading oscillation. Second, the assumption that \(\mathbf{v}_{H}^{\text{opt}}\) remains unchanged is not appropriate: it changes constantly as \(\mathbf{v}^{\text{pref}}\) and the states of other robots vary. If the new \(\mathbf{v}_{H}^{\text{opt}}\) is solved to be very close to the current robot heading, the robot will overshoot if its angular velocity cannot be decreased to zero in time.
To avoid heading oscillation we introduce constraint (6g). Define \(\angle(\theta,\theta_{H}^{\text{opt}})\) as the angle that the robot heading \(\theta\) needs to turn, in the direction of the current \(\mathbf{v}_{\omega}\), to reach \(\theta_{H}^{\text{opt}}\). The angular control constraint limits the increase of \(\omega\) so that the current \(\omega\) can be decreased to zero with the largest angular acceleration \(\frac{2a^{\text{max}}}{L}\) before the travelled angle reaches the current \(\angle(\theta,\theta_{H}^{\text{opt}})\).
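The left-hand side of constraint (6g) has a simple closed form, which the following sketch evaluates for given numbers. The function names and example values are ours; in the actual QP, \(\omega\) and \(\delta^{\omega}\) are decision variables rather than fixed inputs.

```python
def angular_travel_bound(omega, theta_prime, a_max, L):
    """Left-hand side of constraint (6g) in closed form:
    integral_0^T (|omega| - 2*a_max*t/L) dt with T = theta_prime*L/(2*a_max)."""
    T = theta_prime * L / (2 * a_max)
    return abs(omega) * T - (a_max / L) * T ** 2

def satisfies_angular_control(omega, theta_prime, a_max, L, delta_omega=0.0):
    """Check whether a given angular velocity respects (6g) for a given slack delta_omega."""
    return angular_travel_bound(omega, theta_prime, a_max, L) <= theta_prime + delta_omega

# Hypothetical check: omega = 1.2 rad/s, theta' = 0.8 rad, a_max = 2 m/s^2, L = 0.5 m.
print(satisfies_angular_control(1.2, 0.8, a_max=2.0, L=0.5))
```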
To further reduce the occurrence of the second cause of heading oscillation due to the changing \(\mathbf{v}_{H}^{\text{opt}}\), we divide \(\theta^{\prime}\) by \(\mu>1\) when the angle is less than \(\pi\), or equivalently \(\mathbf{v}_{\omega}\cdot\mathbf{v}_{H}^{\text{opt}}>0\), which prevents \(\omega\) from increasing too much. However, \(\theta^{\prime}\) is not divided by \(\mu\) in the following situation. When the angle \(\angle(\theta,\theta_{H}^{\text{opt}})\) is at least \(\pi\), or equivalently \(\mathbf{v}_{\omega}\cdot\mathbf{v}_{H}^{\text{opt}}\leq 0\), the differential-drive robot may keep a large angular velocity \(\omega\) and perform an S-turn as shown in Fig. 8. When \(\angle(\theta,\theta_{H}^{\text{opt}})\) is closer to \(2\pi\) the robot is more likely to perform such an S-turn maneuver.
Fig. 8: Two cases for \(\theta^{{}^{\prime}}\). With \(\mathbf{v}_{\omega}\cdot\mathbf{v}_{H}^{\text{opt}}\leq 0\) on the right, instead of traveling the shaded angular space as the robot on the left does, the robot may also choose to decrease \(\omega\) to zero at the dashed line, and reversely increase \(\omega\) to reach \(\mathbf{v}_{H}^{\text{opt}}\) faster (like an S-turn). After \(\omega\) reverses, it becomes the first case on the left.
Fig. 7: Heading oscillation with differential-drive robots. \(\mathbf{v}_{H}^{\text{opt}}\) is the solution to QP (6) considering only constraint (6a) at each time step. A nonholonomic robot is not able to reach \(\mathbf{v}_{H}^{\text{opt}}\) instantly but will try to reach it under kinematic constraints. The solved velocity of the robot effective center at each time step can be decomposed into \(\widetilde{\mathbf{v}}=\mathbf{v}_{\omega}+\mathbf{v}\) where \(\mathbf{v}_{\omega}\) is perpendicular to robot heading direction and \(\left|\mathbf{v}_{\omega}\right|=\omega D\). The overshooting phenomenon is that when \(\widetilde{\mathbf{v}}\) is very close to \(\mathbf{v}_{H}^{\text{opt}}\), the robot may still have a large angular velocity \(\omega\) which will divert the heading of robot further away (heading\({}_{4}\)) from the target holonomic velocity and its heading (heading\({}_{3}\)).
Thus, we do not reduce \(\theta^{\prime}\) by \(\mu>1\) in this case.
## VI Experimentation
We describe the implementation and simulation results of MCCA with differential-drive robots in various scenarios.
### _Implementation Details_
All robots are differential-drive, with dimensions 950 mm \(\times\) 725 mm \(\times\) 285 mm. Each robot is equipped with two powered wheels of radius \(r=\) 0.1 m and two passive caster wheels to balance the robot. The maximum linear velocity is 2 m/s and the maximum linear acceleration is 2 m/s\({}^{2}\). Each powered wheel is controlled by a servomotor, which, given the control inputs, drives the wheel with up to the maximum linear acceleration. The distance between \(\mathbf{c}_{a}\) and \(\mathbf{c}_{e}\) is \(D=\) 0.015 m. The robot bounding circle radius is \(R=\) 0.485 m. The effective radius is \(R+D=\) 0.5 m, which is only 3 percent larger than \(R\). The time step for each control cycle is \(\Delta t=\) 0.25 s. The time horizon is \(\tau=\) 17 s. The weight parameters in Equation (6) are \(\alpha_{1}=10^{-2}\), \(\alpha_{2}=10^{4}\), \(\alpha_{3}=10^{2}\), \(\alpha_{4}=1\), \(\alpha_{5}=2\times 10^{4}\). The angular control parameter is \(\mu=9\). The initial tabu steps \(\eta=30\).
In each simulation scenario, a broadcast service is used to receive the states of all the robots. The states are broadcast to all the robots in the system. Each robot at each time step uses its own state and the broadcast states of other robots to determine its control inputs. For the purpose of this research, the simulation does not include the sensing and localization part, which is typically accomplished in the real world with Simultaneous Localization and Mapping (SLAM). However, we add uniformly distributed noise to each robot's location and heading, with amplitudes of 0.01 m and 1 degree, respectively. We apply a simple method to set the preferred velocity \(\mathbf{v}^{\text{pref}}\) of a robot to point to the goal from its current location with magnitude no more than the maximum velocity. Webots [18], a mobile robotics simulation software, is used for our simulation. We use PROX-QP [19] for solving the proposed QP problem in the simulation.
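A minimal sketch of this preferred-velocity rule is shown below. The additional cap at the distance covered in one time step is our assumption, made so that the robot does not overshoot the goal; it is not stated in the text above.

```python
import math

def preferred_velocity(position, goal, v_max, dt=0.25):
    """v_pref points from the current position toward the goal, capped at the maximum speed."""
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)
    speed = min(v_max, dist / dt)  # assumption: also cap so one step does not pass the goal
    return (dx / dist * speed, dy / dist * speed)

# Hypothetical query: robot at the origin, goal at (3, 4), v_max = 2 m/s.
print(preferred_velocity((0.0, 0.0), (3.0, 4.0), v_max=2.0))
```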
We design the simulation scenarios to verify the capability of MCCA in two aspects. First, collision avoidance is the minimum requirement. Second, deadlock avoidance should be stable and efficient, and functional even in scenarios where deadlocks easily occur. In addition, we verify the effectiveness of angular control for differential-drive robots to avoid heading oscillation.
### _Simulation Results_
We introduce the scenarios briefly.
* Scenario 1: Two groups each with five robots overtaking the positions on the other side with a one-lane passage.
* Scenario 2: Two groups each with eight robots traveling through a two-lane passage and giving way spontaneously and smoothly.
* Scenario 3: Ten robots taking repeated round-trips with multiple one-lane passages.
Fig. 9: Selected simulation results. (a) Scenario 1: The 5 robots on one side overtake the positions of the 5 robots on the other side with a one-lane passage. (b) Scenario 2: The 8 robots on each side of a two-lane passage try to reach the other side. (c) Scenario 7: Before and after introducing angular control for differential-drive robots with poor maneuverability, whose maximum linear acceleration is reduced to 0.2 m/s\({}^{2}\) from 2 m/s\({}^{2}\).
* Scenario 4: Twenty robots taking repeated round-trips in an environment with densely placed obstacles.
* Scenario 5: Forty robots taking repeated round-trips in a confined and congested area.
* Scenario 6: Thirty-four robots converging into a passage.
* Scenario 7: Ten robots with poor maneuverability taking repeated round-trips.
All the listed scenarios are demonstrated in the attached video. Scenarios 3, 4, 5, and 7 involve repeated round-trips and remain collision-free and deadlock-free for hundreds of simulation hours and counting. The same holds for Scenario 1 if robots continuously overtake positions on the other side. Selected results from Scenarios 1, 2 and 7 are summarized as follows.
For Scenario 1, Fig. 9(a) shows the trace of each robot. If robots from both sides rush into the passage, deadlock could happen and some robots would move back and forth for a long period of time. Our method is able to avoid the deadlock situation locally and spontaneously: some robots detour or wait outside the passage to give way to the robot in the passage. For Scenario 2, Fig. 9(b) provides a series of screenshots when 8 robots on each side meet in a two-lane passage. Not only do the robots give way to robots on the other side with what appears to be a zipper merge, but they also complete the process smoothly without central control or sudden changes of path. For Scenario 7, Fig. 9(c) shows that angular control is effective for handling robots with low acceleration compared to their maximum velocity. Without angular control, we observe sudden stops, heading oscillation, and circling issues for these robots. The problems are tackled effectively by angular control. Moreover, the robot radii are only marginally enlarged, allowing these robots with poor maneuverability to travel smoothly even in congested areas and narrow passages.
## VII Conclusion
In this paper, we first provide a multi-objective general QP formulation for solving collision avoidance in a multi-robot system. The formulation is a well-defined convex problem that is easy to solve, is applicable to most controllable wheeled mobile robots, and can incorporate linear customized constraints. On top of that, we propose the MCCA method to effectively avoid and resolve deadlocks while minimizing global coordination, so as to maintain a fully decentralized decision-making mechanism. We define masked velocity and MCCA half-planes mathematically to carry out the intention that enables robots to resolve deadlock situations locally and collectively. The general QP works with MCCA to ensure a collision-free, deadlock-free and smooth path for each robot. To illustrate that, we provide a detailed implementation for differential-drive robots and solve the heading oscillation issue by adding a customized angular control constraint to the QP formulation, which allows robots with poor maneuverability to travel smoothly in the simulation without significantly enlarging the effective radius. The simulation shows very promising results for deadlock avoidance in complex and highly difficult environments, which leads us to believe that the MCCA method has great advantages and potential in real-world applications.
|
2308.14926 | Monus semantics in vector addition systems with states | Vector addition systems with states (VASS) are a popular model for concurrent
systems. However, many decision problems have prohibitively high complexity.
Therefore, it is sometimes useful to consider overapproximating semantics in
which these problems can be decided more efficiently.
We study an overapproximation, called monus semantics, that slightly relaxes
the semantics of decrements: A key property of a vector addition systems is
that in order to decrement a counter, this counter must have a positive value.
In contrast, our semantics allows decrements of zero-valued counters: If such a
transition is executed, the counter just remains zero.
It turns out that if only a subset of transitions is used with monus
semantics (and the others with classical semantics), then reachability is
undecidable. However, we show that if monus semantics is used throughout,
reachability remains decidable. In particular, we show that reachability for
VASS with monus semantics is as hard as that of classical VASS (i.e.
Ackermann-hard), while the zero-reachability and coverability are easier (i.e.
EXPSPACE-complete and NP-complete, respectively). We provide a comprehensive
account of the complexity of the general reachability problem, reachability of
zero configurations, and coverability under monus semantics. We study these
problems in general VASS, two-dimensional VASS, and one-dimensional VASS, with
unary and binary counter updates. | Pascal Baumann, Khushraj Madnani, Filip Mazowiecki, Georg Zetzsche | 2023-08-28T22:55:50Z | http://arxiv.org/abs/2308.14926v2 | # Monus semantics in vector addition systems with states
###### Abstract
Vector addition systems with states (VASS) are a popular model for concurrent systems. However, many decision problems have prohibitively high complexity. Therefore, it is sometimes useful to consider overapproximating semantics in which these problems can be decided more efficiently.
We study an overapproximation, called monus semantics, that slightly relaxes the semantics of decrements: A key property of a vector addition systems is that in order to decrement a counter, this counter must have a positive value. In contrast, our semantics allows decrements of zero-valued counters: If such a transition is executed, the counter just remains zero.
It turns out that if only a subset of transitions is used with monus semantics (and the others with classical semantics), then reachability is undecidable. However, we show that if monus semantics is used throughout, reachability remains decidable. In particular, we show that reachability for VASS with monus semantics is as hard as that of classical VASS (i.e. Ackermann-hard), while the zero-reachability and coverability are easier (i.e. EXPSPACE-complete and NP-complete, respectively). We provide a comprehensive account of the complexity of the general reachability problem, reachability of zero configurations, and coverability under monus semantics. We study these problems in general VASS, two-dimensional VASS, and one-dimensional VASS, with unary and binary counter updates.
**Keywords:** Vector addition systems, Overapproximation, Reachability, Coverability. **DOI:** 10.4230/LIPIcs.CONCUR.2023.10. Filip Mazowiecki was supported by the ERC grant INFSYS, agreement no. 950398. The authors are grateful to Wojciech Czerwinski and Sylvain Schmitz for discussions, and to Sylvain Schmitz for suggesting the term 'monus'.
## 1 Introduction
Vector addition systems with states (VASS) are an established model used in formal verification with a wide range of applications, _e.g._ in concurrent systems [22], business processes [39] and others (see the survey [37]). They are finite automata with transitions labeled by vectors over integers in some fixed dimension \(d\). A configuration of a VASS consists of a pair \((p,\mathbf{v})\), denoted \(p(\mathbf{v})\), where \(p\) is a state and \(\mathbf{v}\) is a vector in \(\mathbb{N}^{d}\). As a result of applying a transition labeled by some \(\mathbf{z}\in\mathbb{Z}^{d}\), the vector in the resulting configuration is \(\mathbf{v}+\mathbf{z}\). Thus in particular \(\mathbf{v}+\mathbf{z}\geq\mathbf{0}\) must hold for the transition to be applicable. The latter requirement is often called the VASS semantics. To avoid ambiguity we will refer to it as the _classical VASS semantics_.
The VASS model is also studied with other semantics. One of the most natural variants of VASS semantics is the _integer semantics_ (or simply \(\mathbb{Z}\)_-semantics_), where configurations are of
the form \(p(\mathbf{v})\), where \(\mathbf{v}\in\mathbb{Z}^{d}\)[25]. There, a transition can always be applied, _i.e._ the resulting configuration is \(\mathbf{v}+\mathbf{z}\) and we do not require \(\mathbf{v}+\mathbf{z}\geq\mathbf{0}\). In this paper we consider VASS with the _monus semantics_, whose behavior partly resembles both classical and integer semantics. There, a transition can always be applied (as in \(\mathbb{Z}\)-semantics), however, if as a result the vector in the new configuration would have negative entries, then these are replaced with \(0\). Thus, vectors in configurations are over the naturals (as in classical semantics). The name monus semantics comes from the monus binary operator, which is a variant of the minus operator.1 Note that every instance of a VASS can be considered with all three semantics. See Figure 1 for an example.
Footnote 1: One can also think of monus semantics as integer semantics where after every step we apply the ReLU function.
We study classical decision problems for VASS: reachability and coverability. The input for these problems is a VASS \(\mathcal{V}\), an initial configuration \(p(\mathbf{v})\), and a final configuration \(q(\mathbf{w})\). The _reachability_ problem asks whether there is a run from \(p(\mathbf{v})\) to \(q(\mathbf{w})\). A variant of this problem, called _zero reachability_, requires additionally that in the input the final vector is fixed to \(\mathbf{w}=\mathbf{0}\). The _coverability_ problem asks whether there is a run from \(p(\mathbf{v})\) to \(q(\mathbf{w}^{\prime})\), where \(\mathbf{w}^{\prime}\geq\mathbf{w}\). Note that all three problems can be considered with respect to any of the three VASS semantics. As an example consider the VASS in Figure 1. Then for all three semantics \(p(1,2)\) is both reachable and coverable from \(p(2,0)\); and \(p(0,2)\) is not reachable from \(p(2,0)\) (but it is coverable as \((1,2)\geq(0,2)\)).
Contribution I: Arbitrary dimension. Our first contribution is settling the complexities of reachability and coverability for VASS with the monus semantics (see Table 1). We prove that reachability is Ackermann-complete by showing that it is inter-reducible with classical VASS reachability, which is known to be Ackermann-complete [30, 9, 29]. This comes as a surprise, since in monus semantics, every transition can always be applied, just like in \(\mathbb{Z}\)-semantics, where reachability is merely \(\mathsf{NP}\)-complete [25]. Thus, the monus operation encodes enough information in the resulting configuration that reachability remains extremely hard.
The Ackermann-hardness relies crucially on the fact that the final configuration is non-zero: We also show that the zero reachability problem is \(\mathsf{EXPSPACE}\)-complete in monus semantics. This uses inter-reducibility with classical VASS coverability, which is \(\mathsf{EXPSPACE}\)-complete due to seminal results of Lipton and Rackoff [33, 35]. The fact that zero-reachability is significantly easier than general reachability is in contrast to classical semantics, where zero
Figure 1: A VASS in dimension \(2\) with one state \(p\) and one transition \(t\), labeled with \((-1,2)\). We consider possible runs assuming that the initial configuration is \(p(2,0)\). We use different notation for steps in each semantics: \(\rightarrow\) (classical), \(\rightarrow_{\mathbb{Z}}\) (integer), and \(\Rightarrow\) (monus). For the classical semantics (\(\rightarrow\)), after reaching the configuration \(p(0,4)\) the transition can no longer be applied. For the integer semantics (\(\rightarrow_{\mathbb{Z}}\)), the transition can be applied even in \(p(0,4)\), reaching all configurations of the form \(p(-n,4+2n)\). Similarly for the monus semantics (\(\Rightarrow\)), but there the configurations reachable from \(p(0,4)\) are of the form \(p(0,4+2n)\).
reachability is interreducible with the reachability problem (intuitively, one can modify the input VASS by adding an extra edge that decrements by \(\mathbf{w}\)).
In another unexpected result, the complexity of coverability drops even more: We prove that it is \(\mathsf{NP}\)-complete in monus semantics. We complete these results by showing that mixing classical and monus semantics (_i.e._ each transition is designated to either work in classical or monus semantics) makes reachability undecidable.
Contribution II: Fixed dimension.Understanding the complexity of reachability problems in VASS of fixed dimension has received a lot of attention in recent years and is now well understood. This motivates our second contribution: An almost complete complexity analysis of reachability, zero reachability and coverability for VASS with the monus semantics in dimensions 1 and 2. Here, the complexity depends on whether the counter updates are encoded in unary or binary (see Table 1).
We restrict our attention to dimensions 1 and 2, as does most research on fixed dimension for the classical semantics. For the classical semantics not much is known about reachability in dimension \(d\geq 3\). Essentially, the only known results consist of an upper bound of \(\mathbf{F}_{7}\) that follows from the Ackermann upper bound in the general case [30], and a \(\mathsf{PSPACE}\) lower bound that holds already for \(d=2\) [5]. An intuition as to why the jump from 2 to 3 is so difficult is provided already by Hopcroft and Pansiot [27], who prove that the reachability set is always semilinear in dimension 2, and give an example showing that this is not the case in dimension 3. In contrast, coverability is well understood, and already Rackoff's construction [35] shows that for fixed dimension \(d\geq 2\) coverability is in \(\mathsf{NL}\) and in \(\mathsf{PSPACE}\), for unary and binary encoding, respectively (with matching lower bounds [5]).
Key technical ideas. The core insights of our paper are characterizations of the reachability and coverability relations in monus semantics, in terms of reachability and coverability in classical and \(\mathbb{Z}\)-semantics (Propositions 3.6 and 3.12 and Lemma 3.10). These allow us to apply a range of techniques to reduce reachability problems for one semantics into problems for other semantics, and thereby transfer existing complexity results. There are three cases where we were unable to ascertain the exact complexity: (i) reachability in 2-VASS with unary counter updates, (ii) zero reachability in 1-VASS with binary updates, and (iii) coverability in 1-VASS with binary counter updates. Concerning (i), this is because for 2-VASS with unary updates, it is known that classical reachability is \(\mathsf{NL}\)-complete [5], but we would need to decide existence of a run that visits intermediate configurations of a certain shape. In the case of 2-VASS with binary updates, the methods from [5] (with a slight extension from [3]) allow this. The other cases, (ii) and (iii), are quite similar to each other. In particular, problem (ii) is logspace-interreducible with classical coverability in 1-VASS with binary updates, for which only an \(\mathsf{NL}\) lower bound and an \(\mathsf{NC}^{2}\) upper bound are known [2].
Monus semantics as an overapproximation. Recall the example in Figure 1. Notice that every configuration reachable in the classical semantics is also reachable in the integer and monus semantics. It is not hard to see that this is true for every VASS model. Such semantics are called _overapproximations_ of the classical VASS semantics. Overapproximations are a standard technique used in implementations of complex problems, in particular for the VASS model (see the survey [4]). They allow pruning the search space of reachable configurations, based on the observation that if a configuration is not reachable by an overapproximation then it cannot be reachable in the classical semantics. This is the core idea behind efficient implementations both of the coverability problem [15, 6] and the reachability problem [12, 7].
The two most popular overapproximations, integer semantics [25] and continuous semantics [20], behave similarly for both reachability and coverability problems, namely both problems are \(\mathsf{NP}\)-complete. Note that all of the implementations mentioned above rely on such algorithms in \(\mathsf{NP}\), as they can be efficiently implemented via SMT solvers. Interestingly, the monus semantics is an efficient overapproximation only for the coverability problem. (As far as we know this is the first study of a VASS overapproximation with this property.) Therefore, it seems to be a promising approach to try to speed up backward search algorithms using monus semantics (in the same vein as [6]). Whether this leads to improvements in practice remains to be seen in future work.
Related work. We discuss related work for VASS in classical semantics. A lot of research is dedicated to reachability for the flat VASS model, a model that does not allow for nested cycles in runs. In dimension 2 decision problems for VASS reduce to flat VASS, which is crucial to obtain the exact complexities [5]. It is known that in dimensions \(d\geq 3\) such a reduction is not possible, but this raised natural questions about the complexity for flat VASS in higher dimensions [8, 10]. Another research direction is treating the counters in VASS models asymmetrically. For example, it is known that allowing for zero tests in VASS makes reachability and coverability undecidable (they essentially become Minsky machines). However, it was shown that if only one of the 2 counters is allowed to be zero tested then both reachability and coverability remain \(\mathsf{PSPACE}\)-complete [31]. A different asymmetric question is when one counter is encoded in binary and the other is encoded in unary. Then recently it was shown that coverability is in \(\mathsf{NP}\) [34] but it is unknown whether there is a matching lower bound. Finally, there are two important extensions of the VASS model: branching VASS (where runs are trees, not paths), and pushdown VASS (with one pushdown stack). For branching VASS, coverability is \(\mathsf{2EXPTIME}\)-complete [11]. The complexity of reachability is well understood in dimension 1 [23, 18] but in dimension 2 or higher it is unknown whether it is decidable. For pushdown VASS only coverability in dimension 1 is known to be decidable [32]; otherwise decidability of both reachability and coverability remains open. Recently some progress was made on restricted pushdown VASS models [14, 21]. The monus semantics is a natural overapproximation that can be studied in all of these variants. Finally, let us mention that VASS with monus semantics fit into the very general framework of G-nets [13], but do not seem to fall into any of the decidable subclasses studied in [13]. However, if we equip VASS with the usual well-quasi ordering on configurations, it is easy to see that even with monus semantics, they constitute well-structured transition systems (WSTS) [19, 1], which makes available various algorithmic techniques developed for WSTS.
Organization. In Section 2 we formally define the VASS model and the classical, integer and monus semantics. In Section 3 we prove the results in arbitrary dimension. Then in Section 4 and Section 5 we prove the results in dimensions 2 and 1, respectively.
## 2 Vector addition systems with monus semantics: Main results
Given a vector \(\mathbf{v}\in\mathbb{Z}^{d}\) we write \(\mathbf{v}[i]\) for the value in the \(i\)-th coordinate, where \(i\in\{1,\ldots,d\}\). We also refer to \(i\) as the \(i\)-th counter and write that it contains \(\mathbf{v}[i]\) tokens. Given two vectors \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\) we write \(\mathbf{v}\geq\mathbf{v}^{\prime}\) if \(\mathbf{v}[i]\geq\mathbf{v}^{\prime}[i]\) for all \(i=1,\ldots,d\). By \(\mathbf{0}^{d}\) we denote the zero vector in dimension \(d\). We also simply write \(\mathbf{0}\) if \(d\) is clear from context.
Vector addition systems with states.A vector addition system with states (VASS) is a triple \(\mathcal{V}=(d,Q,\Delta)\), where \(d\in\mathbb{N}\), \(Q\) is a finite set of states and \(\Delta\subseteq Q\times\mathbb{Z}^{d}\times Q\) is a finite set of transitions. Throughout the paper we fix a VASS \(\mathcal{V}=(d,Q,\Delta)\).
We start with the formal definitions in the _classical semantics_. A configuration of a VASS is a pair \((p,\mathbf{v})\in Q\times\mathbb{N}^{d}\), denoted \(p(\mathbf{v})\). Any transition \(t\in\Delta\) induces a successor (partial) function \(\mathsf{Succ}_{t}:Q\times\mathbb{N}^{d}\to Q\times\mathbb{N}^{d}\) such that \(\mathsf{Succ}_{t}(q(\mathbf{v}))=q^{\prime}(\mathbf{v}^{\prime})\) iff \(t=(q,\mathbf{z},q^{\prime})\) and \(\mathbf{v}^{\prime}=\mathbf{v}+\mathbf{z}\). This successor function can be lifted up to \(\Delta\) to get a step relation \(\rightarrow_{\mathcal{V}}\), such that for any pair of configurations \(C,C^{\prime}\) we have \(C\rightarrow_{\mathcal{V}}C^{\prime}\) iff there exists \(t\in\Delta\) with \(\mathsf{Succ}_{t}(C)=C^{\prime}\). A _run_ is a sequence of configurations
\[q_{0}(\mathbf{v}_{0}),q_{1}(\mathbf{v}_{1}),q_{2}(\mathbf{v}_{2}),\ldots,q_{k }(\mathbf{v}_{k})\]
such that for every \(0<j\leq k\), \(q_{j-1}(\mathbf{v}_{j-1})\rightarrow_{V}q_{j}(\mathbf{v}_{j})\). If there exists such a run we say that \(q_{k}(\mathbf{v}_{k})\) is reachable from \(q_{0}(\mathbf{v}_{0})\) and denote it \(C_{0}\xrightarrow{*}_{\mathcal{V}}C_{k}\). We call \(\xrightarrow{*}_{\mathcal{V}}\) the reachability relation in the classical VASS semantics.
In this paper we consider two additional semantics. The first is called the _integer semantics_ (or \(\mathbb{Z}\)_-semantics_). A configuration in this semantics is a pair \((p,\mathbf{v})\in Q\times\mathbb{Z}^{d}\) (hence, values of vector coordinates can drop below zero). The definitions of successor function, step relation and run are analogous to the classical semantics. By \(\rightarrow_{\mathbb{Z}}\) and \(\xrightarrow{*}_{\mathbb{Z}}\), we denote the step and reachability relations in the \(\mathbb{Z}\)-semantics, respectively.
The second is called the _monus semantics_. The configurations are the same as in the classical semantics. The difference is in the successor function. Every transition \(t\in\Delta\) induces a successor function \(\mathsf{Succ}_{t}:Q\times\mathbb{N}^{d}\to Q\times\mathbb{N}^{d}\) as follows: \(\mathsf{Succ}_{t}(q(\mathbf{v}))=q^{\prime}(\mathbf{v}^{\prime})\) iff \(t=(q,\mathbf{z},q^{\prime})\) and for all \(j\in\{1,2,\ldots,d\}\), \(\mathbf{v}^{\prime}[j]=\max(\mathbf{v}[j]+\mathbf{z}[j],0)\). We write in short \(\mathbf{v}^{\prime}=\max(\mathbf{v}+\mathbf{z},\mathbf{0})\). Step relation and runs are defined analogously to the classical semantics. By \(\Rightarrow_{\mathcal{V}}\) and \(\xRightarrow{*}_{\mathcal{V}}\), we denote the step and reachability relations in the monus semantics, respectively.
We drop the subscript \(\mathcal{V}\) from the above relations when the VASS is clear from context. We write that a run is a _classical run_, a \(\mathbb{Z}\)_run_ or a _monus run_ to emphasize the considered semantics. An example highlighting the differences between the three semantics is in Figure 1.
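To make the three semantics concrete, the following Python sketch implements the three step functions and replays the runs from Figure 1. The encoding of transitions as triples is ours and is only meant as an illustration of the definitions.

```python
def step_classical(state, vec, trans):
    """One step in classical semantics: only applicable if no counter drops below zero."""
    p, z, q = trans
    if p != state:
        return None
    new = tuple(v + d for v, d in zip(vec, z))
    return (q, new) if all(v >= 0 for v in new) else None

def step_integer(state, vec, trans):
    """One step in Z-semantics: always applicable, counters may go negative."""
    p, z, q = trans
    return (q, tuple(v + d for v, d in zip(vec, z))) if p == state else None

def step_monus(state, vec, trans):
    """One step in monus semantics: always applicable, negative entries are truncated to 0."""
    p, z, q = trans
    return (q, tuple(max(v + d, 0) for v, d in zip(vec, z))) if p == state else None

# The VASS of Figure 1: one state p, one transition t with effect (-1, 2), start at p(2, 0).
t = ('p', (-1, 2), 'p')
for step in (step_classical, step_integer, step_monus):
    conf, trace = ('p', (2, 0)), [('p', (2, 0))]
    for _ in range(3):            # try to apply t three times
        nxt = step(conf[0], conf[1], t)
        if nxt is None:           # only the classical semantics can get stuck
            break
        conf = nxt
        trace.append(conf)
    print(step.__name__, trace)
```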
Decision problems. We study the following decision problems for VASS.
The classical _reachability problem_:
1. A VASS \(\mathcal{V}=(d,Q,\Delta)\) and two configurations \(p(\mathbf{v})\) and \(q(\mathbf{w})\).
2. Does \(p(\mathbf{v})\xrightarrow{*}q(\mathbf{w})\) hold?
The classical _zero reachability problem_:
1. A VASS \(\mathcal{V}=(d,Q,\Delta)\), a configuration \(p(\mathbf{v})\) and a state \(q\).
2. Does \(p(\mathbf{v})\xrightarrow{*}q(\mathbf{0}^{d})\) hold?
The classical _coverability problem_:
1. A VASS \(\mathcal{V}=(d,Q,\Delta)\) and two configurations \(p(\mathbf{v})\) and \(q(\mathbf{w})\).
2. Does \(p(\mathbf{v})\xrightarrow{*}q(\mathbf{w}^{\prime})\) hold for some \(\mathbf{w}^{\prime}\geq\mathbf{w}\)?
Similarly, the above problems in \(\mathbb{Z}\) and monus semantics are defined by replacing \(\xrightarrow{*}\) with \(\xrightarrow{*}_{\mathbb{Z}}\) and \(\xRightarrow{*}\), respectively.
Main results. The main complexity results of this work are summarized in Table 1. In Table 2, we recall complexity results for VASS with classical semantics for comparison. We do not split the cases of unary and binary encoding for arbitrary dimensions, since there all lower bounds work for unary, whereas all upper bounds work for binary.
Concerning the _reachability problem_, we note that in all cases where we obtain the exact complexity, it is the same as for the classical VASS semantics. For the other decision problems, there are stark differences: First, while in the classical semantics, _zero reachability_ is easily inter-reducible with general reachability, in the monus semantics, its complexity drops in two cases: In 1-VASS with binary counter updates, monus zero reachability is in \(\mathsf{NC}^{2}\) (thus polynomial time), compared to \(\mathsf{NP}\) in the classical setting. Moreover, in arbitrary dimension, monus zero reachability is \(\mathsf{EXPSPACE}\)-complete, compared to Ackermann in the classical semantics. For the _coverability problem_, the monus semantics also lowers the complexity in two cases: For binary encoded 2-VASS (\(\mathsf{NP}\) in monus semantics, \(\mathsf{PSPACE}\) in classical) and in the general case (\(\mathsf{NP}\) in monus semantics, \(\mathsf{EXPSPACE}\) in classical semantics).
Undecidability. To stress the subtle effects of monus semantics, we mention that it leads to undecidability if combined with classical semantics: If one can specify the applied semantics (classical vs. monus) for each transition, then (zero) reachability becomes undecidable.
We sketch the proof using Figure 2. It shows two gadgets, where "\(\rightarrow\)" transitions use classical semantics and "\(\Rightarrow\)" transitions use monus semantics. The two gadgets realize a counter with zero test: The left gadget is a conditional jump ("if zero, then go to \(q\), otherwise decrement and go to \(r\)"), whereas the right gadget is just an increment. In intended runs (i.e. where the left gadget always takes the intended transition), the counter value is stored both in components 1 and 3. (To realize a full two-counter machine, the same gadgets on components 2 and 4 realize the other testable counter.) Thus, initially, all components are zero. Note that if the left gadget always takes the transitions as intended, then the first and third counter will remain equal. If the gadget takes the upper transition when the counter is not actually zero, then the first counter becomes smaller than the third, and will then always stay smaller. Hence, to reach \((0,0,0,0)\), the left gadget must always behave as intended.
However, coverability remains decidable if we can specify the semantics of each transition. Indeed, suppose we order the configurations of a VASS by the usual well-quasi ordering (i.e. the control states have to agree, and the counter values are ordered component-wise). Then it is easy to see that this results in a well-structured transition system (WSTS) [19, 1]. This also implies, e.g. that termination is decidable in this general setting.
## 3 Arbitrary dimension
In this section, we prove the complexity results concerning VASS with arbitrary dimension. This will include the characterizations of monus reachability, monus zero reachability, and monus coverability in terms of classical and \(\mathbb{Z}\)-semantics. We begin with some terminology.
Figure 2: Two gadgets for realizing a zero-testable counter.
Paths. A sequence of transitions \((p_{1},\mathbf{z}_{1},q_{1}),\ldots,(p_{k},\mathbf{z}_{k},q_{k})\) is valid iff \(q_{i}=p_{i+1}\) for every \(1\leq i<k\). Furthermore, we say that it is valid from a given configuration \(p(\mathbf{v})\) if \(p=p_{1}\). We call a valid sequence of transitions a _path_.
Given two paths \(\rho_{1}\) and \(\rho_{2}\) if the last state of \(\rho_{1}\) is equal to the first state of \(\rho_{2}\) then by \(\rho=\rho_{1}\rho_{2}\) we denote the path defined as the sequence \(\rho_{1}\) followed by the sequence \(\rho_{2}\). Similarly, we use this notation with more paths, _e.g._\(\rho=\rho_{1}\rho_{2}\ldots\rho_{k}\) means that the path \(\rho\) is composed from \(k\) paths: \(\rho_{1},\ldots\rho_{k}\).
Fix a path \(\rho=(p_{0},\mathbf{z}_{0},p_{1}),\ldots,(p_{k-1},\mathbf{z}_{k-1},p_{k})\). We say that \(\mathbf{z}=\sum_{i=0}^{k-1}\mathbf{z}_{i}\) is the effect of the path \(\rho\). Notice that while for classical and \(\mathbb{Z}\)-semantics the effect of a path can be computed by subtracting the vectors in the last and first configurations, this is not necessarily true for monus semantics. In Figure 1 consider the path \(\rho=t,t,t\). The effect is \((-3,6)\). In the \(\mathbb{Z}\)-semantics \(p(2,0)\xrightarrow{\rho}_{\mathbb{Z}}p(-1,6)\), and indeed \((-1,6)-(2,0)=(-3,6)\). In the monus semantics, however, \(p(2,0)\xRightarrow{\rho}p(0,6)\), and \((0,6)-(2,0)=(-2,6)\neq(-3,6)\).
Consider a run \(R=p_{0}(\mathbf{v}_{0}),\ldots,p_{k}(\mathbf{v}_{k})\) (in any semantics). We say that the counter \(j\in\{1,\cdots,d\}\) hits \(0\) iff \(\mathbf{v}_{i}[j]=0\) for some \(1\leq i\leq k\). Similarly, we say that the counter \(j\in\{1,\cdots,d\}\) goes negative in \(R\) iff \(\mathbf{v}_{i}[j]<0\) for some \(0\leq i\leq k\) (this can happen only in the \(\mathbb{Z}\)-semantics).
Let \(\rho=(p_{0},\mathbf{z}_{0},p_{1})\ldots(p_{k-1},\mathbf{z}_{k-1},p_{k})\) be a path such that \(R\) is the unique run corresponding to \(\rho\) from the initial configuration \(p_{0}(\mathbf{v}_{0})\). We say that \((\rho,R)\), or \(p_{0}(\mathbf{v}_{0})\xRightarrow{\rho}p_{k}(\mathbf{v}_{k})\), is lossy for the counter \(j\in\{1,\ldots,d\}\) iff \(\mathbf{v}_{i}[j]-\mathbf{v}_{i-1}[j]\neq\mathbf{z}_{i-1}[j]\) for some \(1\leq i\leq k\) (a lossy run can happen only in the monus semantics). Integer and monus semantics are overapproximations of the classical semantics. That is, \(s(\mathbf{v})\xrightarrow{\rho}t(\mathbf{w})\) implies \(s(\mathbf{v})\xrightarrow{\rho}_{\mathbb{Z}}t(\mathbf{w})\) and \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\). The converse is not always the case (see Figure 1). Moreover, \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) implies \(s(\mathbf{v})\xrightarrow{\rho}t(\mathbf{w})\) if \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) is not lossy. Notice that if in \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) none of the counters \(j\in\{1,\ldots,d\}\) hits \(0\), then it is not a lossy run. Similarly, \(s(\mathbf{v})\xrightarrow{\rho}_{\mathbb{Z}}t(\mathbf{w})\) implies \(s(\mathbf{v})\xrightarrow{\rho}t(\mathbf{w})\) if, in the former run, none of the counters \(j\in\{1,\ldots,d\}\) goes negative.
Characterizing Monus Reachability. Our first goal is to characterize the reachability problem for the monus semantics in terms of the classical semantics. We start with some propositions that relate monus runs to \(\mathbb{Z}\) runs and classical runs. Let \(\rho\) be a path and \(s_{0}(\mathbf{v}_{0})\) a configuration. Let \(s_{0}(\mathbf{v}_{0})\ldots s_{k}(\mathbf{v}_{k})\) be the unique \(\mathbb{Z}\) run defined by \(\rho\) and \(s_{0}(\mathbf{v}_{0})\). We define the vector \(\mathbf{m}=\min_{\mathbb{Z}}(\rho,s_{0},\mathbf{v}_{0})\) by \(\mathbf{m}[i]=\min(\min_{j=0}^{k}\mathbf{v}_{j}[i],0)\). Intuitively, it is the vector of minimal values in the \(\mathbb{Z}\) run, capped at \(0\); in particular \(\mathbf{m}\leq\mathbf{0}\).
For the next two propositions we fix a configuration \(s_{0}(\mathbf{v}_{0})\in Q\times\mathbb{N}^{d}\), a path \(\rho=(s_{0},\mathbf{z}_{0},s_{1})\ldots(s_{k-1},\mathbf{z}_{k-1},s_{k})\), and \(\mathbf{m}=\min_{\mathbb{Z}}(\rho,s_{0},\mathbf{v}_{0})\).
**Proposition 3.3**. Consider the unique runs induced by \(\rho\) from \(s_{0}(\mathbf{v}_{0})\) in \(\mathbb{Z}\)-semantics
\[s_{0}(\mathbf{v}_{0}),\ldots,s_{k-1}(\mathbf{v}_{k-1}),s_{k}(\mathbf{v}_{k}),\]
and in monus semantics
\[s_{0}(\mathbf{v}_{0}^{\prime}),\ldots,s_{k-1}(\mathbf{v}_{k-1}^{\prime}),s_{k }(\mathbf{v}_{k}^{\prime}).\]
where \(\mathbf{v}_{0}^{\prime}=\mathbf{v}_{0}\). Then \(\mathbf{v}_{k}^{\prime}=\mathbf{v}_{k}-\mathbf{m}\).
Proof (sketch).: We analyse the behavior of every counter \(j\). Recall that the \(\mathbb{Z}\) run and the monus run have the same value in the counter \(j\) until the first time the value of \(j\) becomes negative in the \(\mathbb{Z}\) run. We denote this value \(\mathbf{v}_{i}[j]=-u\). Note that \(\mathbf{v}_{i}^{\prime}[j]=0\). Hence, \(\mathbf{v}_{i}[j]-\mathbf{v}_{i}^{\prime}[j]=-u\). It is not hard to see that every time the value of the counter \(j\) reaches a new minimum in the \(\mathbb{Z}\)-semantics, the difference \(\mathbf{v}_{i}^{\prime}[j]-\mathbf{v}_{i}[j]\) equals the absolute value of that minimum. We prove this formally by induction on \(k\). Refer to Appendix A.1.1 for the formal proof.
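The statement can also be checked mechanically. The following sketch recomputes \(\mathbf{m}\) and the two runs on the path \(t,t,t\) of Figure 1; since states play no role in the counter arithmetic, the sketch tracks only the vectors.

```python
def z_run(v0, effects):
    """Configurations of the Z-semantics run along the path with the given effects."""
    run = [tuple(v0)]
    for z in effects:
        run.append(tuple(a + b for a, b in zip(run[-1], z)))
    return run

def monus_run(v0, effects):
    """Configurations of the monus-semantics run along the same path."""
    run = [tuple(v0)]
    for z in effects:
        run.append(tuple(max(a + b, 0) for a, b in zip(run[-1], z)))
    return run

def min_z(v0, effects):
    """The vector m = min_Z(rho, s_0, v_0): coordinate-wise minimum over the Z run, capped at 0."""
    run = z_run(v0, effects)
    return tuple(min(min(conf[i] for conf in run), 0) for i in range(len(v0)))

# Path t,t,t of Figure 1 from p(2,0): the monus run must end at v_k - m.
effects, v0 = [(-1, 2)] * 3, (2, 0)
vk, m = z_run(v0, effects)[-1], min_z(v0, effects)
assert monus_run(v0, effects)[-1] == tuple(a - b for a, b in zip(vk, m))
print(vk, m, monus_run(v0, effects)[-1])   # (-1, 6) (-1, 0) (0, 6)
```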
**Remark 3.4**. Let \(\mathbf{z}\in\mathbb{Z}^{d}\). A sequence of configurations \(s_{0}(\mathbf{v}_{0})\ldots s_{k}(\mathbf{v}_{k})\) is a run in \(\mathbb{Z}\)-semantics corresponding to a path \(\rho\) iff \(s_{0}(\mathbf{v}_{0}-\mathbf{z})\ldots s_{k}(\mathbf{v}_{k}-\mathbf{z})\) is a run in \(\mathbb{Z}\)-semantics on the same path \(\rho\).

**Proposition 3.5**. Consider the following unique run corresponding to the path \(\rho\) from \(s_{0}(\mathbf{v}_{0})\) in the monus semantics
\[s_{0}(\mathbf{v}_{0}),\ldots,s_{k-1}(\mathbf{v}_{k-1}),s_{k}(\mathbf{v}_{k}).\]
_Then the following run, induced by \(\rho\), exists in the classical semantics_
\[s_{0}(\mathbf{v}_{0}^{\prime}),\ldots,s_{k-1}(\mathbf{v}_{k-1}^{\prime}),s_{k}( \mathbf{v}_{k}^{\prime}).\]
_where \(\mathbf{v}_{0}^{\prime}=\mathbf{v}_{0}-\mathbf{m}\) and \(\mathbf{v}_{k}^{\prime}=\mathbf{v}_{k}\)._
Proof.: This essentially follows from the definition of \(\mathbf{m}\) and Remark 3.4. One just needs to observe that the \(\mathbb{Z}\) run with configurations shifted by the vector \(-\mathbf{m}\) does not go below zero, hence it is a classical run. See Section A.1.2 for the formal proof.
We now characterize monus reachability in terms of classical reachability.
**Proposition 3.6**. Let \(\mathcal{V}=(d,Q,\Delta)\) be a VASS, let \(s(\mathbf{v})\) and \(t(\mathbf{w})\) be configurations of \(\mathcal{V}\), and let \(\rho\) be a path of \(\mathcal{V}\). Then, \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) if and only if there is a subset \(Z\subseteq\{1,\ldots,d\}\) and a vector \(\mathbf{v}^{\prime}\geq\mathbf{v}\) such that
1. \(s(\mathbf{v}^{\prime})\xrightarrow{\rho}t(\mathbf{w})\),
2. For every \(z\in Z\), the coordinate \(z\) hits \(0\) in \(s(\mathbf{v}^{\prime})\xrightarrow{\rho}t(\mathbf{w})\),
3. For every \(j\in\{1,\ldots,d\}\setminus Z\), we have \(\mathbf{v}^{\prime}[j]=\mathbf{v}[j]\).
Proof.: ( \(\implies\) ) Let \(\mathbf{m}=\min_{\mathbb{Z}}(\rho,s,\mathbf{v})\). This direction is implied by Proposition 3.5 along with the following argument. Every counter \(j\in\{1,\ldots,d\}\) hits \(0\) in \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) if and only if it hits \(0\) in \(s(\mathbf{v}-\mathbf{m})\xrightarrow{\rho}t(\mathbf{w})\). Moreover, if \(j\) does not hit \(0\) in \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w})\) then \(\mathbf{m}[j]=0\).
( \(\Longleftarrow\) ) Let \(\mathbf{v}^{\prime}\geq\mathbf{v}\) be a vector as in the statement and let \(s(\mathbf{v}^{\prime})\xrightarrow{\rho}t(\mathbf{w})\). We define \(Z\subseteq\{1,\ldots,d\}\) such that \(i\in Z\) if counter \(i\) hits \(0\). Moreover, let \(s(\mathbf{v})\xRightarrow{\rho}t(\mathbf{w}^{\prime\prime})\). It suffices to show that \(\mathbf{w}=\mathbf{w}^{\prime\prime}\). We write \(s(\mathbf{v}^{\prime})=p_{0}(\mathbf{v}_{0}^{\prime})\ldots p_{k}(\mathbf{v}_{k}^{\prime})=t(\mathbf{w})\) and \(s(\mathbf{v})=p_{0}(\mathbf{v}_{0})\ldots p_{k}(\mathbf{v}_{k})=t(\mathbf{w}^{\prime\prime})\) for the corresponding runs in the classical and monus semantics, respectively. Note that \(\mathbf{v}^{\prime}\geq\mathbf{v}\) implies \(\mathbf{v}_{i}^{\prime}\geq\mathbf{v}_{i}\) for all \(0\leq i\leq k\). By definition of \(\mathbf{v}^{\prime}\) it suffices to consider counters \(j\) that hit zero, _i.e._ \(\mathbf{v}_{i}^{\prime}[j]=0\) for some \(0\leq i\leq k\). Since \(\mathbf{v}_{i}^{\prime}\geq\mathbf{v}_{i}\) we get \(\mathbf{v}_{i}^{\prime}[j]=0=\mathbf{v}_{i}[j]\). Hence, from \(i\) onward both runs agree on the value in counter \(j\). Thus \(\mathbf{w}=\mathbf{w}^{\prime\prime}\).
See SectionA.1.3 for an extended version of this proof.
The reachability problem. We begin with the Ackermann-completeness proof.
Reachability in monus semantics is Ackermann-complete.
For the upper bound we show how to reduce reachability in monus semantics to reachability in classical semantics. Let \(\mathcal{V}=(d,Q,\Delta)\), \(s(\mathbf{v})\), and \(t(\mathbf{w})\) be the input of the reachability problem in monus semantics. We rely on Proposition 3.6. Intuitively, we have to guess a subset \(Z\subseteq\{1,\ldots,d\}\) and a bijection \(\sigma\colon[1,k]\to Z\) (where \(k=|Z|\)). Then we check whether there exists a run as described in Proposition 3.6 with \(z_{i}=\sigma(i)\) for \(i\in[1,k]\). To detect the latter run, we construct the VASS \(\mathcal{V}_{\sigma}=(d+k,Q^{\prime},T^{\prime})\) as follows. It simulates \(\mathcal{V}\), but it has \(k\) extra counters to freeze the values of the counters in \(Z\) at the points where the coordinates \(\sigma(k),\ldots,\sigma(1)\) hit \(0\), as described in Proposition 3.6.
To remember which counters have already been frozen the set of control states is \(Q^{\prime}=\{q_{i}\mid q\in Q,\ i\in[0,k]\}\). Intuitively, the index \(i\in[0,k]\) stores the information how many counters are frozen. The index \(i\) can only increment. Note that guessing the permutation \(\sigma\) allows us to assume that we know the order in which the counters are frozen.
Since we deal with vectors in dimension \(d\) and \(d+k\) we introduce some helpful notation. We write \(\mathbf{e}_{j}\in\mathbb{Z}^{d}\) for the unity vector with \(\mathbf{e}_{j}[j]=1\) and with \(0\) on other coordinates. Given a vector \(\mathbf{z}\in\mathbb{Z}^{d}\) we define \(\mathsf{copy}(\mathbf{z})\in\mathbb{Z}^{d+k}\) as \(\mathsf{copy}(\mathbf{z})[j]=\mathbf{z}[j]\) for \(1\leq j\leq d\) and
\(\mathsf{copy}(\mathbf{z})[j]=\mathbf{z}[\sigma(j-d)]\) for \(d<j\leq d+k\). Intuitively, it simply copies the behaviors of the corresponding counters. We generalise this notation to allow to also remove the effect on some coordinates (_i.e._ "freeze" them). Given \(\mathbf{z}\in\mathbb{Z}^{d}\) and \(0\leq i\leq k\) we define \(\mathsf{copy}_{i}(\mathbf{z})\in\mathbb{Z}^{d+k}\) as \(\mathsf{copy}_{i}(\mathbf{z})[j]=\mathsf{copy}(\mathbf{z})[j]\) for \(1\leq j\leq d+k-i\) and \(\mathsf{copy}_{i}(\mathbf{z})[j]=0\) for \(d+k-i<j\leq d+k\). In particular \(\mathsf{copy}_{0}(\mathbf{z})=\mathsf{copy}(\mathbf{z})\) and \(\mathsf{copy}_{i}(\mathbf{z})\) is \(0\) in the last \(i\) counters.
It remains to define the set of transitions \(T^{\prime}\). In the beginning there are transitions in \(T^{\prime}\) that can arbitrarily increment each counter that belongs to \(Z\) and its extra copy: \((s_{0},\mathsf{copy}(\mathbf{e}_{j}),s_{0})\in T^{\prime}\) for every \(j\in Z\). Moreover, the counter in the control state can spontaneously be incremented: \((p_{i},\mathbf{0},p_{i+1})\) for every \(p\in Q\) and \(0\leq i<k\). For every transition \((p,\mathbf{z},q)\in T\) and \(0\leq i\leq k\) we define \((p_{i},\mathsf{copy}_{i}(\mathbf{z}),q_{i})\in T^{\prime}\).
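As a small illustration of this bookkeeping (the list-based encoding and the 1-based index convention are ours), the maps \(\mathsf{copy}\) and \(\mathsf{copy}_{i}\) can be written as follows.

```python
def copy_vec(z, sigma):
    """copy(z) in dimension d+k: the d original entries of z followed by copies of the
    coordinates in Z, in the order given by sigma (a list [sigma(1), ..., sigma(k)]
    of 1-based coordinate indices)."""
    return list(z) + [z[s - 1] for s in sigma]

def copy_i(z, sigma, i):
    """copy_i(z): like copy(z), but with the last i coordinates frozen to 0
    (for i <= k these are all among the k extra copies)."""
    c = copy_vec(z, sigma)
    for j in range(len(c) - i, len(c)):
        c[j] = 0
    return c

# Hypothetical instance: d = 3, Z = {1, 3}, sigma = (3, 1), transition effect z = (-1, 4, -2).
z, sigma = (-1, 4, -2), [3, 1]
print(copy_vec(z, sigma))   # [-1, 4, -2, -2, -1]
print(copy_i(z, sigma, 1))  # [-1, 4, -2, -2, 0]
```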
The following claim is straightforward by Proposition 3.6:
\(\rhd\) Claim 3.8. We have \(s(\mathbf{v})\xRightarrow{*}_{\mathcal{V}}t(\mathbf{w})\) if and only if there exists a subset \(Z\subseteq\{1,\ldots,d\}\) and a bijection \(\sigma\colon[1,k]\to Z\) such that \(s_{0}(\mathsf{copy}_{0}(\mathbf{v}))\xrightarrow{*}_{\mathcal{V}_{\sigma}}t_{k}(\mathsf{copy}_{k}(\mathbf{w}))\).
This implies that we can decide monus reachability by guessing a subset \(Z\subseteq[1,d]\), guessing a bijection \(\sigma\colon[1,k]\to Z\), and deciding reachability in \(\mathcal{V}_{\sigma}\). This yields the upper bound.
For the lower bound we reduce classical reachability to monus reachability. Let \(\mathcal{V}=(d,Q,\Delta)\), \(s(\mathbf{0})\) and \(t(\mathbf{0})\) be the input of the reachability problem in classical semantics (without loss of generality the input vectors can be \(\mathbf{0}\)). We construct the VASS \(\mathcal{V}^{\prime}=(d+2,Q^{\prime},T^{\prime})\) as follows. The states are \(Q^{\prime}=Q\cup\{t^{\prime}\}\), where \(t^{\prime}\) is a fresh copy of \(t\).
Again to deal with vectors in different dimensions we introduce the following notation. Given \(\mathbf{z}\in\mathbb{Z}^{d}\) we write \(\Delta(\mathbf{z})\in\mathbb{Z}\) for \(\Delta(\mathbf{z})=\sum_{j=1}^{d}\mathbf{z}[j]\), _i.e._ the sum of all components. Based on this we define \(\mathsf{extend}(\mathbf{z})\in\mathbb{Z}^{d+2}\) as: \(\mathsf{extend}(\mathbf{z})=(\mathbf{z},\Delta(\mathbf{z}),0)\) if \(\Delta(\mathbf{z})\geq 0\), and \(\mathsf{extend}(\mathbf{z})=(\mathbf{z},0,-\Delta(\mathbf{z}))\) otherwise.
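For concreteness, this case split can be sketched as follows (the tuple encoding is ours and serves only as an illustration).

```python
def extend(z):
    """extend(z) in dimension d+2: append the positive part and the (sign-flipped)
    negative part of the entry sum Delta(z), exactly as in the case split above."""
    delta = sum(z)
    return tuple(z) + ((delta, 0) if delta >= 0 else (0, -delta))

# Hypothetical effects in dimension 2.
print(extend((1, -2)))   # (1, -2, 0, 1)  since Delta = -1 < 0
print(extend((2, 1)))    # (2, 1, 3, 0)   since Delta = 3 >= 0
```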
We define \(T^{\prime}\) as follows. For every \((p,\mathbf{z},q)\in T\): \((p,\mathsf{extend}(\mathbf{z}),q)\in T^{\prime}\). Thus, in the \((d+1)\)-th counter, we collect the sum of all non-negative entry sums of the added vectors. Analogously, in the \((d+2)\)-th counter, we collect the sum of all negative entry sums (with a flipped sign). We also add the transition \((t,\mathbf{0},t^{\prime})\in T^{\prime}\), and a "count down" loop: \((t^{\prime},(\mathbf{0},-1,-1),t^{\prime})\), where \((\mathbf{0},-1,-1)\) is \(0\) in the first \(d\) components and \(-1\) otherwise. The following claim completes the proof of Ackermann-hardness.
\(\rhd\) Claim 3.9. We have \(s(\mathbf{0},1,1)\xRightarrow{*}t^{\prime}(\mathbf{0},1,1)\) in \(\mathcal{V}^{\prime}\) if and only if \(s(\mathbf{0})\xrightarrow{*}t(\mathbf{0})\) in \(\mathcal{V}\).
Proof. (\(\iff\) ) This is obvious, because every run in classical semantics yields a run in monus semantics between the same configurations.
\((\implies)\) Suppose there is a monus run from \(s(\mathbf{0},1,1)\) to \(t^{\prime}(\mathbf{0},1,1)\). Then for some \(m\in\mathbb{N}\), there is a transition sequence \(\rho\) leading in monus semantics from \(s(\mathbf{0},1,1)\) to \(t(\mathbf{0},m,m)\). Now let us execute \(\rho\) in \(\mathbb{Z}\)-semantics. This execution will arrive at some configuration \(t(\mathbf{v},m,m)\) (note that the last two counters are never decreased, except for the final loop). We shall prove that (i) \(\mathbf{v}=\mathbf{0}\) and (ii) this execution never drops below zero. First, according to Proposition 3.3, the resulting counter values in monus semantics are always at least the values from \(\mathbb{Z}\)-semantics. This implies \(\mathbf{v}\leq\mathbf{0}\). Next observe that since the right-most components have the same value \(m\), the total sum of all entry sums of added vectors (in the first \(d\) entries) must be zero. Thus, \(\Delta(\mathbf{v})=0\). Together with \(\mathbf{v}\leq\mathbf{0}\), this implies \(\mathbf{v}=\mathbf{0}\), which shows (i). Second, if the execution in \(\mathbb{Z}\)-semantics ever drops below zero in some counter \(i\), then by Proposition 3.3 and the fact that in \(\mathbb{Z}\)-semantics we reach \(\mathbf{v}=\mathbf{0}\), this would imply that \(\rho\) in monus semantics ends up in a strictly positive value in counter \(i\), which is not true. This shows (ii). Hence, we have shown that the run in \(\mathbb{Z}\)-semantics is actually a run in classical VASS semantics. Therefore, \(s(\mathbf{0})\xrightarrow{*}t(\mathbf{0})\) in \(\mathcal{V}\).
Characterizing zero-reachability. Monus zero-reachability has a simple characterization in terms of classical coverability. Here, \(\mathcal{V}^{\mathsf{rev}}\) is obtained by reversing all transitions in \(\mathcal{V}\) and their effects. Formally, there is a transition \((p,\mathbf{z},q)\) in \(\mathcal{V}^{\mathsf{rev}}\) iff there is a transition \((q,-\mathbf{z},p)\) in \(\mathcal{V}\).
**Lemma 3.10**.: _For any \(\mathbf{v}\), we have \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{0})\) iff \(t(\mathbf{0})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}^{\mathsf{rev}}}}s(\mathbf{v}^{\prime})\) for some \(\mathbf{v}^{\prime}\geq\mathbf{v}\)._
Proof.: By Proposition 3.1, \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{0})\) yields a \(\mathbf{v}^{\prime}\geq\mathbf{v}\) with \(s(\mathbf{v}^{\prime})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t( \mathbf{0})\). Conversely, if \(s(\mathbf{v}^{\prime})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t( \mathbf{0})\), then we can pick \(Z=[1,d]\) in Proposition 3.1 to obtain \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{0})\).
This together with the known complexity of classical coverability [33, 35] immediately implies:
The monus zero-reachability problem is \(\mathsf{EXPSPACE}\)-complete.
Characterizing coverability. Our third characterization describes coverability in monus semantics in terms of reachability in \(\mathbb{Z}\)-semantics:
**Proposition 3.12**.: _Let \(\mathcal{V}=(d,Q,\Delta)\) be a VASS and let \(s(\mathbf{v})\) and \(t(\mathbf{w})\) be configurations. Then \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w}^{\prime\prime})\) for some \(\mathbf{w}^{\prime\prime}\geq\mathbf{w}\) if and only if there is a permutation \(\sigma\) of \(\{1,\ldots,d\}\) and \(\mathbb{Z}\)-configurations \(p_{d}(\mathbf{v}_{d}),\ldots,p_{1}(\mathbf{v}_{1})\), \(t(\mathbf{w}^{\prime})\) so that (1) \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d}(\mathbf{v}_{d})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d-1}(\mathbf{v}_{d-1})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}\cdots\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{1}(\mathbf{v}_{1})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w}^{\prime})\), and (2) for each \(j\in\{1,\ldots,d\}\), we have \(\mathbf{w}^{\prime}[j]+|\min(\mathbf{v}_{\sigma^{-1}(j)}[j],0)|\geq\mathbf{w}[j]\)._
Proof.: (\(\implies\)) Let \(\rho\) be any path such that \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w}^ {\prime\prime})\) and \(\mathbf{w}^{\prime\prime}\geq\mathbf{w}\). Then, by Proposition 3.1\(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w} ^{\prime\prime}+\mathbf{m})\), where \(\mathbf{m}\) is the vector of minimum values in the \(\mathbb{Z}\) run. The required permutation \(\sigma\) represents the order \(\sigma(d),\ldots,\sigma(1)\) in which these coordinates reach their corresponding minimum values. Hence, \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w} ^{\prime\prime}+\mathbf{m})\) is the same as \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d}( \mathbf{v}_{d})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d-1}( \mathbf{v}_{d-1})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}\cdots \mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{1}(\mathbf{v}_{1}) \mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w}^{\prime})\), such that \(\mathbf{v}_{d}[\sigma(d)]=\mathbf{m}[\sigma(d)],\ldots,\mathbf{v}_{1}[\sigma(1)] =\mathbf{m}[\sigma(1)]\), and \(\mathbf{w}^{\prime\prime}[j]=\mathbf{w}^{\prime}[j]-\mathbf{m}[j]=\mathbf{w} ^{\prime}[j]+|\mathbf{m}[j]|=\mathbf{w}^{\prime}[j]+|\min(\mathbf{v}_{\sigma^{ -1}(j)}[j],0)|\) for all \(1\leq j\leq d\). As \(\mathbf{w}^{\prime\prime}\geq\mathbf{w}\), \(\mathbf{w}^{\prime}[j]+|\min(\mathbf{v}_{\sigma^{-1}(j)}[j],0)|\geq\mathbf{w}[j]\) for all \(1\leq j\leq d\).
(\(\iff\)) This is a direct consequence of Proposition 3.1. It implies that given any permutation \(\sigma\) on \(\{1,\ldots,d\}\) and any run \(s(\mathbf{v})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d}( \mathbf{v}_{d})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{d-1}( \mathbf{v}_{d-1})\mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}\cdots \mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}p_{1}(\mathbf{v}_{1}) \mathrel{\mathop{\Rightarrow}\limits_{\mathcal{V}}}t(\mathbf{w}^{\prime})\) such that \(\mathbf{w}^{\prime}[j]-\min(\mathbf{v}_{\sigma^{-1}(j)}[j],0)\geq\mathbf{w}[j]\), there is a run from configuration \(s(\mathbf{v})\) and reaching a configuration \(t(\mathbf{w}^{\prime\prime})\) where \(\mathbf{w}^{\prime\prime}[j]=\mathbf{w}^{\prime}[j]-\mathbf{m}[j]\geq\mathbf{w} ^{\prime}[j]-\min(\mathbf{v}_{\sigma^{-1}(j)}[j],0)\geq\mathbf{w}[j]\) for all \(1\leq j\leq d\).
We conclude the following.
**Proposition 3.13**.: _Monus coverability is \(\mathsf{NP}\)-complete._
Proof.: First we show \(\mathsf{NP}\)-hardness. In [28, Prop. 5.11], it is shown that it is \(\mathsf{NP}\)-hard to decide whether a regular language over some alphabet \(\Sigma\), given as an NFA, contains a word in which every letter appears exactly once. Given such an NFA \(\mathcal{A}\) over \(\Sigma=\{a_{1},\ldots,a_{d}\}\), we construct a \(d\)-VASS \(\mathcal{V}\). The VASS \(\mathcal{V}\) simulates \(\mathcal{A}\) such that when \(\mathcal{A}\) reads \(a_{i}\), \(\mathcal{V}\) increments counter \(i\). Moreover, \(\mathcal{V}\) maintains a number \(k\in\{0,\ldots,d\}\) in its state, which always holds the number of letters read so far. Thus, \(\mathcal{V}\) has states \(q_{k}\), where \(q\) is a state of \(\mathcal{A}\) and \(k\in\{0,\ldots,d\}\). Moreover, let \(s\) and \(t\) be the initial and final state of \(\mathcal{A}\), respectively. Then in \(\mathcal{V}\), one can cover \(t_{d}(1,\ldots,1)\) from \(s_{0}(\mathbf{0})\) in monus semantics if and only if \(\mathcal{A}\) accepts some word as above.
We turn to the \(\mathsf{NP}\) upper bound. Suppose we are given a \(d\)-VASS \(\mathcal{V}=(d,Q,\Delta)\) and configurations \(s(\mathbf{u}),t(\mathbf{v})\). We employ Proposition 3.12. First non-deterministically guess a permutation \(\sigma\) of \([1,d]\). We now construct a \(2d\)-VASS \(\mathcal{V}^{\prime}_{\sigma}\) and two configurations \(c^{\prime}_{1},c^{\prime}_{2}\) such that in \(\mathcal{V}^{\prime}_{\sigma}\), we have \(c^{\prime}_{1}\stackrel{{*}}{{\rightarrow}}c^{\prime}_{2}\) if and only if there is a run as in Proposition 3.12 with this \(\sigma\). Since reachability in \(\mathbb{Z}\)-semantics is \(\mathsf{NP}\)-complete [25], this yields the upper bound.
Our VASS \(\mathcal{V}^{\prime}_{\sigma}\) is a slight extension of the VASS \(\mathcal{V}_{\sigma}\) from Theorem 3.7, see Figure 3. Recall that for a permutation \(\sigma\colon[1,k]\to Z\), \(\mathcal{V}_{\sigma}\) keeps \(k\) extra counters that freeze the values of the counters in \(Z\), in the order \(\sigma(k),\sigma(k-1),\ldots,\sigma(1)\). We use this construction, but for our permutation \(\sigma\) of \([1,d]\). Thus, \(\mathcal{V}_{\sigma}\) simulates a run of \(\mathcal{V}\) and then freezes the counters \(\sigma(d),\ldots,\sigma(1)\) in the extra \(d\) counters, in this order. The steps that freeze counters define the vectors \(\mathbf{v}_{d}\),..., \(\mathbf{v}_{1}\) in Proposition 3.12. Note that for each \(\mathbf{v}_{i}\), only \(\mathbf{v}_{i}[\sigma(i)]\) is important.
To verify the second condition in Proposition 3.12, we introduce an extra state \(t^{\prime}\) and extra transitions as depicted in Figure 3. After executing \(\mathcal{V}_{\sigma}\), \(\mathcal{V}^{\prime}_{\sigma}\) then has two types of loops: one to move tokens from the counters \(d+j\) to counters \(\sigma(j)\) (for each \(j\in[1,d]\)), and one to reduce tokens in counters \(1,\ldots,d\). Thus there exists \(\sigma\) such that \(s_{0}(\mathsf{copy}_{0}(\mathbf{u}))\stackrel{{*}}{{\rightarrow}}t^{\prime}(\mathsf{copy}_{d}(\mathbf{v}))\) in \(\mathcal{V}^{\prime}_{\sigma}\) if and only if \(s(\mathbf{u})\stackrel{{*}}{{\Rightarrow}}t(\mathbf{v}^{\prime\prime})\) for some \(\mathbf{v}^{\prime\prime}\geq\mathbf{v}\) in \(\mathcal{V}\). This proves the \(\mathsf{NP}\) upper bound.
## 4 Two-dimensional VASS
In this section we prove the results of Table 1 related to 2-VASS, both for unary and binary encoding. Note that for all three considered problems, reachability, zero reachability, and coverability, we always have an \(\mathsf{NL}\) lower bound, inherited from state reachability in finite automata. The latter is well-known to be \(\mathsf{NL}\)-hard, and a VASS without counters (in all considered semantics) is a finite state automaton.
When dealing with binary/unary updates one needs to be careful with the input size. In all problems suppose a VASS \(\mathcal{V}=(d,Q,T)\) is in the input. If we are interested in the unary encoding its size is defined as \(d+|Q|+\sum_{(p,\mathbf{z},q)\in T}\lVert\mathbf{z}\rVert\), where \(\lVert\mathbf{z}\rVert\) is the maximum absolute value of the coordinates of \(\mathbf{z}\). In the binary encoding one needs to change \(\lVert\mathbf{z}\rVert\) to \(\lceil\log(\lVert\mathbf{z}\rVert+1)\rceil\). From this point onwards, we use the term _succinct_ VASS for VASS where updates are encoded in binary.
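As a small illustration of the two encodings, the following sketch computes both sizes for a toy transition set; the representation of transitions simply as update tuples is ad hoc.

```python
from math import ceil, log2

def vass_size(d, num_states, updates, binary=False):
    """Input size as defined above: d + |Q| plus, per transition, the largest
    absolute coordinate of its update (unary) or that value's bit length (binary)."""
    total = 0
    for z in updates:
        norm = max(abs(x) for x in z)
        total += ceil(log2(norm + 1)) if binary else norm
    return d + num_states + total

updates = [(3, -2), (0, 7)]
print(vass_size(2, 4, updates))               # unary size: 2 + 4 + 3 + 7 = 16
print(vass_size(2, 4, updates, binary=True))  # binary size: 2 + 4 + 2 + 3 = 11
```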
We consider each of the three problems separately.
Reachability. Here we only prove the \(\mathsf{PSPACE}\) upper bound for monus reachability in binary encoded 2-VASS, which implies the same upper bound for unary encoding. The \(\mathsf{PSPACE}\) lower bound for binary encoding is inherited from zero reachability, see Proposition 4.3 below.
In succinct 2-VASS, reachability with monus semantics is in \(\mathsf{PSPACE}\).
According to Proposition 3.6, reachability with monus semantics is equivalent to existence of a run under classical semantics, where said run is subject to some additional constraints.
Recall that _Presburger arithmetic_ is the first-order theory of \((\mathbb{N},+,<,0,1)\). We observe that all the additional constraints of Proposition 3.6 can be expressed by quantifier-free Presburger formulas. This leads us to the so-called _constrained runs problem_ for succinct \(2\)-VASS, which was recently shown to be in \(\mathsf{PSPACE}\)[3], following the fact that classical reachability itself is \(\mathsf{PSPACE}\)-complete for succinct \(2\)-VASS [5].
Formally, the _constrained runs problem_ for succinct \(2\)-VASS is the following:
Given: A succinct \(2\)-VASS \(\mathcal{V}\), a number \(m\in\mathbb{N}\), states \(q_{1},\ldots,q_{m}\) in \(\mathcal{V}\), a quantifier-free Presburger formula \(\psi(x_{1},y_{1},\ldots,x_{m},y_{m})\), and numbers \(s,t\in[1,m]\) with \(s\leq t\).
Question: Does there exist a run \(q_{0}(0,0)\xrightarrow{*}q_{1}(x_{1},y_{1})\xrightarrow{*}\cdots\xrightarrow{*}q_{m}(x_{m},y_{m})\) that visits a final state between \(q_{s}(x_{s},y_{s})\) and \(q_{t}(x_{t},y_{t})\) and satisfies \(\psi(x_{1},y_{1},\ldots,x_{m},y_{m})\)?
**Lemma 4.2** ([3, Prop. 6.5]).: _The constrained runs problem for succinct \(2\)-VASS is in \(\mathsf{PSPACE}\)._
We can now prove Proposition 4.1 by reducing to the constrained runs problem: Let \(\mathcal{V}\) be a \(2\)-VASS with configurations \(s(\mathbf{v})\) and \(t(\mathbf{w})\). According to Proposition 3.6, existence of a run \(s(\mathbf{v})\xrightarrow{*}t(\mathbf{w})\) is equivalent to the existence of states \(p_{1},p_{2}\) and a set \(Z\subseteq[1,2]\) such that there is a run \(s(\mathbf{v}^{\prime})\xrightarrow{*}t(\mathbf{w})\) with \(\mathbf{v}^{\prime}\geq\mathbf{v}\) that is subject to the additional requirements enforced by conditions (2) and (3) of Proposition 3.6. Our \(\mathsf{PSPACE}\) algorithm enumerates all possibilities of \(p_{1},p_{2}\) and \(Z\), constructing an instance of the constrained runs problem each time, and checking for a constrained run in \(\mathsf{PSPACE}\) using Lemma 4.2. If such a run exists in at least one of the instances, the algorithm accepts, otherwise it rejects. To construct each instance the algorithm first modifies \(\mathcal{V}\) to ensure that a starting configuration \(s(\mathbf{v}^{\prime})\) is reachable for any \(\mathbf{v}^{\prime}\geq\mathbf{v}\). To this end a new initial state \(q_{0}\) is added, with two loops that increment one of the counters each, and a transition that goes to \(s\) by adding \(\mathbf{v}\). Then the additional requirements of Proposition 3.6 are encoded in quantifier-free Presburger arithmetic, as required by the constrained runs problem. Clearly the constructed algorithm runs in \(\mathsf{PSPACE}\) and decides \(s(\mathbf{v})\xrightarrow{*}t(\mathbf{w})\). For more details refer to Appendix B.1.1.
Zero reachability
**Proposition 4.3**.: _Monus zero reachability in \(2\)-VASS is \(\mathsf{PSPACE}\)-complete under binary encoding and \(\mathsf{NL}\)-complete under unary encoding._
Proof.: This is a simple consequence of monus zero reachability being interreducible with classical coverability: Classical coverability in \(2\)-VASS is \(\mathsf{PSPACE}\)-complete under binary encoding (in [5, Corollary 3.3], this is deduced from [36, p. 108] and [17, Corollary 10]) and \(\mathsf{NL}\)-complete under unary encoding [36, p. 108].
Let \(\mathcal{V}\) be a \(2\)-VASS with configurations \(s(\mathbf{v})\) and \(t(\mathbf{0})\). Then according to Lemma 3.10, we know that \(t(\mathbf{0})\) is monus reachable from \(s(\mathbf{v})\) if and only if in \(\mathcal{V}^{\mathsf{rev}}\) the configuration \(s(\mathbf{v})\) is coverable from \(t(\mathbf{0})\) with classical semantics. On the other hand, given configurations \(s(\mathbf{v})\) and \(t(\mathbf{w})\) of a \(2\)-VASS \(\mathcal{V}\), we add a new state \(s^{\prime}\) and transition \((s^{\prime},\mathbf{v},s)\) to construct the \(2\)-VASS \(\mathcal{V}^{\prime}\). Then classical coverability of \(t(\mathbf{w})\) from \(s(\mathbf{v})\) in \(\mathcal{V}\) is equivalent to the same from \(s^{\prime}(\mathbf{0})\) in \(\mathcal{V}^{\prime}\). Now applying Lemma 3.10 in reverse, the latter is further equivalent to monus reachability of \(s^{\prime}(\mathbf{0})\) from \(t(\mathbf{w})\) in \((\mathcal{V}^{\prime})^{\mathsf{rev}}\).
Coverability. By Proposition 3.13, monus coverability is in \(\mathsf{NP}\) in arbitrary dimension. Thus, it remains to show the \(\mathsf{NP}\) lower bound.
**Proposition 4.4**.: _Monus coverability in succinct \(2\)-VASS is \(\mathsf{NP}\)-hard._
Proof.: We reduce from the _subset sum_ problem, which is well-known to be NP-hard. Here, we are given binary encoded numbers \(a_{1},\ldots,a_{n},a\in\mathbb{N}\) and are asked whether there is a vector \((x_{1},\ldots,x_{n})\in\{0,1\}^{n}\) such that \(x_{1}a_{1}+\cdots+x_{n}a_{n}=a\). Given such an instance, we construct the \(2\)-VASS in Figure 4. It is clear that we can cover \(t(1,1)\) from \(s(0,0)\) iff the subset-sum instance is positive: Covering \(1\) in the first counter means our sum is at least \(a\), whereas covering \(1\) in the second counter means our sum is at most \(a\).
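The source problem of this reduction is easy to check by brute force. The sketch below only verifies the subset-sum instance itself (a chosen sum that is simultaneously at least \(a\) and at most \(a\), i.e. exactly \(a\)); it does not reconstruct the gadget of Figure 4.

```python
from itertools import product

def subset_sum(a_list, a):
    """Is there a 0/1 choice of the a_i whose sum is >= a and <= a, i.e. exactly a?"""
    return any(sum(x * ai for x, ai in zip(choice, a_list)) == a
               for choice in product((0, 1), repeat=len(a_list)))

print(subset_sum([3, 5, 7], 12))  # True  (5 + 7)
print(subset_sum([3, 5, 7], 11))  # False
```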
**Proposition 4.5**.: _Monus coverability in unary-encoded \(2\)-VASS is in NL._
Proof.: This follows using the same construction as for Proposition 3.13: Given a \(2\)-VASS, there are only two permutations \(\sigma\) of \(\{1,2\}\). Thus, we can try both permutations \(\sigma\) and construct the VASS \(\mathcal{V}_{\sigma}^{\prime}\) in logspace. Then, \(\mathcal{V}_{\sigma}^{\prime}\) has dimension \(2d=4\). Thus, we reduce monus coverability in \(2\)-VASS to reachability in \(\mathbb{Z}\)-semantics in \(4\)-VASS. Since reachability with \(\mathbb{Z}\)-semantics in each fixed dimension can be decided in \(\mathsf{NL}\) [24], this provides an \(\mathsf{NL}\) upper bound.
## 5 One-dimensional VASS
Reachability. We begin with the proofs regarding reachability.
**Proposition 5.1**.: _Monus reachability in \(1\)-VASS is in NL under unary encoding and in NP under binary encoding._
The proof of Proposition 5.1 relies on the following simple consequence of Proposition 3.6:
**Lemma 5.2**.: _Let \(\mathcal{V}\) be a \(1\)-VASS. Then \(s(m)\xRightarrow{*}_{\mathcal{V}}t(n)\) if and only if (i) \(s(m)\xrightarrow{*}_{\mathcal{V}}t(n)\) or (ii) there exists a state \(q\) and a number \(m^{\prime}\geq m\) with \(s(m^{\prime})\xrightarrow{*}_{\mathcal{V}}q(0)\) and \(q(0)\xrightarrow{*}_{\mathcal{V}}t(n)\)._
For Proposition 5.1, we reduce to reachability in one-counter automata. A _one-counter automaton (OCA)_ is a \(1\)-VASS with zero-tests, i.e. special transitions that test the counter for zero instead of adding a number. For encoding purposes, zero tests take up as much space as a transition adding \(0\) to the counter. In our reduction, the update encoding is preserved: If the input \(1\)-VASS has unary encoding, then the OCA has unary updates as well. If the input \(1\)-VASS has binary updates, then the OCA will too. Then, we can use the fact that in OCA with unary updates, reachability is in NL[38] and for binary updates, it is in NP[26].
The OCA first guesses whether to simulate a run of type (i) or of type (ii) in Lemma 5.2. Then for type (i), it just simulates a classical \(1\)-VASS. For type (ii), it first non-deterministically increments the counter, and then simulates a run of the \(1\)-VASS. However, on the way, it keeps a flag signaling whether the counter has hit \(0\) at some point (which it can maintain using zero tests). Thus, when simulating runs of type (ii), the OCA only accepts if zero has been hit. For a detailed description, refer to Appendix C.1.1.
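A brute-force reference implementation is sometimes handy for cross-checking small instances against Lemma 5.2. The sketch below explores monus runs of a 1-VASS directly, assuming the truncate-at-zero update and using an ad-hoc counter cap; it is not the NL/NP procedure described above.

```python
from collections import deque

def monus_reachable(transitions, src, tgt, cap=10_000):
    """Breadth-first search over monus configurations (state, counter) of a
    1-VASS given as (state, update, state) triples.  The cap bounds the
    explored counter values and is for illustration only."""
    seen, queue = {src}, deque([src])
    while queue:
        q, c = queue.popleft()
        if (q, c) == tgt:
            return True
        for p, z, r in transitions:
            if p == q:
                nc = max(c + z, 0)          # monus update (assumed semantics)
                if nc <= cap and (r, nc) not in seen:
                    seen.add((r, nc))
                    queue.append((r, nc))
    return False

# Toy run of type (ii) from Lemma 5.2: drop to zero, then count back up.
ts = [("s", -5, "q"), ("q", 3, "t")]
print(monus_reachable(ts, ("s", 2), ("t", 3)))  # True: 2 -> 0 -> 3
```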
**Proposition 5.3**.: _Monus reachability in \(1\)-VASS is NP-hard under binary encoding._
Figure 4: \(2\)-VASS to show NP-hardness of coverability in dimension two.
As in Proposition 4.4, we reduce from subset sum. Given \(a_{1},\ldots,a_{n},a\) in binary, we construct the \(1\)-VASS in Figure 5. Then \(q_{0}(0)\xRightarrow{*}q_{f}(1)\) iff this is a positive instance. See Appendix C.1.2.
Zero reachability and coverability. **Proposition 5.4**.: _Monus zero-reachability in \(1\)-VASS is in \(\mathsf{NL}\) under unary encoding and in \(\mathsf{NC}^{2}\) under binary encoding._
Since monus zero-reachability reduces to classical coverability (Lemma 3.10), this follows from existing \(1\)-VASS results: Coverability in \(1\)-VASS is in \(\mathsf{NL}\) under unary encoding [38] and \(\mathsf{NC}^{2}\) under binary encoding [2].
**Proposition 5.5**.: _Monus coverability in \(1\)-VASS is in \(\mathsf{NL}\) under unary encoding and in \(\mathsf{NC}^{2}\) under binary encoding._
The first statement follows from Proposition 5.1 and the fact that monus coverability reduces to monus reachability by simply adding a new final state where we can count down. For the \(\mathsf{NC}^{2}\) bound, we use the following consequence of Lemma 3.10 (see Appendix C.2.1).
**Lemma 5.6**.: _Let \(\mathcal{V}\) be a \(1\)-VASS with configurations \(s(m)\) and \(t(n)\). Then \(t(n)\) is monus coverable from \(s(m)\) in \(\mathcal{V}\) if and only if \(t(n)\) is coverable from \(s(m)\) in \(\mathcal{V}\) under classical semantics or there is a state \(q\) of \(\mathcal{V}\) such that \(t(n)\) is coverable from \(q(0)\) in \(\mathcal{V}\) under classical semantics and \(s(m)\) is coverable from \(q(0)\) in \(\mathcal{V}^{\mathsf{rev}}\) under classical semantics._
Proof of Proposition 5.5.: It remains to prove the \(\mathsf{NC}^{2}\) upper bound, for which we check the requirements of Lemma 5.6. Let \(k\) be the number of states of the input \(1\)-VASS. Observe that Lemma 5.6 yields a logical disjunction over \(k+1\) disjuncts, where one disjunct consists of a single coverability check and the remaining \(k\) each consist of a logical conjunction over two coverability checks. Classical coverability of binary encoded \(1\)-VASS is in \(\mathsf{NC}^{2}\)[2], and by the definition of this complexity class, we can combine \(2k+1\) such checks according to the aforementioned logical relationship and still yield an \(\mathsf{NC}^{2}\)-algorithm. Note that this is only possible because \(k\) is linear in the size of the input.
|
2304.03157 | Microscopic characterization of the magnetic properties of the itinerant
antiferromagnet La2Ni7 by 139La NMR/NQR measurements | 139La nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR)
measurements have been performed to investigate the magnetic properties of the
itinerant magnet La2Ni7 which shows a series of antiferromagnetic (AFM) phase
transitions at $T_{N1}$=61 K, $T_{N2}$=56 K, and $T_{N3}$=42 K under zero
magnetic field. Two distinct La NMR signals were observed due to the two
crystallographically inequivalent La sites in La2Ni7 (La1 and La2 in the La2Ni4
and the LaNi5 sub-units of the La2Ni7 unit cell, respectively). From the 139La
NQR spectrum in the AFM state below $T_{N3}$, the AFM state was revealed to be
a commensurate state where Ni ordered moments align along the crystalline c
axis. Owing to the two different La sites, we were able to estimate the average
values of the Ni ordered moments ($\sim$0.09-0.10 $\mu_{B}$/Ni and
$\sim$0.17$\mu_{B}$/Ni around La1 and La2, respectively) from 139La NMR
spectrum measurements in the AFM state below $T_{N3}$, suggesting a non-uniform
distribution of the Ni-ordered moments in the AFM state. In contrast, a more
uniform distribution of the Ni-ordered moments in the saturated paramagnetic
state induced by the application of high magnetic fields is observed. The
temperature dependence of the sublattice magnetization measured by the internal
field at the La2 site in the AFM state was reproduced by a local moment model
better than the self-consistent renormalization (SCR) theory for weak itinerant
antiferromagnets. Given the small Ni-ordered moments in the magnetically
ordered state, our results suggest that La2Ni7 has characteristics of both
itinerant and localized natures in its magnetism. With this in mind, it is
noteworthy that the temperature dependence of nuclear spin-relaxation rates in
the paramagnetic state above $T_{N1}$ measured at zero magnetic field can be
explained qualitatively by both the SCR theory and the local-moment model. | Q. -P. Ding, J. Babu, K. Rana, Y. Lee, S. L. Bud'ko, R. A. Ribeiro, P. C. Canfield, Y. Furukawa | 2023-04-06T15:40:32Z | http://arxiv.org/abs/2304.03157v2 | Microscopic characterization of the magnetic properties of the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\) by \({}^{139}\)La NMR/NQR measurements
###### Abstract
\({}^{139}\)La nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) measurements have been performed to investigate the magnetic properties of the itinerant magnet La\({}_{2}\)Ni\({}_{7}\) which shows a series of antiferromagnetic (AFM) phase transitions at \(T_{\rm N1}=61\) K, \(T_{\rm N2}=56\) K, and \(T_{\rm N3}=42\) K under zero magnetic field. Two distinct La NMR signals were observed due to the two crystallographically inequivalent La sites in La\({}_{2}\)Ni\({}_{7}\) (La1 and La2 in the La\({}_{2}\)Ni\({}_{4}\) and the LaNi\({}_{5}\) subunits of the La\({}_{2}\)Ni\({}_{7}\) unit cell, respectively). From the \({}^{139}\)La NQR spectrum in the AFM state below \(T_{\rm N3}\), the AFM state was revealed to be a commensurate state where Ni ordered moments align along the crystalline \(c\) axis. Owing to the two different La sites, we were able to estimate the average values of the Ni ordered moments (\(\sim\)0.07-0.08 \(\mu_{\rm B}\)/Ni and \(\sim\)0.17\(\mu_{\rm B}\)/Ni around La1 and La2, respectively) from \({}^{139}\)La NMR spectrum measurements in the AFM state below \(T_{\rm N3}\), suggesting a non-uniform distribution of the Ni-ordered moments in the AFM state. In contrast, a more uniform distribution of the Ni-ordered moments in the saturated paramagnetic state induced by the application of high magnetic fields is observed. The temperature dependence of the sublattice magnetization measured by the internal field at the La2 site in the AFM state was reproduced by a local moment model better than the self-consistent renormalization (SCR) theory for weak itinerant antiferromagnets. Given the small Ni-ordered moments in the magnetically ordered state, our results suggest that La\({}_{2}\)Ni\({}_{7}\) has characteristics of both itinerant and localized natures in its magnetism. With this in mind, it is noteworthy that the temperature dependence of nuclear spin-relaxation rates (\(1/T_{1}\)) in the paramagnetic state above \(T_{\rm N1}\) measured at zero magnetic field can be explained qualitatively by both the SCR theory and the local-moment model.
## I Introduction
Fragile magnets with small ordered moments, such as weak itinerant ferromagnets and antiferromagnets, have attracted much interest in studying quantum phase transition since the magnetic states of those compounds can be tuned by perturbations such as magnetic field, pressure, and doping [1; 2]. As a part of our research projects on fragile magnets, we have recently been studying the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\) in detail by using single crystals [3; 4; 5]. La\({}_{2}\)Ni\({}_{7}\) crystallizes in a Ce\({}_{2}\)Ni\({}_{7}\) type hexagonal structure (space group \(P6_{3}/mmc\)) [6; 7] and the lattice parameters are \(a=b=5.06235(11)\) A and \(c=24.6908(8)\) A at room temperature [3]. As shown in Fig. 1(a) [8], the structure consists of La\({}_{2}\)Ni\({}_{4}\) and LaNi\({}_{5}\) sub-units and the unit cell contains two blocks of [La\({}_{2}\)Ni\({}_{4}\) + 2 LaNi\({}_{5}\)] where two different La sites (La1 and La2 in the La\({}_{2}\)Ni\({}_{4}\) and the LaNi\({}_{5}\) sub-units of the La\({}_{2}\)Ni\({}_{7}\) unit cell, respectively) and five different sites for Ni atoms exist [9; 10]. The magnetic properties of the compound La\({}_{2}\)Ni\({}_{7}\) have been investigated since the 1980's, but only in polycrystalline form. Magnetic susceptibility \(\chi(T)\) measurements on La\({}_{2}\)Ni\({}_{7}\) in polycrystalline samples show a maximum around 51-54 K corresponding to an antiferromagnetic (AFM) phase transition [11; 12; 13]. The effective moment and Curie-Weiss temperature in the paramagnetic state estimated from the \(\chi(T)\) measurement were reported to be 0.80 \(\mu_{\rm B}\)/Ni and 67 K, respectively [14]. The saturation moment of Ni ions in the magnetically ordered state was reported to be \(\sim\)0.08 \(\mu_{\rm B}\)/Ni from magnetization curves [15], which show the temperature-dependent metamagnetic behaviors, suggesting the existence of several magnetic phases [14; 16].
In fact, from the recent detailed specific heat, magnetization, and electrical resistivity measurements on single crystals, Ribeiro _et al._[3] reported the complete, anisotropic magnetic field (\(H\)) - temperature (\(T\)) phase diagrams for the magnetic fields applied parallel and perpendicular to the crystallographic \(c\) axis shown in Fig. 1(c). In zero applied magnetic field, there are a series of antiferromagnetic (AFM) phase transitions at \(T_{\rm N1}\)\(\sim\) 61 K, \(T_{\rm N2}\)\(\sim\) 56.5 K, and \(T_{\rm N3}\)\(\sim\) 42 K, making three different magnetically ordered phases named C, B, and A phases for \(T_{\rm N2}\)\(<\)\(T<\)\(T_{\rm N1}\), \(T_{\rm N3}\)\(<\)\(T<\)\(T_{\rm N2}\), and \(T<\)\(T_{\rm N3}\), respectively. It is noted that in the case of the C phase, a small ferromagnetic (FM) component was reported in its AFM ordering [3]. Recent neutron diffraction (ND) measurements using single crystals indicate that the B and C phases are incommensurate AFM states where the ordered moments are perpendicular to the \(c\) axis in the C phase while an additional parallel component of the ordered moments along the \(c\) axis also exists in the B phase, and the A phase is commensurate where the ordered moments are parallel to the \(c\) axis [4]. The commensurate AFM state in the lowest temperature A phase reported by the ND measurements is consistent with the theoretical prediction [10] where two ferromagnetic unit blocks with opposite Ni spin directions are separated by the non-magnetic Ni layers in the La\({}_{2}\)Ni\({}_{4}\) units; the Ni ordered moments are proposed to point along the \(c\) axis and their magnitude depends on the Ni positions, with a minimum in the La\({}_{2}\)Ni\({}_{4}\) sub-units and a maximum at the middle of the two LaNi\({}_{5}\) sub-units, as shown in Fig. 1(b) [10].
Although detailed studies of the thermodynamic, transport, and structural measurements on La\({}_{2}\)Ni\({}_{7}\) using a single crystal have been performed, further studies such as the microscopic characterization of the static and dynamic magnetic properties would provide important information for characterizing the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\). Nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) are powerful techniques to investigate the magnetic properties of materials from a microscopic point of view [17]. In particular, one can obtain direct and local information of the magnetic state at nuclear sites, helping to determine the magnetic structure of magnetic systems. In addition, the nuclear spin-lattice relaxation time, \(T_{1}\), provides important information about magnetic fluctuations.
Here we report the results of \({}^{139}\)La NMR and NQR measurements on La\({}_{2}\)Ni\({}_{7}\) single crystals and oriented powder samples. Two distinct La signals with two different values of quadrupolar frequencies of \(\nu_{\rm Q}\)\(\sim\) 0.7 and \(\sim\) 3.6 MHz are observed. Those are assigned to La1 and La2, respectively, based on density functional theory (DFT) calculations. In the antiferromagnetic state of the lowest temperature A phase, the internal magnetic induction \(B_{\rm int}\) at La2 was directly determined to be -0.79 T at 4.2 K by the observation of the splitting of NQR lines. In addition, the direction of the \(B_{\rm int}\) is determined to be parallel to the \(c\) axis from further splittings of NQR lines under small magnetic fields using a single crystal. From NMR measurements using oriented powder sample, we found the \(B_{\rm int}\) values at La1 and La2 are close to -(0.31-0.37) T and -0.75 T at 4.2 K, respectively, in the antiferromagnetic A phase. This result indicates that the Ni-ordered moments are not uniform and the average magnitude of Ni-ordered moments around La2 is greater than that around La1, which is consistent with the ND results and the DFT calculations. In further measurements, in a saturated paramagnetic state under a high magnetic field of 8 T parallel to the \(c\) axis at 4.3 K, \(B_{\rm int}\) at La1 is found to increase to -0.48 T while \(B_{\rm int}\) at La2 decreases to -0.37 T. These results suggest that Ni-ordered moments are more uniformly distributed in Ni positions in the saturated paramagnetic state. The temperature dependence of \(B_{\rm int}\) at the La2 site was well reproduced by the localized moment model rather than the self-consistent renormalization (SCR) theory for weak itinerant antiferromagnets. The temperature dependence of \(1/T_{1}\) measured at zero magnetic field in the paramagnetic state above \(T_{\rm N1}\) can be reproduced by both the SCR theory and the localized moment model. These results suggest that the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\) has magnetic properties characterized by localized moment nature, as well as the nature of weak itinerant antiferromagnets as seen by the small Ni-ordered moments in La\({}_{2}\)Ni\({}_{7}\).
Figure 1: (a) Crystal structure of La\({}_{2}\)Ni\({}_{7}\) (space group \(P6_{3}/mmc\)) which consists of La\({}_{2}\)Ni\({}_{4}\) and LaNi\({}_{5}\) sub-units with two different La sites (La1 and La2 in the La\({}_{2}\)Ni\({}_{4}\) and the LaNi\({}_{5}\) sub-units, respectively). (b) Schematic view of spin distribution in the antiferromagnetic state (A phase) proposed by DFT calculations [10] and ND measurements [4]. (c) \(H-T\) phase diagrams for \(H\)\(||\)\(c\) (bottom) and \(H\)\(\perp\)\(c\) (top) reported by Ribeiro _et al._ based on the \(H\) and \(T\) dependences of magnetization [\(M(H)\) and \(M(T)\)] and resistivity [\(R(H)\) and \(R(T)\)] measurements [3].
## II Experiment
Hexagonal-shaped plate-like La\({}_{2}\)Ni\({}_{7}\) single crystals were grown from high-temperature solutions as detailed in Ref. [3]. The crystalline \(c\) axis is perpendicular to the plane. NMR and NQR measurements of \({}^{139}\)La nuclei (\(I=7/2\), \(\gamma_{\rm N}/2\pi=6.0146\) MHz/T, \(Q=0.21\) barns) were carried out using a lab-built phase-coherent spin-echo pulse spectrometer. The \({}^{139}\)La-NMR spectra were obtained by recording the integrated spin-echo intensity with changing \(H\) at a fixed frequency or changing resonance frequency at a constant magnetic field. The \({}^{139}\)La-NQR spectra in zero magnetic field were measured either by measuring the intensity of the Hahn spin echo in steps of frequency or performing the Fourier transform (FT) of the spin echo signals. For most of NMR/NQR measurements, we performed the measurements using powdered samples crushed from the single crystals, as the intensities of NMR/NQR signals for the single crystals are very weak, making the measurements difficult at higher temperatures. For NMR measurements, we used a loosely packed powder sample and found that the grains of loosely packed samples orient under magnetic fields and the orientation direction depends on temperature and magnetic field as shown below. Only for NQR measurements at the lowest temperature of \(T=1.6\) K under small magnetic fields, a single crystal was used to determine the internal field direction.
The \({}^{139}\)La spin-lattice relaxation rates (\(1/T_{1}\)) were measured using a saturation recovery method. \(1/T_{1}\) at each temperature for an NQR line (corresponding to a transition line for \(m=\pm 5/2\leftrightarrow\pm 7/2\)) was determined by fitting the nuclear magnetization (\(M\)) versus time (\(t\)) using the exponential function \(1-M(t)/M(\infty)=0.214e^{-3t/T_{1}}+0.649e^{-10t/T_{1}}+0.136e^{-21t/T_{1}}\), where \(M(t)\) and \(M(\infty)\) are the nuclear magnetization at time \(t\) after the saturation and the equilibrium nuclear magnetization at \(t\rightarrow\infty\), respectively [18].
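As an illustration of this fitting procedure, the short sketch below generates a synthetic saturation-recovery curve with the three-exponential form quoted above and refits it; the chosen \(T_{1}\), time grid, and noise level are arbitrary and only meant to show the workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T1):
    """M(t)/M(inf) for the m = +-5/2 <-> +-7/2 transition, using the
    coefficients quoted in the text."""
    return 1 - (0.214 * np.exp(-3 * t / T1)
                + 0.649 * np.exp(-10 * t / T1)
                + 0.136 * np.exp(-21 * t / T1))

rng = np.random.default_rng(0)
t = np.logspace(-4, 0, 40)                                   # delay times (s), arbitrary grid
data = recovery(t, 0.050) + 0.01 * rng.normal(size=t.size)   # synthetic data, T1 = 50 ms

(T1_fit,), _ = curve_fit(recovery, t, data, p0=[0.01])
print(f"fitted T1 = {1e3 * T1_fit:.1f} ms")                  # ~50 ms
```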
## III Results and Discussion
### \({}^{139}\)La NMR in the paramagnetic state
Figure 2(a) shows the temperature (\(T\)) dependence of frequency-swept \({}^{139}\)La-NMR spectra for the loosely packed powder sample under a magnetic field of \(H=7.4089\) T. The typical spectrum for a nucleus with spin \(I=7/2\) with Zeeman and quadrupolar interactions can be described by a nuclear spin Hamiltonian [19]
\[{\cal H} = -\gamma_{\rm N}\hbar(1+K){\bf H}\cdot{\bf I}+\frac{h\nu_{\rm Q} }{6}(3I_{Z}^{2}-I^{2}), \tag{1}\]
where \(H\) is external field, \(\hbar\) is Planck's constant (\(h\)) divided by \(2\pi\), and \(K\) represents the NMR shift. The first and second terms represent Zeeman and quadrupolar interactions, respectively [20]. The nuclear quadrupole frequency for \(I=7/2\) nuclei is given by \(\nu_{\rm Q}=eQV_{\rm ZZ}/14h\), where \(Q\) is the nuclear quadrupole moment and \(V_{\rm ZZ}\) is the maximum electric field gradient (EFG) at the nuclear site. In first-order perturbation theory, when the Zeeman interaction is much greater than the quadrupole interaction, one has the nuclear energy levels for \(I=7/2\)
\[E_{m}=-\gamma_{\rm N}\hbar(1+K)Hm-\frac{h\nu_{\rm Q}}{12}(3\cos^{2}\theta-1)( 3m^{2}-\frac{63}{4}), \tag{2}\]
where \(\theta\) is the angle between the external magnetic field and the principal axis of the EFG. Thus \({}^{139}\)La NMR spectrum is composed of a central transition line and three pairs of satellite lines shifted from the central transition line by \(\pm\frac{1}{2}\nu_{\rm Q}(3\cos^{2}\theta-1)\) (for the transitions of \(m\) = 3/2 \(\leftrightarrow\) 1/2 and -3/2 \(\leftrightarrow\) -1/2), \(\pm\nu_{\rm Q}(3\cos^{2}\theta-1)\) (for \(m\) = 5/2 \(\leftrightarrow\) 3/2 and -5/2 \(\leftrightarrow\) -3/2), and \(\pm\frac{3}{2}\nu_{\rm Q}(3\cos^{2}\theta-1)\) (for \(m\) = 7/2 \(\leftrightarrow\) 5/2 and -7/2 \(\leftrightarrow\) -5/2). It is noted that the spacing between the lines of the spectrum for \(\theta=0\) is twice that for \(\theta=90^{\circ}\), producing the spectrum for \(\theta\)
Figure 2: (a) Frequency-swept \({}^{139}\)La-NMR spectra under \(H=7.4089\) T at various temperatures. The green and blue lines are the position of \({}^{139}\)La-NMR spectra calculated with a temperature independent \(\nu_{\rm Q}=0.7\) MHz for La1 and \(\nu_{\rm Q}=3.6\) MHz for La2, respectively. The vertical dashed red line corresponds to the Larmor magnetic field position. (b) Temperature dependence of \(M/H\) measured at \(H=7\) T parallel to the \(c\) axis [\((M/H)_{c}\) in black] and parallel to the \(ab\) plane [\((M/H)_{ab}\) in red]. The inset shows the temperature dependence of \(\Delta M/H\) = \((M/H)_{ab}\) - \((M/H)_{c}\).
= 0 almost two times wider than for \(\theta\) = 90\({}^{\circ}\).
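To make the line positions concrete, the sketch below evaluates the first-order transition frequencies that follow from Eq. (2) for an \(I=7/2\) nucleus; the shift \(K\) is set to zero and \(\nu_{\rm Q}=3.6\) MHz is used purely for illustration.

```python
import numpy as np

gamma_n = 6.0146   # 139La gyromagnetic ratio, MHz/T
H = 7.4089         # T
K = 0.0            # NMR shift, set to zero here
nu_Q = 3.6         # MHz (value found for La2)

def line_positions(theta_deg):
    """Central line and three pairs of satellites (transitions m <-> m-1), with
    the first-order satellite shift (m - 1/2) * nu_Q * (3cos^2(theta) - 1)/2."""
    f0 = gamma_n * (1 + K) * H
    ang = 0.5 * (3 * np.cos(np.radians(theta_deg)) ** 2 - 1)
    return sorted(f0 + (m - 0.5) * nu_Q * ang
                  for m in (3.5, 2.5, 1.5, 0.5, -0.5, -1.5, -2.5))

print(line_positions(90))  # seven lines spaced by nu_Q/2
print(line_positions(0))   # spacing doubled, as noted above
```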
As shown in Fig. 1(a), La\({}_{2}\)Ni\({}_{7}\) has two inequivalent La sites: La1 and La2 belonging to the La\({}_{2}\)Ni\({}_{4}\) and the LaNi\({}_{5}\) sub-units, respectively. The observed spectra were well reproduced by simulated spectra [green and blue lines in Fig. 2(a)] from the simple nuclear spin Hamiltonian with \(\nu_{\rm Q}\) = 0.7 MHz for green lines and \(\nu_{\rm Q}\) = 3.6 MHz for blue lines. The value of \(\nu_{\rm Q}\)\(\sim\) 3.6 MHz for one of the La sites was nearly temperature independent which was confirmed by the observation of NQR spectra as discussed below. In order to assign the La sites, we have calculated the electric field gradient at each La site by a full potential linear augmented plane wave (FLAPW) method [21] with a generalized gradient approximation [22] using the lattice parameters described in Introduction. The \(\nu_{\rm Q}\) values were calculated to be 0.5 MHz and 4.1 MHz for La1 and La2, respectively, where \(V_{ZZ}\) is parallel to the \(c\) axis for both sites. The relatively large difference in the \(\nu_{\rm Q}\) values for those two La sites is consistent with the experimental results. Therefore we assigned the signals with \(\nu_{\rm Q}\)\(\sim\) 3.6 MHz to La2 and those with \(\nu_{\rm Q}\)\(\sim\) 0.7 MHz to La1. As shown at the bottom in Fig. 2(a), the observed spectra above \(T\) = 100 K are reproduced with \(\theta\) = 90\({}^{\circ}\). Since \(V_{ZZ}\) is parallel to the \(c\) axis, \(\theta\) = 90\({}^{\circ}\) indicates that the \(c\) axis of the small grains in the loosely packed sample is aligned perpendicular to the external magnetic field. This indicates that the magnetization parallel to the \(ab\) plane (\(M_{ab}\)) is greater than \(M\) parallel to the \(c\) axis (\(M_{c}\)) under \(H\)\(\sim\) 7.4 T above 100 K.
To check this, we measured the temperature dependences of \((M/H)_{ab}\) and \((M/H)_{c}\) under a magnetic field of \(H\) = 7 T whose results are shown in Fig. 2(b) [23]. Although the \((M/H)_{ab}\) and \((M/H)_{c}\) are nearly the same in the paramagnetic state in the scale of the figure, one can see a small difference between \((M/H)_{ab}\) and \((M/H)_{c}\) as shown in the inset of Fig. 2(b) where \(\Delta M/H=(M/H)_{ab}-(M/H)_{c}\) is plotted as a function of temperature. \(\Delta M/H\) changes the sign around 90 K where \((M/H)_{ab}\) is greater than \((M/H)_{c}\) above 90 K while \((M/H)_{ab}\) is less than \((M/H)_{c}\) below that temperature. The positive value of \(\Delta M/H\) above 100 K is consistent with the NMR results. Therefore our NMR data show that the \(\Delta M/H\) is enough to orient the grains of the powder samples along the magnetic field direction even though \(\Delta M/H\) is relatively small. Since the rotation of the grains of the powder samples is due to a torque acting on each grain, and the condition for getting an oriented powder sample depends on the size of each grain, \(\Delta M/H\), and so on, we do not discuss those quantitatively here.
Now back to the NMR spectra. With decreasing temperatures, each line becomes broader and the spacings of lines for each NMR spectrum below \(\sim\)100 K become twice those in the spectra above that temperature as typically shown in the upper two panels of Fig. 2(a). Those spectra measured at 85 K and 70 K are well reproduced with \(\theta\) = 0 with the same \(\nu_{\rm Q}\) values for the two La sites (the \(T\) independent \(\nu_{\rm Q}\) is confirmed by NQR measurements shown below). This means that the \(c\) axis of the small grains of the powder sample is rotated by 90 degrees and now is oriented along the external magnetic field in that temperature range where \(M_{c}\) must be greater than \(M_{ab}\). Again this is consistent with the negative values of \(\Delta M/H\) below 90 K where \((M/H)_{c}\) is greater than \((M/H)_{ab}\) as seen in the inset of Fig. 2(b).
Figure 3 shows the temperature dependence of NMR shifts determined by the observed spectra. The filled and open symbols correspond to La1 and La2, respectively. The NMR shift data above 100 K correspond to the case for \(H\)\(\parallel\)\(ab\) plane (\(K_{ab}\)) while \(H\)\(\parallel\)\(c\) axis (\(K_{c}\)) below 100 K at \(H\)\(\sim\) 7.4 T. With decreasing temperature, the values of \(K\) for La1 and La2 decrease similarly. It is noted that the change in the slopes around 100 K is mainly due to the different values of the hyperfine coupling constants for the different magnetic field directions as described below. To see the effect of \(H\) on NMR shifts (or the magnetization), we also measured NMR spectra at a different magnetic field of \(H\)\(\sim\) 4 T whose results are shown by the black symbols in Fig. 3. As can be seen, the NMR shifts under \(\sim\) 4 T are greater than those at 7.4 T in magnitude. These observations are consistent with the \(T\) dependence of \(M/H\) under the different magnetic field where the \(M/H\) is suppressed with increasing \(H\) in the \(T\) range of \(T\) = 50 - 100 K [3].
The NMR shift consists of temperature dependent spin shift \(K_{\rm s}(T)\) and \(T\) independent orbital shift \(K_{0}\); \(K(T)\) = \(K_{\rm s}(T)\) + \(K_{0}\) where \(K_{\rm s}(T)\) is proportional to the magnetization \(M(T)\) divided by \(H\), \(M(T)/H\), via hyperfine
Figure 3: Temperature dependence of the \({}^{139}\)La-NMR shifts: \({}^{139}\)\(K_{ab}\) (triangles) and \({}^{139}\)\(K_{c}\) (squares) at \(H\)\(\sim\) 7.4 T (red symbols below 100 K; green symbols above 100 K) and \(\sim\) 4 T (black symbols). Filled and open symbols correspond to La1 and La2, respectively. The inset shows the \(K\) vs. \(M/H\) plots for the corresponding \(H\). Here we used \(M/H\) measured at 7 T for the \(K\)-\(M/H\) plot since no \(M/H\) data at 7.4 T are available [23]. The \(M/H\) data at 4 T are from Ref. [3] The gray and green bold lines are the guides for the eye representing the slopes.
coupling constant \(A\): \(K_{\rm s}(T)=\frac{A}{\mu_{\rm B}N_{\rm A}}\frac{M(T)}{H}\). Here \(N_{\rm A}\) is Avogadro's number. Utilizing the \(T\) dependence of \(M/H\) in corresponding magnetic fields for the NMR measurements, we plot all \(K_{\rm c}\) data as a function of \(M/H\) in the inset of Fig. 3. Here, for \(K_{\rm c}\) at \(H\sim\) 7.4 T, we used \(M/H\) data measured at 7 T since no \(M/H\) data at 7.4 T are available [23]. Within our experiment uncertainty, all data below 100 K are roughly on lines as shown by the bold gray line, although the \(K_{\rm c}\) data at 7.4 T seem to deviate slightly from the line. Since the \(M/H\) is sensitive to \(H\) at low temperatures [3], the deviation could be due to the different magnetic fields. From the slope of the gray line, we estimate the hyperfine coupling constants to be \(A_{c}\) = -4.5T/\(\mu_{\rm B}\) for \(H\)\(||\)\(c\) for both La sites without any clear difference between them within our experiment uncertainty. It is noted that the coupling constant estimated here is considered to be a total hyperfine coupling constant produced by the nearest neighboring (NN) Ni ions, as both La1 and La2 are surrounded by six NN Ni ions [24].
We also plotted the \(K_{\rm ab}\) data (green symbols) as a function of \((M/H)_{ab}\) at 7 T in the inset of Fig. 3. Since \(M/H\) is less sensitive to \(H\) at high temperatures, it will be reasonable to estimate the hyperfine coupling constant along the \(ab\) plane (\(A_{ab}\)) even though the magnetic fields for the \(K_{\rm ab}\) and \(M/H\) measurements are slightly different. From the slope of the bold green line, we obtained \(A_{ab}\) = -3.0 T/\(\mu_{\rm B}\) for \(H\)\(||\)\(ab\) for both La sites.
### \({}^{139}\)La NMR in the magnetically ordered states
Figures 4(a)-(c) show frequency-swept \({}^{139}\)La-NMR spectra in magnetically ordered states at \(T\) = 4.3 K for the loosely packed powder sample under magnetic fields of (a) 2 T, (b) 3.5 T, and (c) 8 T. As observed in the paramagnetic state, the spectra are the sum of the two sets of La NMR lines originating from La1 and La2 with \(\nu_{\rm Q}\) = 0.7 MHz and 3.6 MHz, respectively. Since the observed spectra are relatively sharp and not so-called powder patterns, the small grains of the powder sample are also found to orient due to \(H\), similar to the case of the NMR spectra in the paramagnetic state discussed in III. A. At \(H\) = 2 T, the observed spectrum was well reproduced by a sum of the two spectra from La1 (green hatched area) and La2 (light blue hatched areas) with \(\theta\) = 90\({}^{\circ}\) where we had to add the internal magnetic induction (\(B_{\rm int}\)) at the La sites produced by Ni-ordered moments in the AFM state. Here we used \(B_{\rm int}\) = -0.37 T (for La1) and -0.76 T (for La2) whose directions are parallel to the \(c\) axis as directly determined by the NQR spectra in the next subsection III. C. \(\theta\) = 90\({}^{\circ}\) indicates that the \(c\) axis of the small grains is perpendicular to the \(H\) due to the magnetization \(M_{ab}\) being greater than \(M_{c}\) in the AFM state (A phase) where the Ni ordered moments are perpendicular to \(H\). This is consistent with the \(M\) data (\(T\) = 5 K) [3] shown in the inset of Fig. 4(a) where \(M_{ab}\) is greater than \(M_{c}\) for \(H\) less than \(\sim\) 6.2 T. A similar spectrum with \(\theta\) = 90\({}^{\circ}\) was observed at \(H\) = 3.5 T with a similar set of the parameters of \(B_{\rm int}\) = -0.31 T and -0.75 T for La1 and La2, respectively, with the same \(\nu_{\rm Q}\) values. On the other hand, when the system sets in a saturated paramagnetic state under \(H\) = 8 T, the spectra become wider due to the change in \(\theta\) from 90\({}^{\circ}\) to 0\({}^{\circ}\). This corresponds to the \(c\) axis of the small grains of the powder sample oriented along \(H\), which is again consistent with the \(M\) data where \(M_{c}\) is now greater than \(M_{ab}\) above \(H\)\(\sim\) 6.2 T. From the fit of the spectra with keeping the same \(\nu_{\rm Q}\) values for each La site, we found that \(B_{\rm int}\) is parallel to the \(c\) as well and those values are -0.48 T and -0.37 T for La1 and La2, respectively.
Utilizing the hyperfine coupling constant \(A_{c}\) = \(-\)4.5 T/\(\mu_{\rm B}\), \(B_{\rm int}\) = -(0.31-0.37) T for La1,
Figure 4: Frequency-swept \({}^{139}\)La-NMR spectra at \(T\) = 4.3 K on the loosely packed powder sample under magnetic fields of (a) 2 T, (b) 3.5 T, and (c) 8 T. The green and blue curves are simulated spectra with \(\nu_{\rm Q}\) = 0.7 MHz for La1 and \(\nu_{\rm Q}\) = 3.6 MHz for La2, respectively, and different internal magnetic inductions \(B_{\rm int}\) discussed in the text. The magenta curves are the sum of the two simulated spectra. The down arrows (\(f_{0}\)) correspond to the Larmor frequencies for each magnetic field. The peak around \(f_{0}\) clearly observed in the spectrum at \(H\) = 8 T probably comes from impurities. The inset of (a) shows the external magnetic field dependence of \(M\) for \(H\)\(||\)\(ab\) (\(M_{ab}\) in red) and \(H\)\(||\)\(c\) (\(M_{c}\) in black) measured at 5 K from Ref. [3].
and -(0.76-0.75) T for La2, we estimated the average values of Ni ordered moments around La1 and La2 to be \(\sim\) 0.07-0.08 \(\mu_{\rm B}\) and \(\sim\) 0.17 \(\mu_{\rm B}\), respectively, in the AFM state (A phase). The average value of Ni-ordered moments around La1 is nearly half of that around La2, suggesting a nonuniform distribution of the Ni-ordered moments in the AFM state. This is qualitatively consistent with the AFM state predicted by the theory and the ND measurement, where the maximum amplitude of the Ni-ordered moments is expected at the middle of two LaNi\({}_{5}\) sub-units close to La2 while the minimum amplitude of the Ni-ordered moments is proposed at Ni ions in the La\({}_{2}\)Ni\({}_{4}\) sub-unit close to La1.
We also estimate the average values of Ni-ordered moments to be \(\sim\)0.11 \(\mu_{\rm B}\) and \(\sim\)0.08 \(\mu_{\rm B}\) around La1 and La2, respectively, in the saturated paramagnetic state. According to the theoretical calculations, in the ferromagnetic state, the ferromagnetic units are stacked ferromagnetically while keeping the nonuniform distribution of the moments as in the AFM state [10]. In this case, we expect almost no change in the average values of Ni-ordered moments around both La sites in the saturated paramagnetic state as compared to the AFM state. This is inconsistent with the NMR results showing that the average values of Ni-ordered moments are more uniformly distributed in Ni positions than the case of AFM state.
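The arithmetic behind these moment estimates is simply \(B_{\rm int}/A_{c}\). A minimal check using the numbers quoted above (midpoints taken where a small range is given):

```python
A_c = -4.5  # hyperfine coupling for H || c, in T/mu_B (value estimated above)

values = {
    "La1, AFM A phase":             -0.34,   # midpoint of -(0.31-0.37) T
    "La2, AFM A phase":             -0.755,  # midpoint of -(0.75-0.76) T
    "La1, saturated PM state, 8 T": -0.48,
    "La2, saturated PM state, 8 T": -0.37,
}
for label, B_int in values.items():
    print(f"{label}: ~{B_int / A_c:.2f} mu_B per Ni")
```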
### \({}^{139}\)La NQR spectrum
At zero magnetic field, the electric quadrupole Hamiltonian (the second term of Eq. 1) for the case of \(I=\)7/2 produces three equally spaced NQR lines at resonance frequencies of \(\nu_{\rm Q}\), 2\(\nu_{\rm Q}\), and 3\(\nu_{\rm Q}\) corresponding to the transitions of \(m=\pm 1/2\leftrightarrow\pm 3/2\), \(\pm 3/2\leftrightarrow\pm 5/2\) and \(\pm 7/2\leftrightarrow\pm 5/2\), respectively. From NMR measurements, the values of \(\nu_{\rm Q}\) are estimated to be \(\sim\) 3.6 MHz for La2, so one expects three NQR lines at 3.6 MHz, 7.2 MHz, and 10.8 MHz, named \(\nu_{1}\), \(\nu_{2}\), and \(\nu_{3}\) lines, respectively. In fact, we observed such NQR lines in the paramagnetic state as shown in Fig. 5(a) where the \(\nu_{1}\) line was not measured due to the limitation of the resonance frequency of our spectrometer (we also could not measure the NQR lines for the La1 site due to the small value of \(\nu_{\rm Q}\sim 0.7\) MHz). The observed NQR spectra, that is, the peak frequencies for the \(\nu_{2}\) and \(\nu_{3}\) lines for the La2 site are well reproduced with \(\nu_{\rm Q}=3.63\) MHz as shown by the calculated spectra in magenta, and those peak frequencies are independent of temperature in the paramagnetic state [see Fig. 5(b)], indicating no change of \(\nu_{\rm Q}\) with temperature.
In the AFM state, we found that each line splits into two lines, as shown in Fig. 5(a). This is due to the appearance of \(B_{\rm int}\) at La2 produced by Ni-ordered moments in the AFM state where \(B_{\rm int}\) lifts the degenerate nuclear spin levels of \(\pm m\) states. To estimate \(B_{\rm int}\), we calculated NQR spectra using a Hamiltonian where a Zeeman term due to \(B_{\rm int}\), similar to the first term of Eq. 1, was added to the electrical quadrupole Hamiltonian. We found that the observed spectra were well reproduced by taking \(B_{\rm int}\) parallel to the \(c\) axis, as shown by the calculated spectra in magenta in the magnetically ordered state. The value of \(|B_{\rm int}|\sim\) 0.79 T at 4.3 K is close to the values of \(-\)(0.76-0.75) T estimated from the NMR spectrum in the A phase. The temperature dependence of \(B_{\rm int}\) at the La2 site is shown in Fig. 5(c), which corresponds to the temperature dependence of sublattice magnetization \(M_{\rm sub}\). Here we also plotted the expected temperature dependence of \(M_{\rm sub}\) (\(\propto\)\(B_{\rm int}\)) based on the ND data [4] in the commensurate antiferromagnetic A phase (the \(c\) components of the ordered moments by green open triangles), the incommensurate antiferromagnetic B phase (the \(c\) components by blue open triangles), and the incommensurate antiferromagnetic B and C phases (the \(ab\) plane components by magenta open triangles). The ND
Figure 5: (a) \({}^{139}\)La NQR spectra for La2 at zero magnetic field at various temperatures in both the paramagnetic and magnetic ordered states. The magenta curves are simulated spectra. (b) Temperature dependence of the resonance frequency for each peak. The symbols in black, red and blue represent for \(\nu_{3}\), \(\nu_{2}\) and \(\nu_{1}\) lines, respectively. (c) Temperature dependence of the magnitude of the internal magnetic induction (\(|B_{\rm int}|\)) at La2. The black circles are estimated \(B_{\rm int}\) from the \(\nu_{3}\) line and the red circles are from the \(\nu_{2}\) line. The expected temperature dependences of \(B_{\rm int}\) based on the ND data [4] are also plotted in (c) by the green open triangles (the \(c\) components of the ordered moments in the commensurate rate antiferromagnetic A phase), the blue open triangles (the \(c\) components of the ordered moments in the incommensurate Bi phase), and the magenta open triangles (the \(ab\) plane components of the ordered moments in the incommensurate antiferromagnetic B and C phases). The ND data sets shown by the green, blue, and magenta symbols were normalized to scale the NMR data at 40 K, 45 K, and 50 K, respectively. The black and red curves show the typical temperature dependence of \(B_{\rm int}\) based on the SCR theory and the localized moments picture, respectively (see text).
data sets shown by the green, blue, and magenta symbols were normalized to scale the NMR data at 40 K, 45 K, and 50 K, respectively. It is noted that the phase transitions at \(T_{\rm N2}\) and \(T_{\rm N1}\) are of second order while the first order phase transition was observed at \(T_{\rm N3}\)[4]. It is also noted that the temperature dependences of \(M_{\rm sub}\) from the ND measurements in the magnetically ordered states are well overlapped with that of \(B_{\rm int}\) determined by the NMR measurements although the overlapped temperature range (\(T=35-50\) K) is not wide.
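The split line positions entering this analysis can be checked directly from the level scheme: with \(B_{\rm int}\) along the EFG principal axis both terms are diagonal in the \(I_{z}\) basis, so no numerical diagonalization is needed. The sketch below uses \(\nu_{\rm Q}=3.63\) MHz and \(|B_{\rm int}|=0.79\) T from the text; the sign and overall conventions are illustrative only.

```python
gamma_n = 6.0146   # MHz/T
nu_Q = 3.63        # MHz, La2
B_int = 0.79       # T, internal induction along the c axis

def level(m):
    """Level energy divided by h (MHz) for I = 7/2: Zeeman term from B_int
    plus the axially symmetric quadrupole term of Eq. (1)."""
    return -gamma_n * B_int * m + (nu_Q / 6) * (3 * m**2 - 63 / 4)

def line(m1, m2):
    return abs(level(m1) - level(m2))

print(line(3.5, 2.5), line(-3.5, -2.5))   # nu_3 splits into ~6.1 and ~15.6 MHz
print(line(2.5, 1.5), line(-2.5, -1.5))   # nu_2 splits into ~2.5 and ~12.0 MHz
```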
According to the self-consistent renormalization (SCR) theory, for the case of weak itinerant antiferromagnets the temperature dependence of \(M_{\rm sub}\) is given by [25]
\[M_{\rm sub}(T)\propto\sqrt{1-(T/T_{\rm N})^{3/2}} \tag{3}\]
The black curve is the calculated temperature dependence of \(|B_{\rm int}|\) (=\(B_{\rm int,0}\sqrt{1-(T/T_{\rm N})^{3/2}}\)) based on the above formula with \(B_{\rm int,0}\) = 0.79 T and \(T_{\rm N}\) = 62 K, which does not reproduce the experimental data well. Since the phase transition at \(T_{\rm N3}\) is of first order, we also tried to fit only for the data above \(T_{\rm N3}\) using the formula. However, we could not fit the data well with a reasonable value of \(B_{\rm int,0}\). In addition, it is clear that the observed temperature dependence of \(B_{\rm int}\) cannot be explained by the formula using the different values of \(T_{\rm N}\). Thus we conclude that the temperature dependence of the \(M_{\rm sub}\) in La\({}_{2}\)Ni\({}_{7}\) cannot be explained by the model of weak itinerant antiferromagnets. Instead, the temperature dependence of \(B_{\rm int}\) was found to be reasonably reproduced by a Brillouin function which was calculated based on the Weiss molecular field model shown by the red curve where we tentatively assumed \(S\) = 1 and used \(T_{\rm N}\) = 62 K. Here we normalized the calculated temperature dependence to scale with \(|B_{\rm int}|\) at the lowest temperature. These results may indicate that the magnetic state of the Ni ions is more likely described by the local-moment picture, contrary to the anticipation from the small Ni-ordered moments in the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\). Interestingly, similar discrepancy between \(M_{\rm sub}\) and the SCR theory has been reported in weak itinerant antiferromagnets such as V\({}_{5}\)Se\({}_{8}\)[26], V\({}_{5}\)S\({}_{8}\)[27], CrB\({}_{2}\)[28], and YMn\({}_{12}\)[29].
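A minimal numerical sketch of the two model curves compared here is given below, with \(B_{{\rm int},0}=0.79\) T, \(T_{\rm N}=62\) K, and \(S=1\) as quoted above; the mean-field curve is obtained by solving the standard Weiss self-consistency equation by fixed-point iteration, and the exact normalization used for the red curve in Fig. 5(c) may differ.

```python
import numpy as np

B0, TN, S = 0.79, 62.0, 1.0

def brillouin(S, x):
    """Brillouin function B_S(x)."""
    a = (2 * S + 1) / (2 * S)
    return a / np.tanh(a * x) - 1 / ((2 * S) * np.tanh(x / (2 * S)))

def sigma_mean_field(T, n_iter=500):
    """Reduced sublattice magnetization from sigma = B_S(3S/(S+1) * sigma * TN/T),
    solved by fixed-point iteration (valid for T < TN)."""
    sigma = 1.0
    for _ in range(n_iter):
        sigma = brillouin(S, 3 * S / (S + 1) * sigma * TN / T)
    return sigma

for T in (5, 20, 40, 55, 60):
    scr = B0 * np.sqrt(max(1.0 - (T / TN) ** 1.5, 0.0))
    print(f"T = {T:2.0f} K: SCR {scr:.3f} T, mean field {B0 * sigma_mean_field(T):.3f} T")
```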
The direction of \(B_{\rm int}\) in the AFM state is also directly confirmed by \({}^{139}\)La NQR spectrum measurements on the single crystal in nonzero \(H\). Figure 6 shows the \(H\) dependence of NQR spectrum at 1.6 K under small magnetic fields applied parallel to the \(c\) axis using a single crystal. With the application of small \(H\) less than 0.2 T, we found that one of the split \(\nu_{2}\) lines that is shifted to higher frequency side (around 12 MHz) further splits into two lines as shown in Fig. 6. Similar splitting is also observed for the \(\nu_{3}\) line around 15.6 MHz under small magnetic fields. These results clearly indicate the existence of two La2 sites with \(B_{\rm int}\) parallel or antiparallel to the \(c\) axis. Since the effective field at the La site is given by the vector sum of \({\bf B}_{\rm int}\) and \({\bf H}\), i.e., \(|{\bf B}_{\rm eff}|\) = \(|{\bf B}_{\rm int}\) + \({\bf H}|\), the resonance frequency is expressed for \(H\)\(||\)\(B_{\rm int}\) as \(f=\frac{\gamma_{\rm N}}{2\pi}|B_{\rm eff}|\) = \(\frac{\gamma_{\rm N}}{2\pi}|B_{\rm int}\) \(\pm\)\(H|\). As shown in the inset, the \(H\) dependence of the split peak frequencies is well reproduced by the relation using the \(\frac{\gamma_{\rm N}}{2\pi}\) = 6.0146 MHz/T for La nucleus, evidencing that the \(B_{\rm int}\) is parallel or antiparallel to the \(c\) axis without any obvious deviation. This result also clearly shows the appearance of two sublattices, a direct confirmation of the antiferromagnetic state.
To see how the NQR spectra change around magnetic phase transition temperatures for the investigation of the AFM states of the B and C phases, we measured the NQR spectra around 11 MHz at zero magnetic field for the three phases. Figure 7 shows the measured NQR spectra for the powder samples in the three magnetic phases together with the one in the PM state. The line around 10.9 MHz in the PM state corresponds to the \(\nu_{3}\) line. When the sample is in the AFM state at \(T\) = 58 K (C phase), we found that the position of the NQR line does not change, although the line becomes slightly broad. This could be explained by (1) very small \(B_{\rm int}\) in the phase and/or (2) \(B_{\rm int}\) points perpendicular to the \(c\) axis. According to the ND measurements, the C phase is proposed as an incommensurate AFM state with the Ni-ordered moment perpendicular to the \(c\) axis. Our NQR spectrum in the C phase would be not inconsistent with the ND data. With the further decrease in temperature, the signal around 10.9 MHz disappeared when the system entered the B phase (for example, see the NQR line at \(T\) = 50 K). Instead, we observed the slightly broader NQR line centered at 10.5 MHz, lower than the peak frequency of 10.9 MHz in the paramagnetic state and the C phase. We assigned the line to be one of the split \(\nu_{2}\) lines due to \(B_{\rm int}\) along the \(c\) axis because the position of the
Figure 6: \({}^{139}\)La NQR spectra for La2 under small magnetic fields at 1.6 K. The inset shows the external magnetic field dependence of peak frequencies of the split lines for the \(\nu_{2}\) and \(\nu_{3}\) lines. The solid lines are the expected \(H\) dependence of peak frequencies when \(B_{\rm int}\) is parallel or antiparallel to the \(c\) axis.
line smoothly connects to that of the line in the A phase [see Fig. 5(c)]. This suggests that the dominant component of the \(B_{\rm int}\) at La2 is parallel or antiparallel to the \(c\) axis in the B phase, similar to the case in the A phase. According to the ND data, the incommensurate propagation vector is along the \(c\) axis in the B phase, which indicates that the Ni ordered moments are no longer ferromagnetic block [4]. In this case, one expects the large distribution of \(B_{\rm int}\) at the La sites. Therefore the NMR data would be inconsistent with the ND results. The slightly broader lines are due to small distributions of \(B_{\rm int}\). This suggests slight distributions of the Ni-ordered moments in the B phase, but probably not due to the incommensurate AFM state reported from the ND data. In contrast, as we discussed in the previous subsection, the commensurate AFM state (Phase A) reported from the ND measurements is consistent with the NMR results.
### \({}^{139}\)La spin-lattice relaxation time \(T_{1}\)
Figure 8 shows the \(T\) dependence of \(1/T_{1}T\) at the \(\nu_{3}\) line of the \({}^{139}\)La NQR spectrum under zero magnetic field. In the paramagnetic state, \(1/T_{1}T\) is nearly constant above 150 K, starts to increase below that temperature with decreasing \(T\), and shows a peak around \(T_{\rm N1}\) due to the critical slowing down of spin fluctuations, a characteristic of a second-order phase transition. In the AFM state (A phase), the \(1/T_{1}T=\) constant behavior expected for metals is observed, consistent with the itinerant nature of La\({}_{2}\)Ni\({}_{7}\).
Since the compound has small ordered moments, we first fit the data based on the SCR theory. For the case of weak itinerant antiferromagnets, the SCR theory predicts the temperature dependence of \(1/T_{1}T\) in paramagnetic states given by [30; 31]
\[1/T_{1}T=\frac{a}{\sqrt{T-T_{\rm N}}}+b \tag{4}\]
where the first term originates from AFM fluctuations around a wave vector \(q=Q\) (\(Q\) being antiferromagnetic wave vector) and the second term is due to Korringa-type relaxation, a characteristic feature of metallic materials. The red curve in the paramagnetic state shows the calculated results based on the SCR theory with a set of parameters of \(a=8\) (s\({}^{-1}\)K\({}^{-0.5}\)), \(b=1.6\) (s\({}^{-1}\)K\({}^{-1}\)) and \(T_{\rm N}=62\) K, which reproduces the experimental data in contrast to the case of the \(T\) dependence of \(B_{\rm int}\).
On the other hand, as the \(T\) dependence of \(B_{\rm int}\) suggests the localized moment nature rather than the weak itinerant antiferromagnets, we also analyzed the \(T_{1}\) data with the localized moment model described by
\[1/T_{1}T=\frac{a^{\prime}}{T-\Theta}+b^{\prime} \tag{5}\]
where the first term originates from the paramagnetic uniform spin fluctuations of localized spins described by
Figure 7: \({}^{139}\)La NQR spectra for La2 at zero magnetic field at various temperatures in both the paramagnetic and the magnetically ordered states. The spectra shown by black solid and red open circles are the \(\nu_{3}\) and \(\nu_{2}\) lines, respectively.
Figure 8: Temperature dependence of \(1/T_{1}T\) measured at the \(3\nu_{\rm Q}\) line in the paramagnetic and magnetic states. The red and green curves are the calculated results based on the SCR theory and the localized moments picture, respectively (see text).
Curie-Weiss behavior of the magnetic susceptibility, i.e., \(1/T_{1}\propto\chi T\), and the second term is Korringa-type relaxation, a characteristic feature of metallic materials as in Eq. 4. With \(a^{\prime}=20\) (s\({}^{-1}\)), \(b^{\prime}=2.2\) (s\({}^{-1}\)K\({}^{-1}\)), and \(\Theta=62\) K, the experimental data are also reproduced reasonably well as shown by the green curve in Fig. 8. Therefore, both the SCR theory and the localized moment model can explain the observed \(T\) dependence of \(1/T_{1}\), although the two pictures assume a different character of the spin fluctuations. Thus, given the small Ni-ordered moments evidenced by the magnetization and the present NMR spectrum measurements, our experimental results suggest that La\({}_{2}\)Ni\({}_{7}\) has the characteristic natures of both weak itinerant and localized moment antiferromagnets.
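For reference, the two forms of Eqs. (4) and (5) with the parameter sets quoted above can be evaluated side by side in the paramagnetic state (\(T>T_{\rm N}\)); this is only an illustrative sketch of the fitted curves:

```python
import numpy as np

def inv_T1T_scr(T, a=8.0, b=1.6, T_N=62.0):
    # SCR prediction for a weak itinerant antiferromagnet, Eq. (4)
    return a / np.sqrt(T - T_N) + b

def inv_T1T_local(T, a_p=20.0, b_p=2.2, theta=62.0):
    # localized-moment (Curie-Weiss) form, Eq. (5)
    return a_p / (T - theta) + b_p

for T in (70, 80, 100, 150, 250):
    print(f"T = {T:3d} K: SCR {inv_T1T_scr(T):5.2f}, local {inv_T1T_local(T):5.2f} (s^-1 K^-1)")
```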
## IV Summary
In summary, we have carried out the microscopic characterization of the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\) by \({}^{139}\)La NMR and NQR measurements. Our results suggest the presence of the commensurate antiferromagnetic A phase with Ni-ordered moments aligned along the crystalline \(c\) axis. Furthermore, a non-uniform distribution of the Ni-ordered moments was found from the comparative analysis of the inequivalent La1 and La2 sites. In the saturated paramagnetic state under a magnetic field of 8 T at 4.3 K, the Ni-ordered moments were found to become more uniform. The temperature dependence of the sublattice magnetization, measured via the internal magnetic induction \(B_{\rm int}\) at the La2 site, is reproduced by the local-moment model. The temperature dependence of the \({}^{139}\)La spin-lattice relaxation time (\(T_{1}\)) suggests that the paramagnetic to antiferromagnetic phase transition at \(T_{\rm N1}\) is second order in nature, and its temperature dependence in the paramagnetic state can be well reproduced by both the local-moment model and the self-consistent renormalization model for weak itinerant antiferromagnets. Our NMR results revealed a peculiar magnetism in the itinerant antiferromagnet La\({}_{2}\)Ni\({}_{7}\) that exhibits the properties of both a localized and an itinerant nature.
## V Acknowledgments
The authors would like to thank J. Wilde for providing the neutron diffraction data and also for fruitful discussions. The research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358.
2302.00908 | GANalyzer: Analysis and Manipulation of GANs Latent Space for Controllable Face Synthesis | Ali Pourramezan Fard, Mohammad H. Mahoor, Sarah Ariel Lamer, Timothy Sweeny | 2023-02-02T06:55:40Z | http://arxiv.org/abs/2302.00908v1 |
# GANalyzer: Analysis and Manipulation of GANs Latent Space for Controllable Face Synthesis
###### Abstract
Generative Adversarial Networks (GANs) are capable of synthesizing high-quality facial images. Despite their success, GANs do not provide any information about the relationship between the input vectors and the generated images. Currently, facial GANs are trained on imbalanced datasets, which generate less diverse images. For example, more than 77% of 100K images that we randomly synthesized using the StyleGAN3 are classified as Happy, and only around 3% are Angry. The problem even becomes worse when a mixture of facial attributes is desired: less than 1% of the generated samples are Angry Woman, and only around 2% are Happy Black. To address these problems, this paper proposes a framework, called GANalyzer, for the analysis, and manipulation of the latent space of well-trained GANs. GANalyzer consists of a set of transformation functions designed to manipulate latent vectors for a specific facial attribute such as facial Expression, Age, Gender, and Race. We analyze facial attribute entanglement in the latent space of GANs and apply the proposed transformation for editing the disentangled facial attributes. Our experimental results demonstrate the strength of GANalyzer in editing facial attributes and generating any desired faces. We also create and release a balanced photo-realistic human face dataset. Our code is publicly available here.
Generative adversarial network, face editing, latent space interpretation, facial attribute editing, transformation
## 1 Introduction
Recently, we have witnessed great success and advancement in the quality of the images being synthesized by Generative Adversarial Networks (GANs) [1]. GANs learn a mapping between a random distribution and a distribution of real data, using adversarial training. As a result, GANs can generate photo-realistic images from randomly sampled vectors from latent space.
Despite the ability of GANs in the synthesis of high-fidelity images, GANs can not provide any information about the relation between the facial attributes and features of the synthesized images, and each element of the corresponding latent vector [2]. Hence, we are not able to utilize such coding information to control the facial attributes and features of the generated image. Interpreting the latent space of GANs would provide us with control over the attributes of the generated images. For human face synthesis, an ideal interpretation of the latent space should provide tools for both _Facial Attribute Editing_ and _Feature-Based Synthesis_.
Facial attribute editing [3] is an interesting research topic with a wide range of real-world applications such as entertainment, auxiliary psychiatric treatment, and data augmentation. The main goal of facial attribute editing is to preserve a person's identity while changing a set of specific attributes of their face. Contrary to facial attribute editing, we introduce a concept and call it _feature-based synthesis_, where the goal is to synthesize photo-realistic human faces with specific facial attributes. More specifically, instead of modifying the facial attributes of a _previously_ generated image, in feature-based synthesis, we want to generate human faces that have specific facial attributes (_e.g._ perceived facial expression, age, gender, race). As an example, in facial attribute editing, firstly we generate an image and then edit its facial expression and gender, while in feature-based synthesis, we can synthesize images with a specific facial expression, and gender.
One of the main applications of such an approach is creating diverse, and balanced datasets that can be used in other domains, such as facial expression recognition, age estimation, ethnicity recognition, and a wide variety of medical and psychological research.
In this paper, we propose a framework called GANalyzer to interpret and analyze the latent space of GANs for _both_ facial attribute editing and feature-based image synthesis. GANalyzer
Fig. 1: Our proposed transformation function \(\Psi\) gets an arbitrary latent vector \(z\), a facial attribute \(fa\), and manipulates \(z\) to generate the latent vectors \(z_{fb}\), and \(z_{id}\). While the latent vector \(z_{id}\) is used for facial attribute editing, the feature latent vector \(z_{fb}\) is used for feature-based synthesis.
is designed to analyze the latent space of well-trained GANs, and hence, learn how the manipulation of a latent vector could affect the attributes of the generated images. As Fig. 2 shows, from the facial attribute editing perspective, GANalyzer is capable of modifying a latent vector such that only a specific attribute of the corresponding image is changed (_e.g._ modifying a person perceived as a woman to instead be perceived as a man, while preserving the other facial attributes such as facial expression, age, _etc._). Likewise, from feature-based synthesis, GANalyzer can manipulate a randomly sampled latent vector such that it results in an image with specific facial attributes (_e.g._ generating images of angry women).
Our proposed GANalyzer analyzes facial attributes of a wide range of synthesized images and their corresponding latent vectors to recognize and interpret their relationship. Thus, we generate around \(100K\) images using the StyleGANs [4, 5] family as our training set. For any image in the training set, we utilize a set of different off-the-shelf classifiers to predict the corresponding facial attribute classes and label each image. More specifically, we use 4 different classifiers to predict the facial expression, gender, age, and race of each image in our training set. For each class (_e.g._ the _Happy_ class of facial expression), we use the Eigenvectors and the statistical variance (Eigenvalues) of the covariance matrix of the latent vectors, together with the mean latent vector of that class, to determine the relationship between the latent vectors and the specific facial attribute. Accordingly, we provide a transformation function for facial attribute editing and feature-based synthesis. We define our proposed transformation function \(\Psi(z,S)\) where \(z\) is the latent vector and \(S\) is the target facial attribute we want to modify. As Fig. 1 shows, our proposed transformation method decomposes a latent vector \(z\) into two vectors \(z_{fb}\), and \(z_{id}\). While \(z_{fb}\) is designed to perform feature-based synthesis, \(z_{id}\) is used for the facial attribute editing approach.
Moreover, for both facial attribute editing and feature-based image synthesis, our proposed GANalyzer has control over the intensity of the desired target facial attribute. In other words, not only can GANalyzer modify the facial attributes of a synthesized image, but it can also control how strongly or weakly we want such facial attributes to be presented in the synthesized face. To illustrate, say we want to generate a face that is prototypically Black. We can vary how Black or White that face appears (e.g. changes to skin tone, hairstyle, and face shape) by increasing or decreasing that dimension, respectively. Fig. 3 shows a few examples of the intensity-based facial attribute editing provided by our proposed GANalyzer.
In addition, our proposed GANalyzer can be extended from _single_ facial attribute manipulation to _multiple_ attribute manipulation, where we manipulate the latent vector to modify more than one facial attribute in the corresponding generated image. Fig. 8 shows a few examples of multiple facial attribute editing. Likewise, GANalyzer can perform multiple feature-based syntheses too, where we can generate images with multiple desired facial attributes (_e.g._ a Black woman).
The contributions of the paper can be highlighted as follows:
* We propose a method for interpreting, and analyzing the latent space of GANs designed for human face synthesis.
* We propose a transformation function called \(\Psi\) for single and multiple feature-based syntheses as well as facial attribute editing.
* Our proposed method has control over the intensity of the target facial attributes.
* Using our feature-based image synthesis, we generate a facial expression database having _Happy_, _Neutral_, and _Angry_ emotions, with improved diversity with respect to perceived Age, Gender, and Racial Prototypicality. The dataset will be publicly available for research purposes.
The remainder of this paper is organized as follows. Sec. 2 reviews the related work in GANs latent space analysis. Sec. 3 describes our proposed methodology for the interpretation of GANs latent space and our transformation method for feature-based synthesis and facial attribute editing. Sec. 4 provides the experimental results, and finally, Sec. 5 concludes the paper with
Fig. 3: The figure shows the intensity-based facial attribute editing using \(\Psi\).
Fig. 2: The figure shows a performance of our proposed transformation function \(\Psi\) for facial attribute editing.
some discussions on the proposed method and future research directions.
## 2 Related Work
_Generative Adversarial Networks:_ GANs, first introduced by Goodfellow _et al_. [1], are among the most powerful methods for photo-realistic image synthesis. The input of GANs is randomly sampled latent vectors from a known distribution (most commonly a Gaussian distribution). During an adversarial-based training process, GANs learn how to convert the input noise vector (_a.k.a._ latent vector) to the distribution of output data. Many variations have been proposed to improve the synthesis quality and make the training process stable [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. Despite the variety of applications including image editing [17, 18, 19, 20, 21], image inpainting [22, 23, 24, 25, 26], super resolution [27, 28, 29, 30, 31], video synthesis [32, 33, 34], _etc._, there is little work on the analysis and the interpretation of the latent space which can eventually highlight how the modification of a latent vector can affect the synthesized output.
_Latent Space Interpretation and Analysis:_ Latent space of GANs can be taken as a Riemannian manifold [35, 36]. Thus, interpolation in the latent space [37, 38] has been studied to make the output image vary smoothly from a source image to a target image.
Modifying the training process to learn interpretable factorized representation is among the methods for GANs latent space analysis [39, 40]. Chen _et al_. [39] proposed InfoGAN which can learn disentangled representations by maximizing the mutual information between a small subset of the latent variables and the observation. Li _et al_. [41] used an auxiliary mapping network to model the relationship between latent vectors and the predicted semantic score of the corresponding generated images.
Vector arithmetic applied to the latent space can semantically manipulate the generated images [42, 43]. Vector arithmetic is model agnostic, and it can be categorized as supervised [44, 2], and unsupervised [46, 47, 48] methods. Supervised methods use a set of classifiers to label the properties of the generated images, and accordingly manipulate the latent vectors. Shen _et al_. [45] trained a linear Support Vector Machine (SVM) on latent vectors to find a decision hyperplane. More recently, InterfaceGAN [2] proposed how to learn a hyper-plane for binary classification in the latent space for each facial semantic, and ultimately use interpolation for modifying the attribute of the generated images. Plumerault _et al_. [44] proposed a method to advance the interpretability of the latent space which controls specific properties of the generated image like the position or scale of the object in the image. Voynov _et al_. [46] proposed an unsupervised method to manipulate the latent vector by finding the directions corresponding to sensible semantics. Harkonen _et al_. [47] applied Principal Component Analysis on the latent space and proposed to control the semantics by layer-wise perturbation along the principal directions. Shen _et al_. [48] proposed a factorization algorithm for latent semantic discovery using pre-trained weights decomposition.
While unsupervised methods do not require different classifiers, supervised methods would provide more control over the manipulation of a specific facial attribute. From feature-based image synthesis, it is crucial to modify latent vectors such that generated images inherit the desired facial attribute. Thus, we proposed GANalyzer, following the supervised category. Moreover, GANalyzer is model agnostic and it can be applied over the latent space of a well-trained GAN.
## 3 Methodology
In this section, we first introduce facial attribute recognition and the labeling process of images in our training set. Then, we provide a deep analysis of the latent space of well-trained GANs (_e.x._ StyleGANs [4, 5] family), and consequently, introduce our proposed transformation function. Afterward, we extend the transformation function for multiple facial attribute editing and feature-based synthesis. Finally, we analyze the entanglement between different facial attributes and features and provide a solution for disentangled facial attribute editing and feature-based synthesis.
### _Latent Space and Attribute Recognition_
We can formulate a GAN as a function \(\mathbf{G}:\ \mathcal{Z}\ \rightarrow\ \mathcal{X}\), where \(\mathcal{Z}\) is the latent space, and \(\mathcal{X}\) represents the image space. Most of the previously proposed GANs [7] sample \(\mathcal{Z}\ \subseteq\ \mathbb{R}^{d}\) from a Gaussian distribution \(N(0,\ \mathbb{I}_{d})\), where \(d\) defines the dimensions of the latent space. Since \(\mathbf{G}\) is a deterministic function, for any randomly sampled latent vector \(\mathbf{z}\ \in\ \mathcal{Z}\), there exists a unique image \(\mathbf{x}\ \in\ \mathcal{X}\). For any image \(\mathbf{x}\), we can calculate a set of different facial attributes (_e.g_. perceived facial expression, gender, age, race, _etc._), using off-the-shelf classifiers. Hence, we can define an arbitrary number of facial attribute recognition functions \(\mathbf{A}_{c}:\ \mathbf{x}\ \rightarrow\ \mathbf{c}\), where \(\mathbf{A}_{c}\) annotates a specific facial attribute of \(\mathbf{x}\). Moreover, \(\mathbf{c}\ \subseteq\ \mathbb{R}^{m}\) is a probability vector, and \(m\) defines the number of classes in \(\mathbf{A}_{c}\) (_e.g._ _Woman_ or _Man_ for the gender class).
In this paper, we choose to define four facial attribute recognition functions including _facial expression_, _gender_, age, and _race_, as follows in Eq. 1:
\[\begin{array}{lcl}\mathbf{A}_{Gender}:\mathbf{x}\rightarrow\mathbf{gnd}&s.t.&\mathbf{gnd}\subseteq\mathbb{R}^{2}\in\{\text{Woman, Man}\}\\ \mathbf{A}_{Age}:\mathbf{x}\rightarrow\mathbf{age}&s.t.&\mathbf{age}\subseteq\mathbb{R}^{2}\in\{\text{Young, Old}\}\\ \mathbf{A}_{Emotion}:\mathbf{x}\rightarrow\mathbf{emo}&s.t.&\mathbf{emo}\subseteq\mathbb{R}^{3}\in\{\text{Happy, Neutral, Angry}\}\\ \mathbf{A}_{Race}:\mathbf{x}\rightarrow\mathbf{rce}&s.t.&\mathbf{rce}\subseteq\mathbb{R}^{3}\in\{\text{Black, White, Others}\}\end{array} \tag{1}\]
As mentioned above, we use pre-trained classifiers to label the synthesized images. For \(\mathbf{A}_{Gender}\) and \(\mathbf{A}_{Age}\), \(\mathbf{A}_{Race}\), and \(\mathbf{A}_{Emotion}\), we use the classifiers proposed by Rothe _et al_. [49], Serengil _et al_. [50], and Fard _et al_. [51], respectively. Needless to say, it is possible to extend the modification dimensionality by utilizing other pre-trained classifiers or recognition methods (_e.g._ facial landmark and head pose estimators [52, 53, 54, 55]) if modification of the facial pose of the synthesized images is needed. While the output of the original classifiers might be different than the output of our facial attribute recognition functions, we only use the probability scores assigned to the classes of our interest, and ignore the rest. To illustrate, for facial expression recognition, the classifier proposed by Fard _et al_. [51] is designed to predict a 7-dimensional
Fig. 4: The process of generating an image \(x_{i}\), from a random noise \(z_{i}\), and its corresponding facial attribute set \(f_{a_{i}}\), created using pre-trained classifiers.
probability vector representing the probability of the following facial expressions: neutral, happy, sad, surprise, fear, disgust, and anger. However, to make the facial emotion space simpler, and more comparable to the other facial attributes, we define our \(\mathbf{A}_{Emotion}\) to only consider the facial expressions we are interested in manipulating: happy, neutral, and angry. Likewise, we simplified race, too, by focusing on just three racial groups which are Black, White, and Others: _Indian_, _Middle-eastern_, and _Latinx_.
Next, we synthesize \(N\) number of images using the generative function, \(\mathbf{G}\). As Fig. 4 shows, for any image \(\mathbf{x}_{i}\) generated by the corresponding latent vector \(\mathbf{z}_{i}\), we calculate its corresponding facial attributes set, \(\mathbf{fa}_{i}\), as follows in Eq. 2:
\[\mathbf{fa}_{i}=\{\mathbf{A}_{Gender}(\mathbf{x}_{i}),\mathbf{A}_{Age}(\mathbf{x}_{i}),\mathbf{A}_{Emotion}(\mathbf{x}_{i}),\mathbf{A}_{Race}(\mathbf{x}_{i})\} \tag{2}\]
Then, using Eq.1 we can write down \(\mathbf{fa}_{i}\) as follows:
\[\mathbf{fa}_{i}=\{\mathbf{gnd}_{i},\mathbf{age}_{i},\mathbf{emo}_{i},\mathbf{rce}_{i}\}\subseteq\mathbb{R}^{10} \tag{3}\]
After creating the corresponding facial attributes set for all the synthesized images, we use the similarity between the latent vectors that are categorized in the same class to propose our transformation function \(\Psi\). For instance, if we want to model the _Anger_ facial attribute from the facial expression class, we use the similarity between the corresponding latent vectors of the images which are labeled as _Angry_ to create our transformation function.
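A minimal sketch of this labeling step is given below; the generator `G` and the attribute classifiers are placeholders for the pre-trained models described above (no specific API is implied), and the latent dimension \(d=512\) is only an assumption:

```python
import numpy as np

def label_latents(G, classifiers, n_samples=100_000, d=512, seed=0):
    """Sample z ~ N(0, I_d), synthesize x = G(z), and label x with every
    attribute classifier to build the training pairs (z_i, fa_i)."""
    rng = np.random.default_rng(seed)
    latents, attributes = [], []
    for _ in range(n_samples):
        z = rng.standard_normal(d)
        x = G(z)
        fa = {name: np.asarray(clf(x)) for name, clf in classifiers.items()}
        latents.append(z)
        attributes.append(fa)
    return np.stack(latents), attributes

def latents_with(latents, attributes, attr, cls):
    """Collect Z_{fa_obj}: latent vectors whose image is labeled `cls`
    (argmax of the probability vector) under the classifier `attr`."""
    keep = [i for i, fa in enumerate(attributes)
            if int(np.argmax(fa[attr])) == cls]
    return latents[keep]
```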
### _Latent Space Analysis & Transformation Functions_
We design our proposed GANalyzer framework to model any of the facial attributes in Eq. 1, by introducing a unique transformation function \(\Psi\) with respect to each facial attribute. For any arbitrary facial attribute object, \(fa\_obj\in\mathbf{fa}\) (_e.g._ anger from emotion), we define the transformation function in Eq. 4 as follows:
\[\Psi(\mathbf{z}_{i}\ |\ fa\_obj)\ \rightarrow\ \mathbf{z}_{i}^{fb},\ \mathbf{z}_{i}^{id} \tag{4}\]
Our proposed transformation function \(\Psi\) manipulates an input latent vector \(\mathbf{z}_{i}\), and creates 2 output latent vectors \(\mathbf{z}_{i}^{fb}\), and \(\mathbf{z}_{i}^{id}\). Then, using the generative function \(\mathbf{G}\), we synthesize \(\mathbf{x}_{i}\), \(\mathbf{x}_{i}^{fb}\), and \(\mathbf{x}_{i}^{id}\) corresponding to the latent vectors \(\mathbf{z}_{i}\), \(\mathbf{z}_{i}^{fb}\), and \(\mathbf{z}_{i}^{id}\) respectively.
We use \(\mathbf{z}_{i}^{id}\) for facial attribute editing, as we designed the transformation function to manipulate the input latent vector \(\mathbf{z}_{i}\) such that the corresponding generated image, \(\mathbf{x}_{i}^{id}\), inherits from the \(fa\_obj\) facial attributes, while its identity is preserved and almost similar to the identity of \(\mathbf{x}_{i}\). Likewise, \(\mathbf{z}_{i}^{fb}\) is designed for feature-based synthesis, which is agnostic to the identity. Thus, the corresponding generated image \(\mathbf{x}_{i}^{fb}\) inherits from the \(fa\_obj\) facial attributes, while the identities of \(\mathbf{x}_{i}^{fb}\) and \(\mathbf{x}_{i}\) would be different from each other.
To create a transformation function corresponding to the facial attribute object \(fa\_obj\), we first create a set of latent vectors from our training set and call it \(\mathcal{Z}_{fa\_obj}=\{\mathbf{z}_{0},\mathbf{z}_{1},...,\mathbf{z}_{k}\}\). For each latent vector \(\mathbf{z}_{i}\ \in\ \mathcal{Z}_{fa\_obj}\), we have \(fa\_obj\in\mathbf{fa}_{i}\), which means the target facial attribute \(fa\_obj\) must be in the corresponding facial attributes set \(\mathbf{fa}_{i}\). To clarify, we can define \(fa\_obj\) as _Anger_ from the emotion class, and the facial attribute set \(\mathcal{Z}_{anger}\), that includes the latent vectors corresponding to all the synthesized images labeled as _Angry_.
Inspired by Cootes _et al_. [56, 57], we use the Eigenvectors of the Covariance matrix corresponding to \(\mathcal{Z}_{fa\_obj}\) to propose our transformation functions. Firstly, we define \(\mathbf{V}_{fa\_obj}=\{v_{1},v_{2},...,v_{t}\}\) as the set of all the Eigenvectors of the Covariance matrix of \(\mathcal{Z}_{fa\_obj}\). Then, we define \(\mathbf{m}_{fa\_obj}\) as the element-wise statistical mean vector of \(\mathcal{Z}_{fa\_obj}\). Finally, we define \(\mathbf{b}_{fa\_obj}\) vector as the following in Eq. 5:
\[\mathbf{b}_{fa\_obj_{(t\times 1)}}=\mathbf{V}_{fa\_obj_{(t\times d)}}^{\intercal}(\mathbf{z}_{i_{(d\times 1)}}-\mathbf{m}_{fa\_obj_{(d\times 1)}}) \tag{5}\]
Let \(\lambda_{i}\) denote the \(i^{th}\) element of the statistical variance (_a.k.a._ the Eigenvalues) corresponding to \(\mathbf{V}_{fa\_obj}\). Following Cootes _et al_. [56, 57], we limit \(b_{i}\), the \(i^{th}\) element of the vector \(\mathbf{b}_{fa\_obj}\), such that \(b_{i}\in[-3\sqrt{\lambda_{i}},+3\sqrt{\lambda_{i}}]\), and create a new vector called \(\mathbf{\tilde{b}}\). We can then estimate a latent vector \(\mathbf{z}_{i}\ \in\ \mathcal{Z}_{fa\_obj}\) using Eq. 6 as follows:
\[\mathbf{\tilde{z}}_{i}\approx\mathbf{m}_{fa\_obj}+\mathbf{V}_{fa\_obj} \mathbf{\tilde{b}} \tag{6}\]
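A short sketch of this building block, assuming the set \(\mathcal{Z}_{fa\_obj}\) is stored as a \(k\times d\) NumPy array, might look as follows:

```python
import numpy as np

def fit_attribute_model(Z_fa_obj):
    """Mean latent vector, Eigenvectors, and Eigenvalues of the covariance
    matrix of the latent vectors sharing one facial attribute (shape (k, d))."""
    m = Z_fa_obj.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z_fa_obj, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # most important directions first
    return m, eigvecs[:, order], eigvals[order]

def reconstruct(z, m, V, eigvals):
    """Eq. (5) and Eq. (6): b = V^T (z - m), clipped to +-3 sqrt(lambda_i),
    then z~ = m + V b~."""
    b = V.T @ (z - m)
    limit = 3.0 * np.sqrt(np.clip(eigvals, 0.0, None))
    b_tilde = np.clip(b, -limit, limit)
    return m + V @ b_tilde
```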
We use Eq. 6 as the building block for defining our transformation function \(\Psi\). We defined \(\mathbf{V}_{fa\_obj}\) as the set of Eigenvectors of the Covariance matrix of the latent vectors having a _common_ facial attribute \(fa\_obj\). Thus, the first Eigenvectors represent the features and attributes which model _fa_obj_ facial attribute. To clarify, if we want to model _Anger_ (and thus we define our \(fa\_obj\) as Anger) using Eq. 6, the first (_a.k.a._ the most important) Eigenvectors in \(\mathbf{V}_{Anger}\) models the _Anger_-related features. In other words, to model a low-frequency facial attribute \(fa\_obj\) (such as perceived facial emotion, race, _etc._) in a dataset, we can use the most important Eigenvectors of the corresponding Covariance matrix of a subset where all its samples have \(fa\_obj\) (_e.g._ a subset where all samples are classified as happy_). Likewise, the high-frequency features (sample-specific features, which are mostly identity-related features) can be modeled by the least important Eigenvectors. Later, in Sec. 3.2.2, we define a hyper-parameter
Fig. 5: For feature-based synthesis, the portion of the Eigenvectors of the Covariance matrix defines the intensity of the target facial attribute in the generated image. As the figure shows, using \(10\%\) of the Eigenvectors will result in an image with high-intensity facial attributes (Anger, and Black), while using \(99\%\) of the Eigenvectors results in an image with a very low-intensity facial attribute and nearly the same identity as the original image.
\(\beta\in[0,100]\), indicating the portion of the first Eigenvectors that are used to create the transformation function.
In Fig. 5, we demonstrate the effect of the Eigenvectors in image synthesis. Accordingly, using \(10\%\) of the most important Eigenvectors for latent vector manipulation, using Eq. 6, results in an image that intensively inherits from the corresponding facial attribute. However, regarding the identity comparison, the original image and the modified target image hardly have any identity-related similarities. As we gradually choose a larger portion of the Eigenvectors, we observe a reduction in the intensity of the corresponding features (_e.x. Anger_ in Fig. 5), while the identity-related feature increases.
#### 3.2.1 Facial Attribute Editing
For facial attribute editing, the identity of the generated image after manipulation must be relatively similar to the source image. As discussed in Sec. 3.2, using Eq. 6, we need to use almost all the Eigenvectors of the Covariance matrix such that both low-frequency and high-frequency features of the source and the target images become relatively similar. However, using all Eigenvectors, makes the generated latent vector \(\mathbf{\tilde{z}}_{i}\) to be relatively similar to the source vector \(\mathbf{z}_{i}\). As Fig.5 shows, using either \(90\%\) or \(99\%\) of the Eigenvectors, the target synthesized image could be relatively similar to the original image, while there is no guarantee that the target image inherits from \(fa\_obj\).
To overcome this issue, we propose to increase the impact of the mean latent vector in Eq. 6. The mean latent vector \(\mathbf{m}_{fa\_obj}\) is created from the element-wise average of all the latent vectors having \(fa\_obj\). Hence, as Fig. 7 shows, the images generated from the mean latent vectors highly represent the corresponding facial attributes. Consequently, increasing the weight of the mean latent vector, \(\mathbf{m}_{fa\_obj}\), while using all of the Eigenvectors of the Covariance matrix in the estimation of the source latent vector \(\mathbf{z}_{i}\) in Eq. 6, results in an image which is relatively similar to the original image in terms of the identity, while it inherits from the desired \(fa\_obj\) facial attribute. We propose Eq. 7 for facial attribute editing as follows:
\[\mathbf{z}_{i}^{id}=\alpha\ \mathbf{m}_{fa\_obj}+\mathbf{V}_{fa\_obj} \mathbf{\tilde{b}} \tag{7}\]
where \(\alpha\geq 1\) is a hyper-parameter added to Eq. 7 to intensify the effect of the mean vector \(\mathbf{m}_{fa\_obj}\) in the creation of \(\mathbf{z}_{i}^{id}\). As Fig. 7 shows, the mean vector \(\mathbf{m}_{fa\_obj}\) mostly preserves the most common facial attributes and features within the corresponding set of latent vectors having the \(fa\_obj\) facial attribute. Thus, considering the \(fa\_obj\) set, \(\mathbf{G}(\mathbf{m}_{fa\_obj})\) is an image \(\mathbf{x}_{m}\) which intensively inherits from \(fa\_obj\). Consequently, adding weight to the mean vector \(\mathbf{m}_{fa\_obj}\) in Eq. 7 modifies the input latent vector \(\mathbf{z}_{i}\) such that the generated latent vector \(\mathbf{z}_{i}^{id}\) results in an image that inherits facial attributes from \(fa\_obj\), while its identity-related features are similar to the image generated from \(\mathbf{z}_{i}\).
#### 3.2.2 Feature-Base Synthesis
For feature-based synthesis, instead of using the complete set of Eigenvectors of the Covariance matrix \(\mathbf{V}_{fa\_obj}\), we introduce a hyper-parameter \(\beta\in[0,100]\) indicating the portion of the first Eigenvectors. Based on the value of \(\beta\), we define \(\mathbf{\tilde{V}}_{fa\_obj}\), which is a subset of \(\mathbf{V}_{fa\_obj}\), having only the first \(\beta\) portion of the Eigenvectors. Then, we define Eq. 8 for feature-based manipulation:
\[\mathbf{z}_{i}^{fb}=\mathbf{m}_{fa\_obj}+\mathbf{\tilde{V}}_{fa\_obj} \mathbf{\tilde{b}} \tag{8}\]
Using Equations 4, 7 and 8, we propose our transformation
Fig. 6: Facial attribute editing with respect to different values of \(\alpha\). The greater the value of \(\alpha\), the higher would be the intensity of the target facial attribute. In fact, \(\alpha\) highlights the effect of the corresponding mean vector in the manipulated latent vector.
Fig. 7: The generated image from the mean latent vector of different facial attributes shows the entanglement between the facial attributes. As an example, _Anger_ is entangled with _Man._
function \(\Psi\) in Eq. 9 as follows:
\[\begin{array}{l}\Psi(\mathbf{z}_{i}\mid fa\_obj)\ \rightarrow\ \mathbf{z}_{i}^{fb},\ \mathbf{z}_{i}^{id}\\ \mathbf{z}_{i}^{id}=\alpha\ \mathbf{m}_{fa\_obj}+\mathbf{V}_{fa\_obj}\tilde{ \mathbf{b}}\\ \mathbf{z}_{i}^{fb}=\mathbf{m}_{fa\_obj}+\mathbf{\tilde{V}}_{fa\_obj}\tilde{ \mathbf{b}}\end{array} \tag{9}\]
The value of \(\alpha\) needs to be selected based on how intensely we want to add \(fa\_obj\) to the input image (see Fig. 6). Likewise, the parameter \(\beta\) defines how intense we need the facial attribute \(fa\_obj\) in the synthesized image.
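Putting Eqs. (7)-(9) together, a sketch of \(\Psi\) (reusing the `fit_attribute_model` outputs from the previous sketch, with \(\alpha\) and \(\beta\) left as user-chosen hyper-parameters) could be:

```python
import numpy as np

def psi(z, m, V, eigvals, alpha=3.0, beta=35.0):
    """Eq. (9): return (z_fb, z_id) for one target facial attribute."""
    b = V.T @ (z - m)
    limit = 3.0 * np.sqrt(np.clip(eigvals, 0.0, None))
    b_tilde = np.clip(b, -limit, limit)

    # Eq. (7): identity-preserving editing uses all Eigenvectors and
    # amplifies the mean latent vector by alpha.
    z_id = alpha * m + V @ b_tilde

    # Eq. (8): feature-based synthesis keeps only the first beta percent of
    # the Eigenvectors (the attribute-defining, low-frequency directions).
    t = max(1, int(round(V.shape[1] * beta / 100.0)))
    z_fb = m + V[:, :t] @ b_tilde[:t]
    return z_fb, z_id
```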
### _Multiple Facial Attribute Manipulation_
Multiple facial attribute manipulation is a useful tool for both facial attribute editing and feature-based synthesis. We propose a linear combination of a set of desired target facial attributes to perform these tasks.
Let's assume we want to manipulate a randomly sampled latent vector \(\mathbf{z}_{s}\) such that the corresponding image generated from the manipulated latent vector \(\mathbf{z}_{d}\) inherits from a set of facial attributes \(fa\_set:\ \{fa\_obj_{1},fa\_obj_{2},...,fa\_obj_{n}\}\). We select one of the elements of \(fa\_set\) arbitrarily as the _base_ facial attribute and call it \(fa\_objBase\).
**Multiple Facial Attribute Editing**: For facial attribute editing, it is crucial to keep the identity of the original image. Hence, we use all of the Eigenvectors of the Covariance Matrix \(\mathbf{V}_{fa\_objBase}\), corresponding to \(fa\_objBase\) to keep both high- and low-frequency features and attributes of the original image after manipulation (see Sec. 3.2.1). Then, following the method proposed for facial attribute editing, we use a linear combination of the mean latent vectors corresponding to each element of \(fa\_set\). This ensures the synthesized image corresponding to the generated latent vector \(\mathbf{z}_{mfa}\) inherits from all of the desired facial attributes. Finally, we propose Eq. 10 as follows for multiple facial attribute editing:
\[\mathbf{z}_{mfa}=\sum_{1}^{n}\gamma_{i}\mathbf{m}_{fa\_obj_{i}}+\mathbf{V}_{fa \_objBase}\mathbf{\tilde{b}} \tag{10}\]
where \(n\) is the length of the set of facial attributes \(fa\_set\), and \(\gamma_{i}\) is a hyper-parameter that defines the intensity of the corresponding facial attribute in the image synthesized from \(\mathbf{z}_{mfa}\). As Fig 8 shows, our proposed method can manipulate multiple facial attributes, while we still have control over the intensity level of the desired facial attributes as mentioned in Sec. 3.2.
**Multiple Feature-Based Synthesis**: We follow the method proposed in Sec. 3.2.2 for feature-based synthesis. Using its corresponding \(\beta\), called \(\beta_{Base}\), mean latent vector \(\mathbf{m}_{fa\_objBase}\), and the subset of the Eigenvectors of the Covariance matrix \(\mathbf{\tilde{V}}_{fa\_objBase}\) (see Sec. 3.2.2, and Eq. 8), we introduce a transformation function for multiple feature-based syntheses as follows in Eq. 11:
\[\mathbf{z}_{mfe}=\sum_{1}^{n}\lambda_{i}\mathbf{m}_{fa\_obj_{i}}+\mathbf{ \tilde{V}}_{fa\_objBase}\mathbf{\tilde{b}} \tag{11}\]
where \(n\) is the length of the set of facial attributes \(fa\_set\), and \(\lambda_{i}\) is a hyper-parameter that defines the intensity of the corresponding facial attribute in the image synthesized from \(\mathbf{z}_{mfe}\).
Our proposed method for multiple feature-based syntheses manipulates a randomly sampled latent vector, using the subset of the Eigenvectors of the Covariance matrix of one arbitrary facial attribute and a linear combination of the mean vectors of all of the desired facial attributes, to make sure that the generated latent vectors inherit from all of the desired facial attributes. We use this method to create a diverse dataset in Sec. 4.4.
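The two linear combinations of Eqs. (10) and (11) can be sketched in the same spirit; here `base_model` is the `(m, V, eigvals)` triple of the arbitrarily chosen base attribute, and the intensities \(\gamma_i\) and \(\lambda_i\) are user-chosen:

```python
import numpy as np

def _clipped_b(z, m, V, eigvals):
    # b = V^T (z - m), clipped to +-3 sqrt(lambda_i)
    b = V.T @ (z - m)
    limit = 3.0 * np.sqrt(np.clip(eigvals, 0.0, None))
    return np.clip(b, -limit, limit)

def multi_attribute_edit(z, base_model, means, gammas):
    """Eq. (10): multiple facial attribute editing (all base Eigenvectors)."""
    m_b, V_b, ev_b = base_model
    b_tilde = _clipped_b(z, m_b, V_b, ev_b)
    return sum(g * m for g, m in zip(gammas, means)) + V_b @ b_tilde

def multi_feature_synthesis(z, base_model, means, lambdas, beta=35.0):
    """Eq. (11): multiple feature-based synthesis (first beta percent only)."""
    m_b, V_b, ev_b = base_model
    b_tilde = _clipped_b(z, m_b, V_b, ev_b)
    t = max(1, int(round(V_b.shape[1] * beta / 100.0)))
    return sum(l * m for l, m in zip(lambdas, means)) + V_b[:, :t] @ b_tilde[:t]
```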
### _Attribute Entanglement Analysis_
Entanglement and disentanglement of the latent space heavily rely on the correlation between the different facial attributes of the samples on which GANs are trained [2, 45]. As an example, GANs usually entangle _age_ with _glasses_ [6, 7, 8]. Entanglement in
Fig. 8: Our proposed transformation function \(\Psi\) can perform multiple facial attribute editing. As the figure shows, in the first column, we modified the original image to be _Angry_ and _Young_, while in the last column, we modified the original image to be _Woman_ and _Black_.
Fig. 9: The figure shows the entanglement between some specific facial attributes, and how our proposed transformation function \(\Psi\) can be modified to make these facial attributes disentangle from each other. As shown, _Man_ facial attribute is entangled with both _Black_, and _Anger_, and with a small modification to \(\Psi\), all these facial attributes became disentangled.
the latent space negatively affects both facial attribute editing and feature-based synthesis manipulation. For facial attribute editing, it is crucial to modify one desired facial attribute while keeping the others as well as the identity with no change.
We propose an effective method for the investigation of the entanglement between different facial attributes, and then propose a method to adjust the transformation function \(\Psi\) to deal with the entanglement issue. To study the entanglement between different facial attributes, we create the mean latent vector \(\mathbf{m}_{fa\_obj}\) corresponding to a _desired_ facial attribute \(fa\_obj\). Then, using the GAN generative function \(\mathbf{G}\), we synthesize the mean image \(\mathbf{x}_{m}\) from the mean latent vector \(\mathbf{m}_{fa\_obj}\). Afterward, we can visually observe the _undesired_ facial attributes and features of the mean image, which can be taken as the facial attributes entangled with our desired facial attribute \(fa\_obj\).
In Fig. 7, we calculate the mean latent vector and depict the corresponding mean image for _Anger_, _Old_, _Young_, _Woman_, and _Black_ facial attributes. As Fig. 7 shows, visual analysis of the synthesized mean images can easily disclose the high degree of entanglement between specific facial attributes. As an example, it is obvious that _Anger_ is highly entangled with _Man_, _Old_ with _Man_, _Glasses_ and _Anger_, and _Black_ with _Man_. Also, in the mean image regarding the _Woman_ facial attribute, we do not observe any specific pattern, indicating that _Woman_ does not have a high entanglement to other facial attributes. While our proposed method can easily disclose feature/facial attribute entanglement, in Sec. 4.3, we statistically measure the entanglement degree between the facial attributes to support our proposed technique for visual analysis of entanglement.
We can further modify our proposed transformation function to deal with the entanglement between the facial attributes. Assume we want to modify a randomly sampled latent vector \(\mathbf{z}_{s}\) and create \(\mathbf{z}_{d}\) such that the generated image \(\mathbf{x}=\mathbf{G}(\mathbf{z}_{d})\) inherits from a _desired_ facial attribute \(fa\_obj_{D}\), while there is an _undesired_ facial attribute \(fa\_obj_{U}\), which is entangled with \(fa\_obj_{D}\). Using Eq. 12, we can modify \(\mathbf{z}_{s}\), and generate \(\mathbf{z}^{\prime}{}_{d}\) such that the synthesized image \(\mathbf{x}^{\prime}=\mathbf{G}(\mathbf{z}^{\prime}{}_{d})\) _only_ inherits from \(fa\_obj_{D}\).
\[\mathbf{z}^{\prime}{}_{d}=(\alpha\ \mathbf{m}_{fa\_obj_{D}}-\delta\ \mathbf{m}_{fa\_obj_{U}})+\mathbf{V}_{fa\_obj_{D}}\mathbf{\tilde{b}} \tag{12}\]
\(\mathbf{m}_{fa\_obj_{U}}\) is the mean latent vector of the _undesired_ facial attribute \(fa\_obj_{U}\), and \(\delta>0\) is a hyper-parameter that sets the intensity of the _undesired_ facial attribute. More clearly, using \(\mathbf{m}_{fa\_obj_{U}}\) with a negative weight \(-\delta\) reduces the impact of the undesired facial attribute in the generation of the latent vector \(\mathbf{z}^{\prime}{}_{d}\). As an example, as Fig. 7 suggests, the _Anger_ facial attribute is entangled with _Man_. Thus, we can generate latent vectors which inherit from _Anger_, while being disentangled from _Man_, as follows:
\[\mathbf{z}^{\prime}{}_{d}=(\alpha\ \mathbf{m}_{fa\_Anger}-\delta\ \mathbf{m}_{fa\_ man})+\mathbf{V}_{fa\_Anger}\mathbf{\tilde{b}} \tag{13}\]
As we discussed in Sec. 3.2, the term \(\mathbf{V}_{fa\_Anger}\mathbf{\tilde{b}}\) preserves the identity of the image, while \(\alpha\ \mathbf{m}_{fa\_Anger}\) intensifies the _Anger_ of the corresponding synthesized image. Using \(-\delta\ \mathbf{m}_{fa\_man}\), will reduce the intensity of the _undesired_ facial attribute (_Man_) in the resulting image. Fig. 9 shows the performance of our method for disentangled facial attribute editing. Moreover, in Sec. 4.3, we show how our proposed technique results in disentangled facial attribute editing.
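A corresponding sketch of the disentangled editing step of Eq. (12), with \(\delta\) left as a user-chosen intensity, is:

```python
import numpy as np

def disentangled_edit(z, desired_model, m_undesired, alpha=2.5, delta=1.0):
    """Eq. (12): edit toward the desired attribute while subtracting
    delta times the mean latent vector of the entangled, undesired attribute."""
    m_d, V_d, ev_d = desired_model
    b = V_d.T @ (z - m_d)
    limit = 3.0 * np.sqrt(np.clip(ev_d, 0.0, None))
    b_tilde = np.clip(b, -limit, limit)
    return (alpha * m_d - delta * m_undesired) + V_d @ b_tilde
```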
## 4 Evaluation
In this section, we first propose a method to evaluate our transformation function for facial attribute editing. Then, we evaluate the performance of feature-based synthesis. For the purpose of evaluation, we randomly generate 10K images using StyleGANs [4, 5] family and call it the _validation set_. As mentioned in Sec. 3.2, for each image in our validation set, we use 4 pre-trained classifiers to measure the following facial attributes: Gender, Age, Facial Expression, and Race. For race, we use the pre-trained classifier provided by [50] for ethnicity prediction including _Asian_, _Indian_, _Black_, _White_, _Middle-Eastern_, and _Latinx_. As we are mostly interested in Black, and White in the context of this research for
Fig. 11: Covariance matrix of the training set and validation set. The figures show the portion of each facial attribute in the training and the validation set, as well as the correlation between different facial attributes. As an example, \(77.54\%\) of the samples in the training set and \(87.51\%\) of the samples in the validation set are classified as _Happy_. For the training set, \(45.18\%\) of these _Happy_ samples are _Woman_, and \(32.36\%\) are _Man_. Likewise, for the validation set, out of the \(87.51\%\) of _Happy_ samples, regarding race, \(3.63\%\) are labeled as _Black_, \(9.86\%\) as _Others_, and \(74.02\%\) as _White_.
Fig. 10: The distribution of the facial expression, race, gender, and age of the validation set.
simplicity, and also to be able to interpret the relationship and the entanglement between different facial attributes more accurately, we explicitly keep the _White_ and _Black_ classes and consider _Indian_, _Middle-eastern_, and _Latinx_ as the _Others_ race.
Fig. 10, shows the facial attributes histogram of the validation set. Moreover, in Fig 11, we show the Covariance matrix of both the training set and the evaluation set, indicating the portion of each facial attribute class, as well as the correlation between each facial attribute. We also use the Covariance matrix of the validation set in Sec. 4.3 for facial attribute entanglement analysis and to evaluate our proposed method for disentangled facial attribute editing.
### _Facial Attribute Editing_
To evaluate the performance of the transformation function in facial attribute editing, we conducted 5 different experiments. In each experiment, we define \(fa\_obj\) as the target facial attribute that we want to modify, while we keep the identity of the synthesized image. Thus, for each image in the validation set, we modify the corresponding latent vector with respect to the target facial attribute \(fa\_obj\) and then calculate the identity similarity score and the target facial attribute class.
In each facial attribute modification experiment, we first modify the latent vector with respect to the target facial attribute and then, use the GAN generative function to generate the corresponding image. Next, we use RetinaFace [50] to compare the identity of the original image with the modified synthesized image. We also use pre-trained classifiers to measure the target facial attributes of the synthesized images.
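The evaluation protocol can be summarized by the loop below; `G`, `verify_identity` (a face-verification call returning a boolean), and `classify_attr` stand in for the pre-trained models mentioned above and are not tied to any specific library API:

```python
import numpy as np

def evaluate_editing(G, verify_identity, classify_attr, latents, edit_fn, target_cls):
    """Edit every validation latent, regenerate the image, and report the
    identity-preservation and target-attribute scores (both in percent)."""
    id_hits, attr_hits = 0, 0
    for z in latents:
        x_orig = G(z)
        x_edit = G(edit_fn(z))
        id_hits += int(bool(verify_identity(x_orig, x_edit)))
        attr_hits += int(int(np.argmax(classify_attr(x_edit))) == target_cls)
    n = len(latents)
    return 100.0 * id_hits / n, 100.0 * attr_hits / n
```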
In the first experiment, we evaluated our transformation function for editing the _Woman_ attribute. As Table I shows, the identity and facial attribute scores heavily rely on the value of hyper-parameter \(\alpha\) (see Eq. 7). Setting \(\alpha=2.5\), the identity score is \(99.66\%\), indicating that almost all of the modified images are identical to the original images, while the facial attribute score is \(66.48\%\). As mentioned in Sec. 3.2, increasing the value of \(\alpha\) to \(3.5\) reduces the identity score to \(96.68\%\), while the facial attribute score increases to \(96.27\%\). Finally, we increased the value of \(\alpha\) to \(4.0\), and around \(97.09\%\) of the images in the validation set labeled as _Woman_, while the identity scores reduces to \(92.83\%\).
Next, we evaluated our transformation function for editing _Man_ attribute. As Table I shows, setting \(\alpha=2.5\), the identity score is \(79.71\%\), while the facial attribute score is \(94.39\%\). Increasing the value of \(\alpha\) to \(3.0\) reduces the identity score to \(53.06\%\), while the facial attribute score increases to \(99.36\%\). Finally, we increased the value of \(\alpha\) to \(4.0\), and around \(99.60\%\) of the images in the validation set labeled as _Man_, while the identity scores reduces to \(41.89\%\). As expected, increasing the value of \(\alpha\) increases the corresponding facial attribute score in the modified image, while the identity score reduces.
In another experiment, we evaluate our transformation function regarding the _Anger_ facial attribute modification. As Table II shows, defining \(\alpha=2.0\) results in the identity score of \(98.87\), while the facial attribute modification score for _Happy_ to either _Neutral_ or _Anger_ is \(61.64\%\) (\(38.47\%\) for _Happy_ to _Angry_, and \(23.17\%\) for _Happy_ to _Neutral_), and for _Neutral_ to _Angry_ is \(69.50\%\). Increasing \(\alpha\) to \(2.5\) reduced the identity score to \(84.67\%\), while increasing the facial attribute score for _Happy_ to \(78.00\%\). Following the same trend, setting \(\alpha=3.0\) results in a reduction in identity score to \(57.22\%\), and an increase in the facial attribute score: \(87.70\%\) for _Happy_ to _Neutral/Anger_, and \(81.32\%\) for _Neutral_ to _Angry_.
In the next experiment, we evaluated our method considering race modification to _Black_. As Table III shows, for \(\alpha=1.5\) the identity score is \(84.67\%\), while the facial attribute score is around \(35.41\%\) for _White_ to either _Black_ (around \(16.24\%\)) or _Others_ (around \(19.16\%\)) and \(43.27\%\) for _Others_ to _Black_ race. As expected, increasing \(\alpha\) to \(2.0\), reduces the identity score to \(79.61\%\), while facial attribute scores increase to \(86.09\%\) and \(89.13\%\) for _White_ to _Black/Others_, and _Others_ to _Black_ respectively. Likewise, increasing \(\alpha\) to \(2.5\) results in a facial attribute score of \(99.24\%\) for _White_ to _Black/Others_ race, out of which about \(98.42\%\) of the samples with _White_ race converted to _Black_. As mentioned in Sec 3.2.1, increasing the value of \(\alpha\) results in high-intensity facial attribute editing. We can consider _White_ to _Black_ race as high-intensity modification, which increased from \(16.24\%\) for \(\alpha=1.5\), to \(76.56\%\) for \(\alpha=2.0\), and finally, to \(98.42\%\) for \(\alpha=2.5\). On the contrary, _White_ to _Others_ race, which can be taken as low-intensity modification, reduces from \(19.16\%\) for \(\alpha=1.5\), to \(9.52\%\) for \(\alpha=2.0\), and finally, to \(0.81\%\) for \(\alpha=2.5\).
In order to evaluate our transformation function regarding age modification, we designed two sets of experiments. In the first set of experiments, we modified the latent vectors in the validation set to make the samples _Older_. We defined the _modification accuracy_ by predicting the age of each sample, before and after the modification, and calculated the portion of samples where the age is greater after the modification. Fig. 12 shows the histogram of the increase in age for different values of \(\alpha\). For \(\alpha=3.5\), the identity score is \(96.30\%\) and the average increase in the age of the samples is around \(8.46\) years. Increasing \(\alpha\) to \(4.0\) and \(5.0\) results in the identity scores of \(90.49\%\) and \(63.83\%\) and the average increase in age by \(9.71\) and \(12.10\) years, respectively.
Similarly, we evaluated the transformation function for _Young_ age modification. As Fig. 13 shows, for \(\alpha=3.5\), the identity score is \(98.46\%\), the modification score \(79.36\%\), and the average age reduction is around \(6.50\) years. As expected, increasing \(\alpha\) to \(4.0\) and \(5.0\) results in a reduction in identity scores to \(86.10\%\) and \(61.24\%\), an increased in the modification score to \(82.55\%\) and \(84.33\%\), and the average decrease in age of \(8.44\) and \(9.86\) years, respectively.
### _Feature-Based Synthesis_
To evaluate the transformation function for feature-based synthesis, we performed a set of experiments. In each experiment, we defined a target facial attribute \(fa\_obj\) and manipulated a random latent vector such that its corresponding synthesized image inherits from \(fa\_obj\). As Fig. 10 shows, around \(87\%\) of the randomly generated images using the StyleGANs [4, 5] family are labeled with the _Happy_ expression. For the _White_ ethnicity and _Young_ age classes, the corresponding fractions are more than \(83\%\) and \(87\%\), respectively. Thus, we defined our target facial attributes as _Neutral/Angry_ for expression, _Black_ for ethnicity, _Old_ and _Young_ for age, and _Woman_ and _Man_ for gender. As explained in Sec. 3.2.2, for feature-based synthesis, we only measure the _desired_ facial attribute score of the synthesized image after the modification of the corresponding latent vector using our proposed transformation function, and preserving the original identity is not taken into account.
In the first experiment, we manipulated the latent vectors in the validation set to generate images with _Woman_ gender. As Table IV shows, by decreasing the hyper-parameter \(\beta\) from \(50\%\) to \(35\%\), and \(25\%\), the corresponding facial attribute score increased from \(96.60\%\) to \(98.09\%\), and finally to \(98.88\%\). Likewise, for _Man_ gender modification, for \(\beta=50\%\), the facial attribute score is \(99.21\%\). As expected, decreasing \(\beta\) to \(35\%\), and \(25\%\) resulted in facial attribute score of \(99.86\%\), \(99.92\%\) respectively.
As Table V shows, for the evaluation of _Anger/Neutral_ facial expression modification, we measure both _Happy_ to _Angry_, as the high-intensity modification, and _Happy_ to _Neutral_, as the low-intensity modification. As expected, reducing \(\beta\) from \(50\%\) to \(35\%\), and \(25\%\) results in increase of the facial attribute score from
\begin{table}
\begin{tabular}{l c c c} & \(\beta\to 25\%\) & \(\beta\to 35\%\) & \(\beta\to 50\%\) \\ \hline White to Black/Others (\%) & 99.36 & 99.37 & 99.63 \\ White to Black (\%) & 99.30 & 99.96 & 99.02 \\ White to Others (\%) & 00.01 & 0.03 & 0.32 \\ Others to Black (\%) & 100 & 100 & 09.65 \\ \end{tabular}
\end{table} TABLE VI: The evaluation of the transformation function for feature-based synthesis with respect to _Dark_ skin color.
Fig. 14: Age histogram of the validation set after using \(\Psi\) for synthesis of the _Old_ faces. As the figure shows, as the value of \(\beta\) reduces, the mean age of the generated images increases relative to the histogram of the original validation set before feature-based manipulation. The blue chart shows the Kernel Density Estimation of the histogram.
\begin{table}
\begin{tabular}{l c c c} & \(\beta\to 25\%\) & \(\beta\to 35\%\) & \(\beta\to 50\%\) \\ \hline Woman to Man (\%) & \(99.92\) & \(99.86\) & \(99.21\) \\ Man to Women (\%) & \(99.88\) & \(98.09\) & \(96.0\) \\ \hline \end{tabular}
\end{table} TABLE IV: The evaluation of the transformation function for feature-based synthesis with respect to gender.
Fig. 15: Age histogram of the validation set after using \(\Psi\) for synthesis of the _Young_ faces. As the figure shows, as the value of \(\beta\) decreases, the mean age of the generated images becomes lower relative to the histogram of the original validation set before feature-based manipulation. The blue chart shows the Kernel Density Estimation of the histogram.
\(64.55\%\) to \(79.55\%\) and \(84.56\%\) for _Happy_ to _Anger/Neutral_, and from \(77.80\%\) to \(83.81\%\) and \(87.55\%\) for _Neutral_ to _Angry_, respectively. Furthermore, reducing the value of \(\beta\) reduces the low-intensity modification (_Happy_ to _Neutral_) score, while simultaneously increasing the high-intensity modification (_Happy_ to _Angry_) score.
Table VI shows the facial attribute score for race modification to _Black_. As expected, reducing the value of \(\beta\) increased the _Black/Others_ facial attribute score. Following the same trend, the facial attribute score for the low-intensity modification (_White_ to _Others_) decreased as \(\beta\) decreased, while we observed an increase in the facial attribute score for the high-intensity modification (_White_ to _Black_).
Finally, we evaluated our transformation function for feature-based synthesis for age modification. For both _Old_ and _Young_ image synthesis, we followed our other experiments and set the values of \(\beta\) to \(50\%\), \(35\%\), and \(25\%\). After modifying the latent vectors in the validation set according to the desired _facial attribute_ (either _Old_ or _Young_), we measure the age of the synthesized images and depict the corresponding histograms in Fig. 14 and Fig. 15. As Fig. 14 shows, for \(\beta\) set to \(50\%\), \(35\%\), and \(25\%\), the average age of the synthesized images is \(34.52\), \(35.94\), and \(37.12\) years, respectively, while the average age for the validation set is \(31.53\) (see Fig. 10). For the _Young_ facial attribute, as Fig. 15 shows, decreasing \(\beta\) from \(50\%\) to \(35\%\), and finally to \(25\%\), results in a decrease in the average age of the synthesized images, from \(25.67\) to \(24.46\), and \(23.98\) years, respectively.
### _Disentanglement Analysis and Attribute Editing_
As we mentioned in Sec. 3.4, facial attribute entanglement in the latent space of GANs would negatively affect facial attribute editing. In this section, we introduce experiments to evaluate our proposed solution for _disentangled_ facial attribute editing.
To show facial attribute entanglement in facial attribute editing, in Fig. 16 we depict the Covariance matrix of the validation set over the different facial attributes after editing with respect to each of the following facial attributes: _Anger_ with \(\alpha=2.5\), _Black_ race with \(\alpha=2.0\), _Woman_ with \(\alpha=3.5\), _Man_ with \(\alpha=3.0\), _Old_ with \(\alpha=3.5\), and _Young_ with \(\alpha=4.0\). The second row of Fig. 16 shows the _Entanglement Degree_, which is the difference between the Covariance matrix of the validation set before any modification and the Covariance matrix of the validation set after facial attribute editing with respect to the mentioned facial attributes. Accordingly, positive values show _direct_ entanglement, while negative values show _reverse_ entanglement. Based on Fig. 16, we can identify the following entanglements: 1- _Anger_ has direct entanglement with _Man_, and inverse entanglement with _Woman_. 2- _Black_ race has direct entanglement with _Man_ and _Anger_, and inverse entanglement with _Woman_ and _Happy_. 3- _Man_ has direct entanglement with _Anger_, and inverse entanglement with _Happy_. 4- _Old_ has direct entanglement with _Anger_/_Neutral_ and _Man_, and inverse entanglement with _Happy_ and _Woman_. 5- _Young_ has direct entanglement with _Neutral_ and _Man_, and inverse entanglement with _Happy_ and _Woman_. 6- On the contrary, as we also showed in Fig. 7, the _Woman_ facial attribute has almost no entanglement with the other facial attributes.
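As a minimal sketch of how the Entanglement Degree in the second row of Fig. 16 can be computed, assume the attribute predictions are available as binary indicator matrices and take the sign convention stated above (positive entries mean the attribute pair co-occurs more after the edit). Whether the underlying matrix is a covariance or a simple co-occurrence matrix of the attribute labels is an assumption here, not a detail taken from the paper.

```python
import numpy as np

def attribute_matrix(labels):
    # labels: (N, K) binary matrix, one column per facial attribute (Happy, Angry, Woman, Man, Black, Old, ...)
    labels = labels.astype(float)
    return labels.T @ labels / len(labels)        # co-occurrence frequency of every attribute pair

def entanglement_degree(labels_before_edit, labels_after_edit):
    # positive entries: direct entanglement, negative entries: reverse entanglement
    return attribute_matrix(labels_after_edit) - attribute_matrix(labels_before_edit)
```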
In order to evaluate our proposed solution for disentangled facial attribute editing (see Sec. 3.4), we perform 3 experiments for modification of _Anger_ (entangled with _Man_), _Black_ (entangled
Fig. 16: The first row shows the histogram of the validation set after facial attribute editing with respect to the mentioned facial attribute and the value of \(\alpha\). In the second row, we show the Entanglement Degree, the difference between the original histogram (see Fig. 11) of the validation set, and its corresponding histogram after we applied the transformation function. The results show the increase/decrease in the entanglement between different facial attributes.
Fig. 17: The first row shows the histogram of the validation set after the _disentangled_ facial attribute editing with respect to the mentioned facial attribute and the value of \(\alpha\), and \(\gamma\). In the second row, we show the Entanglement Degree, the difference between the original histogram (see Fig. 11) of the validation set, and its corresponding histogram after we applied the disentangled transformation function.
with _Man_), and _Man_ (entangled with _Anger_). We depict the Covariance matrix of the validation set after modification and the corresponding _Entanglement Degree_ figure.
In the first experiment, we define our _desired_ facial attribute, \(fa\_obj_{D}\), as the _Anger_ facial attribute with \(\alpha=2.5\), and set the entangled facial attribute, \(fa\_obj_{U}\), as _Man_ with \(\gamma=-0.5\) (see Eq. 12). As Fig. 17 shows, the entanglement degree between the _Anger_ and _Man_ facial attributes is reduced dramatically.
In the second experiment, we define our _desired_ facial attribute, \(fa\_obj_{D}\), as _Black_ with \(\alpha=2.0\), and set the entangled facial attribute, \(fa\_obj_{U}\), as _Man_ with \(\gamma=-0.5\). As Fig. 17 shows, after the disentangled synthesis of the images, there is almost no entanglement between _Black_, and _Man_ facial attributes.
In the third experiment, we define our _desired_ facial attribute, \(fa\_obj_{D}\), as the _Man_ facial attribute with \(\alpha=3.0\), and set the entangled facial attribute, \(fa\_obj_{U}\), as _Anger_ with \(\gamma=-0.5\). As Fig. 17 shows, after the disentangled synthesis of the images, there is almost no entanglement between the _Man_ and _Anger_ facial attributes. However, as expected, the _Man_ facial attribute is still entangled with age, and we observe an increase in the age of the synthesized images.
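Assuming the transformation of Eq. 12 acts by moving a latent code along the desired attribute direction with strength \(\alpha\) while compensating along the entangled direction with strength \(\gamma\), the three experiments above reduce to a one-line edit. The function name, the direction vectors, and the exact linear form are illustrative assumptions, not the authors' implementation.

```python
def disentangled_edit(w, d_desired, d_entangled, alpha, gamma=-0.5):
    # move toward the desired attribute, compensate along the entangled one
    return w + alpha * d_desired + gamma * d_entangled

# e.g. first experiment: desired = Anger (alpha = 2.5), entangled = Man (gamma = -0.5)
# w_edited = disentangled_edit(w, d_anger, d_man, alpha=2.5)
```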
### _Generated Dataset_
Our proposed transformation function for multiple feature-based synthesis is a powerful tool for defining a diverse dataset with respect to human expression, age, gender, and race. To create the dataset, we first defined a set of desired facial attributes (_e.g._, Woman, Black, Angry), and then created the corresponding transformation function for multiple feature-based synthesis (see Sec. 3.3). For each set, we randomly sample 2K latent vectors, manipulate them using the corresponding transformation function, and finally synthesize the corresponding images using the generative function \(\mathbf{G}\).
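A sketch of this generation loop is given below. `psi_per_combination` maps each attribute combination to its multi-attribute transformation function \(\Psi\), and `G` is the generator; the names, the latent dimensionality, and the sampling details are assumptions for illustration only.

```python
import numpy as np

def build_dataset(G, psi_per_combination, n_per_set=2000, latent_dim=512, seed=0):
    rng = np.random.default_rng(seed)
    dataset = []
    for combo_name, psi in psi_per_combination.items():     # e.g. "old_black_woman" -> Psi
        latents = rng.standard_normal((n_per_set, latent_dim))
        for z in latents:
            w = psi(z)                                       # push the latent toward the target attributes
            dataset.append({"combination": combo_name, "latent": w, "image": G(w)})
    return dataset                                           # 23 combinations x 2K = 46K entries
```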
In Fig. 18, we present each of our defined sets of desired facial attributes. While we have not covered every possible combination of facial attributes, we defined each subset such that the final proposed dataset becomes as diverse as possible. As Fig. 18 shows, we have defined 23 different combinations, and hence the final dataset includes 46K images, their corresponding latent vectors, and annotations. In order to evaluate the diversity and the balance of the dataset, we depict the Covariance Matrix of its facial attributes in Fig. 19. Compared to our validation set (see Fig. 11), where about \(87.51\%\) of the samples are Happy, the portion of Happy samples is reduced to about \(30.48\%\). For the Neutral and Angry expressions, we only have \(4.87\%\) and \(7.62\%\) of the samples in the validation set, while these ratios increased dramatically to \(24.93\%\) and \(44.59\%\) in our proposed dataset. Considering gender, Fig. 19 shows that we have an almost balanced combination of women (\(52.99\%\)) and men (\(47.01\%\)). Likewise, considering race, \(83.88\%\) of the samples in the validation set are White, while in our proposed dataset this ratio is reduced dramatically to \(51.46\%\). Similarly, we only have \(4.38\%\) and \(11.75\%\) Black and Others samples in the validation set, respectively, while these ratios increased dramatically to \(34.61\%\) and \(13.93\%\) in our proposed dataset. For age, the portion of Old samples increased dramatically from less than \(1\%\) in the validation set to around \(9.38\%\) in our proposed dataset.
Another important attribute of our proposed dataset is its diversity within multiple facial attribute classes (_e.g._, Old Angry Woman). By looking at Fig. 11, the Covariance Matrix of facial attributes in the validation set, and comparing it to Fig. 19, we can see that the former is much more sparse than the latter. The sparsity in the Covariance Matrix of facial attributes explicitly shows how imbalanced a dataset is. To illustrate, as Fig. 11 shows, \(87.51\%\) of the samples in the validation set are Happy, \(74.02\%\) of the samples are Happy-White, and only \(3.63\%\) and \(9.86\%\) of the samples are Happy-Black and Happy-Others, respectively. In contrast, in our proposed dataset, around \(30.48\%\) of the samples are Happy, only \(13.69\%\) of the samples are Happy-White, and we have \(12.58\%\) and \(4.21\%\) Happy-Black and Happy-Others samples, respectively. In other words, while in the validation set (which is generated using the StyleGANs [4, 5] family with no modification in the latent space) we can rarely find a Happy sample that is Black, in our proposed dataset more than half of the Happy samples are Black. Another good example is the Woman attribute: while almost all Woman samples in the validation set are Happy (about \(51.20\%\) are women in total, \(50.20\%\) are Woman-Happy, and less than \(1\%\) are Woman-Angry or Woman-Neutral), in our proposed dataset,
Fig. 19: The Covariance Matrix of our proposed balanced dataset. The red box shows the histogram of the facial emotion expression: \(24.93\%\) of all samples are Neutral, \(30.48\%\) Happy, and \(44.59\%\) Angry. The orange box shows the histogram of the gender: \(52.99\%\) of samples are women, and \(47\%\) are men. The blue box shows the histogram of the race: \(34.61\%\) of samples are Black, \(13.93\%\) Others (\(48.6\%\) are Black or Others in total), and \(51.46\%\) are White. The pink box shows the histogram of the age: around \(53\%\) of the samples are less than 32 years old, while the rest are older than 32.
Fig. 18: We create our dataset using 23 different combinations of the least represented facial attributes. For each combination, we generate 2K images, which results in 46K images in total.
about \(52.99\%\) of the samples are Women, with \(21.15\%\) Woman-Happy, and \(15.65\%\) and \(15.38\%\) Woman-Angry and Woman-Neutral, respectively. Exploring the Covariance Matrix of the facial attributes of our proposed dataset in Fig. 19, and comparing it to that of the validation set depicted in Fig. 11, supports our claim about how diverse and balanced our proposed dataset is. We provide detailed information about each subset of our proposed dataset in the supplementary materials.
## 5 Discussion and Conclusion
Since GANs usually do not provide any information about the relation between the input vector and the facial features and attributes of the synthesized image, interpreting the latent space of GANs is a vital step in controlling the features of the image generation. There are two main approaches for analyzing and manipulating the latent space of a well-trained GAN: _supervised and unsupervised analyses_. Our approach belongs to the supervised analysis category. The main drawback of supervised methods, where off-the-shelf classifiers are used to label the synthesized images, is that they rely heavily on the performance of the utilized classifiers. However, compared to unsupervised methods, where the similarity between the latent vectors is captured and used for latent space analysis, supervised approaches provide more specific control over the latent space manipulation, since the desired target attributes can be defined precisely. A combination of these approaches is needed and worth further exploration.
Besides, the per-category feature imbalance of the existing datasets that are mostly used for training facial GANs impacts the GANs drastically and causes attribute entanglement. As discussed and shown in Sec. 4, not only can facial GANs hardly synthesize some specific combinations of facial attributes (_e.g._, an old Black woman) without latent code modification, but the modification of a specific facial attribute also affects the other entangled features of a face (_e.g._, the entanglement between the Anger and Man attributes). Training or fine-tuning GANs on a balanced dataset, or penalizing GANs during the training process to generate a more diverse combination of facial attributes, should be considered to deal with the mentioned drawbacks of facial GANs.
In conclusion, we presented a framework for analyzing and manipulating the latent space of well-trained GANs. First, we randomly synthesized \(100K\) images using the StyleGANs [4, 5] family as our training set. For every image in the training set, we utilized 4 different off-the-shelf classifiers to predict the facial expression, age, gender, and race. Then, using the Eigenvectors of the Covariance matrix of the latent vectors having a specific facial attribute (_e.g._, Anger), we proposed a transformation function for both single and multiple facial attribute editing and feature-based synthesis.
We also showed that, due to the implicit entanglement in the training set of GANs, they usually entangle specific facial attributes and features with each other. Hence, we analyzed facial attribute entanglement in the latent space of GANs and provided an effective solution for highly disentangled facial attribute editing. Our evaluations show that our proposed GANlayer framework can be utilized for accurate facial attribute editing and feature-based synthesis in a wide range of applications. Finally, by utilizing our proposed framework, we generated a diverse photo-realistic human facial dataset with 23 different combinations of facial attributes. Our generated dataset, which contains 46K images and the corresponding annotations, would be beneficial to many other automatic recognition tasks as well as physiological studies. Although we showed that our proposed dataset is relatively diverse, since the image annotation was done using deep learning-based algorithms, it would be worthwhile to employ human annotators for a more accurate and trustworthy annotation of the dataset.
## Acknowledgment
This work is partially supported by NSF grants (2141313 and 2141289), and an internal PROF grant from the University of Denver.
|
2301.10093 | Polarization-Controlled Nonlinear Computer-Generated Holography | Dynamic phase-only beam shaping with a liquid crystal spatial light modulator
is a powerful technique for tailoring the beam's intensity profile or wave
front. While shaping and controlling the light field is a highly researched
topic, dynamic nonlinear beam shaping has hardly been explored so far. One
potential reason is that generating the second harmonic is a degenerate process
as it mixes two fields at the same frequency. To overcome this problem, we
propose the use of type II phase matching as a control mechanism to distinguish
the two involved fields. We experimentally demonstrate that arbitrary intensity
distributions can be shaped in the frequency-converted field at the same
quality as for linear beam shaping and with conversion efficiencies similar to
the case without beam shaping. We envision this technique as a milestone
towards beam shaping beyond the physical limits of liquid crystal displays,
i.e. facilitating dynamic phase-only beam shaping in the ultraviolet spectral
range. | Lisa Ackermann, Clemens Roider, Kristian Cvecek, Nicolas Barré, Christian Aigner, Michael Schmidt | 2023-01-24T15:59:19Z | http://arxiv.org/abs/2301.10093v1 | # Polarization-Controlled Nonlinear Computer-Generated Holography
###### Abstract
Dynamic phase-only beam shaping with a liquid crystal spatial light modulator is a powerful technique for tailoring the intensity profile or wave front of a beam. While shaping and controlling the light field is a highly researched topic, dynamic nonlinear beam shaping has hardly been explored so far. One potential reason is that generating the second harmonic is a degenerate process as it mixes two fields at the same frequency. To overcome this problem, we propose the use of type II phase matching as a control mechanism to distinguish between the two involved fields. Our experiments demonstrate that arbitrary intensity distributions can be shaped in the frequency-converted field at the same quality as for linear beam shaping and with conversion efficiencies similar to the case without beam shaping. We envision this method as a milestone toward beam shaping beyond the physical limits of liquid crystal displays by facilitating dynamic phase-only beam shaping in the ultraviolet spectral range.
## 1 Introduction
The first operating laser in the early 1960s [1] was the dawn for many research fields in modern optics although some of their fundamental effects were already demonstrated or proposed in theory decades earlier. Holography and nonlinear optics emerged independently from each other but both fields benefited from the high coherence and high power of new light sources. Holography is based on the interference of light waves and incorporates phase and amplitude information to go beyond photography. Dynamic phase-only beam shaping with a liquid crystal spatial light modulator (LC-SLM) is a method emerging from holography for arbitrarily controlling the intensity distribution of the beam with many applications in research [2, 3, 4] and industry [5, 6, 7]. As this method only modulates the wave front, there are no significant losses. As a drawback, liquid crystal displays are technically limited to the visible, near-infrared and mid-infrared spectral ranges. This is not an insurmountable problem, as frequency conversion processes such as second harmonic generation or sum frequency generation are coherent processes that preserve the phase of the impinging fundamental wave. Combining nonlinear
optics and holography allows for shaping the light field at the fundamental while achieving the targeted outcome at the frequency-converted field. Even though both research fields can be combined, the concept of nonlinear holography is only currently emerging.
Yariv showed decades ago that four wave mixing can be interpreted as holographic recording and reconstruction and proposed using it for the realization of real time holography [8]. In this process, the interaction between the fields can be interpreted as one field diffracted by the shaped pattern of another field. Meanwhile many investigations followed on the nonlinear conversion of structured light for the conservation of singularities [9, 10] and orbital or spin angular momentum and vortex beams [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Here, the physical principles of the nonlinear conversion of structured light have been well-explored and several works use beam characteristics such as polarization [18], differing wavelengths [15, 16, 17] or non-collinear geometries [19] as a control mechanism for the nonlinear conversion of optical vortices. The recent review paper published by Buono and Forbes gives an overview on nonlinear optics with structured light [24].
Currently, there are two major approaches to nonlinear holography: directly structuring the nonlinear crystal or imaging the plane of an LC-SLM into the crystal. 3D structuring of the nonlinear crystal leads to a modulation of the nonlinear susceptibility which shapes the wave front of the emerging light field. Such elements are called nonlinear photonic crystals as the modulation of the nonlinear susceptibility affects the beam generation and propagation [25, 26, 27, 28, 29, 30, 31]. There are demonstrations of a binary hologram in a nonlinear crystal [32], a structured element combined with structured light [33, 34] or plasmonic metasurfaces [35]. Such 3D structured nonlinear crystals act as volume holograms or phased arrays and in theory give more degrees of freedom than a thin hologram. As their implementation is technically challenging, the freedom of design is so far strongly limited and furthermore only static solutions are possible. Such practical limitations motivate the consideration of thin holograms which are easier to realize.
In a dynamic approach, the plane of a phase-only LC-SLM is directly imaged into the nonlinear crystal [36, 37, 38, 39]. It thus acts as a thin hologram and brings the SLM and the nonlinear crystal together in a common plane, where the SLM dynamically shapes the wave front of the fundamental as if the hologram were directly structured into the crystal. The phase mask on the SLM imprints a locally varying phase on the incoming light field, which leads to a modification of the resulting wave front, where the slope corresponds to wave vectors forming the angular spectrum. During the nonlinear interaction, all light fields which fulfill the phase matching condition interact and create a new, frequency-converted outgoing wave. If the incoming fields are not distinguishable, i.e. the fields are degenerate, all wave vectors will add up, resulting in a new outgoing wave whose angular spectrum contains all possible sums of incoming wave vectors. This complicates beam shaping of the outgoing, frequency-converted wave as the initially applied angular spectrum at the SLM is not conserved. Consequently, there needs to be a control mechanism which makes the incoming fields distinguishable. Current approaches for second harmonic generation [36, 37, 38, 39] are based on a degenerate type I phase matching process and thus involve a non-collinear geometry to control the beam-shaped outcome. This imposes severe limitations on the quality and the efficiency of the achievable results. We utilize type II phase matching and demonstrate high-quality and highly-efficient nonlinear beam shaping in the second harmonic with an LC-SLM in a collinear geometry:
Since the nonlinear conversion is only possible with phase matching, this constraint can be exploited for beam shaping. In type II phase matching, only crossed polarization states can mix and through this assignment, the two fields at the fundamental are clearly distinguishable even when working in a collinear geometry. This concept is sketched in Figure 1. By keeping one field unshaped while the other polarization carries the full phase information, there is an unambiguous assignment of the
frequency-doubled wave front for every wave vector. The experimental implementation is simple as the polarization is set diagonally before it impinges on the SLM and as the SLM is polarization-sensitive, only half of the light field is shaped. After imaging this plane into the nonlinear crystal, only crossed polarization components add up for the second harmonic field. The same concept can be transferred to sum frequency generation where the differing frequencies ensure a non-degenerate mixing process.
Based on those physical principles, we present a setup for nonlinear beam shaping with high quality while the conversion efficiency is almost defined by the crystal's conversion efficiency without beam shaping. Experimental results are shown in Section 2.1. In Section 2.2 we model the chosen KTP crystal to estimate the relative conversion efficiency. In Section 3 we discuss optimizing the conversion efficiency and give an outlook on the applicability of the method for beam shaping in the ultraviolet spectral range. Furthermore, Section 4 presents the experimental setup.
## 2 Results
For second harmonic generation, two fields at the fundamental frequency, \(\mathbf{E_{1}^{(\omega)}}\) and \(\mathbf{E_{2}^{(\omega)}}\), add up to the frequency-doubled field. From the three wave mixing process follows a quadratic relation between the fields \(\mathbf{E}^{(2\omega)}\propto\mathbf{E_{1}^{(\omega)}}\cdot\mathbf{E_{2}^{(\omega)}}\). The phase matching condition
\[\mathbf{k}^{(2\omega)}\overset{!}{=}\mathbf{k_{1}^{(\omega)}}+\mathbf{k_{2}^{(\omega)}}, \tag{1}\]
requires the fundamental and the second harmonic to maintain their phase relationship during propagation through the nonlinear material to avoid destructive interference and weak conversion. Here \(\mathbf{k}_{1/2}^{(\omega)}\) are the wave vectors at the fundamental frequency and \(\mathbf{k}^{(2\omega)}\) is the resulting wave vector at the second harmonic. We first present the experimental results before we model the nonlinear crystal to evaluate the relative conversion efficiency.
Figure 1: Basic concept of nonlinear beam shaping with linear beam shaping as reference: The applied wave front generates an angular spectrum of wave vectors which are centered around the optical axis. Their distribution results in a target image in the far field. The plane of the SLM is imaged into the nonlinear crystal. As for type II phase matching only crossed polarization states mix efficiently, the shaped light field can only mix with the unshaped one. The addition of the wave vectors results in the tailored light field with only half the initial deflection angle \(\alpha\).
### Nonlinear Beam Shaping
With polarization as a control mechanism, nonlinear beam shaping can be performed similarly to linear beam shaping. The phase mask for wave front shaping can be calculated with the same algorithms and tools as for linear beam shaping. However, as wave front shaping generates an angular spectrum of wave vectors which are centered around the optical axis, the nonlinear crystal needs to equally support vectors which slightly deviate from the optimum phase matching angle. This required angular tolerance or acceptance is one characteristic of nonlinear crystals, and it is defined as the angular deviation at which the conversion efficiency drops to half of its maximum value. We chose KTP as the nonlinear crystal for our application as it exhibits a high angular acceptance.
Figure 2 shows experimentally recorded profiles of the second harmonic at 532 nm and images of the fundamental at 1064 nm (Figure 2 a-d) as reference. The corresponding phase masks are calculated with the Gerchberg-Saxton algorithm [40]. When tailoring the target image in the far field, the propagation can be mathematically described by a Fourier transform. Thus, the intensity profile of the target image effectively reflects the generated angular spectrum. Its size directly shows the required angles of the wave vectors which need to be supported during frequency conversion. Consequently, the larger the targeted image size, the higher the angular acceptance of the nonlinear KTP crystal needs to be to equally convert the second harmonic signal for all wave vectors.
As a shorter nonlinear crystal exhibits a higher angular acceptance, second harmonic generation is
Figure 2: Experimentally recorded results for nonlinear beam shaping for two different crystal lengths and the initial result at the fundamental as reference. The measured conversion efficiency \(\eta\) is denoted above the individual images.
supported for a broad spectrum of wave vectors and arbitrary target structures can be shaped over a large area. We demonstrate this in Figure 2 e-h for four different target distributions with a \(2\,\mathrm{mm}\) long KTP crystal. All structures are homogeneously converted and the quality is comparable to the results at the fundamental. We calculate the conversion efficiency by dividing the measured power of the shaped target structure at the second harmonic by the power at the fundamental. The conversion efficiency is around \(6-8\,\%\) for all structures. Conversely, a \(9\,\mathrm{mm}\) long KTP crystal promises higher conversion efficiency at lower angular acceptance. The results of the two smaller target structures for the \(9\,\mathrm{mm}\) long crystal (Figure 2 i,j) are of the same quality as for the \(2\,\mathrm{mm}\) crystal. The conversion efficiency is \(>30\,\%\) and this is only a little less than the initial conversion efficiency of the nonlinear crystal without beam shaping which is around \(40\,\%\). For the \(2\,\mathrm{mm}\) crystal the values are even closer with \(8.5\,\%\) without and values around \(6-8\,\%\) with beam shaping. Those results demonstrate the applicability of nonlinear beam shaping in a regime of high conversion efficiency while maintaining high quality. The homogeneous conversion in the range of the initial conversion efficiency of the nonlinear crystal is due to a plateau in the conversion efficiency for small angular deflections. We further investigate this favorable effect for beam shaping in Section 2.2. Figure 2 k,l also shows the limitations of nonlinear beam shaping when working beyond this plateau. The globe and snowflake are almost cut at the borders as the required angles are not supported by phase matching. As parts of the light field are not converted, the conversion efficiency decreases. These results are shown to demonstrate the limitations outside the plateau of high conversion efficiency. It is nonetheless possible to shape a smaller target structure which is magnified with a telescope afterwards.
### Modeling the Nonlinear Crystal
The wave vector is a function of the wavelength and the corresponding refractive index, which depends on the wavelength. Consequently, the phase matching condition in Equation 1 is generally not fulfilled if fields with different wavelengths are propagating through a dispersive medium. Birefringence is one effect which is used for phase matching: For a biaxial crystal the refractive index is determined by three direction-dependent indices \(n_{x}\), \(n_{y}\), and \(n_{z}\) along the principal axes [41]. The effective refractive index at any position in space is given by their projection. The direction of the polarization determines the refractive index which is perceived by the corresponding light field. Thus, phase matching can occur under a specific angle of incidence depending on the phase matching type. Type II phase matching is fulfilled, if two crossed polarization states mix where their average value of the refractive indices corresponds to the refractive index of the second harmonic: \((n^{o/e}(\omega)+n^{e/o}(\omega))\cdot 0.5=n^{e/o}(2\omega)\).
KTP is a biaxial crystal which allows for type I and type II phase matching at different angles of incidence \(\theta\) and \(\phi\). As KTP exhibits a higher conversion efficiency for type II phase matching, this option is the typical choice. The phase matching angles \(\theta_{m}\) and \(\phi_{m}\) are marked in the refractive index ellipsoid in Figure 3. The two red curves along the three mutually perpendicular planes of the coordinate system indicate the effective refractive index at the fundamental resulting for a field either polarized within the plane of incidence or perpendicular to that plane. From a perspective of the highest symmetry, the beam oscillating perpendicular to the plane of incidence is called the ordinary beam (dashed lines in Figure 3) as the refractive index is the same for any angle while a beam with the polarization axis within the plane of incidence is called the extraordinary beam (solid lines in Figure 3) as the effective refractive index is given by the corresponding ellipse defined by the two refractive indices of the axes spanning that plane. Likewise, the two green curves indicate the refractive index for the frequency-doubled field. As the blue curve marks the mean value of the refractive indices at the fundamental for the two polarization states, its crossing point with the green curve marks the angles for type II phase matching.
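As an illustration of the type II condition \((n^{o/e}(\omega)+n^{e/o}(\omega))\cdot 0.5=n^{e/o}(2\omega)\), the phase matching angle can be located numerically by scanning the propagation angle for the point where the averaged fundamental indices meet the second-harmonic index. This sketch is not taken from the paper; the direction-dependent index functions (e.g. from Sellmeier equations for KTP) are assumed to be supplied by the caller.

```python
import numpy as np

def find_type2_angle(n_o_fund, n_e_fund, n_sh, phis_deg=np.linspace(0.0, 90.0, 9001)):
    # phase-matching residual |(n_o(w) + n_e(w))/2 - n(2w)| as a function of the angle phi
    residual = [abs(0.5 * (n_o_fund(p) + n_e_fund(p)) - n_sh(p)) for p in phis_deg]
    return phis_deg[int(np.argmin(residual))]   # ~24.8 deg for KTP at 1064 nm (theta = 90 deg)
```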
Wave front shaping introduces a spectrum of wave vectors centered around the optical axis. Even in a collinear geometry, the generated angular spectrum deviates from the ideal phase matching condition. This deviation causes a refractive index mismatch and affects the conversion efficiency. Due to the high angular acceptance of the chosen KTP crystal, a deflection along \(\delta\theta\) and \(\delta\phi\) only causes marginal contributions to the phase mismatch. Our application for beam shaping is different from the general definition of the angular acceptance which accounts for a deflection of the total field of the fundamental: The two mixing fields have a slightly different angle as the shaped field is deflected while the other field propagates without any modifications along the phase matching angle. This results in the phase mismatch \(\Delta\mathbf{k}\):
\[\begin{split}\Delta\mathbf{k}=\mathbf{k}_{1}^{\omega}(\theta_{m},\phi_{m} )+\mathbf{k}_{2}^{\omega}(\theta_{m}+\delta\theta,\phi_{m}+\delta\phi)\\ -\mathbf{k}^{(2\omega)}(\theta_{m}+\delta\theta/2,\phi_{m}+\delta\phi/2 )\end{split} \tag{2}\]
Figure 3: Normalized conversion efficiency \(\eta\) for lateral deflection of the ordinary/extraordinary beam and projection along the two axes \(\delta\theta\) and \(\delta\phi\) for a 2 mm KTP crystal. Experimental measurements along these axes are in good agreement with the simulated curves. The refractive index ellipsoid for the biaxial KTP crystal (dimensions not to scale) shows the phase matching angles \(\theta_{m}=90\,^{\circ}\) and \(\phi_{m}=24.8\,^{\circ}\) with the axes for beam deflection along \(\delta\theta\) and \(\delta\phi\). The deflection of the ordinary beam along \(\delta\phi\) is indicated with two incoming fields where the extraordinary beam is polarized along the xy plane of the crystal and the deflected ordinary beam is polarized along the z-axis.
This scenario is sketched for two wave vectors in front of the refractive index ellipsoid in Figure 3.
We calculate the refractive index mismatch for a deflection along \(\delta\theta\) and \(\delta\phi\) by calculating the corresponding refractive indices [42] for the wave vectors. In good approximation, the resulting wave vector at the second harmonic is generated at half of the initial deflection angle, as this is the mean value between the deflected and undeflected wave vector at the fundamental. The phase mismatch is thus given by the difference of the wave vectors projected in the direction of the frequency doubled field. The relative conversion efficiency can be deduced from the projected mismatch \(\Delta k\) by solving the following equation:
\[I_{2\omega}\propto I_{\omega}^{2}L^{2}\,\text{sinc}^{2}\left(\frac{\Delta kL}{ 2}\right) \tag{3}\]
This analytical relation between the second harmonic intensity \(I_{2\omega}\) and the intensity at the fundamental \(I_{\omega}\) with respect to the crystal length \(L\) and the phase mismatch \(\Delta k\) follows when solving the nonlinear wave equation for the assumption of low depletion and the slowly varying amplitude approximation [43]. On this basis, we calculate the relative 2D conversion efficiency for a deviation within the plane of \(\delta\theta\) and \(\delta\phi\) in Figure 3. According to Snell's law and assuming the small angle approximation, the internal angle corresponds to the external angle connected via the effective refractive index which is perceived by the corresponding wave vector. The graphs in Figure 3 show the external angle, as this is the relevant value for beam shaping.
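Equation 3 can be evaluated directly once the projected mismatch is known. A minimal numerical sketch (note that numpy's `sinc` is the normalized sinc(x) = sin(πx)/(πx), hence the division by π):

```python
import numpy as np

def relative_efficiency(delta_k, L):
    # Eq. (3): relative conversion efficiency sinc^2(delta_k * L / 2), with delta_k in rad/m and L in m
    return np.sinc(delta_k * L / 2 / np.pi) ** 2

# perfect phase matching: relative_efficiency(0.0, 2e-3) -> 1.0
# the efficiency halves when |delta_k| * L / 2 is roughly 1.39 rad
```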
Besides calculating the relative conversion efficiency, we perform experiments for an angular deviation introduced by the SLM for beam shaping. To evaluate different angles, we apply a blazed grating either in horizontal or vertical direction on the SLM and image this plane into the nonlinear crystal which has a length of \(2\,\mathrm{mm}\). Similar to beam shaping, only one polarization component is shaped. The corresponding setup is sketched at the bottom of Figure 3. In the far field we measure the resulting power of the frequency-doubled field and divide it by the measured power of the field at the fundamental. This results in two curves for the relative conversion efficiency, either in the direction of \(\delta\theta\) or \(\delta\phi\), with respect to no deflection at \(0\,^{\circ}\) at the optimum phase matching angle. As either the ordinary or extraordinary polarization state can be deflected within the \(\delta\theta\) and \(\delta\phi\) plane, two different scenarios are possible and both are shown in Figure 3. Deflecting the ordinary beam causes a highly symmetric profile of the relative conversion efficiency and is thus the favorable option for our application. In addition to the experimentally measured points, we add the analytically calculated curve of the relative conversion efficiency.
Simulation and experiment are in good agreement and show that the conversion efficiency exhibits a broad plateau. Within that range, the light is homogeneously converted, independently of the exact deflection angle. Moreover, the conversion efficiency is not significantly reduced with respect to the actual conversion efficiency without beam shaping. Thus, arbitrary intensity profiles can be shaped within that angular range without reduction in homogeneity or efficiency. The acceptance angle approximately reaches \(\pm 2\,^{\circ}\) for a \(2\,\mathrm{mm}\) crystal and \(\pm 1\,^{\circ}\) for the \(9\,\mathrm{mm}\) long crystal. It makes sense to compare this value with the maximum deflection angle of the SLM which is determined by \(\sin^{-1}(\lambda/(2u))\cdot 1/M\), where \(u\) is the pixel pitch and \(M\) is the magnification between the SLM and the crystal. The two smaller images in Figure 2 approximately acquire \(19\,\mathrm{\char 37}\) of the SLM's linear field of view, while the two bigger images acquire \(32\,\mathrm{\char 37}\). This corresponds to deflection angles of approximately \(1.1\,^{\circ}\) and \(1.9\,^{\circ}\) (bearing in mind that the plane of the SLM is imaged with a factor of \(M=0.25\) into the nonlinear crystal and thus the maximum deflection angle is \(\pm 6\,^{\circ}\)). At the phase matching angle \(\theta_{m}=90\,^{\circ}\), the crystal is non-critically phase-matched along \(\delta\theta\). This means when expanding the mismatch \(\Delta k\) in a Taylor series, the first derivative \(\partial\Delta k/\partial\theta=0\) due to the crystal's symmetry and thus only higher order terms with a weaker effect on the phase mismatch contribute [44]. Besides the
control mechanism for the wave mixing process it is crucial to choose a nonlinear crystal with a high angular acceptance. We consider temperature-controlled noncritical phase matching or quasi-phase matching as another option but do not discuss this further within the scope of this paper.
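The deflection-angle figures quoted above can be reproduced from the stated parameters (λ = 1064 nm, pixel pitch u = 20 µm, demagnification M = 0.25); the short check below is only a numerical illustration of those numbers.

```python
import numpy as np

lam, u, M = 1064e-9, 20e-6, 0.25
theta_max = np.degrees(np.arcsin(lam / (2 * u))) / M   # maximum deflection angle, ~6.1 deg
print(theta_max, 0.19 * theta_max, 0.32 * theta_max)   # ~6.1, ~1.2, ~1.9 deg
```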
## 3 Discussion
Nonlinear beam shaping is a powerful tool as it enables dynamic phase-only beam shaping in new spectral ranges. To optimize this process, two parameters should be considered: While the conversion efficiency should be maximized, the angular acceptance needs to be high enough to ensure homogeneous conversion of the angular spectrum. As both parameters are mutually dependent, we discuss the proper choice of the experimental parameters to optimize the outcome. Moreover, we give an outlook on dynamic phase-only beam shaping in the ultraviolet spectral range.
### Optimizing the Conversion Efficiency with Respect to the Angular Acceptance
When approaching a high conversion regime, it is more critical to maintain proper phase matching as the phase mismatch gains an increasing impact on the efficiency of the converted outcome. At high conversion, the angular range of tolerance narrows and this decreases the plateau of homogeneous conversion for the generated angular spectrum. Equation 3 shows the impact of the experimental parameters on the conversion efficiency while keeping the laser power constant. Both increasing the intensity and the crystal length promises higher conversion efficiency. However, both parameters also affect the angular acceptance of the nonlinear crystal.
The effect of the crystal length \(L\) is directly stated in Equation 3. While a longer crystal increases the relative conversion with \(L^{2}\), it also increases the contribution of the phase mismatch as a multiplier of \(\Delta k\) in the argument of the sinc function. We use the model of the nonlinear crystal to calculate the relative conversion of the target structures for the 9 mm crystal as they are shown in Figure 2.
The beam-shaped structures directly reflect the generated angular spectrum and thus the relative conversion can be simulated when multiplying the target image with the calculated angular conversion. Figure 4 shows the simulated results with the corresponding conversion efficiency. This value is calculated as the total efficiency without beam shaping multiplied by the integrated relative conversion. Working
Figure 4: Simulated beam conversion for the 9 mm crystal. The simulated efficiency is calculated from the integrated relative conversion multiplied with the efficiency of the 9 mm crystal without beam shaping (compare Figure 5).
beyond the limits of phase matching results in weak conversion and the simulated as well as the measured values for the conversion efficiency drop significantly. The experimental parameters should be chosen to work within the plateau of homogeneous conversion as otherwise not only does the quality get worse but the efficiency also drops significantly. The comparison of the simulated and experimentally measured conversion efficiency explains the strong decrease in efficiency for the two larger structures with their measured conversion efficiency marked as gray crosses in Figure 5. This decrease can be traced back to insufficient phase matching as the structures are chosen larger than the angular acceptance. As simulation and experiment are in good agreement, proper experimental parameters can be derived on that basis even in a high conversion regime.
Similar to the crystal length \(L\), an increase in intensity also improves the conversion efficiency.
It can either be increased by increasing the power of the chosen light source or by reducing the illuminated area. As the laser power is typically technically limited, we will focus on the second case. Here the plane of the SLM is imaged with a certain demagnification into the nonlinear crystal to decrease the illuminated area. It is also possible to illuminate the SLM with a smaller beam but this reduces the number of illuminated pixels and unnecessarily increases the intensity on the SLM with respect to potential damage thresholds. It is thus beneficial to design the telescope with the required demagnification. This demagnification increases the intensity in the nonlinear crystal and scales the intensity with \(1/M^{2}\) to gain higher conversion efficiencies. Likewise, the telescope increases the angular spectrum as the deflection angles are proportional to \(M\). Similar to the crystal length \(L\), this factor also affects the phase mismatch \(\Delta k\) in the argument of the sinc function as it acts as a multiplier of the initially induced angular spectrum. Consequently, the intensity needs to be considered when modeling the phase mismatch to determine optimum parameters.
In our experimental setup we work with a telescope that decreases the image size by a factor of 0.25 and thus increases the intensity by a factor of 16. This reduction affects the deflection angles and increases them by a factor of 4 as the effective pixel size of the SLM changes. We consider that adjusting the intensity with respect to the required angular spectrum for a given crystal length is reasonable. The resulting plateau should be exactly within the required angular range - neither broader nor narrower - to ensure homogeneous conversion with maximized efficiency.
Figure 5: Measured output power vs. input power for second harmonic generation with a 2 mm and 9 mm KTP crystal without beam shaping. The crosses indicate the conversion efficiency of the structures in Figure 2 for beam shaping, with the two intentionally cropped structures in gray.
### Towards Dynamic Phase-only Beam Shaping in UV Range
Dynamic beam shaping in the ultraviolet (UV) spectral range is highly limited as many devices absorb the UV light, including liquid crystal displays. This paper demonstrates nonlinear beam shaping from the infrared to visible spectral range. Nonetheless, other conversion processes are possible, reaching from second harmonic generation at different fundamental frequencies up to other nonlinear processes like sum frequency generation. Currently, there is research on new materials with tunable birefringence in the UV spectral range for spatial light modulators with results showing light modulation at 303 nm [45]. We see the potential of our method for approaching even deeper UV spectral ranges. This section presents a few thoughts on proper nonlinear crystals and conversion processes for dynamic phase-only beam shaping in the UV range.
Ultrashort pulsed laser systems often feature nonlinear crystals for frequency conversion, for example to the second or third harmonic. High-energetic short wavelengths in the UV range of ultrashort pulsed laser systems are thus often generated with nonlinear frequency conversion from the visible or infrared range. Consequently, the initial conditions often enable a direct integration of nonlinear beam shaping into the laser setup and ideally this can be performed with conversion efficiencies close to the initial values.
To achieve high-quality results, the nonlinear crystal should have a broad angular acceptance and the conversion process needs to be non-degenerate. Sum frequency generation with an LBO crystal from 1064 nm and 532 nm to 355 nm is one option. The major benefit of this crystal is the high angular acceptance. Similarly, sum frequency generation with CLBO [46] from 1064 nm and 266 nm to 213 nm should be possible. In both options the process is inherently non-degenerate as two different wavelengths mix. While the SLM can either shape the light field in the infrared or visible spectral range, only the infrared light can be shaped in the latter case as the SLM can not shape light at 266 nm. With one of the options suggested here or other nonlinear processes, we see the potential of using nonlinear beam shaping to approach the UV spectral range. This enables dynamic phase-only beam shaping beyond the physical limits of the liquid crystal display.
## 4 Materials and Methods
Nonlinear phase-only beam shaping with a liquid crystal display is possible as the applied phase information is conserved during the frequency conversion. However, as second harmonic generation is a degenerate process, the mixing waves need to be controlled by making the interacting fields distinguishable. We use the requirements of type II phase matching to unambiguously define the outcome of the second harmonic. Under this condition, only fields of crossed polarization can mix efficiently. A half wave plate, abbreviated with \(\lambda/2\) in Figure 6 is placed in front of the SLM to generate diagonal polarization with respect to the SLM's orientation. As the liquid crystals are polarization-sensitive, only half of the field is shaped, while the other part remains unshaped. Consequently, a shaped component can only mix with the unshaped plane wave front. In Figure 6 the two polarization components are indicated as the ordinary and the extraordinary beam. The polarization configuration in the figure refers to where the ordinary beam is shaped. If the extraordinary beam is shaped, an additional half wave plate after the SLM helps to rotate the polarization states correspondingly. This might be beneficial when working off-axis to separate non-diffracted light as the area of quasi-homogeneous conversion is shifted away from the center of the optical axis. A telescope images the plane of the SLM into the nonlinear crystal. This ensures a homogeneous intensity distribution within the nonlinear crystal which is required to convert all field components equally within a range of proper phase
matching. In good approximation, the resulting wave vectors at the second harmonic reduce to half of the initially set angle of the shaped field component, as this is the sum of the two involved wave vectors.
The SLM is the model LSH0701010 from Hamamatsu (Hamamatsu, Japan) and has a pixel resolution of \(800\,\mathrm{px}\times 600\,\mathrm{px}\) with a pixel pitch of \(20\,\mathrm{\SIUnitSymbolMicro m}\). The beam diameter on the LC-SLM is \(14\,\mathrm{mm}\). To increase the intensity to obtain a higher conversion efficiency, we design our telescope to demagnify the beam by \(M=0.25\). The corresponding focal lengths are \(f_{1}=200\,\mathrm{mm}\) and \(f_{2}=50\,\mathrm{mm}\). Before the target distribution is imaged to the far field with a \(f_{3}=200\,\mathrm{mm}\) lens, the remaining IR light is blocked with the filter FL532-10 from Thorlabs (NJ, USA). The laser system Fuego from Time-Bandwidth Products (CA, USA) emits \(10\,\mathrm{ps}\) pulses at a repetition frequency of \(200\,\mathrm{kHz}\) at a wavelength of \(1064\,\mathrm{nm}\). The power measurements were done with the power meter PM10 from Coherent (CA, USA) and PM160 from Thorlabs. We record RGB images with the camera model UI-3000SE-C-HQ from IDS (Obersulm, Germany).
## 5 Conclusion
Nonlinear computer-generated holography is gaining increasing attention and there is a lot of research on structuring nonlinear crystals but this process is technically challenging and static. These deficiencies can be overcome by thin holograms generated with dynamic liquid crystal displays. However, research on nonlinear beam shaping is still in its infancy. We present a high-quality and highly efficient method for nonlinear dynamic phase-only beam shaping with a simple concept: The mixing process needs to be non-degenerate by employing a control mechanism and the angular acceptance of the nonlinear crystal needs to support the angular spectrum generated by wave front shaping. Based
Figure 6: Experimental setup for nonlinear beam shaping: A half wave plate introduces diagonal polarization on the beam impinging on the SLM. Thus, only half of the field is shaped while the crossed polarization component remains unshaped. A telescope images the plane of the SLM into the non-linear KTP crystal. Here, the two polarization components mix and shape the targeted field. After a filter separates the second harmonic from the fundamental, a lens images the target image, where the second harmonic signal is recorded. As the shaped result is sensitive to the deflected polarization component, a half wave plate after the SLM allows for changing the deflected and non-deflected polarization component.
on this, we demonstrate beam shaping at the second harmonic with conversion efficiencies close to the ones without beam shaping. This is mainly due to the fact that we work at a plateau of constant conversion efficiency for the generated angular spectrum. While working within that range, the conversion efficiency is mainly given by the conversion efficiency without beam shaping and thus this method is highly efficient. Furthermore, the quality of the frequency-converted result is of the same quality as for beam shaping at the fundamental. We see high potential in this approach as it not only enables highly-efficient and high-quality beam shaping with thin holograms but due to its simplicity, it can be easily combined with other elaborated processes. Based on nonlinear beam shaping, the spectral range of dynamic liquid crystal displays can be extended beyond the physical limits as we discuss here for the UV spectral range.
## Funding
The authors gratefully acknowledge funding from the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (DFG) in the framework of the German excellence initiative.
## Authors' contributions
LA and CR conceived the method. LA recorded the experimental data and performed simulations. LA, CR and KC analyzed the data. CA performed provisional experiments. LA, CR, KC, NB and MS contributed to the manuscript and MS acquired funding. |
2303.06088 | Towards domain-invariant Self-Supervised Learning with Batch Styles
Standardization | In Self-Supervised Learning (SSL), models are typically pretrained,
fine-tuned, and evaluated on the same domains. However, they tend to perform
poorly when evaluated on unseen domains, a challenge that Unsupervised Domain
Generalization (UDG) seeks to address. Current UDG methods rely on domain
labels, which are often challenging to collect, and domain-specific
architectures that lack scalability when confronted with numerous domains,
making the current methodology impractical and rigid. Inspired by
contrastive-based UDG methods that mitigate spurious correlations by
restricting comparisons to examples from the same domain, we hypothesize that
eliminating style variability within a batch could provide a more convenient
and flexible way to reduce spurious correlations without requiring domain
labels. To verify this hypothesis, we introduce Batch Styles Standardization
(BSS), a relatively simple yet powerful Fourier-based method to standardize the
style of images in a batch specifically designed for integration with SSL
methods to tackle UDG. Combining BSS with existing SSL methods offers serious
advantages over prior UDG methods: (1) It eliminates the need for domain labels
or domain-specific network components to enhance domain-invariance in SSL
representations, and (2) offers flexibility as BSS can be seamlessly integrated
with diverse contrastive-based but also non-contrastive-based SSL methods.
Experiments on several UDG datasets demonstrate that it significantly improves
downstream task performances on unseen domains, often outperforming or rivaling
with UDG methods. Finally, this work clarifies the underlying mechanisms
contributing to BSS's effectiveness in improving domain-invariance in SSL
representations and performances on unseen domain. | Marin Scalbert, Maria Vakalopoulou, Florent Couzinié-Devy | 2023-03-10T17:09:04Z | http://arxiv.org/abs/2303.06088v6 | # Improving Domain-Invariance in Self-Supervised Learning via Batch Styles Standardization
###### Abstract
The recent rise of Self-Supervised Learning (SSL) as one of the preferred strategies for learning with limited labeled data and abundant unlabeled data has led to the widespread use of these models. They are usually pretrained, finetuned, and evaluated on the same data distribution, i.e., within an in-distribution setting. However, they tend to perform poorly in out-of-distribution evaluation scenarios, a challenge that Unsupervised Domain Generalization (UDG) seeks to address.
This paper introduces a novel method to standardize the styles of images in a batch. Batch styles standardization, relying on Fourier-based augmentations, promotes domain invariance in SSL by preventing spurious correlations from leaking into the features. The combination of batch styles standardization with the well-known contrastive-based method SimCLR leads to a novel UDG method named CLaSSy (**C**ontrastive **L**earning with **S**tandardized **S**tyles). CLaSSy offers serious advantages over prior methods, as it does not rely on domain labels and is scalable to handle a large number of domains. Experimental results on various UDG datasets demonstrate the superior performance of CLaSSy compared to existing UDG methods. Finally, the versatility of the proposed batch styles standardization is demonstrated by extending respectively the contrastive-based and non-contrastive-based SSL methods, SWaV and MSN, while considering different backbone architectures (convolutional-based, transformers-based).
## 1 Introduction
**Motivations.** In recent years, both contrastive and non-contrastive self-supervised learning (SSL) methods have seen significant growth and success [1, 3, 5, 6, 14, 16]. However, traditional SSL methods make the assumption that training and testing data come from the same distribution, which does not hold true in practice and consequently limits their real-life applications. This so-called domain shift usually leads to poor model generalization on testing
Figure 1: Comparison between regular contrastive learning and CLaSSy. \(\bullet\) and \(\bullet\) markers represent positive and negative examples respectively when considering augmented cats as anchors. (a) In Contrastive Learning, examples are augmented independently, which may result in using style information to repel negative from positive examples. (b) CLaSSy standardizes the styles of positive and negative examples using Fourier-based augmentations. Independent augmentations on positive examples promote domain invariance when they are pulled together. Standardizing styles of positive and negative examples prevents style information from leaking into features when repelling negatives from positives.
data from unseen domains.
Unsupervised Domain Generalization (UDG), a novel Domain Generalization (DG) setting introduced by [36], aims to tackle this issue by assessing the generalizability of SSL models on data from previously unobserved domains. In this setting, models are first pretrained on unlabeled data and then evaluated through linear probing or finetuning on a DG task. This work focuses only on the all-correlated UDG setting [36] where unlabeled data and the labeled data of the DG task come from the same domains, while testing data come from an unknown target domain.
Current UDG methods, whether they are contrastive-based [15, 36] or not [16], suffer from the same drawbacks: (1) They require domain labels, which in practice may be challenging to obtain or unavailable, and/or (2) have poor scalability to many domains due to architectural constraints such as domain-specific queues or decoders. These limitations highlight the need for more flexible domain-invariant self-supervised methods.
**Contributions.** This work proposes a novel method for standardizing image styles in a batch, leveraging Fourier-based augmentations [32, 34]. This standardization technique, which transfers the style of a randomly selected image to all images in the batch, promotes domain invariance by preventing style information from leaking into features. It does not entail additional training and has a minimal impact on the computational cost of SSL methods.
The proposed batch style standardization is combined with SimCLR [6], a well-known contrastive-based method, to create a domain-invariant SSL method named CLaSSy (**C**ontrastive **L**earning with **S**tandardized **S**tyles), illustrated in Fig. 1. Specifically, batch styles standardization in CLaSSy prevents style information leakage when repelling negative examples from positive examples. In a sense, it can also be seen as a means of creating harder negative examples, which has proven to be essential to make the most of contrastive-based methods. Unlike prior approaches, CLaSSy does not require domain labels and can handle any number of domains while showing substantial performance gains on common UDG benchmark datasets.
Finally, the versatility of batch styles standardization is demonstrated by extending its application to both contrastive and non-contrastive methods and different backbones with convolutional or transformers architectures. Experiments using SWaV [3] and MSN [1] methods, coupled with the proposed batch style standardization, highlight the potential of our method. The full code of our method will be released upon acceptance.
## 2 Related works
**Domain Generalization** aims to train a model on labeled data from multiple source domains with distinct distributions to generalize well on unseen target domains. Earlier DG methods have focused on aligning source feature distributions using diverse techniques [13, 20, 24, 26, 27, 28, 31, 37]. More recently, the trend has shifted towards improving cross-domain generalization through better data augmentation strategies. These strategies can be applied at either the image level [29, 32, 34, 38, 39] or feature representation level [21, 25, 40], and can be non-parametric [32, 40], trained adversarially during the DG task [17, 21, 38, 39], or pretrained beforehand on source domains [29]. Among these methods, Fourier-based augmentations [32, 34] are probably the simplest and most promising approach to enhance generalization. We employ this augmentation for its simplicity, efficiency, speed, and particularly because it does not require domain labels.
**Self-Supervised learning** has gained a lot of attention due to its ability to efficiently pretrain models on large amounts of unlabeled data. A recent and successful line of SSL research is Contrastive Learning, which focuses on making features of augmented views of the same image (positives) closer in an embedding space while pushing apart features of other examples (negatives). These methods can work either at the instance level [6, 8, 18] or at the cluster level [2, 4], while there have also been attempts to eliminate the use of negative examples due to batch size limitations [1, 5, 9, 14]. In this work, we extend contrastive-based (SimCLR [6], SWaV [3]) and non-contrastive-based (MSN [1]) methods to ease the transfer of pretrained features for solving DG tasks, which corresponds to the UDG problem.
**Unsupervised Domain Generalization** methods based on Contrastive Learning such as DARLING [36] and BrAD [15] improve domain-invariance in their contrastive losses by ensuring that positive and negative examples belong to the same domain. Restricting comparisons to examples from the same domain prevents the development of spuriously correlated features when repelling negative examples from positive ones. To do so, DARLING exploits domain-specific adversarial negative queues while BrAD preserves domain-specific negative queues containing past representations from the momentum encoder. Additionally, BrAD learns image-to-image mappings from the different domains into a shared space and performs contrastive learning between representations of raw and projected images. In [33], an alternative approach is proposed that relies on Masked Auto-Encoders (MAE) [16] to solve a cross-domain reconstruction task. More specifically, image styles are augmented using Fourier-based augmentations, then masked, and finally, domain-specific decoders reconstruct and recover the original styles of masked tokens. Masked token reconstruction learns semantically coherent features while recovering original styles with domain-specific decoders encourages domain-invariant features.
## 3 Method
The following sections define the UDG problem and outline the basic Fourier-based augmentations and how they are leveraged for batch styles standardization. Finally, CLaSSy, our proposed novel UDG method, is introduced.
### Problem formulation
In the all-correlated UDG setting, an unlabeled dataset \(D_{u}=\{(x_{i},d_{i})\in\mathcal{X}\times\mathcal{D}_{S}\}_{1\leq i\leq n_{u}}\), a labeled dataset \(D_{l}=\{(x_{i},y_{i},d_{i})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{D}_{S }\}_{1\leq i\leq n_{l}}\) and a test dataset \(D_{t}=\{(x_{i},y_{i},d_{i})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{D}_{ T}\}_{1\leq i\leq n_{t}}\) are given. The variables \(x\), \(y\) and \(d\) denote respectively an image, its class label and its domain label while \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{D}\) are their respective spaces. In this setting, unlabeled and labeled training data are drawn from the same source domains \(\mathcal{D}_{S}\) and share the same class labels space \(\mathcal{Y}\). Testing data are drawn from an unseen target domain \(\mathcal{D}_{T}\). The goal of UDG is to pretrain a network on \(D_{u}\) via SSL such that when finetuned on \(D_{l}\) it will generalize well on the data \(D_{t}\) coming from a domain unobserved during the training. Such generalization reflects the ability of the SSL method to learn semantically coherent and domain-invariant features.
### Fourier-based data augmentation
Fourier-based data augmentation is motivated by an established property of the Fourier transform \(\mathcal{F}\): phase components \(\mathcal{P}\) of the Fourier transform tend to retain semantic information while amplitude components \(\mathcal{A}\) are more related to low-level information such as global colors and textures. Thus, by randomly altering the amplitudes of the images during training, the network can prioritize semantic information over style information.
The Fourier-based data augmentation procedure, illustrated in Fig. 2, consists of the following steps: (1) apply the Fourier transform \(\mathcal{F}\) to a source image and a target image, (2) swap low-frequency components of the source image amplitude with that of the target image, and (3) apply the inverse Fourier transform \(\mathcal{F}^{-1}\) to return to the image space. The size of the swapped region is controlled by an area ratio hyperparameter \(r\in[0,1]\) representing a ratio between the area of the swapped region and the entire amplitude area. To increase the diversity of the augmented images, rather than fixing the ratio \(r\), we sample it from a uniform distribution \(\mathcal{U}(r_{min},r_{max})\) where \(r_{min}\) and \(r_{max}\) stand for the minimum and maximum possible ratios.
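As an illustration (this is our own sketch, not code from the paper), the core amplitude swap could be implemented as follows in PyTorch; the function name, the centered-FFT convention, and the square low-frequency window are our assumptions.

```python
import torch

def fourier_amplitude_swap(source, target, r_min=0.0, r_max=1.0):
    """Swap the low-frequency amplitude region of `source` with that of `target`.
    Both inputs are (C, H, W) float tensors; the result keeps the phase of
    `source` but borrows low-frequency amplitudes from `target`."""
    _, h, w = source.shape
    # Sample the area ratio controlling the size of the swapped region.
    r = torch.empty(1).uniform_(r_min, r_max).item()
    # 2D FFT, with low frequencies shifted to the center for easy cropping.
    src_fft = torch.fft.fftshift(torch.fft.fft2(source), dim=(-2, -1))
    tgt_fft = torch.fft.fftshift(torch.fft.fft2(target), dim=(-2, -1))
    src_amp, src_pha = src_fft.abs(), src_fft.angle()
    tgt_amp = tgt_fft.abs()
    # Central rectangle whose area is approximately r * (H * W).
    bh, bw = int(h * r ** 0.5 / 2), int(w * r ** 0.5 / 2)
    ch, cw = h // 2, w // 2
    src_amp[:, ch - bh:ch + bh, cw - bw:cw + bw] = \
        tgt_amp[:, ch - bh:ch + bh, cw - bw:cw + bw]
    # Recombine amplitude and phase, undo the shift, return to image space.
    mixed = torch.fft.ifftshift(src_amp * torch.exp(1j * src_pha), dim=(-2, -1))
    return torch.fft.ifft2(mixed).real
```

The returned image keeps the semantic content (phase) of the source while inheriting the global color and texture statistics (low-frequency amplitude) of the target.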
### Batch styles standardization
We leverage Fourier-based augmentations as a means to project all images in a batch to the same style. To achieve this, the style of a single randomly selected image in the batch is transferred to all other images, creating the impression that they were drawn from the same domain. This batch styles standardization removes the variability associated with styles, thereby preventing the use of style information to solve the SSL task.
More specifically, to perform batch styles standardization, we first randomly select a single image from a batch of \(N\) images and then swap the low-frequency amplitudes of the other images with those of the selected image. Each augmented image is then subjected to independent geometrical transformations, while a consistent color augmentation is applied to maintain a common style across all images. This process is repeated \(V\) times, generating \(V\) augmented views of each image or somehow \(V\) augmented batches, each with a unique style/domain. Examples of these augmented batches from two studied datasets are shown in Fig. 3. A Pytorch implementation of batch styles standardization is provided in Appendix A.
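The paper's own PyTorch implementation is in Appendix A; the sketch below is a simplified stand-in written by us, reusing the `fourier_amplitude_swap` helper from the previous snippet and leaving the geometric and color transforms as user-supplied callables (both placeholders are our assumptions).

```python
import random
import torch

def batch_styles_standardization(images, V, r_min=0.0, r_max=1.0,
                                 geometric_aug=None, color_aug=None):
    """Produce V augmented views of a batch, each view sharing a single style.
    `images` is a (N, C, H, W) tensor; `geometric_aug` is applied per image,
    `color_aug` once per view so that all images in a view keep a common style."""
    views = []
    n = images.size(0)
    for _ in range(V):
        # Pick one reference image whose style is transferred to the whole batch.
        ref = images[random.randrange(n)]
        view = torch.stack([
            fourier_amplitude_swap(img, ref, r_min, r_max) for img in images
        ])
        if geometric_aug is not None:   # independent per-image geometry
            view = torch.stack([geometric_aug(img) for img in view])
        if color_aug is not None:       # one color augmentation shared by the view
            view = color_aug(view)
        views.append(view)
    return views  # list of V tensors, each (N, C, H, W)
```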
### Contrastive Learning with Standardized Styles
CLaSSy, illustrated in Fig. 1, builds upon previous UDG works by incorporating Fourier-based augmentations and ensuring that the compared positive and negative examples belong to the same domain. A notable distinction from prior works is that CLaSSy does not rely on any domain labels. Instead, batch styles standardization is employed to standardize the styles of positive and negative examples making them appear as if they originated from the same domain, without requiring any domain labels.
Specifically, CLaSSy aims to bring representations of images that share the same content but differ in style closer together, while simultaneously repelling representations that have different content but share the same style as the positives. Intuitively, restricting styles among positive and negative examples should prevent style information from affecting the features and, in effect, create harder negative examples.
Figure 2: Fourier-based augmentations: Given a pair of images, the Fourier Transform \(\mathcal{F}\) is applied on both images. Regions associated with low-frequency components of the amplitude \(\mathcal{A}\), determined by the areas ratio \(r\), are swapped. The inverse \(\mathcal{F}^{-1}\) is applied to the altered transforms to project them back to the image space.
To fulfill this goal, once batch styles standardization has been applied, the resulting images are rearranged into an \(N\times V\) image grid denoted \(X\), where each row corresponds to a different content \(c\) and each column to a different style \(s\), as shown in Fig. 3. Each augmented image \(x_{cs}\) is then processed through a backbone and a projection head, and normalized with the \(L^{2}\)-norm to produce its corresponding representation \(z_{cs}\in\mathbb{R}^{D}\). Finally, we minimize the same objective function as in SimCLR [6] combined with a multi-crop strategy [3] including \(V\) augmented views. Given an anchor \(z_{cs}\), this corresponds to minimizing the following per-sample loss:
\[\mathcal{L}_{cs}=\frac{-1}{V-1}\sum_{s^{\prime}\neq s}\log\left(\frac{e^{z_{cs}^{\top}z_{cs^{\prime}}/T}}{\sum_{(c^{\prime\prime},s^{\prime\prime})\neq(c,s)}e^{z_{cs}^{\top}z_{c^{\prime\prime}s^{\prime\prime}}/T}}\right) \tag{1}\]
\(T\) stands for a temperature hyperparameter.
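A minimal sketch of this objective (ours, not the reference implementation) is shown below; it assumes the \(V\) views have already been encoded and \(L^{2}\)-normalized into a tensor `z` of shape \((N, V, D)\), and it averages the per-anchor loss of Eq. (1) over the whole grid. The default temperature value is illustrative only.

```python
import torch

def classy_loss(z, T=0.1):
    """SimCLR-style loss over an N x V grid of normalized embeddings z[c, s].
    Positives share a row (same content c, different style s'); every other
    grid cell acts as a negative. Requires V >= 2."""
    N, V, D = z.shape
    flat = z.reshape(N * V, D)
    sim = flat @ flat.t() / T                        # pairwise similarities
    sim.fill_diagonal_(float('-inf'))                # exclude each anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # For each anchor (c, s), positives are the V-1 entries (c, s') with s' != s.
    content = torch.arange(N, device=z.device).repeat_interleave(V)
    pos_mask = content[:, None] == content[None, :]
    pos_mask.fill_diagonal_(False)
    loss = -log_prob[pos_mask].view(N * V, V - 1).mean(dim=1)
    return loss.mean()
```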
## 4 Results
### Datasets
Extensive experiments were performed on \(3\) datasets widely used for benchmarking DG / UDG methods, namely **PACS**, **DomainNet** and **Camelyon17 WILDS**.
**PACS**[23] contains \(4\) domains (Photo, Art Painting, Cartoon, Sketch) and \(7\) classes. **DomainNet**[27] contains \(6\) different domains (clipart, infograph, quickdraw, painting, real and sketch) and covers \(345\) classes. Following prior UDG works [15, 33, 36], we consider a subset of DomainNet including \(20\) classes out of the \(345\) available classes. **Camelyon17 WILDS**[22] includes \(2\) classes and \(5\) domains. The dataset is split into train, val, and test subsets comprising respectively \(3\), \(1\), and \(1\) distinct domains.
### Experimental setup
To evaluate the cross-domain generalization ability of the proposed UDG method, we followed the exact same evaluation protocol described in [36] for the UDG all-correlated setting. Therefore, we first pretrained the backbone network on the source domains in an unsupervised manner using the proposed method. Second, using only a fraction of the source labeled data, we finetuned it or learned a linear classifier on top of the frozen representations (linear probing). Finally, performances were evaluated on data from an unseen target domain. All our networks were trained without Imagenet [11] pretraining, except on DomainNet to allow fair comparisons with prior UDG works (see discussion in 4.3).
Following previous work [15], on **PACS** and **DomainNet**, all the models were evaluated through linear probing except when considering the entire PACS dataset where full-finetuning was performed as in [33, 36]. On **Camelyon17 WILDS**, linear probing was performed for each proportion of labeled data. Additional implementation details about the unsupervised trainings and evaluations are provided in Appendix B.
### UDG performances
**PACS.** The averaged accuracy over three independent runs, for all possible combinations (sources, target) domains and for different proportions of labeled data are reported on Tab. 1. Our method achieves the highest per-domain averaged accuracy with margins of \(+0.63\), \(+3.45\), \(+0.49\) and \(+5.34\) for proportions of labeled data of \(1\%\), \(5\%\), \(10\%\) and \(100\%\), respectively. For the 10% proportion, we use linear probing for evaluation, which is similar to BrAD but different from DiMAE. On this proportion, our results show a significant improvement compared to BrAD and still a slightly better performance compared to DiMAE. Regarding accuracy on the different target domains, CLaSSy outperforms its competitors most of the time, except for the target domain photo. This discrepancy may be attributed to the fact that both BrAD and DiMAE benefit from transfer learning on ImageNet while we do not take advantage of any form of transfer learning. On the challenging target domain sketch, due to large visual discrepancies with source domains, our method demonstrates its generalization ability with significant performance gains ranging from \(+8.42\) to \(+16.33\) for the different proportions of labeled data.
**DomainNet.** We follow prior UDG works [15, 33, 36] by selecting painting, real and sketch as source domains and
Figure 3: Augmented batches with batch styles standardization on (a) PACS and (b) Camelyon17 WILDS datasets. Given a batch of \(N\) images, the amplitudes of each image are swapped with those of a randomly selected image in the batch (indicated by \(\bullet\)). This process is repeated \(V\) times independently to obtain \(V\) augmented views of each image, or, equivalently, \(V\) augmented batches, each with a unique style.
the others as target domains. The reversed combination of source and target domains is also considered. For these two combinations, we report in Tab. 2 the averaged accuracy per domain and the overall accuracy for different proportions of labeled data (1%, 5% and 10%). The presented results are averaged over three independent runs.
For proportions of \(1\%\) and \(5\%\), CLaSSy exhibits superior performance over self-supervised and prior UDG methods on nearly all target domains. It outperforms the previous best UDG method, BrAD, on global metrics (i.e. overall accuracy and per-domain averaged accuracy) with comfortable margins. For the \(10\%\) proportion, DiMAE outperforms CLaSSy on the global metrics. However, it is worth noting that in this case, DiMAE is the only method that used full-finetuning for evaluation. When compared to methods employing linear probing for evaluation, CLaSSy still outperforms its competitors.
**Camelyon17 WILDS.** The averaged accuracy over ten independent runs, for different proportions of labeled data (\(1\%\), \(5\%\), \(10\%\), \(100\%\)), on the test split of Camelyon17 WILDS are reported on Tab. 3. For fair comparisons, the reimplemented methods DARLING and DiMAE used the same training and evaluation hyperparameters. According to Tab. 3, for any proportion of labeled data, our method outperforms the other UDG methods (DARLING, DiMAE) with large margins. It even surpasses the Semi-Supervised method FixMatch [30] and the SSL method SWaV [3] that have been trained with additional unlabeled data from the target domain. Compared to the second best-performing UDG method DiMAE, for all the proportions of labeled data, our method yields significant performance gains.
**Pretraining in DG/UDG.** The usage of pretrained networks in DG is common but we think it is misguided. Often the pretraining dataset includes one or more of the target domains, _e.g._, _photo_ for PACS or _real_ for DomainNet when using ImageNet pretraining. When evaluating on these domains, it is not possible to know if the performances are due to real generalization capacity or to simple conservation of the pretrained features. For new methods, it is hard not to follow the common practice because the pretraining unfairly boosts the results of previous works and state-of-the-art performances are often seen as a prerequisite for paper acceptance. We tried to limit the usage of pretraining in our experiments and only used it for the DomainNet dataset. The bias introduced by Imagenet pretraining can be seen in our results and especially for the domains that are close to ImageNet: on PACS (Tab. 1), without pretraining, CLaSSy is consistently the best method for all settings except for
\begin{table}
\begin{tabular}{|l|c c c c|c c c c|c|c|} \hline \multicolumn{1}{|c}{} & \multicolumn{6}{|c|}{Label Fraction: 1\%} & \multicolumn{6}{|c|}{Label Fraction: 5\%} \\ \hline \multirow{2}{*}{Methods} & \multicolumn{6}{|c|}{Target domain} & \multicolumn{6}{|c|}{Target domain} \\ \cline{2-11} & photo & art & cartoon & sketch & avg. & photo & art & cartoon & sketch & avg. \\ \hline MoCo V2 [8] & 22.97 & 15.58 & 23.65 & 25.27 & 21.87 & 37.39 & 25.57 & 28.11 & 31.16 & 30.56 \\ SimCLR V2 [7] & 30.94 & 17.43 & 30.16 & 25.20 & 25.93 & 54.67 & 35.92 & 35.31 & 36.84 & 40.68 \\ BYOL [14] & 11.20 & 14.53 & 16.21 & 10.01 & 12.99 & 26.55 & 17.79 & 21.87 & 19.65 & 21.46 \\ AdCo [18] & 26.13 & 17.11 & 22.96 & 23.37 & 22.39 & 37.65 & 28.21 & 28.52 & 30.35 & 31.18 \\ MAE [16] & 30.72 & 23.54 & 20.78 & 24.52 & 24.89 & 32.69 & 24.61 & 27.35 & 30.44 & 28.77 \\ DARLING [36] & 27.78 & 19.82 & 27.51 & 29.54 & 26.16 & 44.61 & 39.25 & 36.41 & 36.53 & 39.20 \\ DiMAE\({}^{*}\)[33] & _48.86_ & 31.73 & 25.83 & 32.50 & 34.73 & 50.00 & 41.25 & 34.40 & 38.00 & 40.91 \\ BrAD\({}^{*}\)[15] & **61.81** & _33.57_ & _43.47_ & _36.37_ & _43.80_ & **65.22** & _41.35_ & _50.88_ & _50.68_ & _52.03_ \\ \hline CLaSSy & 45.76 & **34.85** & **49.38** & **47.73** & **44.43** & _59.86_ & **43.74** & **54.97** & **63.33** & **55.48** \\ \hline \hline \multicolumn{11}{|c}{} & \multicolumn{6}{|c|}{Label Fraction: 10\%} & \multicolumn{6}{|c|}{Label Fraction: 100\%} \\ \hline \multirow{2}{*}{Methods} & \multicolumn{6}{|c|}{Target domain} & \multicolumn{6}{|c|}{Target domain} \\ \cline{2-11} & photo & art & cartoon & sketch & avg. & photo & art & cartoon & sketch & avg. \\ \hline MoCo V2 [8] & 44.19 & 25.85 & 35.53 & 24.97 & 32.64 & 59.86 & 28.58 & 48.89 & 34.79 & 43.03 \\ SimCLR V2 [7] & 54.65 & 37.65 & 46.00 & 28.25 & 41.64 & 67.45 & 43.60 & 54.48 & 34.73 & 50.06 \\ BYOL [14] & 27.01 & 25.94 & 20.98 & 19.69 & 23.40 & 41.42 & 23.73 & 30.02 & 18.78 & 28.49 \\ AdCo [18] & 46.51 & 30.31 & 31.45 & 22.96 & 32.81 & 58.59 & 29.81 & 50.19 & 30.45 & 42.26 \\ MAE [16] & 35.89 & 25.59 & 33.28 & 32.39 & 31.79 & 36.84 & 25.24 & 32.25 & 34.45 & 32.20 \\ DARLING [36] & 53.37 & 39.91 & 46.41 & 30.17 & 42.46 & 68.86 & 41.53 & 56.89 & 37.51 & 51.20 \\ DiMAE\({}^{*}\)[33] & **77.87** & **59.77** & _57.72_ & 39.25 & _58.65_ & _78.99_ & **63.23** & _59.44_ & _55.89_ & _64.39_ \\ BrAD\({}^{*}\)[15] & _72.17_ & 44.20 & 50.01 & _55.66_ & 55.51 & ✗_ & ✗_ & ✗_ & ✗_ \\ \hline CLaSSy & 63.10 & _48.91_ & **59.27** & **65.28** & **59.14** & **79.77** & _62.45_ & **64.49** & **72.22** & **69.73** \\ \hline \multicolumn{11}{|c}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} & \multicolumn{6}{|c|}{} \\ \end{tabular} \({}^{*}\) Uses Imagenet pretraining.
\end{table}
Table 1: UDG performances on PACS. Averaged accuracies are reported over three independent runs, for all possible combinations of (sources, target) domains and for different proportions of labeled data. The first and second best performing methods are respectively highlighted in **red** and _blue_.
the _photo_ domain, where methods using ImageNet pretraining report better performance. However, on DomainNet (Tab. 2), when using ImageNet pretraining similar to the other methods, CLaSSy is the most performant on the _real_ domain (best on 2 out of 3 proportions of labeled data).
### Features visualization
To assess the quality of the learned features and their ability to generalize across domains, we present t-SNE plots of the backbone representations for Camelyon17 WILDS in Fig. 4. DARLING representations tend to be domain-invariant as lots of examples from different domains are superimposed. However, this is also the case for many examples from different classes indicating potentially poor model generalization. In contrast, DiMAE representations appear to be well separated by classes but also by domains, especially for the target domains hospital_1 and hospital_2, indicating a lack of domain-invariance. Finally, better class separability and domain confusion
\begin{table}
\begin{tabular}{|l|r r r|r r r|r r|} \hline Sources & \multicolumn{3}{c|}{Paint. \(\cup\) Real. \(\cup\) Sketch.} & \multicolumn{3}{c|}{Clip. \(\cup\) Info. \(\cup\) Quick.} & \multicolumn{3}{c|}{} \\ \hline Target & clipart & infograph & quickdraw & painting & real & sketch & Overall & Avg. \\ \hline \hline \multicolumn{10}{|c|}{Label Fraction \(1\%\)} \\ \hline ERM & 6.54 & 2.96 & 5.00 & 6.68 & 6.97 & 7.25 & 5.88 & 5.90 \\ BYOL [14] & 6.21 & 3.48 & 4.27 & 5.00 & 8.47 & 4.42 & 5.61 & 5.31 \\ MoCo V2 [8] & 18.85 & 10.57 & 6.32 & 11.38 & 14.97 & 15.28 & 12.12 & 12.90 \\ AdCo [18] & 16.16 & 12.26 & 5.65 & 11.13 & 16.53 & 17.19 & 12.47 & 13.15 \\ SimCLR V2 [7] & 23.51 & 15.42 & 5.29 & _20.25_ & 17.84 & 18.85 & 15.46 & 16.86 \\ DARLING [36] & 18.53 & 10.62 & 12.65 & 14.45 & 21.68 & 21.30 & 16.56 & 16.54 \\ DiMAE [33] & 26.52 & 15.47 & 15.47 & 20.18 & _30.77_ & 20.03 & 21.85 & 21.41 \\ BrAD [15] & _47.26_ & _16.89_ & _23.74_ & 20.03 & 25.08 & _31.67_ & _25.85_ & _27.45_ \\ \hline CLaSSy & **61.42** & **18.93** & **24.85** & **28.18** & **33.84** & **42.63** & **32.46** & **34.98** \\ \hline \hline \multicolumn{10}{|c|}{Label Fraction \(5\%\)} \\ \hline ERM & 10.21 & 7.08 & 5.34 & 7.45 & 6.08 & 5.00 & 6.50 & 6.86 \\ BYOL [14] & 9.60 & 5.09 & 6.02 & 9.78 & 10.73 & 3.97 & 7.83 & 7.53 \\ MoCo V2 [8] & 28.13 & 13.79 & 9.67 & 20.80 & 24.91 & 21.44 & 18.99 & 19.79 \\ AdCo [18] & 30.77 & 18.65 & 7.75 & 19.97 & 24.31 & 24.19 & 19.42 & 20.94 \\ SimCLR V2 [7] & 34.03 & 17.17 & 10.88 & 21.35 & 24.34 & 27.46 & 20.89 & 22.54 \\ DARLING [36] & 39.32 & 19.09 & 10.50 & 21.09 & 30.51 & 28.49 & 23.31 & 24.83 \\ DiMAE [33] & 42.31 & 18.87 & 15.00 & 27.02 & _39.92_ & 26.50 & 27.85 & 28.27 \\ BrAD [15] & _64.01_ & **25.02** & _29.64_ & _29.32_ & 34.95 & _44.09_ & _35.37_ & _37.84_ \\ \hline CLaSSy & **69.35** & _19.82_ & **30.28** & **38.27** & **41.54** & **51.75** & **39.22** & **41.84** \\ \hline \hline \multicolumn{10}{|c|}{Label Fraction \(10\%\)} \\ \hline ERM & 15.10 & 9.39 & 7.11 & 9.90 & 9.19 & 5.12 & 8.94 & 9.30 \\ BYOL [14] & 14.55 & 8.71 & 5.95 & 9.50 & 10.38 & 4.45 & 8.69 & 8.92 \\ MoCo V2 [8] & 32.46 & 18.54 & 8.05 & 25.35 & 29.91 & 23.71 & 21.87 & 23.00 \\ AdCo [18] & 32.25 & 17.96 & 11.56 & 23.35 & 29.98 & 27.57 & 22.79 & 23.78 \\ SimCLR V2 [7] & 37.11 & 19.87 & 12.33 & 24.01 & 30.17 & 31.58 & 24.28 & 25.84 \\ DARLING [36] & 35.15 & 20.88 & 15.69 & 25.90 & 33.29 & 30.77 & 26.09 & 26.95 \\ DiMAE (finetuned) [33] & **70.78** & **38.06** & 27.39 & **50.73** & **64.89** & **55.41** & **49.49** & **51.21** \\ BrAD [15] & 68.27 & _26.60_ & **34.03** & 31.08 & 38.48 & 48.17 & 38.74 & 41.10 \\ \hline CLaSSy & _69.81_ & 19.62 & _30.37_ & _40.43_ & _42.86_ & _53.08_ & _40.06_ & _42.70_ \\ \hline \end{tabular}
\end{table}
Table 2: UDG performances on DomainNet subset. Averaged accuracy is reported over three independent runs, for the specified combinations of (sources, target) domains and for different proportions of labeled data. The first and second best performing methods are respectively highlighted in **red** and _blue_.
\begin{table}
\begin{tabular}{|l|c c c c|} \hline & \multicolumn{4}{c|}{Label Fraction} \\ \hline Method & 1\% & 5\% & 10\% & 100\% \\ \hline DARLING\({}^{\dagger}\)[36] & 70.44 & 72.00 & 72.43 & 72.36 \\ DiMAE\({}^{\dagger}\)[33] & _89.81_ & _89.17_ & _89.77_ & _90.40_ \\ \hline \end{tabular}
\end{table}
Table 3: UDG performances on Camelyon17 WILDS. Averaged accuracy on the test split over ten independent runs is reported for different proportions of labeled data.
emerge from our method's representations, revealing better domain-invariance and potentially better cross-domain generalization.
## 5 Extended experiments
**Components ablations.** To evaluate the impact of the different components of the method (Fourier-based augmentations, batch styles standardization) on performance, we pretrained the backbone with all possible combinations of components using the same experimental setup and hyperparameters as the experiments detailed in Section 4. The averaged results of this experiment over ten independent runs on the Camelyon17 WILDS dataset for the different proportions of labeled source data are presented in Tab. 4.
Our experiments show that the absence of Fourier-based augmentations and styles standardization, resulting in a regular SimCLR approach, leads to relatively low accuracy on the target domain for all proportions of labeled data. Conversely, applying the same color augmentations to both positives and negatives, thereby standardizing only the styles, results in improved generalization performance, with an average accuracy gain of \(3\) points across the various proportions of labeled data. This indicates that even reducing the style variation between positives and negatives helps to learn domain-invariant features.
Fourier-based augmentations alone lead to drastic performance improvements compared to the regular SimCLR. Combining Fourier-based augmentations with styles standardization leads to even greater performance improvements than the regular SimCLR. It is worth noting that adding the styles standardization to Fourier-based augmentations results in substantial performance gains (\(+1.2\), \(+2.8\), \(+2.83\), and \(+2.85\)) for the proportions of labeled data \(1\%\), \(5\%\), \(10\%\), and \(100\%\), respectively. These observations support the effectiveness of Fourier-based augmentations and the proposed styles standardization in enhancing domain-invariance in contrastive learning.
**Behavior under smaller batch sizes.** The sensitivity of contrastive learning-based methods to batch size and the need for hard negative examples are major challenges. Various methods have attempted to tackle these challenges by using a larger set of negatives [8], exploiting harder negatives [18, 19], or eliminating the need for negatives [9, 14]. To investigate the effect of batch styles standardization on reducing sensitivity to batch size and/or creating harder negatives, we trained our method with or without batch styles standardization for different batch sizes on the Camelyon17 WILDS dataset. Apart from the batch size and the learning rate, which was scaled linearly with the batch size using a LARS optimizer [35], hyperparameters remained unchanged. The accuracy for each batch size, with and without batch styles standardization, is reported in Fig. 5.
Overall, using batch styles standardization leads to improved performance for any batch size. As batch size increases, performances increase until a plateau is reached. However, when batch styles standardization is used (indicated by the orange curve), the plateau is reached at smaller batch sizes. Specifically, with batch styles standardization, performance seems to cap beyond a batch size of \(64\), while without batch styles standardization, performance continues to increase until a batch size of \(128\). This finding supports the assumption that batch styles standardization helps reduce the sensitivity of contrastive learning to batch size by generating more challenging negative examples.
## 6 Batch styles standardization for other SSL methods
In this section, to demonstrate the versatility of the proposed batch styles standardization technique and its effectiveness in reinforcing domain-invariance in SSL methods, the respective contrastive and non-contrastive based methods SWaV (**SW**apping **A**ssignments between multiple **V**iews of the same image) [3] and MSN [1] (Masked **Si**amese **N**etworks) are extended.
**Preamble.** SWaV and MSN assign respectively global views or non-masked views of images to prototypes and try to predict these assignments using local or masked views of the same images. To uniformly assign the representations of global or non-masked views to the prototypes, these methods use the online Sinkhorn-Klopp algorithm [10]. We encourage the readers to refer to original publications or Appendix C for further technical details. We hypothesize that the presence of several domains/styles during the online assignments of global or non-masked views may lead to problems. Specifically, if a batch of global or non-masked views includes images from multiple domains/styles, the assignments may correlate with these domains/styles, resulting in a matching between prototypes and styles.
**Extensions.** To overcome this problem, we propose to enforce a single style for global views or non-masked views using our proposed batch styles standardization technique. Intuitively, restraining the styles to a single style should prevent
\begin{table}
\begin{tabular}{|c|c|c c c c|} \hline & & \multicolumn{4}{c|}{Label Fraction} \\ \hline Fourier & BSS & 1\% & 5\% & 10\% & 100\% \\ \hline ✗ & ✗ & 63.54 & 63.58 & 63.50 & 65.06 \\ ✗ & ✓ & 66.81 & 68.19 & 68.45 & 68.53 \\ ✓ & ✗ & 91.07 & 91.95 & 91.99 & 92.15 \\ ✓ & ✓ & **92.27** & **94.75** & **94.82** & **95.00** \\ \hline \end{tabular}
\end{table}
Table 4: Ablation studies on the different components of the method (Fourier-based augmentations, Styles standardization). Averaged accuracy on Camelyon17 WILDS over ten independent runs and for different proportions of labeled source data are reported. (BSS : Batch Styles Standardization)
domain/style information from leaking into the resulting assignments, consequently making the assignments more semantically relevant. All local and masked views are augmented independently.
To verify the previous hypothesis and evaluate the proposed solution, SWaV and MSN were trained with or without the extension based on batch styles standardization and evaluated using the same experimental setup. We report in Tab. 5, for the Camelyon17 WILDS dataset and different proportions of labeled data, the averaged accuracy of SWaV and MSN without the proposed extension along with the absolute accuracy gains resulting from the extension. For further implementation details and additional hyperparameters of SWaV and MSN, readers may refer to Appendix C.
The initial findings reveal that the methods combined only with Fourier-based augmentations already yield relatively good results. Incorporating batch styles standardization into SWaV and MSN results in a significant performance boost. It is important to note that batch styles standardization improves contrastive/non-contrastive-based methods under different backbone architectures (convolutional-based and transformer-based). All these observations confirm that Fourier-based augmentation and batch styles standardization can be very effective in reinforcing domain invariance for self-supervised methods.
## 7 Conclusion
This paper introduces a new approach for the UDG problem where image styles in a batch are standardized using Fourier-based augmentations. Building on this technique, we propose a novel contrastive-based UDG method named CLaSSy. Unlike other UDG methods, CLaSSy does not require domain labels and can handle any number of domains while achieving superior results on several benchmark datasets. Furthermore, we demonstrate that our batch styles standardization technique can be easily integrated into other SSL methods, including both contrastive and non-contrastive approaches, and can accommodate various backbone architectures, including convolutional-based and transformers-based architectures. Overall, our work provides new avenues to enhance domain invariance in SSL and improve the performance of existing methods.
Figure 4: t-SNE plots of the backbone representations for different UDG methods on Camelyon17 WILDS. On the target domains, our method shows better domain confusion while keeping better class separability. Display on pdf with larger zoom for better visualization.
Figure 5: UDG performances of CLaSSy with or without batch styles standardization (BSS) with respect to different batch sizes on the Camelyon17 WILDS dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c c c c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{Components} & \multicolumn{4}{c|}{Label Fraction} \\ \cline{3-8} & & Fourier & BSS & 1\% & 5\% & 10\% & 100\% \\ \hline CLaSSy & ResNet50 & ✓ & ✗ & 91.07 & 91.95 & 91.99 & 92.15 \\ & & ✓ & ✓ & **+1.20** & **+2.80** & **+2.83** & **+2.85** \\ \hline SWaV [3] & ResNet50 & ✓ & ✗ & 91.03 & 91.94 & 91.96 & 92.26 \\ & & ✓ & ✓ & **+2.39** & **+2.05** & **+2.11** & **+1.82** \\ \hline MSN [1] & ViT-S/8 [12] & ✓ & ✗ & 87.07 & 87.33 & 87.55 & 88.22 \\ & & ✓ & ✓ & **+3.86** & **+4.49** & **+4.28** & **+3.76** \\ \hline \end{tabular}
\end{table}
Table 5: Absolute accuracy gains due to batch styles standardization (BSS) on Camelyon17 WILDS for CLaSSy, SWaV and MSN.
2305.02160 | Explaining Language Models' Predictions with High-Impact Concepts | The emergence of large-scale pretrained language models has posed
unprecedented challenges in deriving explanations of why the model has made
some predictions. Stemming from the compositional nature of languages, spurious
correlations have further undermined the trustworthiness of NLP systems,
leading to unreliable model explanations that are merely correlated with the
output predictions. To encourage fairness and transparency, there exists an
urgent demand for reliable explanations that allow users to consistently
understand the model's behavior. In this work, we propose a complete framework
for extending concept-based interpretability methods to NLP. Specifically, we
propose a post-hoc interpretability method for extracting predictive high-level
features (concepts) from the pretrained model's hidden layer activations. We
optimize for features whose existence causes the output predictions to change
substantially, i.e., generates a high impact. Moreover, we devise several
evaluation metrics that can be universally applied. Extensive experiments on
real and synthetic tasks demonstrate that our method achieves superior results
on predictive impact, usability, and faithfulness compared to the baselines. | Ruochen Zhao, Shafiq Joty, Yongjie Wang, Tan Wang | 2023-05-03T14:48:27Z | http://arxiv.org/abs/2305.02160v1 | # Explaining Language Models' Predictions with High-Impact Concepts
###### Abstract
The emergence of large-scale pretrained language models has posed unprecedented challenges in deriving explanations of why the model has made some predictions. Stemming from the compositional nature of languages, spurious correlations have further undermined the trustworthiness of NLP systems, leading to unreliable model explanations that are merely correlated with the output predictions. To encourage fairness and transparency, there exists an urgent demand for reliable explanations that allow users to consistently understand the model's behavior. In this work, we propose a complete framework for extending concept-based interpretability methods to NLP. Specifically, we propose a post-hoc interpretability method for extracting predictive high-level features (concepts) from the pretrained model's hidden layer activations. We optimize for features whose existence causes the output predictions to change substantially, _i.e_. generates a high impact. Moreover, we devise several evaluation metrics that can be universally applied. Extensive experiments on real and synthetic tasks demonstrate that our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines. Our codebase is available at [https://github.com/RuochenZhao/HIConcept](https://github.com/RuochenZhao/HIConcept).
## 1 Introduction
Over the past few years, progress achieved by large language models (LLMs) has led them to be widely applied in sensitive applications such as personalized recommendation bots and recruitment. However, many users are still reluctant to adopt large-scale NLP models due to their opaque decision processes Mathews (2019). To increase transparency and user trust, it is crucial to utilize Explainable AI (XAI) to derive effective model interpretations. Following interpretability definitions proposed by Miller (2019) and Kim et al. (2016), we hope to allow humans to understand the cause of a model prediction, thus increasing the degree to which a human can consistently predict the model's results.
Current XAI methods fall into two categories based on model scope: (_i_) local or instance-based, and (_ii_) global or model-based Adadi and Berrada (2018). Illustrated in Fig. 1, to explain a sentiment classification model, local explanations focus on answering "why the model predicts negative for this particular instance". Local methods such as counterfactuals have been widely used to find the minimal change to an instance that results in an alternative prediction Wachter et al. (2017); Wu et al. (2021). Global explanations, on the other hand, attempt to explain the decision process at the model level, such as using feature clusters like'movie plot' and 'acting'. Among them, concept-based models, such as Koh et al. (2020) and Kim et al. (2018), extract high-level concepts to characterize the global behavior of deep neural models. When understanding texts, humans often employ _concept-based reasoning_, such as grouping words into topics Tenenbaum (1998). Therefore, concept-based interpretability provides a natural way of deriving explanations for NLP models.
However, one shortcoming of the existing concept-based methods is the lack of consideration
Figure 1: Illustration of local and global explanation. On the right, impact of the concept “Directing” is the change in prediction after its removal.
of the explanations' "impact" on output predictions. We define the "impact" of a feature to be the change in output predictions without it. For example, the impact of the _directing_ concept in Fig. 1 is the change in prediction after removing it. Current concept-based methods emphasize learning features that can recover a model's predictions accurately. Thus, these features highly correlate with the predictions. However, as correlation does not always imply causation, we find that such explanations often have low impact, _i.e_. excluding them will not cause the predictions to be different. This undermines the validity of such explanations as users cannot utilize them to consistently predict the model's behavior when a feature changes. This is especially concerning for NLP. Stemming from the compositional nature of languages, LLMs contain a large number of _spurious correlations_ - features that are useful for training but not causal, which have become a serious threat (Feder et al., 2021; McCoy et al., 2019; Eisenstein, 2022). Therefore, simply applying correlation-based concept mining methods is problematic, as it may often generate correlational explanations.
In this work, we propose _HI-Concept_, a complete framework for explaining LLMs based on high-level concepts that directly impact the model's predictions by loss construction. As a post-hoc approach, our method trains a lightweight concept model that discovers latent features from hidden activations as global concepts, which can also be mapped back to generate local explanations. Specifically, to optimize for a high-impact set of concepts, we design a _causal loss_ to be optimized during training the concept model. To evaluate the faithfulness, usability, and impact of the discovered concepts, we devise several evaluation metrics and conduct human studies. In our experiments with 2 classification datasets and 3 pretrained models, we observe that HI-Concept consistently finds high-impact concepts that are more faithful and usable. Our contributions can be summarized as follows:
* We first propose HI-Concept, a method to derive both global concepts and their corresponding local explanations that result in high output changes. Both forms of generated explanations can complement each other while conforming to the 'mindset' of the model.
* We propose evaluation metrics that stem from theoretical definitions of treatment effects in the literature (Pearl, 2009) and design a human study to demonstrate the usability of the derived explanations.
* We show that HI-Concept is impactful, usable, and faithful through reliable and extensive experiments.
## 2 Related Work
### Explainability methods
Concept-based (General-domain): Concept-based methods (Kim et al., 2018) have been a popular group of interpretability methods as they derive user-friendly, high-level concepts as explanations. Among them, the most recent method is ConceptShap (Yeh et al., 2020), which discovers concepts in the intermediate layer with a bottleneck-shaped extraction model and proposes an adapted Shapley value metric to evaluate completeness scores. However, as we will show in our experiments (§5), because the existing concept-based methods do not differentiate between correlational and causal information, their performance on NLP tasks is problematic: the discovered concepts often have little impact on final model predictions. Especially on complex transformer models with stronger confounding effects brought by pretraining, their performance may decrease further.
NLP: Many NLP explainability methods only discover correlational features, such as induction-based methods (Ling et al., 2017), explainability-aware architectures (Rajani et al., 2019), and feature importance scores (Croce et al., 2019).
### Causal explainability methods
General-domain: Harradon et al. (2018) attempt to intervene in an unsupervised way on the hidden space by constructing several evenly spaced Variational Autoencoders (VAEs) throughout a CNN, but they only train with a reconstruction loss instead of explicitly optimizing for impact. Probing methods (Conneau et al., 2018; Belinkov et al., 2020) train an external model - a _probe_ - to predict some properties of interest from the latent representations. To further investigate causal effects of the features learned from probing, Elazar et al. (2021) assess the influence of a causal intervention by removing a feature. However, subsequent work (Barrett et al., 2019) shows that such methods generalize poorly to unseen samples. Moreover, as Belinkov (2022) points out, the disconnect between the probing model and the original model may result in the properties not being utilized in the
original model's prediction task. Causal Mediation Analysis (CMA) (Pearl, 2022) measures the change in an output following a counterfactual intervention in an intermediate variable, or mediator.
NLP: According to Feder et al. (2021), causality shows a promising path forward for NLP research, which can offer insights into the model's inner workings. Most current methods attempt to causally explain an NLP model by generating _counterfactual_ inputs (Alvarez-Melis and Jaakkola, 2017; Veitch et al., 2021; Wu et al., 2021). Vig et al. (2020) apply CMA to examine gender bias by changing pronouns in the input. We argue that both probing and CMA could be limited to a few aspects as they rely on human-constructed features (_e.g_., linguistic, gender features), requiring expertise on the datasets and tasks. Thus, it might be beneficial to develop unsupervised explanation features.
## 3 Methodology
### Task formulation
We follow the definition of interpretability by Kim et al. (2016): "the degree to which a human can consistently predict the model's result." Given a pretrained model \(f\), we aim to explain why it makes a certain prediction \(f(\mathbf{x})\) for an input \(\mathbf{x}\). To ensure validity, the explanation should significantly contribute to the overall prediction process, thus enabling humans to "consistently predict" the results. As \(f\) is fixed, its prediction process is deemed deterministic and reproducible, allowing us to conduct counterfactual experiments.
We generally follow the setup in traditional concept extraction models (Yeh et al., 2020), where explanations are high-level features (concepts) discovered on the hidden activation space with a concept extraction model. The pretrained model \(f\) can be viewed as a composite of two functions, divided at an intermediate layer \(f=\psi\circ\phi\). \(\phi(\cdot)\) encodes the input text \(\mathbf{x}\) to a hidden representation \(\phi(\mathbf{x})\), and \(\psi(\cdot)\) maps \(\phi(\mathbf{x})\) to classification probabilities \(\psi(\phi(\mathbf{x}))\). Without loss of generality, we assume that, for an input \(\mathbf{x}\), which consists of \(T\) tokens \([x_{1},\dots,x_{T}]\), \(\phi(\mathbf{x})\) can be represented as a concatenation of \([\phi(x_{1}),\dots,\phi(x_{T})]\), where each \(\phi(x_{t})\in\mathbb{R}^{d}\) denotes a representation of an input token \(x_{t}\). Depending on the model architecture, \(\phi(x_{t})\) can be encoded from a local receptive field as in convolutional nets or a global one as in Transformers (Vaswani et al., 2017). As the hidden space \(\phi(\mathbf{x})\) reflects the input distribution \(\mathbf{x}\), \(n\) concepts \(\mathcal{C}=\{\mathbf{c}_{1},\dots,\mathbf{c}_{n}\}\) could be extracted on \(\phi(\mathbf{x})\) that correspond to different input features.
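To make the split \(f=\psi\circ\phi\) concrete, one possible way to expose \(\phi\) and \(\psi\) for a BERT-based classifier with the `transformers` library is sketched below; this is our own illustration, the checkpoint name is a placeholder for the finetuned model under study, and splitting at the pooled [CLS] representation is one choice among several.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

# Placeholder checkpoint: in practice this would be the finetuned classifier.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model.eval()

def phi(texts):
    """Encode raw texts to the pooled [CLS] hidden representation phi(x)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model.bert(**batch)
    return out.pooler_output            # (batch, d)

def psi(hidden):
    """Map hidden representations to class probabilities with the classifier head."""
    return model.classifier(hidden).softmax(dim=-1)
```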
### Correlated VS. predictive explanations
The general approach to concept-based explanation has been to use a concept mining model to discover high-level features from \(\phi(\mathbf{x})\). However, it only finds _correlational_ features, as it does not force explanations to be used in final predictions. The failure cases can be explained by drawing inspiration from causality analysis (Fig. 2). A real-life analogy is that, while the hot weather (\(X\)) creates high demand for ice cream (\(E\)), it also produces intense UV light exposure (\(Z\)), causing more sun-burns (\(Y\)). However, it is obvious that high ice cream sales (\(E\)) do not cause sunburns (\(Y\)).
In pretrained LLMs, the hidden activation space consists of both \(E\) and \(Z\). Although only \(Z\) truly affects prediction \(Y\), \(E\) and \(Z\) may be correlated due to the confounding effects brought by \(X\). However, a traditional concept mining model does not differentiate between them and considers both as valid, which is problematic. In our experiments (SS5), we also observe that the discovered concepts based on current methods often have little to no impact on output predictions, _i.e_. generate tiny changes in predictions when a concept is turned off, especially when interpreting LLMs like BERT (Devlin et al., 2018) and T5 (Raffel et al., 2020). To tackle this challenge, we enforce explanations to be predictive by considering their "impact".1
Footnote 1: We assume that, as the concept vectors coexist in the hidden embedding space of \(E\cup Z\), there is no causal relationship among the concepts \(\{\mathbf{c}_{1},\dots,\mathbf{c}_{n}\}\) themselves.
### Defining impact
To formally define the _impact_ of a feature, we draw inspiration from causal analysis, namely Individual Treatment Effect (ITE) and Average Treatment Effect (ATE), which measure the effect of interventions in randomized experiments. Given a binary
Figure 2: Causal graph illustration.
treatment variable \(T\) that indicates whether an intervention is performed, ATE and ITE are defined as the change in expected outcome with the do-operation:
\[\begin{split}\text{ITE}(x):=&\ \mathbb{E}[y|\mathbf{X}=x, \text{do}(T=1)]\\ &-\mathbb{E}[y|\mathbf{X}=x,\text{do}(T=0)];\\ \text{ATE}:=&\ \mathbb{E}[\text{ITE}(x)]\end{split} \tag{1}\]
In our case, a concept \(\mathbf{c}_{i}\) is discovered as a direction in the latent space, corresponding to a feature in the input distribution. If \(\mathbf{c}_{i}\) is used by the model for prediction, the removal of \(\mathbf{c}_{i}\) should cause a change in prediction. Thus, we define _impact_ I of a concept \(\mathbf{c}_{i}\) on an instance \((\mathbf{x},y)\) as:
\[\begin{split}\text{I}(\mathbf{c}_{i},\mathbf{x})=\mathbb{E}[y| \mathbf{X}=\mathbf{x},\mathbf{c}_{i}=\mathbf{0}]-\mathbb{E}[y|\mathbf{X}= \mathbf{x},\mathbf{c}_{i}=\mathbf{c}_{i}]\end{split} \tag{2}\]
Then, _individual impact_ (\(\text{I}(\mathbf{c}_{i})\)) of a concept \(\mathbf{c}_{i}\) and _average impact_ (\(\text{I}(\mathcal{C})\)) for a concept set \(\mathcal{C}\) could be written as:
\[\begin{split}\text{I}(\mathbf{c}_{i}):=\mathbb{E}[\text{I}( \mathbf{c}_{i},\mathbf{x})];\quad\text{I}(\mathcal{C})=\underset{\mathbf{c}_{i }\in\mathcal{C}}{\mathbb{E}}[\text{I}(\mathbf{c}_{i})]\end{split} \tag{3}\]
### Concept vector extraction
We approximate the distribution over the concepts by encoding the hidden activations \(\phi(x_{t})\in\mathbb{R}^{d}\) into concept probabilities \(p_{\mathcal{C}}(x_{t})=[p_{c}^{1}(x_{t}),\dots,p_{c}^{n}(x_{t})]\) with a bottleneck-shaped network while optimizing for _impact_. For a visualization of the overall model architecture, we refer readers to Appendix A.
Utilizing the original concept model: After initializing the concepts \(\mathcal{C}=\{\mathbf{c}_{1},\dots,\mathbf{c}_{n}\}\) uniformly, the probability for concept \(\mathbf{c}_{i}\) is calculated as \(p_{c}^{i}(x_{t})=\text{TH}((\phi(x_{t})^{\top}\mathbf{c}_{i}),\beta)\), where TH is a threshold function that forces all inputs smaller than \(\beta\) to be \(0\). To get the concept distribution for the entire sequence \(\mathbf{x}\), we concatenate the token-level distributions: \(p_{\mathcal{C}}(\mathbf{x})=[p_{\mathcal{C}}(x_{1}),\dots,p_{\mathcal{C}}(x_{T})]\in\mathbb{R}^{T\times n}\). For transformer-based models such as BERT and T5, \(\phi(\mathbf{x})\) consists of only the sequence-level representation (e.g., [cls]) that is used for prediction. After encoding \(\phi(\mathbf{x})\) into \(p_{\mathcal{C}}(\mathbf{x})\), the bottleneck-shaped network reconstructs \(\phi(\mathbf{x})\) with a 2-layer perceptron \(g_{\theta}\) such that \(g_{\theta}(p_{\mathcal{C}}(\mathbf{x}))\approx\phi(\mathbf{x})\).
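A minimal sketch of this extraction network (our own illustration; the class name, layer widths, and initialization range are assumptions) is given below for the sequence-level case where \(\phi(\mathbf{x})\) is a single \(d\)-dimensional vector.

```python
import torch
import torch.nn as nn

class ConceptModel(nn.Module):
    """Maps hidden activations phi(x) to n concept scores and reconstructs phi(x)."""
    def __init__(self, d, n_concepts, beta=0.2):
        super().__init__()
        # Concept vectors c_1..c_n live in the hidden space R^d (uniform init).
        self.concepts = nn.Parameter(torch.empty(n_concepts, d).uniform_(-0.5, 0.5))
        self.beta = beta
        # Two-layer perceptron g_theta reconstructing phi(x) from concept scores.
        self.decoder = nn.Sequential(
            nn.Linear(n_concepts, d), nn.ReLU(), nn.Linear(d, d)
        )

    def concept_scores(self, phi_x):
        # p_c^i(x) = TH(<phi(x), c_i>, beta): scores below beta are zeroed out.
        sim = phi_x @ self.concepts.t()          # (batch, n_concepts)
        return sim * (sim > self.beta).float()

    def forward(self, phi_x):
        p = self.concept_scores(phi_x)
        return self.decoder(p), p                # reconstruction and scores
```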
Then, two loss terms are proposed to train the concept model in an end-to-end way:
\(\bullet\)_Reconstruction loss:_ To faithfully recover the original DNN model's predictions, we optimize a surrogate loss with cross-entropy (CE) defined as:
\[\begin{split}\mathcal{L}_{\text{rec}}(\theta,\mathcal{C})& =\text{CE}\Big{(}\psi\big{(}\phi(\mathbf{x})\big{)},\psi\big{(}g _{\theta}(p_{\mathcal{C}}(\mathbf{x}))\big{)}\Big{)}\\ &=-\sum_{b\in\mathcal{B}}\psi\big{(}\phi(\mathbf{x})\big{)}_{b} \log\big{(}\psi(g_{\theta}(p_{\mathcal{C}}(\mathbf{x})))_{b}\big{)}\end{split} \tag{4}\]
where \(\mathcal{B}\) is the set of class labels and \(\psi(.)_{b}\) denotes the prediction score corresponding to label \(b\).
\(\bullet\)_Regularization loss:_ To ensure user-friendliness, we encourage each concept vector to correspond to actual examples and concepts to be distinct from each other. Formally,
\[\begin{split}\mathcal{L}_{\text{reg}}(\mathcal{C})=&-\lambda_{1}\frac{\sum_{i=1}^{n}\sum_{x_{t}\in\mathcal{R}_{i}}\mathbf{c}_{i}^{\top}\phi(x_{t})}{nN}\\ &+\lambda_{2}\frac{\sum_{i_{1}\neq i_{2}}\mathbf{c}_{i_{1}}^{\top}\mathbf{c}_{i_{2}}}{n(n-1)}\end{split} \tag{5}\]
Optimizing for impact: We introduce two losses to the original framework: (_i_) Auto-Encoding loss, which guarantees that the discovered concepts can serve as latent features representing the original distribution of \(\phi(\mathbf{x})\); (_ii_) Causality Loss, which mimics an estimation of "impact" proposed in Eq. (3). By directly optimizing these two new objectives, HI-Concept ensures that the discovered concepts generate a high impact on the prediction process.
\(\bullet\)_Auto-encoding loss:_ we define the following mean-squared error (MSE) loss for auto-encoding:
\[\begin{split}\mathcal{L}_{\text{enc}}(\theta,\mathcal{C})&=\text{MSE}\Big{(}\phi(\mathbf{x}),g_{\theta}(p_{\mathcal{C}}(\mathbf{x}))\Big{)}\\ &=\frac{1}{d}||\phi(\mathbf{x})-g_{\theta}(p_{\mathcal{C}}(\mathbf{x}))||_{2}^{2}\end{split} \tag{6}\]
\(\bullet\)_Causality loss:_ This loss is designed to disentangle concept directions that have a greater impact. Following Eq. (3), our intuition is that a merely correlational concept should have an impact close to \(0\). Therefore, we optimize the following loss.
\[\begin{split}\mathcal{L}_{\text{cau}}(\theta,\mathcal{C})&=-\sum_{\mathbf{c}_{i}\in\mathcal{S}}\sum_{\mathbf{x}_{j}\in\mathcal{D}}\Big{|}\psi\Big{(}g_{\theta}(p_{\mathcal{C}}(\mathbf{x}_{j})|\mathbf{c}_{i}=\mathbf{0})\Big{)}\\ &\quad-\psi\Big{(}g_{\theta}(p_{\mathcal{C}}(\mathbf{x}_{j})|\mathbf{c}_{i}=\mathbf{c}_{i})\Big{)}\Big{|}\approx-|\text{I}(\mathcal{C})|\end{split} \tag{7}\]
Here, \(\mathcal{S}\subseteq\mathcal{C}\) denotes a set of concepts to remove.2 As we perturb on all inputs \(\mathbf{x}_{j}\in\mathcal{D}\), the training dataset \(\mathcal{D}\) serves both as the treatment group and the nontreatment group, ensuring that no divergence is present. As the term \(|\psi(\cdot|\mathbf{c}_{i}=\mathbf{0})-\psi(\cdot|\mathbf{c}_{i}=\mathbf{c}_{ i})|\) in Eq. 7 approximates the impact \(I(\mathbf{c}_{i},\mathbf{x}_{j})\) in Eq. 2, minimizing
the designed causality loss is a close approximation to maximizing the average impact in Eq. 3. Intuitively, this loss encourages the concepts to incorporate directions that result in more significant changes in the output predictions.
Total loss for HI-Concept: Finally, the overall loss function that we minimize becomes:
\[\begin{split}\mathcal{L}(\theta,\mathcal{C})=&\mathcal{L}_{\text{rec}}(\theta,\mathcal{C})+\mathcal{L}_{\text{reg}}(\mathcal{C})\\ &+\lambda_{e}\mathcal{L}_{\text{enc}}(\theta,\mathcal{C})+\lambda_{c}\mathcal{L}_{\text{cau}}(\theta,\mathcal{C})\end{split} \tag{8}\]
where \(\lambda_{e}\), \(\lambda_{c}\) are the weights for the auto-encoding loss and the causal loss respectively. In practice, the hyperparameters require minimal tuning. Specifically, we recommend fixing \(\lambda_{1}=0.1\) and \(\lambda_{2}=0.5\) for the regularizer loss in Eq. (5), and \(\lambda_{e}=1\) for the reconstruction loss. The only hyperparameter to tune is \(\lambda_{c}\), whose optimal level can be found within a few steps. Further details on implementation and the training process can be found in Appendix B.
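To illustrate how the causality term of Eq. (7) can be computed on top of a concept extractor such as the `ConceptModel` sketch above, consider the following (our own sketch with hypothetical helper names; `psi` denotes the frozen classifier head of the original model). Zeroing each concept score in turn plays the role of the do-operation, and the resulting term would be weighted by \(\lambda_{c}\) and added to the other losses of Eq. (8).

```python
import torch

def causality_loss(concept_model, psi, phi_x):
    """Encourages concepts whose removal changes the frozen model's predictions.
    phi_x: (batch, d) hidden activations; psi: frozen head mapping R^d to probs."""
    p = concept_model.concept_scores(phi_x)            # (batch, n) concept scores
    base = psi(concept_model.decoder(p))               # predictions with all concepts
    loss = 0.0
    for i in range(p.size(1)):
        p_off = p.clone()
        p_off[:, i] = 0.0                               # do(c_i = 0)
        pert = psi(concept_model.decoder(p_off))
        loss = loss - (pert - base).abs().sum()         # negative impact -> minimize
    return loss / p.size(1)
```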
Mapping concepts back to word tokens: To derive corresponding local explanations, we employ existing techniques to map activation-space features back to the discrete input tokens when the receptive field of a concept is larger than token-level. For explaining BERT, we employ the transformer visualization method proposed in Chefer et al. (2021) to map back from the [CLS] activation concepts to input tokens. Specifically, it visualizes classification with a combination of layer-wise relevance propagation (LRP), gradient backpropagation, and layer aggregation with rollout. As a result, for each sample \(\mathbf{x}\) and concept \(\mathbf{c}_{i}\), we go from having only one concept similarity score \(p_{c}^{i}(\mathbf{x})\) to having a list of normalized token importance scores \(s_{1}(\mathbf{c}_{i}),\dots,s_{T}(\mathbf{c}_{i})\). For the intermediate layers of BERT, we simply use the corresponding token representation (_i.e.,_\(\phi(x_{t})\) and its corresponding \(p_{c}^{i}(x_{t})\)). For CNNs, we employ the GradCam approach (Selvaraju et al., 2017), which rolls out the gradients to produce scores for each token.
### Evaluating impact
To evaluate _impact_ for a feature, we devise three metrics based on its definition in Eq. (3), which can be globally applied to other feature-based interpretability methods.
\(\bullet\)_Individual and average impacts:_ We approximate Eq. (3) on the test set \(\mathcal{D}_{\text{test}}\):
\[\begin{split}\text{I}(\mathbf{c}_{i})&=\sum_{ \mathbf{x}_{j}\in\mathcal{D}_{\text{test}}}|\psi\big{(}g_{\theta}(p_{c}( \mathbf{x}_{j}))\big{)}-\psi\big{(}g_{\theta}(p_{c\setminus\{i\}}(\mathbf{x}_ {j}))\big{)}|;\\ \text{I}(\mathcal{C})&=\frac{1}{|\mathcal{C}|}\sum_{ \mathbf{c}_{i}\in\mathcal{C}}\text{I}(\mathbf{c}_{i})\end{split}\]
\(\bullet\)_Recovering accuracy change:_ Intuitively, if a concept \(\mathbf{c}_{i}\) is a crucial factor used by the model to make predictions, omitting it will disrupt the ability to faithfully recover predictions (a code sketch of these metrics follows this list). Denoting \(\text{Acc}(\mathcal{C})=\text{Accuracy}\Big{(}\psi\big{(}g_{\theta}(p_{\mathcal{C}}(\mathbf{x}))\big{)},\psi\big{(}\phi(\mathbf{x})\big{)}\Big{)}\):
\[\begin{split}\Delta\text{Acc}_{i}&=|\text{Acc}( \mathcal{C})-\text{Acc}(\mathcal{C}\setminus\{\mathbf{c}_{i}\})|;\\ \Delta\text{Acc}&=\frac{1}{|\mathcal{C}|}\sum_{ \mathbf{c}_{i}\in\mathcal{C}}\Delta\text{Acc}_{i}\end{split}\]
\(\bullet\)_Impact score:_ To measure the impact of an individual text token \(x_{t}\), we take the top-3 most similar concepts \(\mathcal{C}_{\text{top}}\) to the input \(\mathbf{x}\) using the normalized similarity score \(p_{\mathcal{C}}(\mathbf{x})\). For each concept \(\mathbf{c}_{i}\in\mathcal{C}_{\text{top}}\), the transformer visualization method (§3.4) produces normalized token importance scores \(\{s_{1}(\mathbf{c}_{i}),\dots,s_{T}(\mathbf{c}_{i})\}\). The **impact** score for a token \(x_{t}\) is defined as:
\[\text{I}(x_{t})=\sum_{\mathbf{c}_{i}\in\mathcal{C}_{\text{top}}}p_{c}^{i}( \mathbf{x})s_{t}(\mathbf{c}_{i})\]
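The three metrics above can be sketched directly from their definitions. In the snippet below, removing a concept is implemented as zeroing its similarity score (one natural reading of \(p_{\mathcal{C}\setminus\{i\}}\)); `decoder`, `predictor`, and all shapes are hypothetical, and the "labels" used for \(\Delta\)Acc are the original model's predictions, since the metric measures recovering accuracy.

```python
import numpy as np

def impacts(p_concepts, decoder, predictor):
    """I(c_i) for every concept, plus the average I(C)."""
    base = predictor(decoder(p_concepts))
    per_concept = []
    for i in range(p_concepts.shape[1]):
        p_drop = p_concepts.copy()
        p_drop[:, i] = 0.0
        per_concept.append(np.abs(predictor(decoder(p_drop)) - base).sum())
    return np.array(per_concept), float(np.mean(per_concept))

def delta_acc(orig_preds, preds_full, preds_drop):
    """|Acc(C) - Acc(C \\ {c_i})|, with accuracy measured against the
    original model's predicted classes."""
    acc = lambda p: np.mean(np.asarray(p) == np.asarray(orig_preds))
    return abs(acc(preds_full) - acc(preds_drop))

def token_impact(p_top, token_scores):
    """I(x_t): similarity-weighted sum of the token's importance scores under
    the top concepts (p_top[i] = p_c^i(x), token_scores[i] = s_t(c_i))."""
    return float(np.dot(p_top, token_scores))
```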
## 4 Experiment Settings
### Datasets and classification models
We test the effectiveness of our method with two standard text classification datasets: IMDB (Maas et al., 2011) and AG-news (Zhang et al., 2015). The IMDB dataset consists of movie reviews labeled with positive or negative sentiments. The AG-news dataset consists of news articles categorized with 4 topics. Table 6 in Appendix D gives a dataset summary. We explain three classification models: (_i_) a 6-layer transformer encoder trained from scratch, (_ii_) a pre-trained BERT with finetuning, (_iii_) a pre-trained T5 (Raffel et al., 2020) with finetuning.
### Evaluation measures
We evaluate the explanation methods based on three important aspects as described below.
\(\bullet\)**Causality:** Causality is an important consideration to evaluate explainability methods, especially where spurious correlations are strong. Doshi-Velez and Kim (2017) state: "Causality implies that the predicted change in output due to a perturbation will occur in the real system". We evaluate
\(\text{I}(\mathcal{C})\) and \(\Delta\text{Acc}\) proposed in §3.5, since a higher \(\text{I}(\mathcal{C})\) and \(\Delta\text{Acc}\) represent a larger change in model predictions and thus a more impactful set of concepts.
\(\bullet\)**Usability:** Proposed in (Doshi-Velez and Kim, 2017), an important desideratum of explainability is to make sure that it provides usable information that assists users to accomplish a task. With the concepts generating a high impact on predictions, we expect that it can enable end-users to better understand the model's reasoning process, thus being useful for debugging and fairness. We include visualizations and human studies to test it qualitatively.
\(\bullet\)**Faithfulness:** Faithfulness evaluates whether our surrogate model can accurately mimic the original model's prediction process. In other words, we want to ensure that the captured concept probabilities \(p_{\mathcal{C}}(\mathbf{x})\) can recover the original model's predictions \(\psi\big{(}\phi(\mathbf{x})\big{)}\). We report the _recovering accuracy_ for the set of concepts \(\mathcal{C}\):
\[\text{RAcc}=\frac{1}{|\mathcal{D}_{\text{test}}|}\sum_{\mathbf{x}_{j}\in \mathcal{D}_{\text{test}}}\mathbb{1}\Big{(}\psi\big{(}\phi(\mathbf{x}_{j}) \big{)}=\psi\big{(}g_{\theta}(p_{\mathcal{C}}(\mathbf{x}_{j}))\big{)}\Big{)}\]
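Faithfulness as defined here is just an agreement rate between the surrogate and the original model; a minimal sketch (with hypothetical variable names) is:

```python
import numpy as np

def recovering_accuracy(orig_preds, surrogate_preds):
    """RAcc: fraction of test inputs on which the concept-based surrogate
    reproduces the original model's predicted class."""
    return float(np.mean(np.asarray(orig_preds) == np.asarray(surrogate_preds)))

print(recovering_accuracy([1, 0, 2, 2], [1, 0, 1, 2]))  # 0.75
```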
### Baselines and hyperparameters
For a fair comparison with our method, we only consider _unsupervised_ feature discovery algorithms. Thus, we use conceptSHAP (Yeh et al., 2020) as a baseline. To compare to VAE methods, we include the disentanglement VAE (\(\beta\)-TCVAE) by Chen et al. (2018). Moreover, we include comparisons to popular non-parametric clustering techniques, including PCA and k-means.
The full list of hyperparameters used for training the HI-Concept model can be found in Appendix D. Briefly, we use the causal coefficient \(\lambda_{c}\in[1,3]\), depending on the level of confounding within the dataset. During training, perturbation is performed on the most similar concept to the input. All experiments are conducted on the penultimate layer with 10 concepts. The hyperparameters are chosen as an optimal default through grid search. To make the comparison fair, PCA, K-means, and \(\beta\)-TCVAE also use 10 dimensions to encode.
## 5 Results and Analysis
To first provide a sanity check for our method, we conduct a toy experiment with a synthetic graphic dataset where the level of confounding can be controlled with ground truth concepts. Appendix C gives details of the experiment. The results show that our method discovers concepts that align with human understanding and consistently outperforms the baseline by deriving more impactful features. As confounding levels in the dataset increase, the performance gap also widens.
### Results on text classification datasets
The experiment results on text classification datasets are presented in Table 1. Concepts discovered by the baseline methods lead to tiny changes in prediction outputs, which undermine their reliability. On the contrary, on all models, especially pretrained BERT and T5, concepts discovered by HI-Concept induce a larger impact than all the baseline methods, while maintaining faithfulness. This observation consolidates our intuition that pretrained complex language models with more confounding
\begin{table}
\begin{tabular}{l l r r r r r r r} \hline Dataset & Model & Cls.Acc & Metric & \(\beta\)-TCVAE & K-means & PCA & ConceptSHAP & HI-Concept \\ \hline \multirow{6}{*}{IMDB} & \multirow{3}{*}{Transformer} & \multirow{3}{*}{81.74\%} & \(\text{I}(\mathcal{C})\) & 0.037 & 0.047 & 0.001 & 0.031 & **0.150** \\ & & & \(\Delta\text{Acc}\) & 1.24\% & 2.59\% & 0.01\% & 1.30\% & **11.06\%** \\ & & & \(\text{RAcc}\) & 52.08\% & 83.64\% & 85.18\% & 84.36\% & **88.78\%** \\ \cline{2-10} & \multirow{3}{*}{BERT} & \multirow{3}{*}{89.14\%} & \(\text{I}(\mathcal{C})\) & 0.057 & 0.038 & 0.002 & 0.050 & **0.104** \\ & & & \(\Delta\text{Acc}\) & 4.10\% & 1.56\% & 0.02\% & 0.06\% & **9.47\%** \\ & & & \(\text{RAcc}\) & 93.86\% & **98.69\%** & 96.68\% & 95.84\% & 94.53\% \\ \cline{2-10} & \multirow{3}{*}{T5} & \multirow{3}{*}{72.98\%} & \(\text{I}(\mathcal{C})\) & 0.000 & 0.025 & 0.000 & 0.000 & **0.094** \\ & & & \(\Delta\text{Acc}\) & 0.00\% & 1.06\% & 0.02\% & 20.21\% & **38.34\%** \\ & & & \(\text{RAcc}\) & 0.00\% & 75.85\% & 98.86\% & 60.20\% & **99.50\%** \\ \hline \multirow{6}{*}{AG} & \multirow{3}{*}{Transformer} & \multirow{3}{*}{88.33\%} & \(\text{I}(\mathcal{C})\) & **0.049** & 0.044 & 0.000 & 0.000 & 0.045 \\ & & & \(\Delta\text{Acc}\) & 6.62\% & 0.07\% & 0.03\% & 0.00\% & **7.12\%** \\ \cline{1-1} & & & \(\text{RAcc}\) & 98.90\% & 98.16\% & **99.99\%** & 73.01\% & 99.50\% \\ \cline{1-1} \cline{2-10} & \multirow{3}{*}{BERT} & \multirow{3}{*}{93.75\%} & \(\text{I}(\mathcal{C})\) & 0.044 & 0.028 & 0.001 & 0.025 & **0.058** \\ \cline{1-1} & & & \(\Delta\text{Acc}\) & 5.32\% & 7.15\% & 0.01\% & 4.44\% & **10.54\%** \\ \cline{1-1} & & & \(\text{RAcc}\) & 92.30\% & 86.83\% & 99.79\% & 93.46\% & **99.90\%** \\ \cline{1-1} \cline{2-10} & \multirow{3}{*}{T5} & \multirow{3}{*}{94.30\%} & \(\text{I}(\mathcal{C})\) & 0.000 & 0.011 & 0.000 & 0.000 & **0.054** \\ \cline{1-1} & & & \(\Delta\text{Acc}\) & 0.00\% & 1.49\% & 0.01\% & 0.00\% & **52.20\%** \\ \cline{1-1} \cline{2-10} & & & \(\text{RAcc}\) & 0.00\% & 24.87\% & 97.38\% & 0.00\% & **99.46\%** \\ \hline \end{tabular}
\end{table}
Table 1: Faithfulness (RAcc\(\uparrow\)) and causality (\(\text{I}(\mathcal{C})\)\(\uparrow\), \(\Delta\text{Acc}\)\(\uparrow\)) evaluation of different text classification methods.
correlations can benefit more from HI-Concept. As a seq2seq text generation model, the pretrained T5 is especially hard for the surrogate models to approximate, as the output vocabulary has a class size of 32,128. To better compute impact, we simplify the outputs by filtering to only the classification classes (e.g., the words "Positive" and "Negative" for IMDB) and summing all other vocabulary probabilities as "Other". Some models collapse completely in this case. HI-Concept, however, excels in maintaining both faithfulness and a large impact.
To qualitatively examine the discovered concepts, we take an example of BERT on AG-News. In Table 2, 2 out of 10 concepts from both HI-Concept and the baseline are shown. For ConceptSHAP, both concepts have low I(\(\mathcal{C}\)), corresponding to lower impact. Although vaguely hinting at the category "Sports", they consist of words that are less indicative, such as "one", "two", and "new". When looking at HI-Concept discovered topics, the first talks about "World", especially in the global finance topic. The second clearly points to the American sports leagues, indicating category "Sports". Thus, instead of merely pointing to class information, the concepts discovered contain more information that aligns with human understandable concepts. The I(\(\mathcal{C}\)) score shown here is also consistent with what humans perceive as important and causal words to the classification, thus indicating the metric's validity. Moreover, in Appendix G.1, we show more concepts discovered in former layers of BERT, which show that the concepts are not only separable in semantic meanings, but also syntactical information (such as nouns and adjectives).
The **usability** of our method is visualized in Fig. 3, which shows the same failure case (labeled as "World" news but misclassified as "Sports") highlighted with the top concept discovered. ConceptSHAP discovers a top concept related to the keywords "leads", "as expected", or "on thursday", which are not informative as to why the model classified this input as "Sports". On the contrary, HI-Concept could precisely point out why: BERT is looking at keywords such as "dream team", "game", and country names. Such examples show the potential of HI-Concept being used in understanding the model's failure processes, which we further investigate in §5.3 with a carefully designed human study.
### Ablation study
To ensure that the designated 4 objectives behave as expected, we conduct ablation studies for BERT on AG-News and report the results in Table 3. As observed, eliminating prediction loss leads to a large decrease in RAcc, resulting in an unfaithful model. Thus, even though the model leads to large accuracy changes, the results cannot be trusted. Without auto-encoding loss or regularizer loss, the model has lower performances both in faithfulness and impact. Without causality loss, RAcc is the highest, indicating an accurate reconstruction of the original predictions. However, the discovered set of concepts results in lower output changes. Finally, the full HI-Concept method discovers a set of concepts that both generate high impact and maintain a good level of faithfulness.
### Human study
To validate that HI-Concept can identify words that are more impactful than the baseline ConceptSHAP, we design the following human study setup: 100 randomly selected examples from AG's testset are shown, where each example consists of the text input and the model's prediction. The annotator is asked to select up to three most causal words for the predicted label. We collect annotations from 4
\begin{table}
\begin{tabular}{c c l} \hline \hline Method & I(\(\mathcal{C}\)) & Keywords \\ \hline ConceptSHAP & 0.000 & one, two, gt, new, cl, l, first, world, mo, last, b, san, tuesday, soccer, time,nhl, Australia, red, bryant \\ ConceptSHAP & 0.000 & first, new, red, world, Yankees, Australia, giants, nl, as, two, one, ga, last, b, u, tuesday, quo, men \\ \hline HI-Concept & 0.108 & update, us, fed, wal, op, u, stocks, oil, dollar, delta, hr, ex, Wednesday, world, percent, crude \\ HI-Concept & 0.151 & red, NBA, football, Yankees, sports, NFL, team, baseball, olympic, league, game, season, coach \\ \hline \hline \end{tabular}
\end{table}
Table 2: Generated concept keywords with Average Impact (I(\(\mathcal{C}\))) from AG-News dataset, BERT model.
Figure 3: Qualitative comparison from AG-News: “World” news misclassified as “Sports” by BERT.
different annotators proficient in English in order to obtain a diverse set of causal keywords. We consider the keywords selected by the annotators to be the ground-truth and calculate the average impact score (I\((x_{t})\)) (§3.5) for all unique words (superset) selected by the annotators, with the baseline and our method trained on the penultimate layer of BERT. Details about how the human study is conducted can be found in Appendix F.
As shown in Table 4, the average impact score produced by HI-Concept is 2 times higher than the baseline, demonstrating that our model selects the right (causal) tokens. The annotators have a Cohen's Kappa agreement of 0.41, which is considered as moderate agreement (Landis and Koch, 1977). Therefore, even though the annotators prefer slightly different keywords as causal, our model still assigns a higher score to them compared to the baseline, covering causal keywords preferred by all annotators. For example, Fig. 4 shows a correctly classified "Business" news article. The human annotators, with a small disagreement, provide 4 unique keywords, which are all highlighted by HI-Concept. Such ability to identify causal tokens gives HI-Concept the potential to be used for debugging applications.
### Hyper-parameter comparisons
We perform further studies on two hyperparameters: the layer(s) to interpret and number of concepts. We conduct text experiments and evaluate in terms of both impact and concept quality. In this section, we summarize our main findings and refer the readers to Appendix G for experiment details and results (charts and wordcloud visuals).
For **layer-wise comparisons**, we experiment on the 3rd, 6th, 9th, and 12th layer respectively, all with 10 concepts. In terms of impact, the intermediate layers (3, 6 and 9) have a higher impact I\((\mathcal{C})\) (around 0.150), while the penultimate layer (12) has a lower I\((\mathcal{C})\) (around 0.058). This is because the concepts discovered at the penultimate layer are sentence-level (using the [CLS] token), while the intermediate layer concepts are token-level. Thus, the sentence-level concepts have less fine-grained control. In terms of topic quality, the later layers tend to discover more coherent concepts, where each concept mostly corresponds to one class label. The beginning layers, on the contrary, tend to discover concepts that are more abstract with mixed class labels. The earlier layers can also discover lexical concepts, such as concepts with only nouns or adjectives. Similarly, Dalvi et al. (2021) observe that BERT finds more lexical information in the earlier layers. This interesting observation could lead to future studies in investigating how information flows through different layers in BERT.
For **number of concepts**, we experiment with 3, 5, 10, 50, and 100 concepts on the penultimate layer. We find that a concept number close to the number of output classes usually gives higher prediction changes, while increasing the number results in higher recovering accuracy. When the number of concepts becomes larger, concepts usually become more coherent, although the performance will decrease, as too large a number of concepts introduces more noise into the training process.
## 6 Conclusions
We have proposed a complete framework to derive impactful concepts that explain a black-box language model's decisions. Our framework addresses 3 important challenges in NLP explainability: (_i_) **Impactful:** it derives concepts that generate high output changes and minimize confounding explanations through a causal loss objective. (_ii_) **Counterfactuals:** it proposes an innovative substitute to the traditional input counterfactual. By
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multirow{2}{*}{\# Examples} & \multirow{2}{*}{Cohen \(\kappa\)} & ConceptSHAP & Ours \\ & & & (Impact) & (Impact) \\ \hline Total & 100 & 0.41 & 0.41 & **0.83** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Human study for causality/usability evaluation.
Figure 4: Example sentence in the human study: “Business” news correctly classified.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & RAcc \(\uparrow\) & I\((\mathcal{C})\uparrow\) & \(\Delta\)Acc \(\uparrow\) \\ \hline No Auto-Encoding Loss & 93.46\% & 0.028 & 6.11\% \\ No Prediction Loss & 68.00\% & 0.035 & **17.41\%** \\ No Regularizer Loss & 95.76\% & 0.041 & 6.23\% \\ No Causality Loss & **99.92\%** & 0.029 & 2.95\% \\ \hline HI-Concept & 99.90\% & **0.058** & 10.54\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation on BERT for AG-News with faithfulness (RAcc) and impact (I\((\mathcal{C})\), \(\Delta\)Acc) evaluation.
producing latent counterfactuals that are designed to remove features within input texts, we avoid the input space search. The concern of interpreting the hidden activations is also addressed by incorporating visualization methods. (_iii_) **User-friendly:** as demonstrated with visualizations and human studies, HI-Concept leads to human-friendly explanations in NLP tasks that contain high-level text attributes and semantically meaningful concepts.
### Limitations
Regarding potential concerns, HI-Concept only encourages high impact in post-hoc model explanations and should serve as an assistive tool instead of being accepted as ground-truth.
As a future venue to our work, we believe that our framework will set a good foundation for future research on causal NLP explainability methods, especially those that hope to derive human-friendly explanations. To improve it further, a similar causal objective could be used to address spurious correlations during training. It also has the potential of being carried over to other domains, such as vision or tabular tasks. The high-level attributes in the hidden space can also be used in downstream applications to provide better controllability for the users.
## Ethics Statement
HI-Concept demonstrates the potential to play an important role in practical scenarios such as debugging and transparency. As AI ethics have become a major concern in real-life applications, such explanations can help users better identify bias and promote fairness.
|
2308.13091 | Quantum Analog of Shannon's Lower Bound Theorem | Shannon proved that almost all Boolean functions require a circuit of size
$\Theta(2^n/n)$. We prove a quantum analog of this classical result. Unlike in
the classical case the number of quantum circuits of any fixed size that we
allow is uncountably infinite. Our main tool is a classical result in real
algebraic geometry bounding the number of realizable sign conditions of any
finite set of real polynomials in many variables. | Saugata Basu, Laxmi Parida | 2023-08-24T21:31:40Z | http://arxiv.org/abs/2308.13091v1 | # Quantum analog of Shannon's lower bound theorem
###### Abstract.
Shannon proved that almost all Boolean functions require a circuit of size \(\Theta(2^{n}/n)\)[11]. We prove a quantum analog of this classical result. Unlike in the classical case the number of quantum circuits of any fixed size that we allow is uncountably infinite. Our main tool is a classical result in real algebraic geometry bounding the number of realizable sign conditions of any finite set of real polynomials in many variables.
Key words and phrases:Quantum circuits, Shannon's lower bound theorem, real algebraic geometry 2000 Mathematics Subject Classification: Primary 68Q12; Secondary 14P10, 81P68
## 1. Introduction
Boolean circuit complexity measures complexity of Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) by the size of the smallest circuit computing \(f\). We assume that the gates of a circuit come from some fixed finite universal set of gates. We refer the reader to [13, Page 7, Definition 3.1] for the precise definition of a circuit. Notice that in this case the set of circuits of bounded size is finite.
The study of circuit complexity of Boolean functions naturally leads to the definitions of "non-uniform" analogs of classical complexity classes such as \(\mathbf{P}\). For example, the non-uniform analog of the class \(\mathbf{P}\) is the set of sequences of Boolean functions
\[\left(f:\{0,1\}^{n}\to\{0,1\}\right)_{n>0}\]
for which there exists a polynomial \(p(n)\) such that for each \(n>0\), there exists a circuit of size bounded by \(p(n)\) computing \(f_{n}\).
Shannon [11] showed very early, using a counting argument (based on the fact, noted above, that the set of circuits of bounded size is finite), that almost all Boolean functions need circuits of size \(\Omega(2^{n}/n)\) (see [13, Page 90, Theorem 2.1]). Here almost all refers to the fact that the number of Boolean functions that require circuits of size \(\Omega(2^{n}/n)\) is bounded from below by \(2^{2^{n}}(1-o(1))\). It is also known (see for instance [13, Page 92, Theorem 2.2]) that every Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) can be computed by a circuit of size \(O(2^{n}/n)\).
_Remark 1_.: In the theorems cited above the set of gates used is assumed to be all possible gates of arity two (i.e. the finite set of \(2^{2^{2}}=16\) gates computing
all possible Boolean functions \(g:\{0,1\}^{2}\to\{0,1\}\)). The arity two is not important. The same asymptotic results will hold even if we allow all gates of some bounded arity \(q\geq 2\) (the constants will depend on the arity).
In this paper we are concerned with proving analogous results for quantum circuits.
Quantum complexity theory is a relatively new discipline. A quantum circuit computes a unitary transformation of a certain finite but exponentially large dimensional Hilbert space. We refer the reader to [8, Definition 6.1] for the precise definition of a quantum circuit. The Hilbert space comes equipped with a computational basis whose elements should be thought of as the elements of the Boolean hypercube. There is a standard definition of what it means for such a circuit to compute a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) which we explain later in the paper. The notion of a size of a quantum circuit mirrors the classical definition (i.e. the number of gates) though we also take into account the number of ancillary qubits which form the workspace of the quantum circuit. But there is one crucial difference between classical and quantum circuits which we explain below.
In analogy with classical circuits (cf. Remark 1), by a quantum circuit we will mean a circuit as defined in [8, Definition 6.1] where we allow all quantum gates of some bounded arity. Even if one fixes the arity (say \(q>0\)) of a quantum gate, unlike in the classical case there are _continuum many_ choices of such a gate - namely, each element of the unitary group 1\(\mathbf{U}(2^{q})\) is a possible quantum gate of arity \(q\). The real dimension of the group \(\mathbf{U}(2^{q})\) is equal to \(2^{2q}\). Thus, with such an uncountable choice of the set of gates, the cardinality of the set of quantum circuits of size bounded by any fixed positive number is also uncountable. Thus it makes sense to ask whether using this additional flexibility quantum circuits of "small" (in terms of \(n\)) sizes can actually compute all the \(2^{2^{n}}\) Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\). The main result of the paper (see Corollary 1) can be colloquially framed as saying that Shannon's lower bound for classical circuits also holds for quantum circuits (even after we allow arbitrary quantum gates of bounded arity), i.e. almost all Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) require quantum circuits of size \(\Omega(2^{n}/n)\). Since a classical circuit computing a Boolean function can be converted into a quantum circuit with at most a constant factor increase in size, it follows that every Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) can be computed by a quantum circuit of size at most \(O(2^{n}/n)\). Thus, it is the lower bound part of Shannon's result that is of main interest.
Footnote 1: The unitary group \(\mathbf{U}(N)\) is the group of \(N\times N\) complex matrices \(U\) satisfying \(UU^{\dagger}=\mathrm{Id}_{N}\). It is a real Lie group of dimension \(N^{2}\).
We prove our quantum version of Shannon's theorem by first proving an upper bound on the number of Boolean functions that can be computed by
a quantum circuit of a given size (see Theorem 1 below). A key tool in the proof of Theorem 1 is a bound from real algebraic geometry (that we explain in Section 3.1) which gives an upper bound on the number of realizable sign conditions of any finite set of real polynomials in several indeterminates of bounded degrees. This result actually bounds the zero-th Betti number of the realizations of all sign conditions of the set of polynomials and has generalizations to higher Betti numbers as well [3]. It has previously been used in proving upper bounds on the number of configurations in various geometric settings (see for example [6] for one such example), but to the best of our knowledge has not been used in proving lower bounds in quantum complexity theory.
The rest of the paper is organized as follows. In Section 2 we state our main results. In Section 3 we prove the main theorem and its corollary. In Section 4 we discuss an alternative approach towards proving the main result of this using variants of Solovay-Kitaev approximation and explain its deficiency. In Section 5 we discuss some possible future work.
## 2. Main Results
We follow the usual conventions. We denote by \(|0\rangle,|1\rangle\) the computational basis for each qubit. We identify a 0-1 string \(x_{n-1}\cdots x_{0}\) of length \(n\), with the integer
\[x=\sum_{k=0}^{n-1}x_{k}\cdot 2^{k},\]
and denote the corresponding (separable) state \(|x_{n-1}\rangle\otimes\cdots\otimes|x_{0}\rangle\) simply by \(|\mathbf{x}\rangle\). Similarly, we will often abbreviate \(|0\rangle\otimes\cdots\otimes|0\rangle\) by \(|\mathbf{0}\rangle\).
For \(q\geq 1\) we denote by \(\mathcal{U}_{q}\) the (uncountably infinite) set of quantum gates with at most \(q\) inputs. Each gate \(g\in\mathcal{U}_{q}\) corresponds to a unitary transformation \(U_{g}\in\mathbf{U}(2^{k_{g}})\), where \(k_{g}\leq q\) is the arity (fan-in) of the gate \(g\). For \(q\geq 2,\,\mathcal{U}_{q}\) is a universal family. Namely, any unitary transformation can be implemented by a quantum circuit with gates from \(\mathcal{U}_{2}\).
A quantum circuit \(C\in\mathcal{C}_{q}\) with \(n\) input qubits (whose values will be denoted by \(x_{1},\ldots,x_{n}\)) and \(t\) ancillary qubits (whose values will be denoted by \(z_{1},\ldots,z_{t}\)), having \(r\) gates drawn from \(\mathcal{U}_{q}\), is determined by the following data:
1. an ordering of the \(r\) gates (lets suppose the ordered tuple of gates is \(g_{1},\ldots,g_{r}\), with the gate \(g_{i}\) having arity \(k_{i}\leq q\)), and
2. for each \(i,1\leq i\leq r\), an ordered choice of \(k_{i}\) elements from amongst \(x_{1},\ldots,x_{n},z_{1},\ldots z_{t}\).
For \(q\geq 1\), we will denote by \(\mathcal{C}_{q}\), the set of quantum circuits using gates from the set \(\mathcal{U}_{q}\).
For \(C\), a quantum circuit with \(n\) qubits as input and \(t\) ancillary qubits, we denote by \(U(C)\in\mathbf{U}(N)\) the unitary transformation implemented by \(C\) where \(N=2^{n+t}\).
We now explain what is meant by a quantum circuit computing a Boolean function. For ease of understanding, we start with a provisional (very stringent) definition and then give a more general (much less stringent) definition afterwards. We will use the more general (less stringent) definition in the rest of the paper noting that a lower bound result is more powerful if the definition is less stringent.
Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a Boolean function. We associate a unitary transformation \(U_{f}\in\mathbf{U}(2^{n+1})\) to \(f\) which takes, for \(x_{0},\ldots,x_{n-1},y\in\{0,1\}\), the separable state \(|\mathbf{x}\rangle\otimes|y\rangle\) to the state \(|\mathbf{x}\rangle\otimes|f(x)\oplus y\rangle\), where \(\oplus\) denotes "exclusive-or". This motivates calling a quantum circuit \(C\) taking as input \(n\) qubits \(x_{0},\ldots,x_{n-1}\) and an ancillary qubit \(y\), such that \(U(C)=U_{f}\), a circuit computing _stringently_ the Boolean function \(f\). Note that using the measurement postulate of quantum mechanics, if we set the input qubits to \(|\mathbf{x}\rangle\otimes|0\rangle\) and measure the ancillary bit in the output, we will be left in the state \(|1\rangle\) if \(f(x)=1\) and in the state \(|0\rangle\) if \(f(x)=0\), _with probability_ 1.
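As an illustration (not part of the paper), the stringent map \(U_{f}\) is just a permutation matrix on the computational basis, and a few lines of numpy suffice to build it and check unitarity; the basis index of \(|\mathbf{x}\rangle\otimes|y\rangle\) is taken to be \(2x+y\), matching the binary-string convention above, and the chosen Boolean function is a toy example.

```python
import numpy as np

def U_f(f, n):
    """Permutation matrix on n+1 qubits sending |x>|y> to |x>|f(x) XOR y>."""
    N = 2 ** (n + 1)
    U = np.zeros((N, N))
    for x in range(2 ** n):
        for y in (0, 1):
            U[2 * x + (f(x) ^ y), 2 * x + y] = 1.0   # column |x,y> -> row |x, f(x)^y>
    return U

f = lambda x: bin(x).count("1") % 2                  # toy Boolean function: parity
U = U_f(f, n=3)
assert np.allclose(U @ U.T, np.eye(2 ** 4))          # U is unitary (real orthogonal)
```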
We can state our main result already for the stringent model described above. Later we will state and prove a more powerful result by making the definition less stringent to take into account a much looser notion of an acceptable output for the quantum circuit computing a Boolean function and also allow additional ancillary bits (exponentially many). The following theorem can be deduced from Corollary 1 stated later.
For a quantum circuit \(C\), we will denote by \(\mathcal{G}(C)\) the set of gates of \(C\).
**Theorem**.: _For each \(q\geq 1\), there exists \(c=c_{q},0<c<1\) (depending only on \(q\)), such that for each \(n>0\) the number of distinct Boolean functions on \(n\) variables that can be computed stringently by quantum circuits belonging to \(\mathcal{C}_{q}\), with_
\[\operatorname{card}(\mathcal{G}(C))\leq c\cdot\frac{2^{n}}{n}\]
_is bounded by_
\[2^{2^{n-1}}.\]
_Consequently, the fraction of Boolean functions that need quantum circuits in \(\mathcal{C}_{q}\) of size greater than \(c\cdot\frac{2^{n}}{n}\) is \(1-2^{-2^{n-1}}=1-o(1)\)._
_Remark 2_.: Notice that since the set of gates \(\mathcal{U}_{q}\) is infinite (uncountably so), the number of distinct quantum circuits of any bounded size is also uncountably infinite. So a priori there is no reason why such circuits should not be able to compute all Boolean functions on \(n\) variables.
_Remark 3_.: A lower bound similar to the one in the theorem stated above with a finite choice of the gate set has appeared in the literature [5, Claim F.1]. The proof of the result in [5] strongly uses the finiteness of the set of gates and indeed the constant in the theorem depends on the cardinality of this set. The import of our result is that the choice of quantum gates we allow is uncountable (only the arity is fixed) - while the lower bound stays the same.
As mentioned before we will work with a more general notion of what it means for a quantum circuit to compute a Boolean function. We relax our prior definition in two ways. First, instead of ending up in the right state depending on the value of the function \(f\) with probability \(1\), we are satisfied if we end at the right state with probability \(>1/2\). We will also allow more than one ancillary bits. The precise definition is as follows.
**Definition 2.1** (Computation of Boolean functions by a quantum circuit).: A quantum circuit \(C\) on \(n\) qubits denoted \(x_{0},\ldots,x_{n-1}\) and \(t+1\) ancillary bits denoted by \(y,z_{1},\ldots,z_{t},\)_computes_\(f\), if for each \(x,x^{\prime},0\leq x,x^{\prime}\leq 2^{n}-1\), \(y\in\{0,1\}\),
\[|\langle\mathbf{x}^{\prime}\;y\;\mathbf{0}\;|\;U(C)\;|\;\mathbf{x }\;0\;\mathbf{0}\rangle|^{2} > 1/2\text{ if }x^{\prime}=x\text{ and }y\oplus f(x_{0},\ldots,x_{n-1})=0, \tag{2.1}\] \[< 1/2\text{ otherwise.}\] \[(|\mathbf{x}\rangle\otimes|y\rangle\otimes|\mathbf{0}\rangle\text { is abbreviated as }|\mathbf{x}\;y\;\mathbf{0}\rangle).\]
_Remark 4_.: Note that if a quantum circuit computes a Boolean function \(f\) stringently, then it also computes \(f\) according to Definition 2.1. Also note that the inequality appearing in (2.1) is the most relaxed possible in as much as we do not insist on any positive gap between the accepting and rejecting probabilities. This point will be important later when we discuss an alternative possible method for obtaining lower bounds using the Solovay-Kitaev approximation theorem (see Section 4).
We now fix a notion for size of quantum circuits.
**Definition 2.2**.: Let \(C\) be a quantum circuit taking as input \(n\) qubits. We denote by \(t(C)\) the number of ancillary qubits used by \(C\) and by \(r(C)\) the number of gates in \(C\). We set \(s(C)=t(C)+r(C)\) and call \(s(C)\) the _size_ of \(C\). We will denote the set of all quantum circuits \(C\in\mathcal{C}_{q}\) with \(n\) inputs, and with \(t(C)=t\) and \(r(C)=r\), by \(\mathcal{C}_{q,n,r,t}\).
Our main theorem is the following.
**Theorem 1**.: _The number of distinct Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) which can be computed approximately by a quantum circuit belonging to \(\mathcal{C}_{q,n,r,1+t}\) is bounded by_
\[t^{q\cdot r}\cdot(2^{n}\cdot r)^{c\cdot 2^{2q}\cdot r}\]
_for some universal constant \(c>0\)._
Theorem 1 has the following important corollary.
**Corollary 1**.: _There exists \(c=c_{q},0<c<1\) (depending only on \(q\)), such that for each \(n>0\), the number of distinct Boolean functions on \(n\) variables that can be computed by quantum circuits belonging to_
\[\mathcal{C}_{q,n,c\cdot\frac{2^{n}}{n},2^{n}}\]
_is bounded by_
\[2^{2^{n-1}}.\]
_Consequently, the fraction of Boolean functions that need quantum circuits of size \(\Omega(\frac{2^{n}}{n})\) is \(1-2^{-2^{n-1}}=1-o(1)\)._
## 3. Proof of Theorem 1 and Corollary 1
For \(g\in\mathcal{G}(C)\), we denote by \(U_{g,C}\) the \(2^{k_{g}}\times 2^{k_{g}}\) unitary matrix corresponding to \(g\), where \(k_{g}\) is the arity of the gate \(g\).
Let \(C\in\mathcal{C}_{q,n,r,t}\) and denote the sequence of gates of \(C\) by \(g_{1},\ldots,g_{r}\) with corresponding arities \(k_{1},\ldots,k_{r}\). For each \(i,1\leq i\leq r\), denote the entries of the two real \(2^{k_{i}}\times 2^{k_{i}}\) matrices, \(\operatorname{Re}(U_{g_{i},C}),\operatorname{Im}(U_{g_{i},C})\) by \(\left(v_{h,h^{\prime}}^{(i)}\right)_{1\leq h,h^{\prime}\leq 2^{k_{i}}}\) and \(\left(w_{h,h^{\prime}}^{(i)}\right)_{1\leq h,h^{\prime}\leq 2^{k_{i}}}\). So,
\[U_{g_{i},C}=\left(v_{h,h^{\prime}}^{(i)}\right)_{1\leq h,h^{\prime}\leq 2^{k_{i}}}+\sqrt{-1}\cdot\left(w_{h,h^{\prime}}^{(i)}\right)_{1\leq h,h^{\prime}\leq 2^{k_{i}}}.\]
We use the notation introduced above in the following lemma. We use the convention that upper case letters with indices such as \(V_{h,h^{\prime}}^{(i)},W_{h,h^{\prime}}^{(i)}\) are used to denote indeterminates in certain polynomials and the corresponding lower case letters \(v_{h,h^{\prime}}^{(i)},w_{h,h^{\prime}}^{(i)}\) are used to denote real numbers to which the corresponding indeterminates are specialized.
**Lemma 3.1**.: _For each \(x,x^{\prime},0\leq x,x^{\prime}\leq 2^{n}-1,z,z^{\prime},0\leq z,z^{\prime} \leq 2^{t}-1\), there exists a real polynomial_
\[P_{|\mathbf{x}\mathbf{z}\rangle,|\mathbf{x^{\prime}}\mathbf{z^{\prime}}\rangle,C}\in\mathbb{R}\left[\left(V_{h,h^{\prime}}^{(i)},W_{h,h^{\prime}}^{(i)}\right)_{\begin{subarray}{c}1\leq i\leq r,\\ 1\leq h,h^{\prime}\leq 2^{k_{i}}\end{subarray}}\right],\]
_with \(\deg(P_{|\mathbf{x}\mathbf{z}\rangle,|\mathbf{x^{\prime}}\mathbf{z^{\prime}} \rangle,C})\leq 2r\), such that_
\[|\langle\mathbf{x}\mathbf{z}\mid U(C)\mid\mathbf{x^{\prime}}\mathbf{z^{\prime }}\rangle|^{2}=P_{|\mathbf{x}\mathbf{z}\rangle,|\mathbf{x^{\prime}}\mathbf{z^ {\prime}}\rangle,C}\left((v_{h,h^{\prime}}^{(i)},w_{h,h^{\prime}}^{(i)})_{ \begin{subarray}{c}1\leq i\leq r,\\ 1\leq h,h^{\prime}\leq 2^{k_{i}}\end{subarray}}\right).\]
Proof.: Let \(N=2^{n+t}\). Observe that \(\langle\mathbf{xz}\mid U(C)\mid\mathbf{x}^{\prime}\mathbf{z}^{\prime}\rangle\) is the \((2^{t}\cdot x+z,2^{t}\cdot x^{\prime}+z^{\prime})\)-th entry of the \(N\times N\) unitary matrix \(U(C)\), where \(U(C)\) is a product of \(r\) many \(N\times N\) unitary matrices, each of the form \(U_{g_{i},C}\otimes\mathbf{1}_{N/2^{k_{i}}}\) up to a permutation of the tensor factors (i.e. of the qubits on which the gate acts).
Observe that the \((h,h^{\prime})\)-th entry of \(U_{g_{i},C}\) equals \(v_{h,h^{\prime}}^{(i)}+\sqrt{-1}\cdot w_{h,h^{\prime}}^{(i)}\). Thus, the real and the imaginary parts of each entry of \(U(C)\) are polynomials in the real numbers \(v_{h,h^{\prime}}^{(i)},w_{h,h^{\prime}}^{(i)},1\leq i\leq r,1\leq h,h^{\prime}\leq 2^{k_{i}}\), each having degree at most \(r\), and taking the square of the modulus gives a polynomial of degree at most \(2r\).
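The structure used in this proof can be made concrete with a small numpy sketch (illustrative only): each gate is embedded into the full \(2^{n+t}\)-dimensional space by tensoring with the identity, and \(U(C)\) is the ordered product of the embedded gates, so every entry of \(U(C)\) is visibly a polynomial of degree at most \(r\) in the gate entries. For simplicity the sketch lets each gate act on the leading qubits, ignoring the wire permutations a general circuit would need.

```python
import numpy as np

def embed_gate(U_gate, total_qubits):
    """k-qubit gate acting on the leading qubits: U_gate tensored with the identity."""
    k = int(np.log2(U_gate.shape[0]))
    return np.kron(U_gate, np.eye(2 ** (total_qubits - k)))

def circuit_unitary(gates, total_qubits):
    """U(C): ordered product of the embedded gate unitaries."""
    U = np.eye(2 ** total_qubits, dtype=complex)
    for g in gates:
        U = embed_gate(g, total_qubits) @ U
    return U

def random_unitary(dim, rng):
    """Haar-ish random unitary from the QR decomposition of a complex Gaussian."""
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
U = circuit_unitary([random_unitary(2, rng), random_unitary(4, rng)], total_qubits=3)
print(np.abs(U[0, 0]) ** 2)        # the transition probability |<000|U(C)|000>|^2
```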
Next, we count the number of distinct "topologies" underlying quantum circuits of bounded size. In order to make this precise we introduce the following definition.
**Definition 3.1**.: For any quantum circuit \(C\), we will denote by \(\widetilde{C}\) the quantum circuit obtained by replacing for each \(g\in\mathcal{G}(C)\), the gate \(g\), by \(g_{0}\) with the same input and output but with \(U_{g_{0}}=\mathbf{1}_{2^{k_{g}}}\) where \(k_{g}\) is the arity of \(g\).
For any two quantum circuits \(C,D\in\mathcal{C}_{q,n,r,t}\), we will say that \(C,D\) are equivalent (denoted \(C\sim_{q,n,r,t}D\)) if \(\widetilde{C}=\widetilde{D}\). Clearly \(\sim_{q,n,r,t}\) is an equivalence relation.
We denote by \(\mathcal{T}_{q,n,r,t}\) the set of equivalence classes of \(\sim_{q,n,r,t}\).
_Remark 5_.: The set \(\mathcal{T}_{q,n,r,t}\) should be thought of as the set of underlying "topologies" of quantum circuits with \(r\) gates with fan-ins at most \(q\), with \(n\) qubits as input and \(t\) ancillary bits.
The following lemma establishes an upper bound on the cardinality of \(\mathcal{T}_{q,n,r,t}\) in terms of \(q,n,r\) and \(t\).
**Lemma 3.2**.: _For every \(q,n,r,t>0\),_
\[\operatorname{card}(\mathcal{T}_{q,n,r,t})\leq q^{r}\cdot(n+t)^{q\cdot r}.\]
Proof.: Each of the \(r\) gates has arity \(\leq q\). The number of \(r\)-tuples \((k_{1},\dots,k_{r})\) of possible arities, where \(1\leq k_{i}\leq q\), is equal to \(q^{r}\). Once the arities are fixed, the equivalence class \(T\) of a circuit is determined by an ordered choice, for each gate, of \(k_{i}\) elements from amongst the input qubits \(x_{1},\dots,x_{n}\) and the \(t\) ancillary bits \(z_{1},\dots,z_{t}\). The number of choices for the \(i\)-th gate \(g_{i}\) is
\[(k_{i})!{n+t\choose k_{i}}\leq(n+t)^{k_{i}}\leq(n+t)^{q}.\]
Since there are \(r\) gates the total number of choices is bounded by
\[q^{r}\cdot(n+t)^{q\cdot r}.\]
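For small parameters the count behind Lemma 3.2 can be computed exactly and compared against the stated bound; the snippet below does this under the reading that each of the \(r\) ordered gates independently picks an arity \(k\leq q\) and an ordered \(k\)-tuple of distinct wires, and the chosen parameter values are arbitrary toy numbers.

```python
from math import comb, factorial

def exact_topologies(q, n, r, t):
    """Exact number of equivalence classes: per gate, sum over arities k <= q of
    the number of ordered k-tuples of distinct wires among the n + t available."""
    per_gate = sum(factorial(k) * comb(n + t, k) for k in range(1, q + 1))
    return per_gate ** r

def lemma_bound(q, n, r, t):
    return q ** r * (n + t) ** (q * r)

q, n, r, t = 2, 3, 4, 2
print(exact_topologies(q, n, r, t), "<=", lemma_bound(q, n, r, t))
```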
_Remark 6_.: Lemma 3.2 gives a rather crude bound that suffices for our purpose. It is possible to prove a much tighter upper bound (see for example [13, Page 88, Lemma 2.1]).
We will now fix an equivalence class \(T\in\mathcal{T}_{q,n,r,1+t}\) and ask how many distinct Boolean functions can be computed (cf. Definition 2.1) using quantum circuits belonging to \(T\). Our strategy is to identify a finite set of real polynomials (denoted by \(\mathcal{P}_{T}\) below), and associate to each Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) which is computed by some circuit \(C\in T\) a unique realizable sign condition (see below). Our bound on the number of Boolean functions that can be computed using circuits in \(T\) will then follow from an upper bound on the number of realizable sign conditions on \(\mathcal{P}_{T}\) which is furnished by a classical result in real algebraic geometry.
### Bound on the number of realizable sign condition
Suppose that \(\mathcal{P}\) is a finite set of polynomials in \(\mathbb{R}[X_{1},\ldots,X_{k}]\). For each \(x\in\mathbb{R}^{k}\), and \(P\in\mathbb{R}[X_{1},\ldots,X_{k}]\), we define
\[\mathbf{sign}(P(x))=\begin{cases}+1,\text{if }P(x)>0,\\ -1,\text{if }P(x)<0,\\ 0,\text{ if }P(x)=0.\end{cases}\]
Similarly, for a finite tuple of polynomials \(\mathcal{P}\) and \(x\in\mathbb{R}^{k}\), we define
\[\mathbf{sign}(\mathcal{P}(x))=(\mathbf{sign}(P))_{P\in\mathcal{P}}\in\{0,1,-1 \}^{\mathcal{P}}.\]
We say that an element \(\sigma\in\{0,1,-1\}^{\mathcal{P}}\) is a _realizable sign condition of \(\mathcal{P}\)_ if there exists \(x\in\mathbb{R}^{k}\) such that
\[\mathbf{sign}(\mathcal{P}(x))=\sigma.\]
We denote the set of realizable sign conditions of a finite tuple \(\mathcal{P}\) of polynomials in \(\mathbb{R}[X_{1},\ldots,X_{k}]\) by \(\operatorname{Sign}(\mathcal{P})\subset\{0,1,-1\}^{\mathcal{P}}\).
Observe that if \(\mathcal{P}\) has length \(N\), then the cardinality of \(\operatorname{Sign}(\mathcal{P})\) could potentially be as large as \(3^{N}\). It is an important result in real algebraic geometry (with many applications) that if \(k\) as well as the degrees of the polynomials in \(\mathcal{P}\) are small compared to \(N\), then \(\operatorname{card}(\operatorname{Sign}(\mathcal{P}))\) is much smaller than \(3^{N}\). The precise result that we need is the following.
**Proposition 3.1**.: _[_4_, Proposition 7.31]_ _Let \(\mathcal{P}\) be a finite tuple of polynomials in \(\mathbb{R}[X_{1},\ldots,X_{k}]\). Then,_
\[\operatorname{card}(\operatorname{Sign}(\mathcal{P}))\leq\sum_{1\leq j\leq k} \binom{N}{j}4^{j}d(2d-1)^{k-1}=(O(Nd))^{k},\]
_where \(N=\operatorname{length}(\mathcal{P})\) and \(d=\max_{P\in\mathcal{P}}\deg(P)\)._
_Remark 7_.: Various versions of Proposition 3.1 are known (see for example [12, 2]). It appears in the more precise form as stated above in [3], where the bound is proved on the number of connected components of the realizable sign conditions (i.e. the sum of the zero-th Betti number of the realizations of each sign condition). Note this is a priori larger than just the number of realizable sign conditions.
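To get a feel for how much smaller than \(3^{N}\) this bound is in the regime relevant below (roughly \(N\sim 2^{2n+1}\) polynomials of degree \(2r\) in \(2\cdot 2^{2q}\cdot r\) variables), one can evaluate it directly; the parameter values in the snippet are toy choices, and only the orders of magnitude are of interest.

```python
from math import comb

def sign_condition_bound(N, d, k):
    """Proposition 3.1: bound on realizable sign conditions of N polynomials
    of degree <= d in k real variables."""
    return sum(comb(N, j) * 4 ** j * d * (2 * d - 1) ** (k - 1)
               for j in range(1, k + 1))

n, r, q = 4, 3, 1                       # toy circuit parameters
N, d, k = 2 ** (2 * n + 1), 2 * r, 2 * (2 ** (2 * q)) * r
print(len(str(sign_condition_bound(N, d, k))), "digits, versus",
      len(str(3 ** N)), "digits for 3^N")
```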
We now return to the proof of Theorem 1 by proving an upper bound on the number of distinct Boolean functions that can be computed by quantum circuits belonging to a fixed equivalence class \(T\in\mathcal{T}_{q,n,r,1+t}\).
First notice that for each \(T\in\mathcal{T}_{q,n,r,1+t}\) and \(C,C^{\prime}\in T\), \(0\leq x,x^{\prime}\leq 2^{n}-1\), \(y,y^{\prime}\in\{0,1\}\) and \(0\leq z,z^{\prime}\leq 2^{t}-1\),
\[P_{|\mathbf{x}y\mathbf{z}\rangle,|\mathbf{x}^{\prime}y^{\prime}\mathbf{z}^{ \prime}\rangle,C}=P_{|\mathbf{x}y\mathbf{z}\rangle,|\mathbf{x}^{\prime}y^{ \prime}\mathbf{z}^{\prime}\rangle,C^{\prime}}\]
(see Lemma 3.1). We will denote by \(P_{|\mathbf{x}y\mathbf{z}\rangle,|\mathbf{x}^{\prime}y^{\prime}\mathbf{z}^{ \prime}\rangle,T}\) the polynomial \(P_{|\mathbf{x}y\mathbf{z}\rangle,|\mathbf{x}^{\prime}y^{\prime}\mathbf{z}^{ \prime}\rangle,C}\) for some and hence all \(C\in T\).
**Lemma 3.3**.: _For each \(T\in\mathcal{T}_{q,n,r,1+t}\), the number of distinct Boolean functions computed by some quantum circuit \(C\in T\) is bounded by_
\[(2^{n}\cdot r)^{O(2^{2q}\cdot r)}.\]
Proof.: Let \(\mathcal{P}_{T}\) denote the tuple of polynomials
\[\big{(}P_{|\psi\rangle,|\psi^{\prime}\rangle,T}-1/2\big{)}_{|\psi\rangle=|\mathbf{x}\;0\;\mathbf{0}\rangle,|\psi^{\prime}\rangle=|\mathbf{x}^{\prime}\;y\;\mathbf{0}\rangle},\] where \(x\) and \(x^{\prime}\) range over \(\{0,\ldots,2^{n}-1\}\) and \(y\) over \(\{0,1\}\).
Suppose that \(f:\{0,1\}^{n}\to\{0,1\}\) is a Boolean function computed by \(C\in T\). We will follow the notation introduced in Lemma 3.1 and denote by
\[\Big{(}v^{(i)}_{h,h^{\prime}},w^{(i)}_{h,h^{\prime}}\Big{)}_{\begin{subarray}{c}1\leq i\leq r,\\ 1\leq h,h^{\prime}\leq 2^{k_{i}}\end{subarray}}\]
the tuple of real numbers which correspond to the real and imaginary parts of the unitary matrices corresponding to the gates of \(C\).
Then, for \(0\leq x,x^{\prime}\leq 2^{n}-1\), \(|\psi\rangle=|\mathbf{x}\;0\;\mathbf{0}\rangle\), \(|\psi^{\prime}\rangle=|\mathbf{x}^{\prime}\;y\;\mathbf{0}\rangle\), the sign of the polynomial
\[P_{|\psi\rangle,|\psi^{\prime}\rangle,T}-1/2\]
evaluated at the point
\[\Big{(}v^{(i)}_{h,h^{\prime}},w^{(i)}_{h,h^{\prime}}\Big{)}_{\begin{subarray}{c}1\leq i\leq r,\\ 1\leq h,h^{\prime}\leq 2^{k_{i}}\end{subarray}}\]
(following the notation introduced in Lemma 3.1)
\[=\begin{cases}1&\text{if }y\oplus f(x)=0\text{ and }x=x^{\prime},\\ -1&\text{otherwise}.\end{cases}\]
Denote by \(\sigma_{f}\in\{0,1,-1\}^{\mathcal{P}_{T}}\) the corresponding sign condition on \(\mathcal{P}_{T}\). Hence, each Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) computable by a quantum circuit \(C\) in \(T\) defines a realizable sign condition \(\sigma_{f}\in\operatorname{Sign}(\mathcal{P}_{T})\). Moreover, if \(f\neq f^{\prime}\), then \(\sigma_{f}\neq\sigma_{f^{\prime}}\). This implies that the number of distinct Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) which are computable by a quantum circuit \(C\) in \(T\), is bounded by \(\operatorname{card}(\operatorname{Sign}(\mathcal{P}_{T}))\).
Now, \(\operatorname{card}(\mathcal{P}_{T})=2^{n}\cdot 2^{n+1}\leq 2^{2n+1}\). The degrees of the polynomials in \(\mathcal{P}_{T}\) are bounded by \(2r\), and the number of indeterminates by \(2\cdot 2^{2q}\cdot r\). We obtain using Proposition 3.1 that
\[\operatorname{card}(\operatorname{Sign}(\mathcal{P}_{T}))\leq(O(2^{2n+1}\cdot 2 \cdot r))^{2\cdot 2^{2q}\cdot r}=(2^{n}\cdot r)^{O(2^{2q}\cdot r)}.\]
_Remark 8_.: Note that in the proof of Lemma 3.3 we are not using the fact that the real dimension of the unitary group \(\mathbf{U}(2^{k})\) is equal to \(2^{2k}\). It is possible to take this into account and use a more refined estimate on the number of realizable sign conditions whose combinatorial part (i.e. the part depending on the number of polynomials) depends on the dimension of the ambient real variety (see [3]). However, this would only improve the constant in our theorem at the cost of introducing more technicalities, and so we avoid making this more refined analysis.
Proof of Theorem 1.: Multiplying the bounds in Lemmas 3.2 and 3.3 we obtain that the number of Boolean functions \(\{0,1\}^{n}\to\{0,1\}\) that can be computed using quantum circuits in \(\mathcal{C}_{q,n,r,t}\) is bounded by
\[q^{r}\cdot(n+t)^{q\cdot r}\cdot(2^{n}\cdot r)^{O(2^{2q}\cdot r)}=t^{q\cdot r }\cdot(2^{n}\cdot r)^{O(2^{2q}\cdot r)}.\]
Proof of Corollary 1.: From Theorem 1 there exists \(c>0\) such that the number of distinct Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) that can be computed by quantum circuits in \(\mathcal{C}_{q,n,r,2^{n}}\) is bounded by \((2^{n}\cdot r)^{c\cdot r}\) for some constant \(c=c_{q}\) depending only on \(q\).
Suppose that \(r\leq\frac{2^{n}}{4cn}\).
Then, the number of Boolean functions that can be computed by such circuits is bounded by
\[(2^{n}\cdot r)^{c\cdot r}\leq(2^{n}\cdot\frac{2^{n}}{4cn})^{\frac{2^{n}}{4n}} \leq(2^{2n})^{\frac{2^{n}}{4n}}=2^{2^{n-1}}.\]
Thus, the fraction of all Boolean formulas in \(n\) variables that can be computed by such circuits is bounded by
\[\frac{2^{2^{n-1}}}{2^{2^{n}}}=2^{-2^{n-1}}=o(1).\]
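The arithmetic in the last two displays is easy to sanity-check numerically by comparing logarithms; \(c\) below is an arbitrary illustrative value, not the (unspecified) constant \(c_{q}\) of Theorem 1.

```python
from math import log2

def log2_bound(n, r, c):
    """log2 of (2^n * r)^(c*r), the count of computable Boolean functions."""
    return c * r * (n + log2(r))

c = 1.5                                   # hypothetical constant
for n in (10, 20, 30):
    r = 2 ** n / (4 * c * n)              # the cutoff r <= 2^n / (4 c n)
    print(n, log2_bound(n, r, c) <= 2 ** (n - 1))   # always True
```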
## 4. Alternative approach using approximation and counting
In this section we explore an alternative approach towards proving Corollary 1. It is a classical result due to Solovay and Kitaev [7] that there exists \(c>0\), such that for any fixed subset \(\mathcal{U}^{\prime}\subset\mathbf{U}(2)\) generating a dense subgroup of \(\mathbf{U}(2)\) and each \(\varepsilon>0\), an arbitrary unitary matrix \(U\in\mathbf{U}(2)\) can be approximated by an element \(U^{\prime}\in\mathbf{U}(2)\) with \(||U-U^{\prime}||\leq\varepsilon\), where \(U^{\prime}\) is a product of at most \(\log^{c}(1/\varepsilon)\) elements of \(\mathcal{U}^{\prime}\) and \(||\cdot||\) is the operator norm.
There have been many later improvements on this fundamental result reducing the value of the constant \(c\). In particular, it is possible to choose a finite set of gates (Clifford and Toffoli gates) for which one can take \(c=1\) [9, 10]. Using this fact it is not too difficult to prove the following.
**Proposition 4.1**.: _For each \(\varepsilon>0\), and any quantum circuit \(C\in\mathcal{C}_{q,n,r,t}\), there exists another circuit \(C^{\prime}\in\mathcal{C}_{q,n,r^{\prime},t}\) which uses only Clifford and Toffoli gates, such that \(||U(C)-U(C^{\prime})||\leq\varepsilon\) and_
\[r^{\prime}=O(r\log r+r\log(1/\varepsilon)).\]
Proof sketch.: Replace each gate in \(C\) by a circuit using only Clifford and Toffoli gates such that the error in norm is bounded by \(\varepsilon/r\). Since there are \(r\) gates in \(C\) the total error will be bounded by \(\varepsilon\).
An approach towards proving Corollary 1 would then be as follows. Given any \(C\in\mathcal{C}_{q,n,r,t}\) computing a Boolean function \(f\), let \(C^{\prime}\) be a circuit using only Clifford and Toffoli gates that approximates \(U(C)\) sufficiently well so that \(C^{\prime}\) also computes \(f\). Using Proposition 4.1 one would obtain an upper bound on the size of \(C^{\prime}\) and since the number of circuits with Clifford and Toffoli gates is finite one can then use a counting argument. However, Definition 2.1 gives no room for any approximation, as any error could mean that the new circuit \(C^{\prime}\) does not compute \(f\).
One can make the Definition 2.1 more stringent (thus easier for proving lower bounds). For instance, consider the following definition which we call \(\delta\)-stringent.
**Definition 4.1** ((\(\delta\)-stringent)-computation of Boolean functions by a quantum circuit).: Let \(0\leq\delta<1/2\). A quantum circuit \(C\) on \(n\) qubits denoted \(x_{0},\ldots,x_{n-1}\) and \(t+1\) ancillary bits denoted by \(y,z_{1},\ldots,z_{t},\)_computes \(f\)\(\delta\)-stringently_, if for each \(x,x^{\prime},0\leq x,x^{\prime}\leq 2^{n}-1,\)\(y\in\{0,1\},\)
\[|\langle\mathbf{x}^{\prime}\;y\;\mathbf{0}\;|\;U(C)\;|\;\mathbf{x}\;0\;\mathbf{0}\rangle|^{2} > 1/2+\delta\text{ if }x^{\prime}=x\text{ and }y\oplus f(x_{0},\ldots,x_{n-1})=0, \tag{4.1}\] \[< 1/2-\delta\text{ otherwise.}\]
\((|\mathbf{x}\rangle\otimes|y\rangle\otimes|\mathbf{0}\rangle\text{ is abbreviated as }|\mathbf{x}\;y\;\mathbf{0}\rangle)\).
_Remark 9_.: Definition 2.1 is the special case of being \(0\)-stringent.
One can carry through the program sketched earlier using approximation and counting, if we take Definition 4.1, with \(\delta>0\), as our definition of a quantum circuit computing a Boolean function. In this case one would need to replace \(C\) by a circuit \(C^{\prime}\) using Clifford and Toffoli gates which approximates \(U(C)\) within \(\delta/2\) in max-norm. Using Proposition 4.1 one can take \(r^{\prime}=O(r\log r+r\log(1/\delta))\). The number of such circuits is bounded by \(n^{O(r\log r+r\log(1/\delta))}\). From the inequality
\[n^{O(r\log r+r\log(1/\delta))}\geq 2^{2^{n}}\]
one derives the lower bound
\[r\geq\Omega(\min(2^{n}/(n\log n),2^{n}/((\log(1/\delta)\log n))))\]
from which one can conclude that almost all Boolean functions need quantum circuits of size
\[\Omega(\min(2^{n}/(n\log n),2^{n}/((\log(1/\delta)\log n)))).\]
Notice that the above lower bound is worse than that in Corollary 1, and moreover goes to \(0\) as \(\delta\to 0\) and thus does not produce any meaningful lower bound for \(\delta=0\) (which is the case in Definition 2.1).
## 5. Conclusion and future work
We have introduced a new algebraic technique for proving lower bounds in quantum complexity theory. This might have applications in proving lower bounds for other problems in quantum complexity theory. One important feature of our method is that it does not need unitarity of the gates (see Remark 8). This may be relevant for proving results about the complexity class **PostBQP**[1]. We leave this for future work.
## Acknowledgements
We thank Sergey Bravyi, Elena Grigorescu, Dmitri Maslov and Eric Samperton for their comments on a draft version of the paper which helped to improve the paper.
|
2305.12373 | Modeling of Thermal Emission from ULX Pulsar Swift J0243.6+6124 with
General Relativistic Radiation MHD simulations | We perform general relativistic radiation magnetohydrodynamics (MHD)
simulations of super-Eddington accretion flows around a neutron star with a
dipole magnetic field for modeling the galactic ultra-luminous X-ray source
(ULX) exhibiting X-ray pulsations, Swift J0243.6+6124. Our simulations show the
accretion columns near the magnetic poles, the accretion disk outside the
magnetosphere, and the outflows from the disk. It is revealed that the
effectively optically thick outflows, consistent with the observed thermal
emission at $\sim10^7$ K, are generated if the mass accretion rate is much
higher than the Eddington rate $\dot{M}_{\rm Edd}$ and the magnetospheric
radius is smaller than the spherization radius. In order to explain the
blackbody radius ($\sim 100-500$ km) without contradicting the reported spin
period ($9.8~{\rm s}$) and spin-up rate ($\dot{P}=-2.22\times10^{-8}~{\rm
s~s^{-1}}$), the mass accretion rate of $(200-1200)\dot{M}_{\rm Edd}$ is
required. Since the thermal emission was detected in two observations with
$\dot{P}$ of $-2.22\times10^{-8}~{\rm s~s^{-1}}$ and $-1.75\times10^{-8}~{\rm
s~s^{-1}}$ but not in another with $\dot{P}=-6.8 \times10^{-9}~{\rm s~s^{-1}}$,
the surface magnetic field strength of the neutron star in Swift J0243.6+6124
is estimated to be between $3\times10^{11}~{\rm G}$ and $4\times10^{12}~{\rm
G}$. From this restricted range of magnetic field strength, the accretion rate
would be $(200-500)\dot{M}_{\rm Edd}$ when the thermal emission appears and
$(60-100)\dot{M}_{\rm Edd}$ when it is not detected. Our results support the
hypothesis that the super-Eddington phase in the 2017-2018 giant outburst of
Swift J0243.6+6124 is powered by highly super-Eddington accretion flows onto a
magnetized neutron star. | Akihiro Inoue, Ken Ohsuga, Hiroyuki R. Takahashi, Yuta Asahina | 2023-05-21T06:56:17Z | http://arxiv.org/abs/2305.12373v1 | Modeling of Thermal Emission from ULX Pulsar Swift J0243.6+6124 with General Relativistic Radiation MHD simulations
###### Abstract
We perform general relativistic radiation magnetohydrodynamics (MHD) simulations of super-Eddington accretion flows around a neutron star with a dipole magnetic field for modeling the galactic ultra-luminous X-ray source (ULX) exhibiting X-ray pulsations, Swift J0243.6+6124. Our simulations show the accretion columns near the magnetic poles, the accretion disk outside the magnetosphere, and the outflows from the disk. It is revealed that the effectively optically thick outflows, consistent with the observed thermal emission at \(\sim 10^{7}\) K, are generated if the mass accretion rate is much higher than the Eddington rate \(\dot{M}_{\rm Edd}\) and the magnetospheric radius is smaller than the spherization radius. In order to explain the blackbody radius (\(\sim 100-500\) km) without contradicting the reported spin period (9.8 s) and spin-up rate (\(\dot{P}=-2.22\times 10^{-8}\) s s\({}^{-1}\)), the mass accretion rate of \((200-1200)\dot{M}_{\rm Edd}\) is required. Since the thermal emission was detected in two observations with \(\dot{P}\) of \(-2.22\times 10^{-8}\) s s\({}^{-1}\) and \(-1.75\times 10^{-8}\) s s\({}^{-1}\) but not in another with \(\dot{P}=-6.8\times 10^{-9}\) s s\({}^{-1}\), the surface magnetic field strength of the neutron star in Swift J0243.6+6124 is estimated to be between \(3\times 10^{11}\) G and \(4\times 10^{12}\) G. From this restricted range of magnetic field strength, the accretion rate would be \((200-500)\dot{M}_{\rm Edd}\) when the thermal emission appears and \((60-100)\dot{M}_{\rm Edd}\) when it is not detected. Our results support the hypothesis that the super-Eddington phase in the 2017-2018 giant outburst of Swift J0243.6+6124 is powered by highly super-Eddington accretion flows onto a magnetized neutron star.
## 1 Introduction
Ultra-Luminous X-ray sources (ULXs) are bright X-ray compact objects, whose X-ray luminosities exceed the Eddington luminosity for stellar mass black holes \(\sim 10^{39}\) erg s\({}^{-1}\), and have been discovered in off-nuclear regions of our galaxy and nearby galaxies (see reviews by Kaaret et al., 2017; Fabrika et al., 2021). Recently, several ULXs have been shown to have coherent pulsations at periods, \(\sim 1-10\) s (e.g., Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017, 2017). These objects are called ULX Pulsars (ULXPs). It is widely believed that these pulsations can be observed if the radiation from the vicinity of the neutron star (NS) periodically changes via rotation of the NS. In this case, the ULXP is powered by a super-Eddington accretion onto the magnetized NS since the luminosity of the ULXP exceeds the Eddington luminosity for the NS, \(L_{\rm Edd}\).
The super-Eddington accretion flows around the magnetized NS consist of the accretion disk, the outflows, and the accretion columns. The super-Eddington accretion disk is truncated at a certain radius, called the truncation radius, by the magnetic fields of the NS, provided that the magnetic field strength of the NS is high enough to prevent disk accretion. It is considered that radiatively driven outflows are launched from the super-Eddington accretion disk, which forms on the outside of the truncation radius, since the vertical radiation pressure force is larger than the vertical gravitational force (Shakura and Sunyaev, 1973; Poutanen et al., 2007). The gas moves along the magnetic field lines inside the truncation radius. Then, column accretion flows, called the accretion columns, are formed near the magnetic poles of the NS. The radiative luminosity at the base of the accretion columns exceeds the Eddington luminosity (Basko and Sunyaev, 1976; Mushtukov et al., 2015; Kawashima et al., 2016). If the magnetic axis is misaligned with the rotation axis of the NS, then the pulsation exceeding the Eddington luminosity can be observed via the precession of the accretion columns (Mushtukov et al., 2018; Inoue et al., 2020). This inflow-outflow structure around
the magnetized NS has been shown by general relativistic radiation magnetohydrodynamics (GR-RMHD) simulations of super-Eddington accretion flows onto a NS with a dipole magnetic field (Takahashi and Ohsuga, 2017; Abarca et al., 2021).
Recent observations suggested that outflows exist in ULXs (see e.g., Pinto et al., 2016) and ULXPs (see e.g., Kosec et al., 2018). In the galactic ULXP Swift J0243.6+6124, outflows, as well as a relativistic jet (van den Eijnden et al., 2018), were reported (see e.g., Tao et al., 2019; van den Eijnden et al., 2019). Tao et al. (2019) showed that the radiation spectrum of Swift J0243.6+6124, at the peak of the 2017-2018 giant outburst, can be explained by a thermal blackbody at \(0.4-0.6\) keV (\(\sim 10^{7}\) K) with a radius of \(100-500\) km (however, see also Jaisawal et al., 2019). This thermal emission is considered to originate from the optically thick outflows. It is not known whether the outflows appearing in the numerical simulations mentioned above can explain such thermal emission. Although a semi-analytical model was proposed to account for the effects of the outflows (Mushtukov et al., 2019; Chashkina et al., 2019), it remains unclear whether such powerful outflows can actually occur.
In this paper, we perform GR-RMHD simulations of super-Eddington accretion flows onto a NS with a dipole magnetic field for modeling Swift J0243.6+6124. The dependence of the magnetospheric and outflow structure on the magnetic field strength at the NS surface \(B_{\rm NS}\) and the mass accretion rate \(\dot{M}_{\rm in}\), which was not investigated in previous studies (Takahashi and Ohsuga, 2017; Abarca et al., 2021), is reported. It is revealed that super-Eddington accretion onto the magnetized NS reproduces the thermal emission observed in this object. We also constrain \(B_{\rm NS}\) and \(\dot{M}_{\rm in}\) based on the conditions required for the thermal emission. The paper is organized as follows: we describe the numerical methods in Section 2 and present the results in Section 3. Section 4 is devoted to the discussion. Finally, we give our conclusion in the final section.
## 2 Numerical Methods
In this study, we numerically solve the GR-RMHD equations in Schwarzschild polar coordinates \((t,r,\theta,\phi)\), assuming axisymmetry with respect to \(\theta=0,\pi\). We use the GR-RMHD simulation code, UWABAMI (Takahashi and Ohsuga, 2017). The moment formalism is adopted to describe the radiation field (Thorne, 1981). M1 closure is used as the closure relation (Levermore, 1984; Kanno et al., 2013; Sadowski et al., 2013). Using the closure, the radiation fields can be updated by solving the 0-th and 1st moment equations of the radiation without solving the radiative transfer equation. Hereafter, Greek suffixes and Latin suffixes represent spacetime components and space components, respectively. The speed of light \(c\) and the gravitational constant \(G\) are set to 1 unless otherwise noted.
### Basic equations
The basic equations are as follows (see e.g., Takahashi et al., 2018), the mass conservation law,
\[\left(\rho u^{\mu}\right)_{;\mu}=0, \tag{1}\]
the energy-momentum conservation laws for magnetohydrodynamics (MHD),
\[\left(T^{\mu\nu}\right)_{;\nu}=G^{\mu}, \tag{2}\]
the energy-momentum conservation laws for the radiation field,
\[\left(R^{\mu\nu}\right)_{;\nu}=-G^{\mu}, \tag{3}\]
the induction equations,
\[\partial_{t}\left(\sqrt{-g}B^{i}\right)+\partial_{j}\left\{\sqrt{-g}\left(b^{ i}u^{j}-b^{j}u^{i}\right)\right\}=0, \tag{4}\]
where \(\rho\) is the proper mass density, \(u^{\mu}\) is the four-velocity of the gas, \(B^{i}\) is the magnetic field vector in the laboratory frame, \(b^{\mu}\) is the magnetic four-vector in the fluid frame, and \(g\) is the determinant of the metric, \(g=\det\left(g_{\mu\nu}\right)\). The energy momentum tensor of ideal MHD \(T^{\mu\nu}\) consists of that of fluid \({T_{\rm gas}}^{\mu\nu}\) and electromagnetic field \(M^{\mu\nu}\):
\[{T_{\rm gas}}^{\mu\nu} = \left(\rho+e+p_{\rm gas}\right)u^{\mu}u^{\nu}+p_{\rm gas}g^{\mu\nu}, \tag{5}\] \[M^{\mu\nu} = b^{2}u^{\mu}u^{\nu}+p_{\rm mag}g^{\mu\nu}-b^{\mu}b^{\nu}, \tag{6}\]
where \(e\) is the internal energy density, \(p_{\rm gas}=(\Gamma-1)e\) is the gas pressure (\(\Gamma=5/3\)), and \(p_{\rm mag}=b^{2}/2\) is the magnetic pressure in the fluid frame. The energy momentum tensor of the radiation field in the M1 formalism is given by Sadowski et al. (2013),
\[R^{\mu\nu}=\left(\bar{E}+p_{\rm rad}\right){u_{\rm R}}^{\mu}{u_{\rm R}}^{\nu} +\frac{1}{3}\bar{E}g^{\mu\nu}. \tag{7}\]
Here, \(\bar{E}\), \(p_{\rm rad}=\bar{E}/3\), and \({u_{\rm R}}^{\mu}\) are the radiation energy density in the radiation rest frame, the radiation pressure in the radiation rest frame, and the four-velocity of the radiation rest frame, respectively. The radiation four-force \(G^{\mu}\), which describes the interaction between the ideal MHD and radiation fields, is defined as
\[G^{\mu}= -\rho\kappa_{\rm abs}\left(R^{\mu\alpha}u_{\alpha}+4\pi\hat{B}u^{\mu}\right)-\rho\kappa_{\rm sca}\left(R^{\mu\alpha}u_{\alpha}+R^{\alpha\beta}u_{\alpha}u_{\beta}u^{\mu}\right)+G_{\rm comp}^{\mu}, \tag{8}\]
where \(\kappa_{\rm abs}\) and \(\kappa_{\rm sca}\) are the opacity for free-free absorption and isotropic electron scattering, respectively:
\[\kappa_{\rm abs} = 6.4\times 10^{22}\rho T_{\rm e}^{-3.5}\quad[{\rm cm}^{2}\ {\rm g}^{-1}], \tag{9}\] \[\kappa_{\rm sca} = 0.4\quad[{\rm cm}^{2}\ {\rm g}^{-1}]. \tag{10}\]
Here, \(T_{\rm e}\) is the electron temperature. The blackbody intensity is given by \(\hat{B}=aT_{\rm e}^{4}/4\pi\), where \(a\) is the radiation constant. In this study, we consider the thermal comptonization defined as follows (Fragile et al., 2018; Utsumi et al., 2022) :
\[G_{\rm comp}^{\mu}=-\kappa_{\rm sca}\rho\hat{E}\frac{4k(T_{\rm e}-T_{\rm r})}{m_{\rm e}}u^{\mu}. \tag{11}\]
Here, \(\hat{E}\) is the radiation energy density in the fluid frame, \(T_{r}=(\hat{E}/a)^{1/4}\) is the radiation temperature and \(m_{\rm e}\) is the electron rest mass. We take \(T_{\rm e}=T_{\rm g}\) for simplicity, where \(T_{\rm g}\) is the gas temperature. The gas temperature can be derived from
\[T_{\rm g}=\frac{\mu_{\rm w}m_{\rm p}p_{\rm gas}}{\rho k}, \tag{12}\]
where \(m_{\rm p}\) is the proton mass, \(k\) is the Boltzmann constant, and \(\mu_{\rm w}=0.5\) is the mean molecular weight. We consider the mean-field dynamo in our simulations (Sadowski et al., 2015).
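As a quick reference for these microphysics inputs, a minimal Python sketch is given below (it is not part of UWABAMI, and the example density and pressure values are arbitrary assumptions); it evaluates the gas temperature of equation (12) and the opacities of equations (9) and (10) in CGS units.

```python
# Physical constants in CGS units (values assumed here, not read from the simulation code).
K_B  = 1.380649e-16   # Boltzmann constant [erg K^-1]
M_P  = 1.6726e-24     # proton mass [g]
MU_W = 0.5            # mean molecular weight adopted in this work

def gas_temperature(rho, p_gas):
    """Eq. (12): T_g = mu_w m_p p_gas / (rho k)."""
    return MU_W * M_P * p_gas / (rho * K_B)

def kappa_abs(rho, T_e):
    """Eq. (9): free-free absorption opacity [cm^2 g^-1]."""
    return 6.4e22 * rho * T_e**-3.5

KAPPA_SCA = 0.4  # Eq. (10): electron-scattering opacity [cm^2 g^-1]

# Arbitrary disk-like example: rho = 1e-3 g cm^-3, p_gas = 1e13 erg cm^-3.
rho, p = 1.0e-3, 1.0e13
T_g = gas_temperature(rho, p)
print(f"T_g = {T_g:.2e} K, kappa_abs = {kappa_abs(rho, T_g):.2e} cm^2/g")
```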
### Numerical models
In this study, we take \(M_{\rm NS}=1.4M_{\odot}\) and \(r_{\rm NS}=10\) km, where \(M_{\rm NS}\) and \(r_{\rm NS}\) are the mass and radius of the NS, and consider the NSs with dipole magnetic fields (Wasserman & Shapiro, 1983). The dipole magnetic field strength at the NS surface \(B_{\rm NS}\) applied in this study, which is relatively weak compared with the typical value observed in X-ray pulsars, \(10^{11-13}\) G (see Section 4.5), is listed in Table 1. The table also reports the maximum gas density of the initial torus \(\rho_{0}\), the end time of the simulation \(t_{\rm end}\), the time after which the mass outflow rate is almost constant \(t_{\rm eq}\), and numerical grid points \((N_{r},N_{\theta})\). The light-crossing time for the gravitational radius of the NS, \(r_{\rm g}=M_{\rm NS}=2.1\) km, is denoted by \(t_{\rm g}\). The computational domain consists of \(r=[r_{\rm NS},r_{\rm out}]\), where \(r_{\rm out}=2100\) km, and \(\theta=[0,\pi]\). The size of the radial grid exponentially increases with \(r\), and the size of the polar grid is uniform (Takahashi & Ohsuga, 2017). Multipole components of the NS magnetic field are not considered. GR-RMHD simulations considering the multipole components are left as important future work, although the GR-MHD simulations employing quadrupole fields were recently performed by Das et al. (2022). The rotation of the NSs is ignored since the rotation period of the NSs observed in ULXPs, \(1-10\) s, is much longer than the rotational timescale of the accretion disk, \(\sim 10^{-2}\) s, even within \(r\sim 100\) km. There remains the possibility that a ULX harbors a millisecond pulsar (Kluzniak & Lasota, 2015), which we will discuss later. We assume that the magnetic axis coincides with the rotation axis of the accretion disk. The convergence of the simulation results is confirmed using models B1e10D001, B1e10D001_a, B1e10D001_b, and B1e10D001_c. These models have the same initial parameters except for the resolution.
We initially set the equilibrium torus given by Fishbone & Moncrief (1976), as can be seen in Figure 1, where \((R,z)=(r\sin\theta,r\cos\theta)\). The gas pressure \(p_{\rm gas}\) inside the torus is replaced with \(p_{\rm gas}+p_{\rm rad}\) by assuming a local thermodynamic equilibrium, \(T_{\rm g}=T_{\rm r}\). The inner edge of the torus and the maximum pressure radius are set to 210 km and 304.5 km, respectively. The poloidal-loop magnetic fields inside the torus are considered in addition to the dipole magnetic field of the NS. The vector potential of this loop magnetic field is proportional to \(\rho\). The embedded loop magnetic field is anti-parallel to the dipole magnetic field at the inner edge of the torus (Romanova et al., 2011;
Takahashi & Ohsuga, 2017; Parfrey & Tchekhovskoy, 2017). We set the maximum \((p_{\rm gas}+p_{\rm rad})/p_{\rm mag}\) to 100 inside the torus. We give a perturbation on \(p_{\rm gas}+p_{\rm rad}\) by 10% to break an equilibrium state of the torus.
The NS and torus are initially surrounded by relatively hot and low-density corona,
\[\rho=\max\left[\rho_{\rm flr},\rho_{\rm c}\left(\frac{r_{\rm NS}}{r}\right)^{-5}\right],\]
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Model} & \(B_{\rm NS}\) & \(\rho_{0}\) & \(t_{\rm end}\) & \(t_{\rm eq}\) & \((N_{r},N_{\theta})\) \\ Unit & [G] & [g cm\({}^{-3}\)] & [\(t_{\rm g}\)] & [\(t_{\rm g}\)] & \\ \hline B3e9D001 & \(3.3\times 10^{9}\) & 0.01 & 50000 & 30000 & (592, 412) \\ B3e9D01\_np & \(3.3\times 10^{9}\) & 0.1 & 50000 & 40000 & (592, 412) \\ B1e10D001 & \(10^{10}\) & 0.01 & 50000 & 30000 & (592, 512) \\ B1e10D002 & \(10^{10}\) & 0.021 & 50000 & 30000 & (592, 512) \\ B1e10D004 & \(10^{10}\) & 0.046 & 50000 & 35000 & (592, 512) \\ B1e10D01 & \(10^{10}\) & 0.1 & 50000 & 35000 & (592, 412) \\ B1e10D1\_np & \(10^{10}\) & 1 & 50000 & 40000 & (592, 512) \\ B3e10D001 & \(3.3\times 10^{10}\) & 0.01 & 50000 & 40000 & (732, 512) \\ B3e10D01 & \(3.3\times 10^{10}\) & 0.1 & 50000 & 35000 & (732, 512) \\ B3e10D02 & \(3.3\times 10^{10}\) & 0.21 & 50000 & 40000 & (592, 512) \\ B3e10D04 & \(3.3\times 10^{10}\) & 0.46 & 50000 & 40000 & (592, 512) \\ B3e10D1 & \(3.3\times 10^{10}\) & 1 & 50000 & 40000 & (592, 412) \\ B1e10D001\_a & \(10^{10}\) & 0.01 & 40000 & 25000 & (732, 512) \\ B1e10D001\_b & \(10^{10}\) & 0.01 & 40000 & 30000 & (592, 412) \\ B1e10D001\_c & \(10^{10}\) & 0.01 & 40000 & 30000 & (592, 326) \\ \hline \end{tabular} Note. – \(B_{\rm NS}\), \(\rho_{0}\), and \((N_{r},N_{\theta})\) are the surface magnetic field strength of the NS, the maximum gas density of the initial torus, and numerical grid points, respectively. The simulation continues until \(t=t_{\rm end}\). The mass outflow rate is almost constant after \(t=t_{\rm eq}\). The model without accretion columns is denoted by _np at the end of the model name.
\end{table}
Table 1: Parameters for different models
Figure 1: Snapshot of density for initial equilibrium torus in the case of model B1e10D01. Solid lines represent magnetic field lines.
\[p_{\rm gas} = \max\left[p_{\rm flr},\min\left[\frac{k\rho T_{\rm g,max}}{\mu_{\rm w}m_{\rm p}},\frac{k\rho_{\rm c}T_{\rm init}}{\mu_{\rm w}m_{\rm p}}\left(\frac{r_{\rm NS}}{r}\right)^{-5}\right]\right],\]
where \(\rho_{\rm c}\) is determined by \(\sigma_{\rm c}=B_{\rm NS}^{2}/(4\pi\rho_{\rm c})=10^{3}\), and \(T_{\rm init}\) is set to \(5\times 10^{11}\ {\rm K}\). The floor values of the gas density \(\rho_{\rm flr}\) and gas pressure \(p_{\rm flr}\) are described in Section 2.3. The radiation energy density of the corona is \(\bar{E}=a\bar{T}_{\rm rad}^{4}\), where \(\bar{T}_{\rm rad}=10^{5}\ {\rm K}\). The velocity of the gas \(u^{i}\) and of the radiation rest frame \(u_{\rm R}^{i}\) are set to zero in the corona. The present results are not affected by the corona gas set initially, because this hot gas is blown away by the radiation force from the accretion disk soon after the simulations begin.
### Boundary and floor condition
We adopt an outgoing boundary at \(r=r_{\rm out}\) and a reflective boundary at \(\theta=0,\pi\). At the inner boundary, \(r=r_{\rm NS}\), we adopt a boundary through which the gas can be swallowed by the NS but the energy cannot (Ohsuga, 2007). At the inner ghost cells, \(\rho\), \(p_{\rm gas}\), and \(\bar{E}\) obey a free boundary condition, and \(u^{i}\) and \(u_{\rm R}^{i}\) are set to zero. For the magnetic field, the radial component is fixed to the dipole field (Wasserman and Shapiro, 1983), while the tangential component is set according to the free boundary condition (Parfrey and Tchekhovskoy, 2017; Abarca et al., 2021). The numerical fluxes in the mass conservation law and in the energy conservation laws for the MHD and the radiation field are set to zero at the NS surface. The numerical flux of the induction equation is also set to zero at the NS surface to satisfy the ideal MHD condition. We also impose the following conditions on the gas density and pressure at the NS surface:
\[\rho(t) = \min\left[\rho_{*}(t),\ \rho(t=0)\right], \tag{13}\] \[p_{\rm gas}(t) = \frac{k\rho(t)T_{\rm g,*}(t)}{\mu_{\rm w} m_{\rm p}}, \tag{14}\]
where \(\rho_{*}\) and \(T_{\rm g,*}\) are the gas density and gas temperature calculated from the conservative variables, respectively. In this case, the gas density at the NS surface is limited to its initial value while the gas temperature is kept unchanged. The kinetic and thermal energy densities removed by applying equations (13) and (14) are added to the radiation energy density \(R_{t}^{t}\). Here, the kinetic and thermal energy densities are defined as follows:
\[U_{\rm kinetic} = \rho u^{t}(u_{t}-\sqrt{-g_{\rm tt}}), \tag{15}\] \[U_{\rm thermal} = (e+p_{\rm gas})u^{t}u_{t}+p_{\rm gas}g_{t}^{t}. \tag{16}\]
The mass removed by the treatment of equation (13) is considered to be swallowed by the NS.
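A minimal sketch of how equations (13) and (14) act on the innermost cells is given below. It is written in Python for illustration rather than taken from the actual code, and the primitive values in the example are arbitrary.

```python
import numpy as np

K_B, M_P, MU_W = 1.380649e-16, 1.6726e-24, 0.5  # CGS constants; mu_w = 0.5 as in the text

def reset_surface_cells(rho_star, T_g_star, rho_init):
    """Apply Eqs. (13)-(14): cap the surface density at its initial value
    while keeping the gas temperature obtained from the conservative variables."""
    rho_new = np.minimum(rho_star, rho_init)
    p_gas_new = K_B * rho_new * T_g_star / (MU_W * M_P)
    return rho_new, p_gas_new

# Arbitrary example: two surface cells, one of which exceeds the initial density.
rho_star = np.array([2.0e-2, 5.0e-4])   # from conservative variables [g cm^-3]
T_g_star = np.array([3.0e8, 1.0e8])     # [K]
rho_init = np.array([1.0e-2, 1.0e-2])   # initial surface density [g cm^-3]
print(reset_surface_cells(rho_star, T_g_star, rho_init))
```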
We impose a floor condition on \(\rho\) and \(p_{\rm gas}\) to solve the GR-RMHD equations stably:
\[\rho_{\rm flr} = \max\left[\frac{b^{2}}{\sigma_{\rm max}},\ 10^{-4}\rho_{0}\left(\frac{r}{r_{\rm g}}\right)^{-1.5} \right], \tag{17}\] \[p_{\rm flr} = 10^{-6}\left(\Gamma-1\right)\rho_{0}\left(\frac{r}{r_{\rm g}} \right)^{-2.5}, \tag{18}\]
where \(\sigma_{\rm max}\) is set to \(10^{3}\). The upper and lower limits of the gas temperature are set to \(5\times 10^{11}\ {\rm K}\) and \(5\times 10^{5}\ {\rm K}\), respectively.
The gas density should be low near \(\theta=0,\pi\) because the strong magnetic pressure and tension of the open poloidal magnetic field lines inhibit the mass flow toward the immediate vicinity of the rotation axis. In the numerical simulation, however, the gas reaches the region of \(\theta=0,\pi\) due to numerical diffusion, and an unphysical mass outflow driven by the radiation force forms close to the polar axis. To avoid this problem, we ignore the gas-radiation interaction in this region. Practically, we set the absorption and scattering opacities to zero at \(\sigma>\sigma_{\rm rad}\), where \(\sigma=b^{2}/\rho\) is the magnetization. In this study, we set \(\sigma_{\rm rad}=10\) in all models. This value is chosen so that the opacity near the rotation axis is zero while the radiation from the dense accretion flows near the NS surface is still treated.
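The floor treatment of equations (17) and (18) together with the switch-off of the gas-radiation coupling at \(\sigma>\sigma_{\rm rad}\) can be summarized by the following Python sketch; the array values are arbitrary placeholders and the code-unit conventions are an assumption of the example.

```python
import numpy as np

SIGMA_MAX, SIGMA_RAD, GAMMA = 1.0e3, 10.0, 5.0 / 3.0

def apply_floors(rho, p_gas, b2, r, rho0, r_g):
    """Equations (17)-(18): density and pressure floors."""
    rho_flr = np.maximum(b2 / SIGMA_MAX, 1.0e-4 * rho0 * (r / r_g) ** -1.5)
    p_flr = 1.0e-6 * (GAMMA - 1.0) * rho0 * (r / r_g) ** -2.5
    return np.maximum(rho, rho_flr), np.maximum(p_gas, p_flr)

def coupling_mask(rho, b2):
    """True where the gas-radiation coupling is kept, i.e. sigma = b^2/rho <= sigma_rad."""
    return (b2 / rho) <= SIGMA_RAD

# Arbitrary example values in code units.
r     = np.array([5.0, 50.0, 500.0])        # radius in units of r_g
rho   = np.array([1.0e-9, 1.0e-4, 1.0e-6])
p_gas = np.array([1.0e-12, 1.0e-6, 1.0e-9])
b2    = np.array([1.0e-4, 1.0e-6, 1.0e-8])
print(apply_floors(rho, p_gas, b2, r, rho0=0.01, r_g=1.0))
print(coupling_mask(rho, b2))
```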
## 3 Results
In all models, the initial torus deviates from the equilibrium state due to the magnetorotational instability (MRI) caused by the differential rotation in the torus after the simulation starts. The angular momentum of the gas is
transported outward, and then the accretion disk is formed around the equatorial plane (\(z=0\) plane). Radiatively driven outflows are launched from the accretion disk. The accretion disk in models B3e9D001, B1e10D001, B1e10D002, B1e10D004, B1e10D01, B3e10D001, B3e10D01, B3e10D02, B3e10D04 and B3e10D1 is truncated before it reaches the NS surface. Then, the accreting gas moves along magnetic field lines, and column accretion flows (accretion columns) are formed near the poles of the NS. On the other hand, the accretion disk in models B3e9D01_np and B1e10D1_np directly connects to the NS surface without being truncated.
### Quasi-steady state structure
We use the numerical data from period \(t_{\rm eq}\) to \(t_{\rm end}\) shown in Table 1 for investigating a quasi-steady structure in the present work, since the flows become quasi-steady and the mass outflow rate is almost constant after \(t_{\rm eq}\). First, we describe the quasi-steady structure of the accretion flows and outflows around the magnetized NS. Figure 2 shows the time-averaged overview of models B1e10D001 (left) and B1e10D01 (right) in the quasi-steady state. Colors represent \(\rho\) in panels (a) and (c), and \(\bar{E}\) in panels (b) and (d). The white region with \(r<10\) km is the NS, and its center is located at the origin, \((R,z)=(0,0)\). The magnetic field lines are shown by cyan solid lines. We find that the accretion disk is formed near the equatorial plane at \(20\) km \(\lesssim R\lesssim 200\) km (the green region in panels (a) and (c)). In panels (b) and (d), it is clear that \(\bar{E}\) is large in the accretion disk. The accretion column is formed near the NS magnetic poles, \(r\lesssim 20\) km, the details of which are described in Figure 3. Streamlines follow the poloidal velocity vectors in panels (a) and (c). It can be seen that outflows emanate from the disk toward the low-density region. In addition, turbulent motions exist inside the accretion disk. The radiative flux is represented by streamlines in panels (b) and (d). We can see that an outward radiative flux emerges from the accretion disk. The accretion disk, the outflows, and the outward radiative flux from the accretion disk are common to all models. Both \(\rho\) and \(\bar{E}\) in the accretion disk tend to increase as \(\rho_{0}\) becomes large. Indeed, both \(\rho\) and \(\bar{E}\) in model B1e10D01 (\(\rho_{0}=0.1\) g cm\({}^{-3}\)) are an order of magnitude higher than those in model B1e10D001 (\(\rho_{0}=0.01\) g cm\({}^{-3}\)).
We can see that the gas accretes onto the NS around the pole outside the accretion column (\(r\lesssim 100\)km and \(\theta\lesssim 40^{\circ}\), \(140^{\circ}\lesssim\theta\) in panel (a)). This is because the interaction between the gas and radiation is artificially switched off at the high-\(\sigma\) region (\(\sigma>\sigma_{\rm rad}\)) in our simulation to maintain numerical stability. The radiation force does not work around
Figure 2: Here, we show the time-averaged quasi-steady structure of super-Eddington accretion flows around the magnetized NS for models B1e10D001 and B1e10D01. Panels (a) and (c): Plot of density profile along with streamlines. Panels (b) and (d): Colors show radiation energy density in the radiation rest frame and streamlines indicate the radiative flux. Cyan lines are magnetic field lines.
the polar axis and the gas falls down to the NS. We note that the density in this region is too small to contribute to the mass accretion rate, and the effects on the overall structure are negligibly small.
Next, we explain the dependence of the magnetospheric structure on \(\rho_{0}\). Colors in Figure 3 show \(\rho\), and magnetic field lines are represented by cyan lines. We take models B1e10D001, B1e10D01, and B1e10D1_np. We plot the time-averaged magnetization, \(\langle\sigma\rangle=\langle b^{2}\rangle/\langle\rho\rangle=10\), by yellow contours. Here, the angle brackets denote the time-averaged value. The magnetization \(\langle\sigma\rangle\) on the high-density side of the yellow line is less than 10. In the left panel (model B1e10D001), the high-density accretion column, around \((R,z)=(5\mathrm{km},12\mathrm{km})\), is formed (one-side accretion). Although not shown in Figure 3, the same is true of models B3e9D001, B1e10D002, B3e10D001, B3e10D01, and B3e10D02. In the middle panel (model B1e10D01), the accretion columns are formed at \((R,z)=(6\mathrm{km},\pm 10\mathrm{km})\), which is true of models B1e10D004, B3e10D04, and B3e10D1. At the truncation radius, which is the radius of the inner edge of the accretion disk, the gas begins to move away from the equatorial plane toward the NS poles. The truncation radius is \(R\sim 25\) km in model B1e10D001 and \(R\sim 13\) km in model B1e10D01, and approximately corresponds to the position where \(\langle\sigma\rangle=10\) at the equatorial plane. The density inside this radius is less than 1% of the density of the disk, except inside the accretion column. Also, the truncation radius is roughly comparable to the magnetospheric radius, \(r_{\rm M}\), \(\sim 27\) km for model B1e10D001 and \(\sim 13\) km for model B1e10D01. In the present work, \(r_{\rm M}\) is defined as the maximum radius of the region, where \(\langle\beta\rangle_{u_{\phi}}<1\), inside the maximum pressure radius of the initial torus, 304.5 km. Here, \(\langle\beta\rangle_{u_{\phi}}\) is the \(u_{\phi}\)-weighted plasma beta,
\[\left\langle\beta\right\rangle_{u_{\phi}}=\frac{1}{\int_{0}^{\pi}\left\langle u _{\phi}\right\rangle\sqrt{g_{\theta\theta}}d\theta}\int_{0}^{\pi}\frac{\left \langle p_{\mathrm{gas}}\right\rangle+\left\langle p_{\mathrm{rad}}\right\rangle }{\left\langle p_{\mathrm{mag}}\right\rangle}\left\langle u_{\phi}\right\rangle \sqrt{g_{\theta\theta}}d\theta. \tag{19}\]
Although \(p_{\mathrm{rad}}\) is the radiation pressure in the radiation rest frame, it is approximately equal to the radiation pressure in the fluid frame since the MHD and radiation field inside the accretion disk are in local thermal equilibrium, and therefore the radiation rest frame coincides with the fluid frame. We note that the gas pressure is negligible in the super-Eddington accretion disk (Takahashi and Ohsuga, 2017), so that \(p_{\mathrm{mag}}\sim p_{\mathrm{rad}}\) at \(\left\langle\beta\right\rangle_{u_{\phi}}=1\). The coincidence between the truncation radius and \(r_{\mathrm{M}}\) indicates that the truncation radius is determined by the balance between \(p_{\mathrm{mag}}\) and \(p_{\mathrm{rad}}\). As can be seen from the left and middle panels in Figure 3, the truncation radius decreases as \(\rho_{0}\) increases.
In model B1e10D1_np, since the radiation pressure of the accretion disk is larger than the magnetic pressure of the dipole field even at the NS surface, the accretion disk reaches the NS surface without being truncated and no accretion column is formed. In this case, the region where \(\left\langle\beta\right\rangle_{u_{\phi}}>1\) reaches the NS surface, and \(r_{\mathrm{M}}\) can not be determined. Although not shown in this figure, accretion columns are not formed in model B3e9D01_np for the same reason. Based on the hypothesis that the radiation from the accretion column is the origin of the X-ray pulse, models B3e9D01_np
Figure 3: The dependence of accretion flows on the maximum gas density of the initial torus \(\rho_{0}\). Colors shows gas density. Yellow lines represents \(\langle\sigma\rangle=10\), where \(\langle\sigma\rangle\) is the time-averaged magnetization. Cyan lines are magnetic field lines.
and B1e10D1_np are inappropriate for explaining ULXPs. Incidentally, the reason why the one-side accretion occurs in model B1e10D001 is described in Section 4.4.
Table 2 lists the time-averaged value of the mass accretion rate \(\dot{M}_{\rm in}\) at \(r=11\) km, mass outflow rate \(\dot{M}_{\rm out}\) at \(r=r_{\rm out}\), radiative luminosity \(L_{\rm rad}\) at \(r=r_{\rm out}\), and kinetic luminosity \(L_{\rm kin}\) at \(r=r_{\rm out}\) of each model. The Eddington mass accretion rate is denoted by \(\dot{M}_{\rm Edd}=L_{\rm Edd}/c^{2}\). The mass accretion rate and outflow rate are defined as
\[\dot{M}_{\rm in}(r) = -\int\min[\rho u^{r},0]\sqrt{-g}d\theta d\phi, \tag{20}\] \[\dot{M}_{\rm out}(r) = \int\max[\rho u^{r},0]\sqrt{-g}d\theta d\phi. \tag{21}\]
The radiative and kinetic luminosities can be calculated using the following formulae (Sadowski et al., 2016):
\[L_{\rm rad}(r) = -\int\min[R_{t}^{r},0]\sqrt{-g}d\theta d\phi, \tag{22}\] \[L_{\rm kin}(r) = -\int\min[\rho u^{r}\left(u_{t}+\sqrt{-g_{tt}}\right),0]\sqrt{-g} d\theta d\phi. \tag{23}\]
As can be seen from this table, \(\dot{M}_{\rm in}\), \(\dot{M}_{\rm out}\), \(L_{\rm rad}\) and \(L_{\rm kin}\) tend to increase as \(\rho_{0}\) increases. We also find that \(L_{\rm kin}/L_{\rm rad}\) tends to increase as \(\dot{M}_{\rm in}\) increases. Indeed, the mass accretion rate is \(54\dot{M}_{\rm Edd}\) and \(L_{\rm kin}/L_{\rm rad}\sim 0.083\) in model B1e10D001, \(500\dot{M}_{\rm Edd}\) and \(L_{\rm kin}/L_{\rm rad}\sim 0.86\) in model B1e10D01, and \(4700\dot{M}_{\rm Edd}\) and \(L_{\rm kin}/L_{\rm rad}\sim 1.2\) in model B1e10D1_np. That is, the kinetic luminosity is comparable to the radiative luminosity for the models with high mass accretion rates. These trends are also reported in non-relativistic radiation hydrodynamics simulations (Ohsuga, 2007).
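In axisymmetry the \(\phi\)-integration in equations (20)-(23) gives a factor of \(2\pi\), and the remaining \(\theta\)-integral can be evaluated numerically as in the Python sketch below. It assumes Schwarzschild coordinates (\(\sqrt{-g}=r^{2}\sin\theta\)) and uses placeholder profiles rather than actual simulation output.

```python
import numpy as np

def sqrt_minus_g(r, theta):
    """sqrt(-g) = r^2 sin(theta) in Schwarzschild polar coordinates."""
    return r**2 * np.sin(theta)

def shell_mass_rates(r, theta, rho, u_r):
    """Eqs. (20)-(21): inflow and outflow rates through the sphere of radius r."""
    flux = rho * u_r * sqrt_minus_g(r, theta) * np.gradient(theta)
    mdot_in = -2.0 * np.pi * np.sum(np.minimum(flux, 0.0))
    mdot_out = 2.0 * np.pi * np.sum(np.maximum(flux, 0.0))
    return mdot_in, mdot_out

def radiative_luminosity(r, theta, R_r_t):
    """Eq. (22): outward radiative luminosity from the R^r_t component."""
    integrand = np.minimum(R_r_t, 0.0) * sqrt_minus_g(r, theta) * np.gradient(theta)
    return -2.0 * np.pi * np.sum(integrand)

# Placeholder profiles at r = 11 km: inflow near the equator, weak outflow elsewhere.
theta = np.linspace(0.0, np.pi, 256)
rho = 1.0e-3 * np.exp(-((theta - np.pi / 2.0) / 0.3) ** 2)
u_r = -0.05 * np.exp(-((theta - np.pi / 2.0) / 0.2) ** 2) + 0.01
R_r_t = -1.0e-5 * np.ones_like(theta)
r = 1.1e6   # 11 km in cm (conversion of the result to physical rates is omitted here)
print(shell_mass_rates(r, theta, rho, u_r))
print(radiative_luminosity(r, theta, R_r_t))
```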
Figure 4 shows \(r_{\rm M}\) and the time derivative of the NS rotation period (spin-up rate) \(\dot{P}\) estimated using our numerical models as a function of the mass accretion rate. Points in Figure 4 (a) indicate the resulting \(r_{\rm M}\) for \(B_{\rm NS}=3.3\times 10^{9}\) G (red), \(10^{10}\) G (blue), and \(3.3\times 10^{10}\) G (black), in which all models have accretion columns. The magnetospheric radius \(r_{\rm M}\) decreases with increasing \(\dot{M}_{\rm in}\) and increases with increasing \(B_{\rm NS}\). Dashed lines indicate the analytical solution for \(r_{\rm M}\), which can be calculated assuming that the magnetic pressure of the dipole field balances with the radiation pressure of the accretion disk (Takahashi & Ohsuga, 2017, see also Appendix B for detail):
\[r_{\rm M} \sim 2.0\times 10^{6}~{}[{\rm cm}]\times\left(\frac{\alpha}{0.1}\right)^{2/7}\left(\frac{\dot{M}_{\rm in}}{10^{2}\dot{M}_{\rm Edd}}\right)^{-2/7}\left(\frac{B_{\rm NS}}{10^{10}\ {\rm G}}\right)^{4/7}\left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)^{-3/7}\left(\frac{r_{\rm NS}}{10\ {\rm km}}\right)^{12/7}, \tag{24}\]

\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \(\dot{M}_{\rm in}\) & \(\dot{M}_{\rm out}\) & \(L_{\rm rad}\) & \(L_{\rm kin}\) & \(T_{\rm bb}\) & \(r_{\rm bb}\) \\ Unit & \([\dot{M}_{\rm Edd}]\) & \([\dot{M}_{\rm Edd}]\) & \([L_{\rm Edd}]\) & \([L_{\rm Edd}]\) & \([10^{7}~{\rm K}]\) & \([{\rm km}]\) \\ \hline B3e9D001 & 32 & 36 & 3.0 & 0.15 & 0.72 & 57 \\ B3e9D01\_np & 220 & 2700 & 130 & 47 & 0.63 & 230 \\ B1e10D001 & 54 & 74 & 4.1 & 0.34 & 0.71 & 58 \\ B1e10D002 & 110 & 110 & 9.5 & 1.0 & 0.73 & 65 \\ B1e10D004 & 160 & 730 & 21 & 7.1 & 0.66 & 130 \\ B1e10D01 & 500 & 2600 & 42 & 36 & 0.66 & 240 \\ B1e10D1\_np & 4700 & 23000 & 500 & 410 & 0.50 & 1100 \\ B3e10D001 & 98 & 170 & 5.5 & 0.89 & 0.75 & 50 \\ B3e10D01 & 720 & 2900 & 140 & 56 & 0.69 & 270 \\ B3e10D02 & 650 & 4600 & 150 & 37 & 0.63 & 250 \\ B3e10D04 & 1500 & 13000 & 320 & 210 & 0.59 & 660 \\ B3e10D1 & 2300 & 28000 & 490 & 460 & 0.55 & 1300 \\ \hline \end{tabular}
\end{table}
Table 2: Time-averaged values of the mass accretion rate, mass outflow rate, radiative luminosity, kinetic luminosity, blackbody temperature, and blackbody radius. The mass accretion rate is measured at \(r=11\) km; the mass outflow rate and luminosities are calculated at \(r=r_{\rm out}\); \(T_{\rm bb}\) and \(r_{\rm bb}\) are defined in equations (35) and (36).
where \(\alpha\) is the viscous parameter (Shakura & Sunyaev, 1973) and we take \(\alpha=0.1\). We find that the resulting \(r_{\rm M}\) is consistent with the analytical one. We note that the results of models B1e10D1_np and B3e9D01_np are not plotted since \(r_{\rm M}\) is not determined in these models, as mentioned above.
We estimate \(\dot{P}\) as follows,
\[\dot{P} = \frac{\left\langle\dot{L}\right\rangle}{M_{\rm NS}l_{\rm NS}}P, \tag{25}\] \[\dot{L} = \int M_{\phi}^{r}\sqrt{-g}d\theta d\phi. \tag{26}\]
(Shapiro & Teukolsky, 1986; Takahashi & Ohsuga, 2017). Here, \(l_{\rm NS}=2\pi r_{\rm NS}^{2}/P\) and \(P\) are the specific angular momentum and rotation period of the NS, respectively. For numerical estimation, we assume \(P=9.8\ {\rm s}\), which corresponds to that observed in Swift J0243.6+6124. Figure 4 (b) shows that \(\dot{P}\) is in the range from \(-10^{-9}\ {\rm s}\ {\rm s}^{-1}\) to \(-10^{-7}\ {\rm s}\ {\rm s}^{-1}\), which is not inconsistent with the observed value of Swift J0243.6+6124, \(\dot{P}\sim-10^{-8}\ {\rm s\ s^{-1}}\) (see Section 3.3 for detail). It is also clearly seen that a larger \(\dot{M}_{\rm in}\) and a higher \(B_{\rm NS}\) lead to a larger spin-up rate \(|\dot{P}|\). Dashed lines indicate the analytical solution for \(\dot{P}\), which can be calculated assuming that the Keplerian angular momentum at \(r_{\rm M}\) is transported to the NS without dissipation (Takahashi & Ohsuga, 2017, see also Appendix B):
\[\dot{P} \sim -2.2\times 10^{-9}\ [{\rm s}\cdot{\rm s}^{-1}] \tag{27}\] \[\times \left(\frac{\alpha}{0.1}\right)^{1/7}\left(\frac{\dot{M}_{\rm in} }{10^{2}\dot{M}_{\rm Edd}}\right)^{6/7}\left(\frac{B_{\rm NS}}{10^{10}\ {\rm G}}\right)^{2/7}\] \[\times \left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)^{2/7}\left(\frac{r_ {\rm NS}}{10\ {\rm km}}\right)^{-8/7}\left(\frac{P}{10\ {\rm s}}\right)^{2}.\]
The resulting \(\dot{P}\) is roughly comparable to the analytical one.
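For convenience, the scaling relations (24) and (27) can be evaluated with the short Python helper below; it simply reproduces the quoted normalizations, and the example accretion rate is arbitrary.

```python
def r_M_analytic(mdot_edd, B10=1.0, alpha=0.1, M14=1.0, r6=1.0):
    """Eq. (24): magnetospheric radius in cm.
    mdot_edd = Mdot_in/Mdot_Edd, B10 = B_NS/1e10 G, M14 = M_NS/1.4 Msun, r6 = r_NS/10 km."""
    return (2.0e6 * (alpha / 0.1) ** (2.0 / 7.0) * (mdot_edd / 100.0) ** (-2.0 / 7.0)
            * B10 ** (4.0 / 7.0) * M14 ** (-3.0 / 7.0) * r6 ** (12.0 / 7.0))

def pdot_analytic(mdot_edd, B10=1.0, alpha=0.1, M14=1.0, r6=1.0, P10=0.98):
    """Eq. (27): spin-up rate in s/s (negative = spin-up). P10 = P / 10 s (P = 9.8 s here)."""
    return (-2.2e-9 * (alpha / 0.1) ** (1.0 / 7.0) * (mdot_edd / 100.0) ** (6.0 / 7.0)
            * B10 ** (2.0 / 7.0) * M14 ** (2.0 / 7.0) * r6 ** (-8.0 / 7.0) * P10 ** 2)

# Example: Mdot_in = 500 Mdot_Edd and B_NS = 1e10 G, comparable to model B1e10D01.
print(f"r_M  ~ {r_M_analytic(500.0) / 1e5:.1f} km")
print(f"Pdot ~ {pdot_analytic(500.0):.2e} s/s")
```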
Figure 4: Panel (a): The relations between \(r_{\rm M}\) and the mass accretion rate at the NS surface with different surface magnetic field strengths, \(B_{\rm NS}=3.3\times 10^{9}\ {\rm G}\) (red), \(10^{10}\ {\rm G}\) (blue), and \(3.3\times 10^{10}\ {\rm G}\) (black). Dashed lines show the analytical model of \(r_{\rm M}\)(Takahashi & Ohsuga, 2017). Panel (b): \(\dot{P}\) as a function of the mass accretion rate at the NS surface with different surface magnetic field strengths. Dashed lines show the analytical model of \(\dot{P}\)(Takahashi & Ohsuga, 2017). Parameters for the analytical model : \(M_{\rm NS}=1.4M_{\odot}\), \(r_{\rm NS}=10\ {\rm km}\) and \(\alpha=0.1\).
### Powerful outflows
In Figure 5, we show that powerful outflows are launched from the super-Eddington accretion disk and are driven by radiation and centrifugal forces. The top panel in Figure 5 shows time-averaged \(\dot{M}_{\rm in}\) (thick solid line), \(\dot{M}_{\rm out}\) (dashed line), and net flow rate \(\dot{M}_{\rm net}=\dot{M}_{\rm in}-\dot{M}_{\rm out}\) (thin solid line) as a function of \(r\) in the case of model B1e10D01. The position of \(r_{\rm M}\) in this model is represented by a vertical line. This panel indicates that outflows mainly originate from the accretion disk. This is clearly understood from the fact that \(\dot{M}_{\rm out}\) increases significantly with increasing \(r\), and \(\dot{M}_{\rm out}\) for \(r<r_{\rm M}\) is much smaller than that for \(r>r_{\rm M}\). The mass accretion rate \(\dot{M}_{\rm in}\) decreases with decreasing \(r\) due to efficient mass ejection from the accretion disk. The mass outflow rate \(\dot{M}_{\rm out}\) is much smaller than the mass accretion rate at \(r<r_{\rm M}\), and we find \(\dot{M}_{\rm net}\sim\dot{M}_{\rm in}\gg\dot{M}_{\rm out}\) inside the magnetosphere.
Next, we investigate the radial profile of forces acting on fluid elements. The steady-state axisymmetric equations of motion in the \(r\)- and \(\theta\)-direction in the static observer frame are,
\[f_{\rm grav}^{(r)}+f_{\rm thermal}^{(r)}+f_{\theta,{\rm cent}}^{ (r)}+f_{\phi,{\rm cent}}^{(r)}+f_{\rm rad}^{(r)}+f_{\rm mag}^{(r)}+f_{\rm adv}^{ (r)} = 0, \tag{28}\] \[f_{\rm inertial}^{(\theta)}+f_{\rm thermal}^{(\theta)}+f_{\phi,{ \rm cent}}^{(\theta)}+f_{\rm rad}^{(\theta)}+f_{\rm mag}^{(\theta)}+f_{\rm adv} ^{(\theta)} = 0, \tag{29}\]
where superscript \((r)\) and \((\theta)\) denote the \(r\)- and \(\theta\)-component of the vectors in the static observer frame. The forces acting on fluid elements consist of the gravitational force \(f_{\rm grav}\), thermal force \(f_{\rm thermal}\), radiation force \(f_{\rm rad}\), Lorentz force \(f_{\rm mag}\), centrifugal force \(f_{\rm cent}\), inertial force \(f_{\rm inertial}\), and advection force \(f_{\rm adv}\). The centrifugal force is divided into two components, \(f_{\theta,{\rm cent}}\) and \(f_{\phi,{\rm cent}}\), which are caused by the polar (\(\theta\)-) and azimuthal (\(\phi\)-) motions, respectively (see Appendix C for detail). We define the total force \(f_{\rm tot}\) as the sum of all forces without \(f_{\rm grav}\), and therefore \(-f_{\rm tot}^{(r)}/f_{\rm grav}^{(r)}=1\) holds.
Figure 5: Top: time-averaged radial profiles of the mass accretion rate (thick solid lines), mass outflow rate (dashed lines), and net flow rate (thin solid lines). The vertical line is the position of the magnetospheric radius. Middle: time and shell-averaged mass flux weighted radial forces normalized by the gravitational force. Bottom: time-averaged mass flux weighted vertical forces divided by the vertical gravitational force at \(\theta=\pi/2-\Theta_{\rm H}\).
The middle panel in Figure 5 shows time and shell-averaged mass flux weighted radial forces normalized by the gravitational force as a function of \(r\), which can be calculated as follows:
\[\left\langle f^{(r)}\right\rangle_{\rho u^{r},\theta\phi}=\frac{1}{\left\langle \right.\dot{M}_{\rm out}(r)\left.\right\rangle}\left\langle\int f^{(r)}\max[ \rho u^{r},\ 0]\sqrt{-g}d\theta d\phi\right\rangle. \tag{30}\]
The bottom panel in Figure 5 shows time-averaged mass flux weighted \(z\)-component of the forces normalized by the vertical gravitational force,
\[\left\langle f^{(z)}\right\rangle_{\rho u^{r}} = \frac{\left\langle\max[\rho u^{r},0]f^{(z)}\right\rangle}{\left\langle \max[\rho u^{r},0]\right\rangle}, \tag{31}\] \[f^{(z)} = f^{(r)}\cos\theta-f^{(\theta)}\sin\theta, \tag{32}\]
at the disk surface. The polar angle of the disk surface is defined as \(\theta=\pi/2-\Theta_{\rm H}\), where
\[\Theta_{\rm H}=\left\langle\sqrt{\frac{\int\rho(\pi/2-\theta)^{2}\sqrt{g_{ \theta\theta}}d\theta}{\int\rho\sqrt{g_{\theta\theta}}d\theta}}\right\rangle. \tag{33}\]
It is clear that the outflows are mainly driven by the radiation force for 20 km \(<r<\) 50 km, where the radiation force is stronger than the other forces in both \(r\)- and \(z\)-directions. For 50 km \(<r<\) 200 km, the gas is launched from the accretion disk by the radiation force and moves outward by centrifugal force. The Lorentz force is dominant at \(r<r_{\rm M}\), but as already mentioned, most of the outflows are of disk origin (\(r>r_{\rm M}\)). The thermal force, centrifugal force due to the motion in the \(\theta\)-direction, advection force, and inertial force are negligibly small over the whole region. The radial distribution of forces for \(r>r_{\rm M}\) in models B3e9D001, B1e10D001, B1e10D002, B1e10D004, B3e10D001, B3e10D01, B3e10D02, B3e10D04, and B3e10D1 is similar to that in model B1e10D01. The radial profile of forces outside \(r=r_{\rm NS}\) in models B3e9D01_np and B1e10D1_np, where the magnetosphere is not formed, is the same as that in the disk region outside \(r=r_{\rm M}\) in model B1e10D01.
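A schematic Python version of the mass-flux weighting in equations (30) and (31) is given below; the arrays are toy placeholders standing in for the time-averaged simulation snapshots.

```python
import numpy as np

def shell_average_force(f_r, rho, u_r, sqrt_mg, theta):
    """Eq. (30): outflow-mass-flux-weighted shell average of a radial force."""
    w = np.maximum(rho * u_r, 0.0) * sqrt_mg * np.gradient(theta)  # outflowing cells only
    mdot_out = np.sum(w)
    return np.sum(f_r * w) / mdot_out if mdot_out > 0 else 0.0

def fluxweighted_vertical(f_z, rho, u_r):
    """Eq. (31): mass-flux-weighted vertical force at the disk surface."""
    w = np.maximum(rho * u_r, 0.0)
    return np.sum(f_z * w) / np.sum(w)

# Placeholder example at a single radius.
theta = np.linspace(0.0, np.pi, 128)
rho = 1.0e-4 * np.ones_like(theta)
u_r = 0.01 * np.cos(2.0 * theta)        # outflow near the poles, inflow near the equator
f_rad = 1.0e-6 * np.sin(theta)          # toy radial radiation force
f_z = 1.0e-7 * np.ones_like(theta)      # toy vertical force
sqrt_mg = (5.0e6) ** 2 * np.sin(theta)
print(shell_average_force(f_rad, rho, u_r, sqrt_mg, theta))
print(fluxweighted_vertical(f_z, rho, u_r))
```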
In Figure 6, we show the gas temperature distribution and the location of the photosphere. In all models, the gas temperature tends to be high near the \(z\)-axis and in the equatorial plane at \(r<750\) km. Red lines indicate the
Figure 6: Distributions of time-averaged gas temperature for models B1e10D001, B1e10D01, and B1e10D1_np. Red lines show locations of the photosphere, \(\tau_{\rm eff}=1\).
photospheres, \(\left\langle\tau_{\rm eff}\right\rangle=1\), where \(\left\langle\tau_{\rm eff}\right\rangle\) is the time-averaged effective optical depth measured as,
\[\left\langle\tau_{\rm eff}(r)\right\rangle=\int_{r}^{r_{\rm out}}\left\langle \rho\right\rangle\sqrt{\left\langle\kappa_{\rm abs}\right\rangle(\left\langle \kappa_{\rm abs}\right\rangle+\kappa_{\rm sca})}\sqrt{g_{rr}}dr, \tag{34}\]
where \(\left\langle\kappa_{\rm abs}\right\rangle=6.4\times 10^{22}\left\langle\rho \right\rangle\left\langle T_{\rm g}\right\rangle^{-3.5}\) [cm\({}^{2}\) g\({}^{-1}\)]. As can be seen in this figure, the photosphere extends to surround the disk region. The larger the mass accretion rate, the wider the region surrounded by the photosphere. This can be understood from a comparison of models B1e10D001, B1e10D01, and B1e10D1_np. Also, within the range of parameters employed in the present study, the position of the photosphere does not depend much on \(B_{\rm NS}\). This is clear from the fact that the photospheres of models B1e10D01 and B3e10D01 hardly differ (see red lines in Figure 6). In all models, the gas temperature on the photosphere is \(\sim 10^{7}\) K over a wide range of polar angles. From this, it is expected that blackbody radiation with a temperature of \(\sim 10^{7}\) K can be observed.
Hereafter, we define the thermalization radius \(r_{\rm th}\) as the radius where \(\left\langle\tau_{\rm eff}\right\rangle=1\). That is, the radius of the photosphere is represented as \(r_{\rm th}\). In Figure 7, we plot \(r_{\rm th}\) (upper panel) and the time-averaged gas temperature at the photospheres \(T_{\rm th}\) (middle panel) as a function of polar angle \(\theta\). In model B1e10D1_np, which has the largest \(\dot{M}_{\rm in}\) in our models, \(r_{\rm th}\gtrsim 1000\) km at \(30^{\circ}<\theta<150^{\circ}\). Models with smaller \(\dot{M}_{\rm in}\) tend to have smaller \(r_{\rm th}\). The thermalization radius \(r_{\rm th}\) is slightly less than 1000 km at \(40^{\circ}<\theta<140^{\circ}\) in models B1e10D01 and B3e10D01 and is \(\sim 500\) km at \(60^{\circ}<\theta<120^{\circ}\) in model B1e10D001. The thermalization radius cannot be determined close to the rotation axis (\(\theta\lesssim 30^{\circ}\), \(150^{\circ}\lesssim\theta\)) since the gas is optically thin and \(\left\langle\tau_{\rm eff}\right\rangle<1\) even at the NS surface. The range of \(\theta\), within which \(r_{\rm th}\) can be obtained, narrows as \(\dot{M}_{\rm in}\) decreases. In all models, \(T_{\rm th}\) is almost constant at the angular range where \(r_{\rm th}\) can be determined. The bottom panel in Figure 7 shows the polar angle dependence of the radial gas velocity at the photosphere, \(v^{(r)}(r_{\rm th})\), where \(v^{(r)}=u^{(r)}/u^{(t)}\). We can see \(v^{(r)}(r_{\rm th})>0\) on the photosphere, which indicates that most of the photosphere is located inside the optically thick outflows. Hereafter, we refer to such an outflow as an effectively optically thick outflow (Urquhart & Soria, 2016).
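The thermalization radius along each polar angle is obtained by integrating equation (34) inward from \(r_{\rm out}\) and locating \(\langle\tau_{\rm eff}\rangle=1\). A minimal Python sketch is shown below, with a toy wind-like density profile standing in for the time-averaged simulation data.

```python
import numpy as np

KAPPA_SCA = 0.4  # cm^2 g^-1

def thermalization_radius(r, rho, T_g, sqrt_grr=None):
    """Eq. (34): integrate the effective optical depth inward and return the
    radius where tau_eff = 1 (None if the ray never becomes effectively thick)."""
    if sqrt_grr is None:
        sqrt_grr = np.ones_like(r)                       # flat-space approximation
    kappa_abs = 6.4e22 * rho * T_g ** -3.5
    dtau = rho * np.sqrt(kappa_abs * (kappa_abs + KAPPA_SCA)) * sqrt_grr * np.gradient(r)
    tau_eff = np.cumsum(dtau[::-1])[::-1]                # integral from r to r_out
    thick = np.where(tau_eff >= 1.0)[0]
    return r[thick[-1]] if thick.size else None

# Toy wind-like profile along one polar angle: rho ~ r^-2 and T_g ~ 1e7 K.
r = np.logspace(6.0, 8.3, 400)                           # 10 km to ~2000 km [cm]
rho = 1.0e-2 * (r / 1.0e6) ** -2
T_g = 1.0e7 * np.ones_like(r)
r_th = thermalization_radius(r, rho, T_g)
print(f"r_th ~ {r_th / 1e5:.0f} km" if r_th else "effectively optically thin along this ray")
```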
Figure 7: The polar angle (\(\theta\)) dependencies of the thermalization radius \(r_{\rm th}\) (top panel), the gas temperature at the photospheres \(T_{\rm th}\) (middle panel), and the gas velocity at the photospheres \(v_{\rm th}\) (bottom panel).
Here, we estimate the blackbody temperature \(T_{\rm bb}\) by taking \(\theta\)-averaged \(T_{\rm th}\) on the photosphere as
\[T_{\rm bb}=\frac{1}{\int\sqrt{-g(r_{\rm th}(\theta),\theta)}d\theta}\int T_{\rm th}(\theta)\sqrt{-g(r_{\rm th}(\theta),\theta)}d\theta. \tag{35}\]
This integration is performed on the surface where the thermalization radius can be estimated. The corresponding blackbody radius \(r_{\rm bb}\) can be calculated as
\[r_{\rm bb} = \left[\frac{\left\langle L_{\rm ISO}\right\rangle_{\theta}}{4\pi \sigma T_{\rm bb}^{4}}\right]^{1/2}, \tag{36}\]
where the \(\theta\)-averaged isotropic luminosity \(\left\langle L_{\rm ISO}\right\rangle_{\theta}\) is
\[\left\langle L_{\rm ISO}\right\rangle_{\theta} = -\frac{4\pi r_{\rm out}^{2}}{\int\sqrt{-g(r_{\rm out},\theta)}d \theta}\int\left\langle R_{t}^{r}(r_{\rm out})\right\rangle\sqrt{-g(r_{\rm out },\theta)}d\theta. \tag{37}\]
Here, \(\theta\) integration is also conducted only in the range where the thermalization radius can be obtained. The resulting \(T_{\rm bb}\) and \(r_{\rm bb}\) are presented in Table 2. The blackbody temperature is not so sensitive to the accretion rate and is \(\sim 10^{7}\) K, which is consistent with the blackbody temperature observed in Swift J0243.6+6124 at the super-Eddington phase (Tao et al., 2019). The resulting \(r_{\rm bb}\) is roughly fitted as
\[r_{\rm bb}=3.2\left(\frac{\dot{M}}{\dot{M}_{\rm Edd}}\right)^{0.71}\ [{\rm km}]. \tag{38}\]
Tao et al. (2019) showed that the thermal photons would come from \(r=100-500\) km for Swift J0243.6+6124. Using equation (38), our model can explain this radius when the accretion rate is \(130\dot{M}_{\rm Edd}<\dot{M}_{\rm in}<1200\dot{M}_{\rm Edd}\). Within the models treated in the present study, we find no clear dependence of \(r_{\rm bb}\) on \(B_{\rm NS}\). We will discuss this point later.
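Inverting the fit (38) for the observed blackbody radii gives the quoted accretion-rate range directly; a short Python check (using only the fit coefficients above) is:

```python
def mdot_from_rbb(r_bb_km):
    """Invert Eq. (38): accretion rate in units of Mdot_Edd for a given r_bb [km]."""
    return (r_bb_km / 3.2) ** (1.0 / 0.71)

# Observed blackbody radii of 100-500 km (Tao et al. 2019):
print(mdot_from_rbb(100.0), mdot_from_rbb(500.0))   # roughly 1.3e2 and 1.2e3
```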
We run three models, B1e10D001_a, B1e10D001_b, and B1e10D001_c, which have the same initial parameters as model B1e10D001 except for the resolution, to assess the effect of the resolution. We confirm that \(r_{\rm M}\) and \(\dot{M}_{\rm out}\) of models B1e10D001_a and model B1e10D001_b are almost the same as those of model B1e10D001, while those of model B1e10D001_c are different from those of model B1e10D001. The resolution of the simulation models adopted in this study is the same as or higher than that of model B1e10D001_b. Therefore, our simulation results are independent of the resolution.
### Magnetic field strength of the neutron star in Swift J0243.6+6124
Here, we suggest how to constrain \(B_{\rm NS}\) using analytical solutions of \(r_{\rm M}\) and \(\dot{P}\) and the conditions under which effectively optically thick outflows, which can reproduce the blackbody radiation observed in Swift J0243.6+6124, occur.
In ULXPs, two conditions must be satisfied simultaneously: \(r_{\rm M}<r_{\rm co}\), where \(r_{\rm co}\) is the corotation radius, and \(r_{\rm M}>r_{\rm NS}\), so that the magnetosphere can be defined. The condition \(r_{\rm M}<r_{\rm co}\) is required for gas to accrete onto the NS (Illarionov & Sunyaev, 1975); otherwise, the gas would all be blown away due to the propeller effect. The accretion columns are formed when the second condition is satisfied (see Figure 3 for detail). Substituting equations (24) and (27) into these conditions, we get
\[B_{\rm NS} < 1.6\times 10^{14}\,[{\rm G}] \tag{39}\] \[\times \left(\frac{\alpha}{0.1}\right)^{-1/2}\left(\frac{\dot{P}}{-10^{- 8}\ {\rm s\ s^{-1}}}\right)^{1/2}\] \[\times \left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)\left(\frac{r_{\rm NS }}{10^{6}\ {\rm cm}}\right)^{-2},\]
and
\[B_{\rm NS} > 7.5\times 10^{9}\ [{\rm G}] \tag{40}\] \[\times \left(\frac{\alpha}{0.1}\right)^{-1/2}\left(\frac{\dot{P}}{-10^{- 8}\ {\rm s\ s^{-1}}}\right)^{1/2}\left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)^{1/2}\] \[\times \left(\frac{r_{\rm NS}}{10^{6}\ {\rm cm}}\right)^{-1/2}\left(\frac{P}{10 \ {\rm s}}\right)^{-1}.\]
If the magnetic field strength at the NS surface satisfies the inequalities of (39) and (40), then the column accretion onto the magnetic poles of the NS appears, and such objects can be the candidates for the ULXPs.
Next, we consider the emergence of the outflows from the super-Eddington disks. The radiatively driven outflows are thought to be launched in the slim disk region, which is inside the spherization radius (or trapping radius), \(r_{\rm sph}\). This is discussed in Shakura and Sunyaev (1973) and revealed by the numerical simulations (Kitaki et al., 2021; Yoshioka et al., 2022). The spherization radius is roughly estimated as \(r_{\rm sph}=(3/2)(\dot{M}_{\rm in}/\dot{M}_{\rm Edd})r_{\rm g}\). Since the disk is truncated at \(r=r_{\rm M}\), the ULXPs accompanying the outflows appear if the condition of \(r_{\rm M}<r_{\rm sph}\) is satisfied in addition to \(r_{\rm NS}<r_{\rm M}<r_{\rm co}\)(Mushtukov et al., 2019). Using equations (24) and (27), \(r_{\rm M}<r_{\rm sph}\) becomes
\[B_{\rm NS} < 1.5\times 10^{12}\ \ [{\rm G}] \tag{41}\] \[\times \left(\frac{\alpha}{0.1}\right)^{-1/2}\left(\frac{\dot{P}}{-10^ {-8}\ {\rm s\ s^{-1}}}\right)^{3/2}\] \[\times \left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)\left(\frac{P}{10\ { \rm s}}\right)^{-3}.\]
Equations (39)-(41) are the general conditions to explain the ULXPs, while we concentrate on the discussion about Swift J0243.6+6124 in the following. As described in Section 3.2, the range of the mass accretion rate that can reproduce the blackbody radiation observed in this object is \(130\dot{M}_{\rm Edd}<\dot{M}_{\rm in}<1200\dot{M}_{\rm Edd}\) from relation (38). Using equation (27), this condition is transformed as
\[B_{\rm NS} < 9.8\times 10^{11}\ [{\rm G}] \tag{42}\] \[\times \left(\frac{\alpha}{0.1}\right)^{-1/2}\left(\frac{\dot{P}}{-10^ {-8}\ {\rm s\ s^{-1}}}\right)^{7/2}\left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)^{-1}\] \[\times \left(\frac{r_{\rm NS}}{10^{6}\ {\rm cm}}\right)^{4}\left(\frac{P}{10\ { \rm s}}\right)^{-7},\] \[B_{\rm NS} > 1.3\times 10^{9}\ [{\rm G}]\] (43) \[\times \left(\frac{\alpha}{0.1}\right)^{-1/2}\left(\frac{\dot{P}}{-10^ {-8}\ {\rm s\ s^{-1}}}\right)^{7/2}\left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)^{-1}\] \[\times \left(\frac{r_{\rm NS}}{10^{6}\ {\rm cm}}\right)^{4}\left(\frac{P}{10\ { \rm s}}\right)^{-7}.\]
Among ULXPs with outflows, objects that meet the conditions (41) and (42) would have effectively optically thick outflows, and the blackbody radiation with \(r_{\rm bb}>100\) km would appear in the radiation spectrum.
Figure 8 summarizes the conditions shown above in the \(\dot{P}-B_{\rm NS}\) plane. Here, we assume \(P=9.8\ {\rm s}\), which is the rotation period observed in Swift J0243.6+6124 (Doroshenko et al., 2018; Chen et al., 2021). We take \(\alpha=0.1\), \(M_{\rm NS}=1.4M_{\odot}\), and \(r_{\rm NS}=10\) km. The red shaded region indicates the ULXP with outflow, which satisfies both \(r_{\rm NS}<r_{\rm M}<r_{\rm co}\) and \(r_{\rm M}<r_{\rm sph}\). ULXPs without outflows lie in the white region, where \(r_{\rm NS}<r_{\rm M}<r_{\rm co}\) is satisfied but \(r_{\rm M}<r_{\rm sph}\) is not. In the grey shaded regions, the conditions for ULXPs are not satisfied: there, \(r_{\rm M}>r_{\rm co}\) (propeller regime) or \(r_{\rm M}<r_{\rm NS}\) (non-pulsating regime). In the blue hatched region, where the accretion rate is \(130\dot{M}_{\rm Edd}<\dot{M}_{\rm in}<1200\dot{M}_{\rm Edd}\), effectively optically thick outflows with \(r_{\rm bb}=100-500\) km and \(T_{\rm bb}\sim 10^{7}\) K appear, so that the blackbody radiation, as observed in Swift J0243.6+6124, is thought to be reproduced. Here, it is confirmed that the models that successfully reproduce such effectively optically thick outflows and accretion columns (B1e10D004, B1e10D01, B3e10D01, and B3e10D02) are located in the blue hatched region (filled circles). In contrast, other models are located outside the blue hatched region (filled squares). In observations of Swift J0243.6+6124, \(P\) is almost constant, while \(\dot{P}\) is reported to be \(-2.22\times 10^{-8}\ {\rm s\ s^{-1}}\) (Obs. 1), \(-1.75\times 10^{-8}\ {\rm s\ s^{-1}}\) (Obs. 2), and \(-6.8\times 10^{-9}\ {\rm s\ s^{-1}}\) (Obs. 3) (Doroshenko et al., 2018; Chen et al., 2021). These observed spin-up rates are denoted by black dashed lines in Figure 8. In Obs. 1 and 2, the observed isotropic luminosity exceeds \(L_{\rm Edd}\), and thermal emission with an observed blackbody radius of \(100-500\) km is detected (obsID: 90302319004, 90302319006 and 90302319008 in Tao et al., 2019). Then, the magnetic field strength is restricted to be \(2\times 10^{10}\ {\rm G}<B_{\rm NS}<5\times 10^{12}\ {\rm G}\) for Obs. 1 and \(10^{10}\ {\rm G}<B_{\rm NS}<4\times 10^{12}\ {\rm G}\) for Obs. 2, which are represented by black solid lines. In Obs. 3, on the other hand, the
luminosity is lower than \(L_{\rm Edd}\), and the thermal emission is not observed. Such features are realized if the magnetic field is in the range \(3\times 10^{11}\) G \(<B_{\rm NS}<10^{14}\) G (see black solid line for Obs. 3). Therefore, the magnetic field strength that satisfies the three observations is between \(3\times 10^{11}\) G and \(4\times 10^{12}\) G. Using equation (27), the mass accretion rate corresponding to this range is estimated to be \(200\dot{M}_{\rm Edd}\lesssim\dot{M}_{\rm in}\lesssim 500\dot{M}_{\rm Edd}\) for Obs. 1 and 2, and \(60\dot{M}_{\rm Edd}\lesssim\dot{M}_{\rm in}\lesssim 100\dot{M}_{\rm Edd}\) for Obs. 3. Since the luminosity has been reported to be \(\sim 10^{39}\) erg s\({}^{-1}\) at the super-Eddington phase in Swift J0243.6+6124 (Obs. 1 and 2), the radiative efficiency of this object is 1-5%, which is smaller than the standard accretion efficiency. This indicates that Swift J0243.6+6124 is observed at a relatively large viewing angle within the range where the pulse emission is detected. The magnetic field strength evaluated in the present study is not inconsistent with previous studies (Tsygankov et al., 2018; Doroshenko et al., 2020). Tsygankov et al. (2018) estimated the upper limit for \(B_{\rm NS}\) in Swift J0243.6+6124, \(<6\times 10^{12}\) G, from the fact that the transition to the propeller regime was not detected even at a luminosity of \(6\times 10^{35}\) erg s\({}^{-1}\). Doroshenko et al. (2020) concluded that \(B_{\rm NS}\) in Swift J0243.6+6124 is in the range \((3-9)\times 10^{12}\) G and likely at the lower limit of this range. We note that \(B_{\rm NS}\) here is assumed not to change during the observation. This assumption would be reasonable since it has been pointed out that the NS magnetic field strength does not change for a sufficiently long period, \(>1\) Myr, based on the observations of some X-ray pulsars and the ULXP, NGC 1313 X-2 (Makishima et al., 1999; Sathyaprakash et al., 2019). Although the NSs might have multipole magnetic fields, we focus on the case of the dipole magnetic fields in this study. We will discuss multipole fields later.
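The allowed windows of \(B_{\rm NS}\) quoted above can be recovered by evaluating the bounds (39)-(43) at the observed spin-up rates. The Python sketch below does this with the same parameters as Figure 8 (\(\alpha=0.1\), \(M_{\rm NS}=1.4M_{\odot}\), \(r_{\rm NS}=10\) km, \(P=9.8\) s); it reproduces the quoted limits to within rounding, and the case handling for Obs. 3 (no thermal emission) follows the argument in the text.

```python
def bns_bounds(pdot, P=9.8, alpha=0.1, M14=1.0, r6=1.0):
    """Bounds (39)-(43) on B_NS [G] for a spin-up rate pdot [s/s] and period P [s]."""
    x = (alpha / 0.1) ** -0.5
    pd = pdot / (-1.0e-8)            # |Pdot| in units of 1e-8 s/s
    p10 = P / 10.0
    return {
        "co":      1.6e14 * x * pd ** 0.5 * M14 * r6 ** -2,                 # Eq. (39): r_M < r_co
        "r_NS":    7.5e9 * x * pd ** 0.5 * M14 ** 0.5 * r6 ** -0.5 / p10,   # Eq. (40): r_M > r_NS
        "sph":     1.5e12 * x * pd ** 1.5 * M14 * p10 ** -3,                # Eq. (41): r_M < r_sph
        "rbb_max": 9.8e11 * x * pd ** 3.5 / M14 * r6 ** 4 * p10 ** -7,      # Eq. (42): Mdot > 130 Edd
        "rbb_min": 1.3e9 * x * pd ** 3.5 / M14 * r6 ** 4 * p10 ** -7,       # Eq. (43): Mdot < 1200 Edd
    }

windows = []
for label, pdot, thermal in [("Obs. 1", -2.22e-8, True),
                             ("Obs. 2", -1.75e-8, True),
                             ("Obs. 3", -6.8e-9, False)]:
    b = bns_bounds(pdot)
    if thermal:   # thermal emission detected: blue hatched region of Figure 8
        lo, hi = max(b["r_NS"], b["rbb_min"]), min(b["co"], b["sph"], b["rbb_max"])
    else:         # no thermal emission: Mdot below ~130 Mdot_Edd, but still accreting (not propeller)
        lo, hi = b["rbb_max"], b["co"]
    windows.append((lo, hi))
    print(f"{label}: {lo:.1e} G < B_NS < {hi:.1e} G")

print(f"combined: {max(lo for lo, _ in windows):.1e} G"
      f" < B_NS < {min(hi for _, hi in windows):.1e} G")
```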
## 4 Discussions
### Geometrical beaming
In all our models, the radiation is highly beamed by the outflows from the accretion disk. The amplification factor \(\left<L_{\rm ISO}\right>/\left<L_{\rm rad}\right>\equiv 1/b\) exceeds 100 at maximum, which is higher than that indicated by King and Lasota (2019) (typically \(1/b\sim 1-10\)) (see also Takahashi and Ohsuga, 2017; Abarca et al., 2021). The cause of this difference is probably the magnetic field strength. The difference would be smaller if we employed a \(B_{\rm NS}\) strong enough that \(r_{\rm M}\) nearly equals \(r_{\rm sph}\), whereas \(r_{\rm M}\) is much smaller than \(r_{\rm sph}\) in the present study. As \(r_{\rm M}\) approaches \(r_{\rm sph}\), the outflow rate would decrease since the outflows mainly occur in \(r_{\rm M}<r<r_{\rm sph}\). In this case, the outflow can no longer effectively collimate the radiation, and therefore \(1/b\) would decrease (Abarca et al., 2021). Although simulating \(r_{\rm M}\sim r_{\rm sph}\) is difficult and beyond the scope of the present study (see also Section 4.5), we plan to conduct such simulations in the future.
### Comparison with observations of NGC 5907 ULX1 and NGC 1313 X-2
Here, we show, based on the present simulations, that the ULXPs, NGC 5907 ULX1 and NGC 1313 X-2, are thought to be powered by the highly super-Eddington accretion onto NSs with relatively weak magnetic fields. In NGC 5907
Figure 8: Here, the conditions (39)-(43) are summarized in the \(\dot{P}-B_{\rm NS}\) plane (see text for detail). The spin-up rates of models B1e10D004, B1e10D01, B3e10D01, and B3e10D02 are shown by filled circles, while square markers are that of the other models. We employ table 1 in Chen et al. (2021) for the observed \(\dot{P}\)(see also Doroshenko et al., 2018). Parameters for the conditions: \(\alpha=0.1\), \(M_{\rm NS}=1.4M_{\odot}\), \(r_{\rm NS}=10^{6}\) cm, and \(P=9.8\) s.
ULX1, the mechanical power (\(L_{\rm mec}^{\rm obs}\)), which is evaluated from the observed nebula emission of the ULX bubbles, is reported to be \(1.3\times 10^{41}\) erg s\({}^{-1}\) (Belfiore et al., 2020), and is comparable to \(\langle L_{\rm kin}\rangle\) for the higher mass accretion rate model (B3e10D1), \(\sim 8\times 10^{40}\) erg s\({}^{-1}\). In addition, the isotropic luminosity \(\langle L_{\rm ISO}\rangle\) in model B3e10D1 at \(\theta\sim 30^{\circ}\) or \(150^{\circ}\) is almost the same as the observed X-ray luminosity, \(L_{\rm ISO}^{\rm obs}\sim 2.2\times 10^{41}\) erg s\({}^{-1}\) (Israel et al., 2017), where the isotropic luminosity is calculated as \(\langle L_{\rm ISO}\rangle=-4\pi r_{\rm out}^{2}\left\langle R_{t}^{r}\right\rangle\). Thus, NGC 5907 ULX1 probably has a mass accretion rate of \(>10^{3}\dot{M}_{\rm Edd}\) and is viewed at a polar angle of 20-30 degrees from the \(z\)-axis. The magnetic field strength of the NS is stronger than \(10^{10}\) G. This is because \(\langle L_{\rm kin}\rangle\) and \(\langle L_{\rm ISO}\rangle\) of model B1e10D1_np are almost the same as those of model B3e10D1, but no accretion column appears in this model.
Also, \(\langle L_{\rm kin}\rangle\) for our slightly lower mass accretion models (B1e10D01, B3e10D01, and B3e10D02) is comparable to \(L_{\rm mec}^{\rm obs}\) in NGC 1313 X-2, \(\sim 10^{40}\) erg s\({}^{-1}\) (Pakull and Grise, 2008). In these models, the viewing angle, at which \(L_{\rm ISO}\) is consistent with \(L_{\rm ISO}^{\rm obs}\sim(1.4-2.0)\times 10^{40}\) erg s\({}^{-1}\), is about 30 to 40 degrees measured from the \(z\)-axis. Therefore, NGC 1313 X-2 would be almost a face-on object with a mass accretion rate of several \(10^{2}\dot{M}_{\rm Edd}\). Since no accretion column is formed in model B3e9D01_np, whose kinetic and isotropic luminosities are similar to those of the above three models, \(B_{\rm NS}\) would be stronger than \(3.3\times 10^{9}\) G.
According to our numerical models, both NGC 5907 ULX1 and NGC 1313 X-2 would also have effectively optically thick outflows with \(r_{\rm bb}>100\) km. Indeed, thermal emission, probably originating from outflows, was reported in NGC 1313 X-2 (Qiu and Feng, 2021). However, such emission is not detected in NGC 5907 ULX1. This inconsistency could be resolved by conducting simulations with a large \(r_{\rm M}\). If \(B_{\rm NS}\) is so strong that \(r_{\rm M}\) is comparable to \(r_{\rm sph}\), then the outflow rate is thought to decrease, and thermal emission is not detected. In this case, \(1/b\) decreases as discussed in Section 4.1, and the observer's viewing angle should be smaller than that estimated above for \(\langle L_{\rm ISO}\rangle\) to be consistent with \(L_{\rm ISO}^{\rm obs}\sim(1.4-2.0)\times 10^{41}\) erg s\({}^{-1}\). However, increasing \(B_{\rm NS}\) may change the intrinsic luminosity, and therefore simulations with a high \(B_{\rm NS}\) need to be performed. Post-processing radiative transfer simulations are needed to obtain detailed radiation spectra and to compare them with observations (Kawashima et al., 2012; Kitaki et al., 2017; Narayan et al., 2017).
On the other hand, no nebula has been detected around Swift J0243.6+6124. The reason is that the object is a transient source. In order to form a nebula extending over 100 pc, an energy injection, \(\sim 10^{40}\) erg s\({}^{-1}\), from the central object is needed to last for at least \(10^{5}\) yr (Pakull and Grise, 2008; Belfiore et al., 2020).
### Ultraluminous supersoft sources
Our simulations are consistent with the hypothesis proposed by Urquhart and Soria (2016) in which ULXs are observed as ultraluminous supersoft sources (ULSs) by an edge-on observer, as far as the angular range of \(\tau_{\rm eff}>1\) and its accretion rate dependence are concerned. According to their model, the optical depth of the outflows \(\tau_{\rm eff}\) is larger than unity at \(35^{\circ}<\theta<145^{\circ}\) for \(\dot{M}_{\rm in}\sim 100\dot{M}_{\rm Edd}\), and the polar angle at which \(\tau_{\rm eff}>1\) widens as the mass accretion rate increases. These features are also obtained in our simulations. Indeed, simulated outflows are effectively optically thick at polar angles greater than 40 degrees away from the \(z\)-axis for models B1e10D01 and B3e10D01 (see Figures 6 and 7 for detail). The mass accretion rates in these models are \(500\dot{M}_{\rm Edd}\) and \(720\dot{M}_{\rm Edd}\), respectively (Table 2). This angular range and these mass accretion rates do not contradict the suggestion by Urquhart and Soria (2016) mentioned above. Other models, in which the accretion rate is \(\sim 10^{2-3}\dot{M}_{\rm Edd}\) and the accretion columns form, show approximately the same results. In addition, the angular range where \(\tau_{\rm eff}>1\) widens with increasing mass accretion rate (see Figure 7). However, the blackbody temperature estimated from the simulation, \(\sim 10^{7}\) K, is higher than that observed in the ULSs, \(\sim 10^{6}\) K. Such a discrepancy might be resolved by simulations with the initial torus located farther away, since the winds launched from the outer part of the accretion disks work to expand the photosphere and lower the temperature there (we will discuss this point later). If this is the case, our simulations are consistent with the hypothesis in which ULSs are edge-on sources of ULXs.
### Why does the one-side accretion occur?
The accretion to only one of the poles (one-side accretion) occurs in models with a relatively large truncation radius (B3e9D001, B1e10D001, B1e10D002, B3e10D001, B3e10D01, and B3e10D02) since the distorted magnetic field lines inhibit the gas accretion onto the opposite side. Initially, the shape of the NS magnetic field is symmetric with respect to the equatorial plane, so the magnetic field on the equatorial plane is perpendicular to it. However, when the gas accumulates at the truncation radius and flows toward either pole, the magnetic field lines are distorted (see magnetic field lines in the left panel of Figure 3), and the magnetic field on the equatorial plane at the truncation radius becomes tilted with respect to the equatorial plane. Since the gas tends to move to the side with the lower gravitational potential, the gas flows in the direction in which the preceding gas accretes, and the accretion column on the opposite side is less likely to be formed. This one-side accretion is more pronounced for the models with a large truncation radius (B3e9D001, B1e10D001, B1e10D002, B3e10D001, B3e10D01, and B3e10D02). In contrast, for the case where the truncation radius is very small, the gas on the surface of the thick disk accretes to the opposite pole, forming two accretion columns (models B1e10D01, B1e10D004, B3e10D04, and B3e10D1). The one-side accretion described above was also shown by some MHD simulations (see e.g., Lii et al., 2014; Takasao et al., 2022). We discuss the importance of three-dimensional simulations to study the detailed structure of accretion flow in the subsequent section.
### Future issues
Three-dimensional simulations of non-axisymmetric accretion flows onto a rotating NS have yet to be performed. In this study, we assume an axisymmetric structure, in which the magnetic axis of the NS coincides with the rotation axis of the accretion disk, and perform two-dimensional simulations. However, the gas is thought to accrete onto the NS surface through multiple streams due to the magnetic Rayleigh-Taylor or interchange instability (Kulkarni and Romanova, 2008). Furthermore, the pulse emission is produced by the misalignment between the magnetic axis and rotation axis of the NS. Thus, we need to treat a non-axisymmetric structure with three-dimensional simulations. The misalignment of the two axes might make one-side accretion less likely to occur, since the gas in the accretion disk would preferentially accrete to the closest pole (see e.g., Romanova et al., 2003). In addition, if the NS rotates as fast as millisecond pulsars, the outflow rate might increase due to the effective acceleration by a rapidly rotating magnetosphere of the NS (e.g., Lovelace et al., 1999). Since the size of the photosphere increases as the mass outflow rate increases, the gas temperature decreases. As a result, \(r_{\rm bb}\) increases, and \(B_{\rm NS}\) is larger than that estimated in Figure 8. Simulations taking account of the rotation of the NS have been performed by Parfrey and Tchekhovskoy (2017) and Das et al. (2022).
In our simulations, the gas-radiation interaction is switched off in the very vicinity of the rotation axis near the NS where \(\sigma>\sigma_{\rm rad}\) (see Section 2.3). This treatment would not affect the resulting size and temperature of the photosphere in Section 3.2. In the present simulations, the radiatively driven outflows are launched from the accretion column base as well as the accretion disk (see also Abolmasov and Lipunova, 2022). However, the opacity of free-free absorption of such outflows is quite small due to the low-density and high temperature. This leads to the smaller size of the photosphere for \(\theta\lesssim\pi/6,5\pi/6\lesssim\theta\) compared with that formed by outflows from the accretion disk (\(r_{\rm th}\sim\) several 100 km, see red lines in Figure 6). If we do not ignore the gas-radiation interaction near the rotation axis, the outflowing gas launched from the accretion column is more effectively accelerated by the radiation force near the rotation axis. This further reduces the gas density and size of the photosphere. Therefore, neglecting the gas-radiation interaction in the vicinity of the rotation axis near the NS does not affect the size and temperature of the photosphere presented in Section 3.2.
We need to investigate the influence of the boundary condition. In this study, we assumed that the kinetic and thermal energy reaching the inner boundary (NS surface) is immediately converted into radiative energy. Under this boundary condition, the energy carried to the NS core is ignored. In reality, a fraction of the energy reaching the NS surface is converted into the thermal energy of the NS atmosphere. The heat flux from the atmosphere to the core occurs due to thermal conduction (see Wijnands et al., 2017, and references therein). Then, the radiative energy generated on the NS surface might decrease, leading to a decrease in radiative luminosity. Also, the accretion flow geometry might change if we set different boundary conditions (see Appendix A for detail). Investigation of how the accretion flow depends on the boundary conditions is left as future work.
GR-RMHD simulations of large-scale accretion flows are also needed to evaluate \(r_{\rm bb}\) and \(T_{\rm bb}\), more accurately. It has been pointed out that the simulations in which the initial torus is located inside the spherization radius overestimate the mass outflow rate (see Kitaki et al., 2021; Yoshioka et al., 2022). Therefore, the accretion rate to reproduce effectively optically thick outflows would be higher than that estimated in the present study. This point should be clarified in long-term simulations with the initial torus placed far enough away from the spherization radius. In such simulations, the disk wind is expected to blow from the outer region of the accretion disks (\(r\lesssim r_{\rm sph}\)). The blackbody radius would be larger, and the blackbody temperature would be smaller in the region near the equatorial plane. If this is the case, edge-on observers might identify the objects as the ULSs.
In the present study, we adopt a relatively weak NS magnetic field of \(3.3\times 10^{9-10}\) G compared to the typical X-ray pulsar value of \(10^{11-13}\) G. As \(B_{\rm NS}\) increases while the accretion rate remains fixed, the amount of the outflowing gas probably decreases since the area where radiatively driven outflows mainly occur, \(\pi(r_{\rm sph}^{2}-r_{\rm M}^{2})\), decreases. This makes the photosphere smaller and the gas temperature at the photosphere higher. It is expected that the blackbody radius tends to decrease with an increase in \(B_{\rm NS}\). In this case, the magnetic field strength of Swift J0243.6+6124 estimated in Section 3.3 would decrease. Although \(r_{\rm bb}\) may depend on \(B_{\rm NS}\) as described above, \(r_{\rm bb}\) is a function of mass accretion rate only (see Table 1 and equation 38) in our results since our simulations have \(r_{\rm M}\ll r_{\rm sph}\). In addition, it has been reported that when \(B_{\rm NS}\) is strong enough that \(r_{\rm M}>r_{\rm sph}\), \(r_{\rm M}\) is no longer dependent on \(B_{\rm NS}\) (Chashkina et al., 2017). We need to perform simulations for the strong \(B_{\rm NS}\) case.
The M1 closure, which is employed in the present study, leads to unphysical solutions when the system is optically thin and the radiation fields are anisotropic (see e.g., Ohsuga and Takahashi, 2016). In our simulations, especially in the models where the one-side accretion occurs, the radiation fields are quite anisotropic. In order to accurately calculate the radiation fields, we have to solve the radiative transfer equation. Such radiation MHD simulations, in which the radiative transfer equation is solved, are performed by Jiang et al. (2014), Ohsuga and Takahashi (2016), Asahina et al. (2020), Zhang et al. (2022), and Asahina and Ohsuga (2022).
The modeling of Swift J0243.6+6124 considering multipolar magnetic field components is to be further investigated. Although we constrain the dipole magnetic field strength at the NS surface, \(3\times 10^{11}\) G \(<B_{\rm NS}<4\times 10^{12}\) G, in the present work (see also Tsygankov et al., 2018; Doroshenko et al., 2020), a cyclotron resonance scattering feature (CRSF) corresponding to \(1.6\times 10^{13}\) G was reported by Kong et al. (2022). This discrepancy is resolved if CRSF originates from multipole magnetic fields, as already suggested by some authors (see e.g., Israel et al., 2017). In the case that the multipole magnetic field component is dominant over the dipole component, the accretion flow geometry is expected to be more complex (Long et al., 2007; Das et al., 2022), and the position of the magnetospheric radius changes. We plan to perform the GR-RMHD simulations of super-Eddington accretion flows onto the NS with multipole magnetic fields.
Our models might exhibit a high polarization degree, as recently observed in X-ray binaries (Veledina et al., 2023; Ratheesh et al., 2023). The hard X-ray photons produced in the accretion column base are scattered by the inner wall of the funnel, which consists of the effectively optically thick outflows, and pass through the low-density region near the rotation axis. If such photons are observed, a high polarization degree might be detected. Indeed, Ratheesh et al. (2023) reported that the polarization degree increases from approximately 6% at 2 keV to 10% at 8 keV. Post-process polarized radiative transfer simulations are needed to compare our models with such observations.
## 5 Conclusions
We performed two-dimensional axisymmetric GR-RMHD simulations of super-Eddington flows around NSs with a dipole magnetic field for modeling the galactic ULXP, Swift J0243.6+6124. In our simulations, the accretion columns near the magnetic poles, the super-Eddington disk outside the magnetospheric radius, and outflows launched from the disk appear. If the magnetospheric radius (truncation radius) is smaller than the spherization radius, the outflows are generated since the radiation force mainly drives the outflows inside the spherization radius. When the accretion rate is large enough while satisfying the above condition, effectively optically thick outflows are launched from the disk and would be responsible for the blackbody radiation. The blackbody temperature of the effectively thick outflow, \(\sim 10^{7}\) K, is roughly consistent with the observations by Tao et al. (2019). The blackbody radius increases with an increase in the mass accretion rate and agrees with observations, \(100-500\) km, when the accretion rate is about \((130-1200)\dot{M}_{\rm Edd}\). Since the blackbody radiation was detected in two observations with \(\dot{P}\) of \(-2.22\times 10^{-8}\) s s\({}^{-1}\) and \(-1.75\times 10^{-8}\) s s\({}^{-1}\), but not in another with \(\dot{P}\sim-6.8\times 10^{-9}\) s s\({}^{-1}\), the surface magnetic field strength of the NS in Swift J0243.6+6124 is limited to be \(3\times 10^{11}\) G \(\lesssim B_{\rm NS}\lesssim 4\times 10^{12}\) G. The accretion rate is evaluated to be \((200-500)\dot{M}_{\rm Edd}\) when the blackbody radiation is detected and \((60-100)\dot{M}_{\rm Edd}\) when the blackbody radiation is not observed. Our results support the hypothesis that the super-Eddington phase in the 2017-2018 giant outburst of Swift J0243.6+6124 is powered by super-Eddington accretion flows onto a magnetized NS.
We would like to thank an anonymous reviewer for fruitful comments. This work was supported by JSPS KAKENHI Grant Numbers JP21J21040 (A.I.), JP21H04488, JP18K03710 (K.O.), JP20K11851, JP20H01941, JP20H00156 (H.R.T) and JP18K13591 (Y.A.). A part of this research has been funded by the MEXT as " Program for Promoting Researches on the Supercomputer Fugaku" (Toward an unified view of the universe: from large scale structures to planets, JPMXP1020200109) (K.O., H.R.T., Y.A., and A.I.), and by Joint Institute for Computational Fundamental Science (JICFuS, K.O.). Numerical computations were performed with computational resources provided by the Multidisciplinary Cooperative Research Program in the Center for Computational Sciences, University of Tsukuba, Oakforest-PACS operated by the Joint Center for Advanced High-Performance Computing (JCAHPC), Cray XC 50 at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan (NAOJ), the FUJITSU Supercomputer PRIMEHPC FX1000 and FUJITSU Server PRIMERGY GX2570 (Wisteria/BDEC-01) at the Information Technology Center, The University of Tokyo.
## Appendix A Dependence of the accretion flows on the boundary conditions
Here, we show the dependence of the accretion flows on the inner boundary condition of the radiative flux. Figure 9 shows the time evolution of the gas density and radiation energy density with different boundary conditions. The top panel is the result of model B1e10D01 where the radiative flux at the NS surface is set to zero. On the other hand, the results of the model in which the radiative flux is set according to the free boundary condition are presented in the bottom panel (free boundary model). Cyan lines show the magnetic field lines. We can see that the low-density void region is formed between two accretion columns (\(10\ {\rm km}<r<13\ {\rm km}\) and \(50^{\circ}<\theta<130^{\circ}\)). A part of the photons emitted from the accretion columns is transported to the void region. These photons push the gas outward and the void expands (see the top right panel of Figure 9). This void repeatedly contracts and expands. The period of repeated expansion is about \(\sim 500t_{g}\simeq 10^{-3}\ {\rm s}\). However, the variation in the mass accretion rate at the NS surface due to the expansion is negligibly small. Such a structure cannot be seen in the free boundary model. This is because photons can freely penetrate the NS surface, and the outward radiation force becomes considerably smaller than in model B1e10D01.
We also find that the thickness of the accretion column in the free boundary model is thinner than that of model B1e10D01. It may originate from the decrease in the radiation energy of the accretion column due to the free boundary condition. It can be seen that the radiation energy at the base of the accretion column in model B1e10D01 is about one hundred times larger than that in the free boundary model.
## Appendix B Analytic formula of magnetospheric radius and spin-up rate
Takahashi & Ohsuga (2017) derived the analytical formulas of the magnetospheric radius (24), and spin-up rate (27), applying the self-similar solutions for the slim disk proposed by Watarai & Fukue (1999). Here we summarize the derivations of these formulas.
In the slim disk model, the energy balance is \(Q^{-}_{\rm adv}=Q^{+}_{\rm vis}-Q^{-}_{\rm rad}=fQ^{+}_{\rm vis}\)(Narayan & Yi, 1994), where \(Q^{-}_{\rm adv}\) is the advected energy, \(Q^{+}_{\rm vis}\) is the viscous-dissipated energy, \(Q^{-}_{\rm rad}\) is the radiative cooling energy, and \(f\) represents the fraction of the advective cooling to the viscous-dissipated energy. Using the height-integrated radiation pressure described in equation 12 in Watarai & Fukue (1999) and the scale height of the accretion disk \(H\), presented in equation 11 in Watarai & Fukue (1999), we get
\[p_{\rm rad}=\Pi_{\rm rad}/2H=\frac{\dot{M}}{4\pi\alpha}\frac{c_{3}^{1/2}}{c_{ 1}}\sqrt{\frac{GM_{\rm NS}}{r^{5}}},\] (B1)
where \(\alpha\) is the viscous parameter (Shakura & Sunyaev, 1973), and \(c_{1}\) and \(c_{3}\) are constants depending on \(\Gamma\), \(f\), and \(\alpha\) (see Watarai & Fukue, 1999, for detail). In this study, we take \(\Gamma=1.5\), \(f=0.1\) for the analytical formula. The reason why we do not employ \(\Gamma=5/3\) is that the rotational velocity of the disk is zero with \(\Gamma=5/3\)(Narayan & Yi, 1994). The magnetic pressure at radius \(r\) by the dipole magnetic field is \(p_{\rm mag}=(B_{\rm NS}^{2}/8\pi)(r_{\rm NS}/r)^{6}\). The magnetospheric radius
is defined as the radius where \(p_{\rm mag}\) balances with \(p_{\rm rad}\):
\[r_{\rm M} = \left(\frac{1}{8\pi}\frac{c_{1}}{c_{3}^{1/2}}\frac{c\kappa_{\rm sca}}{G^{3/2}}\right)^{2/7}\alpha^{2/7}\left(\frac{\dot{M}_{\rm in}}{\dot{M}_{\rm Edd}}\right)^{-2/7}B_{\rm NS}^{4/7}M_{\rm NS}^{-3/7}r_{\rm NS}^{12/7}.\] (B2)
Next we give the derivation of the spin-up rate. The spin-up rate can be calculated by assuming that the Keplerian angular momentum at \(r_{\rm M}\) is transported to the NS without dissipation. In this case, the spin-up rate is given by \(\dot{P}=-\dot{M}l(r_{\rm M})/M_{\rm NS}l_{\rm NS}\)(Shapiro & Teukolsky, 1986), where \(l=\sqrt{GM_{\rm NS}r}\) is the specific Keplerian angular momentum. Using (B2), the spin-up rate can be calculated as follows:
\[\dot{P} = -\frac{2G^{3/2}}{c\kappa_{\rm sca}}\left(\frac{1}{8\pi}\frac{c_{1}}{c_{3}^{1/2}}\frac{c\kappa_{\rm sca}}{G^{3/2}}\right)^{1/7}\alpha^{1/7}\left(\frac{\dot{M}_{\rm in}}{\dot{M}_{\rm Edd}}\right)^{6/7}B_{\rm NS}^{2/7}M_{\rm NS}^{2/7}r_{\rm NS}^{-8/7}P^{2}.\] (B3)

Figure 9: Time evolution of the mass density and the radiation energy density. The colors show the gas density and radiation energy density. The cyan lines represent the magnetic field lines. The top panel shows the results of model B1e10D01. The results of the model in which the free boundary condition is adopted for the radiative flux (free boundary model) are presented in the bottom panel. All the same initial parameters as B1e10D01 are adopted in the free boundary model except for the boundary condition of the radiative flux.
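For a quick numerical illustration of equations (B2) and (B3), the following Python sketch evaluates \(r_{\rm M}\) and \(\dot{P}\) in cgs units. Note that the values of \(c_{1}\) and \(c_{3}\) below are placeholders rather than the coefficients tabulated in Watarai & Fukue (1999), so the returned numbers are indicative only.

```python
import numpy as np

# Physical constants and fiducial NS parameters (cgs)
G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
c_light = 2.998e10      # speed of light [cm s^-1]
kappa_sca = 0.34        # electron-scattering opacity [cm^2 g^-1]
M_sun = 1.989e33        # solar mass [g]

# Slim-disk coefficients for Gamma = 1.5, f = 0.1.  These are placeholders,
# NOT the tabulated values of Watarai & Fukue (1999).
c1, c3 = 0.5, 0.3

def r_magnetospheric(mdot, B_ns, M_ns=1.4 * M_sun, r_ns=1e6, alpha=0.1):
    """Equation (B2): magnetospheric radius in cm.
    mdot = Mdot_in / Mdot_Edd (dimensionless), B_ns in G, M_ns in g, r_ns in cm."""
    prefac = (c1 / (8.0 * np.pi * np.sqrt(c3)) * c_light * kappa_sca / G**1.5) ** (2.0 / 7.0)
    return (prefac * alpha**(2.0 / 7.0) * mdot**(-2.0 / 7.0)
            * B_ns**(4.0 / 7.0) * M_ns**(-3.0 / 7.0) * r_ns**(12.0 / 7.0))

def spin_up_rate(mdot, B_ns, P, M_ns=1.4 * M_sun, r_ns=1e6, alpha=0.1):
    """Equation (B3): dP/dt in s s^-1 (negative values mean spin-up); P in s."""
    prefac = (c1 / (8.0 * np.pi * np.sqrt(c3)) * c_light * kappa_sca / G**1.5) ** (1.0 / 7.0)
    return (-2.0 * G**1.5 / (c_light * kappa_sca) * prefac * alpha**(1.0 / 7.0)
            * mdot**(6.0 / 7.0) * B_ns**(2.0 / 7.0) * M_ns**(2.0 / 7.0)
            * r_ns**(-8.0 / 7.0) * P**2)

print(r_magnetospheric(mdot=500.0, B_ns=1e10))        # r_M in cm
print(spin_up_rate(mdot=500.0, B_ns=1e10, P=9.8))     # dP/dt in s s^-1
```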
## Appendix C Equations of motion in general relativity
Here, we derive the equations of motion, which are used in this study, according to Mihalas & Mihalas (1984). Equation (2) can be transformed to
\[\partial_{\mu}T^{\mu\nu}=-T^{\mu\nu}\frac{\partial_{\mu}\sqrt{-g}}{\sqrt{-g}}-\Gamma^{\nu}_{\alpha\mu}T^{\alpha\mu}+G^{\nu}.\] (C4)
Subtracting the time component of (C4) multiplied by \(u^{i}/u^{t}\) from the spatial component of (C4), the following relation can be obtained:
\[\partial_{\mu}T^{\mu i} - \frac{u^{i}}{u^{t}}\partial_{\mu}T^{\mu t}\] (C5) \[= - \left(T^{\mu i}-\frac{u^{i}}{u^{t}}T^{\mu t}\right)\frac{ \partial_{\mu}\sqrt{-g}}{\sqrt{-g}}\] \[- \left(\Gamma^{i}_{\alpha\mu}-\frac{u^{i}}{u^{t}}\Gamma^{t}_{ \alpha\mu}\right)T^{\alpha\mu}+G^{i}-\frac{u^{i}}{u^{t}}G^{t}.\]
We define the relativistic enthalpy, \(w=\rho+p_{\rm gas}+e+b^{2}\), and equation (C5) can be written as
\[\partial_{\mu}T^{\mu i}-\frac{u^{i}}{u^{t}}\partial_{\mu}T^{\mu t} = wu^{\mu}\left[\partial_{\mu}\left\{e^{i}_{(\alpha)}\right\}- \frac{u^{i}}{u^{t}}\partial_{\mu}\left\{e^{t}_{(\alpha)}\right\}\right]u^{(\alpha)}\] (C6) \[+ w\left[e^{i}_{(\alpha)}-\frac{u^{i}}{u^{t}}e^{t}_{(\alpha)} \right]\frac{du^{(\alpha)}}{d\tau}\] \[+ \partial_{\mu}\left\{(p_{\rm gas}+p_{\rm mag})g^{\mu i}\right\}\] \[- \frac{u^{i}}{u^{t}}\partial_{\mu}\left\{(p_{\rm gas}+p_{\rm mag}) g^{\mu t}\right\}\] \[- \partial_{\mu}\left(b^{\mu}b^{i}\right)+\frac{u^{i}}{u^{t}} \partial_{\mu}\left(b^{\mu}b^{t}\right),\]
where \(e^{(\mu)}{}_{\alpha}\) is the orthonormal tetrad that transforms vectors in the laboratory frame (e.g., Boyer-Lindquist coordinates) to the zero angular momentum observer frame (ZAMO frame, Bardeen et al., 1972), \(u^{(\mu)}=e^{(\mu)}{}_{\alpha}u^{\alpha}\) is the four-velocity of the gas in the ZAMO frame, \(e^{\mu}{}_{(\alpha)}\) is the inverse matrix of \(e^{(\mu)}{}_{\alpha}\), and \(d/d\tau=u^{\mu}\partial_{\mu}\) is the derivative with respect to proper time, \(\tau\) (Mihalas & Mihalas, 1984). Now using equations (C5) and (C6), we get
\[w\left[e^{i}_{(\alpha)}-\frac{u^{i}}{u^{t}}e^{t}_{(\alpha)} \right]\frac{du^{(\alpha)}}{d\tau}\] (C7) \[= -wu^{\mu}\left[\partial_{\mu}\left\{e^{i}_{(\alpha)}\right\}- \frac{u^{i}}{u^{t}}\partial_{\mu}\left\{e^{t}_{(\alpha)}\right\}\right]u^{(\alpha)}\] \[- \partial_{\mu}\left\{(p_{\rm gas}+p_{\rm mag})g^{\mu i}\right\}+ \frac{u^{i}}{u^{t}}\partial_{\mu}\left\{(p_{\rm gas}+p_{\rm mag})g^{\mu t}\right\}\] \[+ \partial_{\mu}\left(b^{i}b^{\mu}\right)-\frac{u^{i}}{u^{t}} \partial_{\mu}\left(b^{t}b^{\mu}\right)\] \[- \left(T^{\mu i}-\frac{u^{i}}{u^{t}}T^{\mu t}\right)\frac{\partial _{\mu}\sqrt{-g}}{\sqrt{-g}}\] \[- \left(\Gamma^{i}_{\alpha\mu}-\frac{u^{i}}{u^{t}}\Gamma^{t}_{\alpha \mu}\right)T^{\alpha\mu}+G^{i}-\frac{u^{i}}{u^{t}}G^{t}.\]
We calculate the equation of motion in the \(r\)-direction. In order to obtain a concrete expression, it is necessary to fix the space-time. We consider the Boyer-Lindquist metric with a spin parameter of zero. In this case, all nondiagonal
components of \(e^{(\mu)}{}_{\alpha}\) are zero. Assuming steady (\(\partial_{t}=0\)) and axisymmetric (\(\partial_{\phi}=0\)) flow, the following equation of motion in the radial direction is obtained:
\[f_{\rm grav}^{(r)}+f_{\rm thermal}^{(r)}+f_{\theta,\rm cent}^{(r)}+f_{\phi,\rm cent }^{(r)}+f_{\rm rad}^{(r)}+f_{\rm mag}^{(r)}+f_{\rm adv}^{(r)}=0\] (C8)
where \(f_{\rm grav}^{(r)}\), \(f_{\rm thermal}^{(r)}\), \(f_{\theta,\rm cent}^{(r)}\), \(f_{\phi,\rm cent}^{(r)}\), \(f_{\rm rad}^{(r)}\), \(f_{\rm mag}^{(r)}\), and \(f_{\rm adv}^{(r)}\) describe the gravitational force, thermal force, centrifugal force due to the poloidal motion, centrifugal force due to the toroidal motion, radiation force, Lorentz force, and advection force, respectively. These forces can be described as follows:
\[f_{\rm grav}^{(r)} = \frac{M_{\rm NS}}{r^{2}}w\left(u^{t}u_{t}+u^{r}u_{r}\right),\] (C9) \[f_{\rm thermal}^{(r)} = -\left(1-\frac{2M_{\rm NS}}{r}\right)\partial_{r}p_{\rm gas},\] (C10) \[f_{\theta,\rm cent}^{(r)} = \left(1-\frac{2M_{\rm NS}}{r}\right)w\frac{u^{\theta}u_{\theta}} {r},\] (C11) \[f_{\phi,\rm cent}^{(r)} = \left(1-\frac{2M_{\rm NS}}{r}\right)w\frac{u^{\phi}u_{\phi}}{r},\] (C12) \[f_{\rm rad}^{(r)} = G^{r}-\frac{u^{r}}{u^{t}}G^{t},\] (C13) \[f_{\rm mag}^{(r)} = -\left(1-\frac{2M_{\rm NS}}{r}\right)\partial_{r}\left(\frac{b^{ 2}}{2}\right)\] (C14) \[+ \partial_{\mu}\left(b^{\mu}b^{r}\right)-\frac{u^{r}}{u^{t}} \partial_{\mu}\left(b^{\mu}b^{t}\right)\] \[+ \left(\frac{2b^{r}}{r}+\frac{b^{\theta}}{\tan\theta}\right)\left( b^{r}-\frac{u^{r}}{u^{t}}b^{t}\right)\] \[- \frac{M_{\rm NS}}{r}\frac{b^{t}b_{t}+b^{r}b_{r}}{r}\] \[- \left(1-\frac{2M_{\rm NS}}{r}\right)\frac{b^{\theta}b_{\theta}+b ^{\phi}b_{\phi}}{r}\] \[- \frac{2M_{\rm NS}}{r^{2}}\frac{u^{r}}{u^{t}}b^{t}b_{r},\] \[f_{\rm adv}^{(r)} = -w\gamma\left(1-\frac{2M_{\rm NS}}{r}\right)^{1/2}\left(u^{r} \partial_{r}+u^{\theta}\partial_{\theta}\right)v^{(r)}\] (C15)
where \(\gamma=u^{(t)}\) is the Lorentz factor in the ZAMO frame (i.e. the static observer frame), and \(v^{(r)}=u^{(r)}/u^{(t)}\). The equation of motion in the \(\theta\)-direction can also be obtained by the same procedure.
\[f_{\rm inertial}^{(\theta)}+f_{\rm thermal}^{(\theta)}+f_{\phi,\rm cent}^{( \theta)}+f_{\rm rad}^{(\theta)}+f_{\rm mag}^{(\theta)}+f_{\rm adv}^{(\theta)}=0,\] (C16)
where \(f_{\rm inertial}^{(\theta)}\) is the inertial force caused by the motion in the \(\theta\)-direction. These forces are written as follows:
\[f_{\rm inertial}^{(\theta)} = -wu_{r}u^{\theta}\left(1-\frac{3M_{\rm NS}}{r}\right),\] (C17) \[f_{\rm thermal}^{(\theta)} = -\frac{1}{r}\partial_{\theta}p_{\rm gas},\] (C18) \[f_{\phi,\rm cent}^{(\theta)} = w\frac{u^{\phi}u_{\phi}}{r\tan\theta},\] (C19) \[f_{\rm rad}^{(\theta)} = r\left(G^{\theta}-\frac{u^{\theta}}{u^{t}}G^{t}\right),\] (C20) \[f_{\rm mag}^{(\theta)} = -\frac{1}{r}\partial_{\theta}\left(\frac{b^{2}}{2}\right)+r\partial_{\mu}\left(b^{\mu}b^{\theta}\right)-r\frac{u^{\theta}}{u^{t}}\partial_{\mu}\left(b^{\mu}b^{t}\right)+r\left(\frac{2b^{r}}{r}+\frac{b^{\theta}}{\tan\theta}\right)\left(b^{\theta}-\frac{u^{\theta}}{u^{t}}b^{t}\right)+2b^{r}b^{\theta}-\frac{b^{\phi}b_{\phi}}{r\tan\theta}-\frac{u^{\theta}}{u^{t}}\frac{2M_{\rm NS}}{r}b^{t}b_{r},\] (C21) \[f^{(\theta)}_{\rm adv} = -w\gamma\left(u^{r}\partial_{r}+u^{\theta}\partial_{\theta}\right)v^{(\theta)},\] (C22)
where \(v^{(\theta)}=u^{(\theta)}/u^{(t)}\). The relativistic equations of motion derived by this method reduce to the Newtonian equations of motion for \(\mathcal{O}((w-\rho)/\rho)\to 0\), \(\mathcal{O}(v/c)\to 0\), and \(\mathcal{O}(M/r)\to 0\).
|
2304.05526 | Necessary and Sufficient Conditions for Simultaneous State and Input Recovery of Linear Systems with Sparse Inputs by $\ell_1$-Minimization | The study of theoretical conditions for recovering sparse signals from compressive measurements has received a lot of attention in the research community. In parallel, there has been a great amount of work characterizing conditions for the recovery both the state and the input to a linear dynamical system (LDS), including a handful of results on recovering sparse inputs. However, existing sufficient conditions for recovering sparse inputs to an LDS are conservative and hard to interpret, while necessary and sufficient conditions have not yet appeared in the literature. In this work, we provide (1) the first characterization of necessary and sufficient conditions for the existence and uniqueness of sparse inputs to an LDS, (2) the first necessary and sufficient conditions for a linear program to recover both an unknown initial state and a sparse input, and (3) simple, interpretable recovery conditions in terms of the LDS parameters. We conclude with a numerical validation of these claims and discuss implications and future directions. | Kyle Poe, Enrique Mallada, René Vidal | 2023-04-11T22:40:38Z | http://arxiv.org/abs/2304.05526v1 | Necessary and Sufficient Conditions for Simultaneous State and Input Recovery of Linear Systems with Sparse Inputs by \(\ell_{1}\)-Minimization
###### Abstract
The study of theoretical conditions for recovering sparse signals from compressive measurements has received a lot of attention in the research community. In parallel, there has been a great amount of work characterizing conditions for the recovery both the state and the input to a linear dynamical system (LDS), including a handful of results on recovering sparse inputs. However, existing sufficient conditions for recovering sparse inputs to an LDS are conservative and hard to interpret, while necessary and sufficient conditions have not yet appeared in the literature. In this work, we provide (1) the first characterization of necessary and sufficient conditions for the existence and uniqueness of sparse inputs to an LDS, (2) the first necessary and sufficient conditions for a linear program to recover both an unknown initial state and a sparse input, and (3) simple, interpretable recovery conditions in terms of the LDS parameters. We conclude with a numerical validation of these claims and discuss implications and future directions.
## I Introduction
A foundational concept in systems theory is that of _observability_, the condition guaranteeing uniqueness of the initial state of a system given knowledge of the inputs and a sufficient number of measurements for the output [1]. Introduced later was the more stringent notion of _strong observability_, which further guarantees the uniqueness of the initial condition even in the presence of unknown inputs, and is known to be equivalent to the system having no invariant zeros [2]. These conditions have been used to concisely characterize conditions under which either the initial state or inputs to a system, or both, can be recovered, even in the absence of the other [3, 4, 5, 6]. Of particular relevance to time-critical applications is the development of deadbeat or finite-time input reconstructors, which in the discrete setting have been formulated in terms of solutions to a block Toeplitz system [4].
A linear system is, in particular, a compact means of describing a linear relationship \(\boldsymbol{y}=\boldsymbol{\Psi}\boldsymbol{u}\) between a sequence of inputs \(\boldsymbol{u}\) and a sequence of observations \(\boldsymbol{y}\), from which even in the most optimistic circumstances generic \(\boldsymbol{u}\) can only be reconstructed up to \(\ker\boldsymbol{\Psi}\). However, by assuming that \(\boldsymbol{u}\) is sparse in an appropriate sense, established results in sparse recovery provide favorable guarantees on exact reconstruction for an appropriate choice of optimization algorithm. The most common such case considered is when \(\boldsymbol{u}\) is assumed to have support of size not greater than \(s\), and is termed _regular sparsity_. Other support-based notions of sparsity include block [7], group [8], and tree-based sparsity [9], and are each subsumed by the more general notion of model-based sparsity [10].
For each of these sparsity patterns, the literature has provided tailored optimization problems and recovery guarantees, with varying levels of robustness to noise, ease of checking, and conceptual nuance. For many applications in the noiseless setting, simple \(\ell_{1}\)-minimization has proven to be the approach of choice due to its relative conceptual ease, implementability as a linear program, and favorable performance even when compared with tailored regularizers [11]. For regular sparsity, the necessary and sufficient condition for successful unique recovery is the satisfaction of the so-called _nullspace property_ (NUP), which requires vectors in the nullspace of \(\boldsymbol{\Psi}\) to have smaller \(\ell_{1}\) norm on \(s\)-sparse supports than on the complement of \(s\)-sparse supports (see Def. 1). Recent results have even shown that for any support-based notion of sparsity, there exists a straightforward extension of the NUP, termed the _generalized nullspace property_, which provides necessary and sufficient guarantees [12].
In light of the success of this approach to signal reconstruction, recent literature has provided tailored algorithms for sparse recovery in linear dynamical systems (LDSs), where the assumption of sparsity has been variously made on the initial conditions [13, 14, 15, 16], dynamics [17], measurement noise [18], and inputs [19, 20, 21, 22]. Even with all of this prior work, there are few existing guarantees on the performance of these algorithms, and the guarantees that have been produced are typically probabilistic in nature or make restrictive assumptions on the sparsity patterns, such as the state and inputs being simultaneously sparse with respect to an orthogonal dictionary. As a result, many results for the general, noiseless setting, including the establishment of necessary and sufficient guarantees, have not yet appeared in the literature. Our focus in this work is on establishing such guarantees for the basis-pursuit style optimization problem introduced in [20], where the initial state is not sparse, but the inputs are assumed to follow an appropriate generalized support pattern. Existing conditions for even the basic version of this problem are very conservative, and necessary and sufficient conditions have not yet made an appearance.
In this work, we consider the problem of jointly inferring the initial state \(\boldsymbol{x}_{0}\) and sparse inputs \(U_{N}=(\boldsymbol{u}_{0},\ldots,\boldsymbol{u}_{N-1})\)
of an LDS \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) without a feedthrough term, i.e.,
\[\mathbf{x}_{k+1} =\mathbf{A}\mathbf{x}_{k}+\mathbf{\Psi}\mathbf{u}_{k}, \mathbf{x}_{k}\in\mathbb{R}^{n}, \mathbf{u}_{k}\in\mathbb{R}^{m} \tag{1}\] \[\mathbf{y}_{k} =\mathbf{C}\mathbf{x}_{k}, \mathbf{y}_{k}\in\mathbb{R}^{p},\]
from \(N+1\) output measurements \(Y_{N}=(\mathbf{y}_{0},\ldots,\mathbf{y}_{N})\). In particular, this work makes the following contributions:
1. Necessary and sufficient conditions for uniqueness of \(\mathbf{x}_{0}\) and sparse \(U_{N}\) given \(Y_{N}\).
2. Necessary and sufficient conditions for the \(\ell_{1}\)-minimization approach of [20] to uniquely recover \(\mathbf{x}_{0}\) and sparse \(U_{N}\) given \(Y_{N}\).
3. Interpretable conditions which are respectively necessary or sufficient for unique solutions to the \(\ell_{1}\)-minimization approach of [20] in terms of the system parameters, and elaboration on situations where these conditions achieve equality.
4. Illustration of the accuracy of these conditions and provision of intuition for when they are most informative through simulations of random LDSs.
## II Preliminaries
### _Notation_
#### Ii-A1 Sets and Vector Spaces
Define \(\mathbb{N}=\{0,1,2,\ldots\}\), \(\llbracket n\rrbracket:=\{0,1,2,\ldots,n-1\}\). We use capital script letters to denote vector subspaces \(\mathcal{V}\subseteq\mathbb{R}^{n}\). When \(\mathcal{V},\mathcal{U}\subseteq\mathbb{R}^{n}\), we denote \(\mathcal{V}+\mathcal{U}:=\{\mathbf{v}+\mathbf{u}:\mathbf{v}\in\mathcal{V},\mathbf{u}\in \mathcal{U}\}\subseteq\mathbb{R}^{n}\) and \(\mathcal{V}^{\perp}:=\{\mathbf{x}\in\mathbb{R}^{n}:\forall\mathbf{v}\in\mathcal{V}, \langle\mathbf{x},\mathbf{v}\rangle=0\}\).
#### Ii-A2 Linear Operators
For any matrix \(\mathbf{A}\) and subspace \(\mathcal{U}\), define \(\mathbf{A}\mathcal{U}:=\{\mathbf{A}\mathbf{x}:\mathbf{x}\in\mathcal{U}\}\). \(\mathbf{A}^{-1}\) is defined to be the inverse matrix of \(\mathbf{A}\) if it exists, and for any affine subspace \(\mathcal{V}\subseteq\mathbb{R}^{n}\), \(\mathbf{A}^{-1}\mathcal{V}=\{\mathbf{x}\in\mathbb{R}^{m}:\mathbf{A}\mathbf{x}\in\mathcal{V}\}\). We likewise denote the Moore-Penrose pseudoinverse as \(\mathbf{A}^{+}\).
#### Ii-A3 Supports and Norms
For \(\mathbf{x}\in\mathbb{R}^{m}\), denote \(\mathrm{supp}(\mathbf{x}):=\{i\in\llbracket m\rrbracket:x_{i}\neq 0\}\). Denote column \(i\) of a matrix \(\mathbf{A}\) to be \(\mathbf{A}_{i}\), and block column \(i\) if \(\mathbf{A}\) is a block matrix. For any subset \(S\subseteq\llbracket m\rrbracket\), \(\mathbf{A}_{S}\) is the submatrix of \(\mathbf{A}\in\mathbb{R}^{n\times m}\) with columns \((\mathbf{A}_{S})_{i}=\mathbf{A}_{S_{i}}\). If \(S=(S_{k})\) is a tuple of sets and \(\mathbf{\Gamma}\) is a block matrix, denote \(\mathbf{\Gamma}_{S}\) the block matrix with block columns \((\mathbf{\Gamma}_{S})_{k}=(\mathbf{\Gamma}_{k})_{S_{k}}\). For a block vector \(U\), \((U_{S})_{k}=(U_{k})_{S_{k}}\). Likewise if \(S,S^{\prime}\) are two tuples of sets with the same length, define \(S\cup S^{\prime}\) the tuple of sets s.t. \((S\cup S^{\prime})_{k}=S_{k}\cup S^{\prime}_{k}\). We denote the \(\ell_{0}\) semi-norm as \(\|\mathbf{x}\|_{0}=|\mathrm{supp}(\mathbf{x})|\) and the \(\ell_{1}\) and \(\ell_{2}\) norms as \(\|\cdot\|_{1}\) and \(\|\cdot\|_{2}\), respectively.
### _The Nullspace Property_
Given a matrix \(\mathbf{\Theta}\in\mathbb{R}^{n\times m}\), where we assume that \(\mathbf{\Theta}\) has linearly dependent columns (\(\mathrm{rank}\,\mathbf{\Theta}<m\)), a central problem of sparse recovery is to inquire, under which assumptions on the support of \(\mathbf{x}\) are there unique solutions to \(\mathbf{y}=\mathbf{\Theta}\mathbf{x}\), and what are the algorithms with such unique recovery guarantees? A standard approach is to begin with the optimization problem \(P_{0}\) that finds the sparsest solution, known to be NP-hard, and proceed to the convex relaxation \(P_{1}\), known as basis pursuit:
\[\min_{\mathbf{x}}\|\mathbf{x}\|_{0},\text{ such that }\mathbf{y}=\mathbf{ \Theta}\mathbf{x}\] ( \[P_{0}\] ) \[\min_{\mathbf{x}}\|\mathbf{x}\|_{1},\text{ such that }\mathbf{y}=\mathbf{\Theta}\mathbf{x}\] ( \[P_{1}\] )
Denote \(\Delta_{s}(m):=\{S\subseteq\llbracket m\rrbracket:|S|\leq s\}\) to be the set of \(s\)-sparse supports for vectors in \(\mathbb{R}^{m}\), or simply \(\Delta_{s}\) when \(m\) is fixed. A classic result in sparse recovery is that any \(s\)-sparse solution to \(P_{1}\) is the unique solution, if and only if \(\mathbf{\Theta}\) satisfies the \(s\)-NUP:
**Definition 1** (Nullspace Property (NUP)): _The matrix \(\mathbf{\Theta}\in\mathbb{R}^{n\times m}\) satisfies the nullspace property of order \(s\) (\(s\)-NUP) if \(\forall\mathbf{h}\in\ker\mathbf{\Theta}\setminus\{0\}\), \(\forall S\in\Delta_{s}\), \(\|\mathbf{h}_{S}\|_{1}<\|\mathbf{h}_{S^{c}}\|_{1}\)._
### _The Generalized Nullspace Property_
The motivating observation of [12] is that sparsity structures tend to satisfy the property that if \(S\) is a valid sparse support, so too is \(S^{\prime}\subseteq S\). This relationship describes an abstract simplicial complex:
**Definition 2** (Abstract Simplicial Complex (ASC)): _Let \(\Delta\) be a set of sets. \(\Delta\) is an abstract simplicial complex if \(\forall S\in\Delta\), \(\forall S^{\prime}\subseteq S\), \(S^{\prime}\in\Delta\). If for some \(m\in\mathbb{N}\), \(\Delta\subseteq\{S:S\subseteq\llbracket m\rrbracket\}\), we say that \(\Delta\) is an ASC over \(\llbracket m\rrbracket\)._
One can quickly check that \(\Delta_{s}(m)\) is an ASC over \(\llbracket m\rrbracket\), so ASCs comprise a strict generalization of regular sparsity. We will thus refer to any vector \(\mathbf{x}\) such that \(\mathrm{supp}\,\mathbf{x}\in\Delta\) as \(\Delta\)-sparse. We additionally make the convenient definition:
\[\mathcal{S}(\Delta):=\{\mathbf{x}\in\mathbb{R}^{m}:\mathrm{supp}\,\mathbf{x}\in\Delta\} \tag{2}\]
which may be geometrically interpreted as the union of subspaces spanned by basis vectors \(\{e_{k}\}_{k\in S}\), where \(S\in\Delta\). The associated result of [12] is key:
**Definition 3** (Generalized Nullspace Property (GNUP)): _Let \(\mathbf{\Psi}\in\mathbb{R}^{n\times m}\), and let \(\Delta\) be an ASC over \(\llbracket m\rrbracket\). We say that \(\mathbf{\Psi}\) satisfies the generalized nullspace property with respect to \(\Delta\) (\(\Delta\)-NUP) if \(\forall\mathbf{h}\in\ker\mathbf{\Psi}\setminus\{0\}\), \(\forall S\in\Delta\), \(\|\mathbf{h}_{S}\|_{1}<\|\mathbf{h}_{S^{c}}\|_{1}\). Equivalently,_
\[\mathrm{nsc}(\mathbf{\Psi},\Delta):=\max_{S\in\Delta}\max_{\mathbf{h}\in\ker\mathbf{\Psi} \setminus\{0\}}\frac{\|\mathbf{h}_{S}\|_{1}}{\|\mathbf{h}\|_{1}}<\frac{1}{2} \tag{3}\]
_where \(\mathrm{nsc}(\mathbf{\Psi},\Delta)\) is called the nullspace constant._
**Proposition 1**: _Let \(\Delta\) be a simplicial complex over \(\llbracket m\rrbracket\). Then any \(\Delta\)-sparse solution to \(P_{1}\) is the unique solution, if and only if \(\mathbf{\Theta}\) satisfies the \(\Delta\)-NUP._
This result potentially enables necessary and sufficient conditions for much more general types of sparsity patterns than are classically admissible for \(P_{1}\). As a simple example, one can remark that the \(\Delta_{s}\)-NUP is equivalent to the \(s\)-NUP, so this is a generalization; but it also encompasses e.g. group sparsity. Since the GNUP is essentially a statement about the kernel of a particular matrix, it is natural to extend this characterization to any subspace with an appropriate choice of basis: for a subspace \(\mathcal{U}\subseteq\mathbb{R}^{n}\) define
\[\mathrm{nsc}(\mathcal{U},\Delta):=\max_{S\in\Delta}\max_{\mathbf{h}\in\mathcal{U} \setminus\{0\}}\frac{\|\mathbf{h}_{S}\|_{1}}{\|\mathbf{h}\|_{1}}\]
and say that \(\mathcal{U}\) satisfies the \(\Delta\)-NUP if \(\mathrm{nsc}(\mathcal{U},\Delta)<\frac{1}{2}\).
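For small instances, the nullspace constant in (3) can be computed exactly: since \(\|\mathbf{h}_{S}\|_{1}=\max_{\sigma\in\{\pm 1\}^{|S|}}\sigma^{\top}\mathbf{h}_{S}\), the inner maximization is a linear program for each support and sign pattern. The following Python sketch (ours, purely illustrative; exponential in \(|S|\), so only suitable for toy problems) makes this concrete:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def nullspace_constant(Psi, Delta):
    """Exact nsc(Psi, Delta) for small problems.
    Psi: (n, m) array; Delta: iterable of supports (tuples of column indices).
    For each support S and sign pattern sigma, solves
        max sigma^T h_S  s.t.  Psi h = 0, ||h||_1 <= 1,
    using the standard lifting  -t <= h <= t, sum(t) <= 1."""
    n, m = Psi.shape
    best = 0.0
    for S in Delta:
        S = list(S)
        if not S:
            continue
        for sigma in itertools.product([-1.0, 1.0], repeat=len(S)):
            c = np.zeros(2 * m)
            c[S] = -np.array(sigma)                  # linprog minimizes, so negate
            A_eq = np.hstack([Psi, np.zeros((n, m))])   # Psi h = 0
            b_eq = np.zeros(n)
            I = np.eye(m)
            A_ub = np.vstack([np.hstack([I, -I]),       #  h - t <= 0
                              np.hstack([-I, -I]),      # -h - t <= 0
                              np.hstack([np.zeros(m), np.ones(m)])])  # sum(t) <= 1
            b_ub = np.concatenate([np.zeros(2 * m), [1.0]])
            bounds = [(None, None)] * m + [(0, None)] * m
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=bounds, method="highs")
            if res.status == 0:
                best = max(best, -res.fun)
    return best

# Example: Psi with a one-dimensional nullspace, regular sparsity s = 1.
Psi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
Delta1 = [(i,) for i in range(3)]
nsc = nullspace_constant(Psi, Delta1)
print(nsc, nsc < 0.5)   # the Delta-NUP holds iff nsc < 1/2
```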
### _Linear Dynamical Systems with Sparse Inputs_
In this section, we will state the main problem of the paper. For an LDS \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) whose state-space equations are defined in (1), we define the associated block matrices:
\[\mathbf{\mathcal{O}}_{N} =\begin{bmatrix}\mathbf{C}\\ \mathbf{C}\mathbf{A}\\ \vdots\\ \mathbf{C}\mathbf{A}^{N}\end{bmatrix},\,Y_{N}=\begin{bmatrix}\mathbf{y}_{0}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{N}\end{bmatrix},\,U_{N}=\begin{bmatrix}\mathbf{u}_{0}\\ \mathbf{u}_{1}\\ \vdots\\ \mathbf{u}_{N-1}\end{bmatrix} \tag{4}\] \[\mathbf{\Gamma}_{N} =\begin{bmatrix}0&0&\cdots&0\\ \mathbf{C}\mathbf{\Psi}&0&\cdots&0\\ \mathbf{C}\mathbf{A}\mathbf{\Psi}&\mathbf{C}\mathbf{\Psi}&\cdots&0\\ \vdots&\vdots&\ddots&\\ \mathbf{C}\mathbf{A}^{N-1}\mathbf{\Psi}&\mathbf{C}\mathbf{A}^{N-2}\mathbf{\Psi}&\cdots&\mathbf{C}\mathbf{\Psi}\end{bmatrix},\]
where we refer to \(\mathbf{\mathcal{O}}_{N}\) as the _observability matrix_ of \(\Sigma\) and to \(\mathbf{\Gamma}_{N}\) as the _input-output matrix_. The reader should note the discrepancy in the number of entries of \(Y_{N}\) and \(U_{N}\); we have opted to admit this asymmetry for notational convenience.
It follows that
\[Y_{N}=\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}. \tag{5}\]
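For concreteness, the block matrices in (4) can be assembled directly from \((\mathbf{A},\mathbf{\Psi},\mathbf{C})\); the following numpy sketch (function names are ours, for illustration only) builds \(\mathbf{\mathcal{O}}_{N}\) and \(\mathbf{\Gamma}_{N}\) and verifies (5) against a direct simulation of (1).

```python
import numpy as np

def observability_matrix(A, C, N):
    """O_N: stack of C A^k for k = 0, ..., N  (shape ((N+1)p, n))."""
    blocks, CAk = [], C.copy()
    for _ in range(N + 1):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

def input_output_matrix(A, Psi, C, N):
    """Gamma_N from (4): block (k, j) equals C A^{k-1-j} Psi for j < k, else 0."""
    p, n = C.shape
    m = Psi.shape[1]
    CAi, markov = C.copy(), []
    for _ in range(N):                      # Markov parameters C A^i Psi
        markov.append(CAi @ Psi)
        CAi = CAi @ A
    Gamma = np.zeros(((N + 1) * p, N * m))
    for k in range(1, N + 1):               # block row 0 is all zeros
        for j in range(k):
            Gamma[k * p:(k + 1) * p, j * m:(j + 1) * m] = markov[k - 1 - j]
    return Gamma

# Quick consistency check of (5) against a direct simulation
rng = np.random.default_rng(0)
n, m, p, N = 4, 3, 2, 5
A, Psi, C = rng.normal(size=(n, n)), rng.normal(size=(n, m)), rng.normal(size=(p, n))
x0, U = rng.normal(size=n), rng.normal(size=N * m)
O, Gamma = observability_matrix(A, C, N), input_output_matrix(A, Psi, C, N)

x, ys = x0, []
for k in range(N + 1):
    ys.append(C @ x)
    if k < N:
        x = A @ x + Psi @ U[k * m:(k + 1) * m]
assert np.allclose(np.concatenate(ys), O @ x0 + Gamma @ U)
```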
Of central interest in this paper is the case where \(\forall k\), \(\operatorname{supp}\mathbf{u}_{k}\in\Delta\), where \(\Delta\) is a simplicial complex over \([\![m]\!]\). In this case, we say that the input \(\mathbf{u}\) and respectively the block vector \(U_{N}\) is _entrywise \(\Delta\)-sparse_; we may equivalently refer to \(U_{N}\) as \(\Delta^{N}\)-sparse. We might then ask: under what conditions, and with what optimization problems, can we recover \(\mathbf{x}_{0}\) and \(U_{N}\) from \(Y_{N}\), given the assumption that \(U_{N}\) is \(\Delta^{N}\)-sparse?
The optimization problem we focus on is the following, introduced in [20]:
\[\min_{\mathbf{x}_{0},U_{N}}\|U_{N}\|_{1}\text{ s.t. }Y_{N}=\mathbf{\mathcal{O}}_{N}\mathbf{x }_{0}+\mathbf{\Gamma}_{N}U_{N}.\] ( \[D_{1}\] )
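Because the objective of (\(D_{1}\)) is an \(\ell_{1}\) norm and the constraint is linear, the problem can be posed as a linear program in \((\mathbf{x}_{0},U_{N})\) with the standard auxiliary variables. A minimal SciPy sketch (ours, illustrative; it assumes \(\mathbf{\mathcal{O}}_{N}\) and \(\mathbf{\Gamma}_{N}\) are available as dense arrays, e.g. as built above):

```python
import numpy as np
from scipy.optimize import linprog

def solve_D1(O, Gamma, Y):
    """Solve min ||U||_1 s.t. Y = O x0 + Gamma U as an LP.
    Variables z = [x0 (n), U (q), t (q)] with -t <= U <= t."""
    n, q = O.shape[1], Gamma.shape[1]
    c = np.concatenate([np.zeros(n + q), np.ones(q)])        # minimize sum(t)
    A_eq = np.hstack([O, Gamma, np.zeros((O.shape[0], q))])  # O x0 + Gamma U = Y
    b_eq = Y
    Iq, Zn = np.eye(q), np.zeros((q, n))
    A_ub = np.vstack([np.hstack([Zn, Iq, -Iq]),              #  U - t <= 0
                      np.hstack([Zn, -Iq, -Iq])])            # -U - t <= 0
    b_ub = np.zeros(2 * q)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + q) + [(0, None)] * q,
                  method="highs")
    if res.status != 0:
        raise RuntimeError("LP did not converge: " + res.message)
    return res.x[:n], res.x[n:n + q]                          # (x0_hat, U_hat)
```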
This optimization problem may be thought of as an implementation of basis pursuit (\(P_{1}\)) for linear systems. Analogously, we would like to characterize the conditions under which this problem is well posed-i.e. no two entrywise \(\Delta\)-sparse inputs and generic initial conditions produce the same output-and when (\(D_{1}\)) uniquely recovers such \(\Delta\)-sparse inputs and initial conditions jointly. The former condition may be thought of as an injectivity condition, in the sense that the output \(Y_{N}\) uniquely specifies the initial condition and inputs up to \(\Delta^{N}\)-sparsity. Of additional interest are the cases in which the state space is not sufficiently observable to uniquely determine \(x\) given \(Y_{N}\), but we may still recover or uniquely characterize \(\Delta^{N}\)-sparse \(U_{N}\). To these ends, we define the following:
**Definition 4**: _Let \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) be a linear system._
* \(\Sigma\) _is jointly_ \(\Delta^{N}\)_-injective_ _if_ \(U,U^{\prime}\in\mathcal{S}(\Delta^{N})\) _and_ \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}^{\prime}+\mathbf{\Gamma}_{N}U_{N}^{\prime}=\mathbf{ \mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}\implies(\mathbf{x}_{0},U_{N})=( \mathbf{x}_{0}^{\prime},U_{N}^{\prime})\)_. We write the set of all such_ \(\Sigma\) _as_ \(\mathcal{R}_{0}^{*}(\Delta,N)\)_._
* \(\Sigma\) _is input_ \(\Delta^{N}\)_-injective_ _if_ \(U,U^{\prime}\in\mathcal{S}(\Delta^{N})\) _and_ \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}^{\prime}+\mathbf{\Gamma}_{N}U_{N}^{\prime}=\mathbf{ \mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}\implies U_{N}=U_{N}^{\prime}\)_. We write the set of all such_ \(\Sigma\) _as_ \(\mathcal{R}_{0}(\Delta,N)\)_._
* \(\Sigma\) _is jointly_ \(\Delta^{N}\)_-recoverable with (_\(D_{1}\)_) if any solution_ \((\mathbf{x}_{0},U_{N})\) _to (_\(D_{1}\)_) s.t._ \(U_{N}\in\mathcal{S}(\Delta^{N})\) _is necessarily the unique solution. We write the set of all such_ \(\Sigma\) _as_ \(\mathcal{R}_{1}^{*}(\Delta,N)\)_._
* \(\Sigma\) _is input_ \(\Delta^{N}\)_-recoverable with (_\(D_{1}\)_) if any two solutions_ \((\mathbf{x}_{0},U_{N}),(\mathbf{x}_{0}^{\prime},U_{N}^{\prime})\) _to (_\(D_{1}\)_) s.t._ \(U_{N},U_{N}^{\prime}\in\mathcal{S}(\Delta^{N})\) _satisfy_ \(U_{N}=U_{N}^{\prime}\)_. We write the set of all such_ \(\Sigma\) _as_ \(\mathcal{R}_{1}(\Delta,N)\)_._
The condition established in [20] for when \(\Sigma\) is jointly \(\Delta^{N}_{s}\)-recoverable with (\(D_{1}\)) is based on the coherence \(\mu:\mathbb{R}^{n\times m}\to[0,1]\), defined as
\[\mu(\mathbf{\Theta})=\max_{i\neq j}\frac{|\langle\mathbf{\Theta}_{i},\mathbf{\Theta}_{j} \rangle|}{\|\mathbf{\Theta}_{i}\|_{2}\|\mathbf{\Theta}_{j}\|_{2}} \tag{6}\]
Henceforth, we define \(\mathbf{P}_{N}^{\perp}:=\mathbf{I}-\mathbf{\mathcal{O}}_{N}\mathbf{\mathcal{O}}_{N}^{+}\), the orthogonal projection onto the orthogonal complement of the column space of \(\mathbf{\mathcal{O}}_{N}\). The main result of [20], which may be implicitly read as a sufficient condition for \(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) to satisfy the \(Ns\)-NUP, is as follows:
**Proposition 2**: _If \(\ker\mathbf{\mathcal{O}}_{N}=0\) and_
\[\mu(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N})\ <\ \frac{1}{2Ns-1}\]
_then \(\Sigma\) is jointly \(\Delta^{N}_{s}\)-recoverable with (\(D_{1}\))._
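Both the coherence (6) and the sufficient condition of Proposition 2 are straightforward to evaluate numerically; a short numpy sketch (ours, illustrative), given \(\mathbf{\mathcal{O}}_{N}\) and \(\mathbf{\Gamma}_{N}\) as dense arrays:

```python
import numpy as np

def coherence(Theta, tol=1e-12):
    """mu(Theta) from (6): max normalized |inner product| of distinct columns."""
    norms = np.linalg.norm(Theta, axis=0)
    keep = norms > tol                      # skip numerically zero columns
    Tn = Theta[:, keep] / norms[keep]
    Gram = np.abs(Tn.T @ Tn)
    np.fill_diagonal(Gram, 0.0)
    return Gram.max()

def proposition2_holds(O, Gamma, N, s):
    """Check ker O_N = 0 and mu(P_N^perp Gamma_N) < 1/(2Ns - 1)."""
    if np.linalg.matrix_rank(O) < O.shape[1]:
        return False
    P_perp = np.eye(O.shape[0]) - O @ np.linalg.pinv(O)
    return coherence(P_perp @ Gamma) < 1.0 / (2 * N * s - 1)
```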
As is typical of coherence-based sparse recovery guarantees, it was found that this bound was enormously conservative for even modest \(N\). It is also remarked that the condition in Proposition 2 guarantees that any \(\mathbf{x}_{0}\) and \(U_{N}\) such that \(|\operatorname{supp}(U_{N})|\leq Ns\) is also recovered uniquely by (\(D_{1}\)), so there is reason to suspect this condition could be tightened.
## III Necessary and Sufficient Conditions for Joint State and Sparse Input Recovery
Let \(\Delta\) be an ASC over \([\![m]\!]\) and \(\Sigma\) a linear system as in (1). Our first observation is that the difference between joint and input-only recoverability/injectivity is just a matter of observability:
**Lemma 1**: \(\Sigma\in\mathcal{R}_{p}^{*}(\Delta,N)\) _if and only if \(\ker\mathbf{\mathcal{O}}_{N}=0\) and \(\Sigma\in\mathcal{R}_{p}(\Delta,N)\)._
By definition, \(\mathcal{R}_{p}^{*}(\Delta,N)\subseteq\mathcal{R}_{p}(\Delta,N)\). Suppose \(\ker\mathbf{\mathcal{O}}_{N}\neq 0\), then \(\mathbf{x}_{0}\) cannot be uniquely determined by a constraint on \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}\), a contradiction. Hence \(\Sigma\in\mathcal{R}_{p}^{*}(\Delta,N)\) implies \(\ker\mathbf{\mathcal{O}}_{N}=0\) and \(\Sigma\in\mathcal{R}_{p}(\Delta,N)\).
Now suppose \(\Sigma\in\mathcal{R}_{p}(\Delta,N)\) and that \(\ker\mathbf{\mathcal{O}}_{N}=0\). Then \(\forall(\mathbf{x}_{0},U_{N}),(\mathbf{x}_{0}^{\prime},U_{N}^{\prime})\) such that \(U_{N},U_{N}^{\prime}\in\mathcal{S}(\Delta^{N})\), \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}^{\prime}+\mathbf{\Gamma}_{N}U_{N}^{\prime}=\mathbf{ \mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}\implies U_{N}=U_{N}^{\prime}\), and therefore \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}=\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}^{\prime}\). Since \(\ker\mathbf{\mathcal{O}}_{N}=0\), \(\mathbf{x}_{0}=\mathbf{x}_{0}^{\prime}\).
In [20], implicit in the use of coherence for the main result was that when \(\ker\mathbf{\mathcal{O}}_{N}=0\), \(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) satisfying the \(Ns\)-NUP is sufficient for \(\Sigma\) to be jointly \(\Delta_{s}^{N}\)-recoverable.
For determining sparse input recoverability and injectivity then, we will see it is only necessary to consider properties of \(\ker\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\).
In the case of regular sparsity, there are several equivalent ways to establish uniqueness of sparse solutions. The typical way is a rank condition on all collections of \(2s\) columns of a matrix. Here we generalize this slightly, to \(\Delta\)-sparsity:
**Lemma 2**: _Let \(\Delta\) be an ASC over \(\llbracket m\rrbracket\). \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{S}(\Delta),\mathbf{\Theta}\mathbf{x}=\mathbf{\Theta}\mathbf{x}^{\prime}\implies\mathbf{x}=\mathbf{x}^{\prime}\) if and only if \(\forall S,S^{\prime}\in\Delta\), \(\ker\mathbf{\Theta}_{S\cup S^{\prime}}=0\). When either condition is satisfied, we say \(\mathbf{\Theta}\) is \(\Delta\)-injective._

Suppose \(\exists S,S^{\prime}\in\Delta\), \(\ker\mathbf{\Theta}_{S\cup S^{\prime}}\neq 0\), then let \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{m}\) be distinct such that \(\operatorname{supp}\mathbf{x}\subseteq S\), \(\operatorname{supp}\mathbf{x}^{\prime}\subseteq S^{\prime}\), and \(\mathbf{\Theta}(\mathbf{x}-\mathbf{x}^{\prime})=0\). Then \(\mathbf{\Theta}\mathbf{x}=\mathbf{\Theta}\mathbf{x}^{\prime}\) but \(\mathbf{x}\neq\mathbf{x}^{\prime}\), a contradiction. Conversely, suppose \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{S}(\Delta)\) such that \(\mathbf{\Theta}\mathbf{x}=\mathbf{\Theta}\mathbf{x}^{\prime}\) but \(\mathbf{x}\neq\mathbf{x}^{\prime}\), then where \(S=\operatorname{supp}\mathbf{x}\) and \(S^{\prime}=\operatorname{supp}\mathbf{x}^{\prime}\), \(\mathbf{\Theta}_{S\cup S^{\prime}}(\mathbf{x}-\mathbf{x}^{\prime})_{S\cup S^{\prime}}=0\), so \(\ker\mathbf{\Theta}_{S\cup S^{\prime}}\neq 0\).
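Lemma 2 yields a directly checkable, if combinatorial, test for \(\Delta\)-injectivity; a brute-force numpy sketch (ours, illustrative) is given below. Applied blockwise to \(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) with supports drawn from \(\Delta^{N}\), the same routine can be used to test condition 3 of Theorem 1 below.

```python
import itertools
import numpy as np

def is_delta_injective(Theta, Delta, tol=1e-10):
    """Check Lemma 2: for all S, S' in Delta, ker Theta_{S u S'} = 0,
    i.e. the columns indexed by S u S' are linearly independent."""
    supports = [tuple(sorted(S)) for S in Delta]
    for S, Sp in itertools.combinations_with_replacement(supports, 2):
        cols = sorted(set(S) | set(Sp))
        if not cols:
            continue
        sub = Theta[:, cols]
        if np.linalg.matrix_rank(sub, tol=tol) < len(cols):
            return False
    return True

# Example: regular sparsity Delta_1 over 3 columns
Theta = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
Delta1 = [(), (0,), (1,), (2,)]
print(is_delta_injective(Theta, Delta1))   # True: any two columns are independent
```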
This notion enables concise necessary and sufficient characterizations of when \(\Delta\)-sparse solutions are unique, and we will put it to good use in Theorem 1.
Shortly after the publication of [20], Kafashan et al. produced sufficient conditions for both joint \(\Delta_{s}^{N}\)-recovery with (\(D_{1}\)) and a related optimization problem incorporating the assumption of noise [21]. Lemma 1 of [21] gets close to being a necessary condition when \(\Delta=\Delta_{s}\), but it makes use of the restricted isometry constant rather than purely rank-type constraints, and is not stated as necessary and sufficient. We provide a version of the necessary and sufficient condition in the style of [21] to reflect this contribution, making only the change of the restricted isometry condition to \(\Delta\)-injectivity:
**Theorem 1** (\(\Delta^{N}\)-Injectivity of \(\Sigma\)): _Let \(\Delta\) be an ASC over \(\llbracket m\rrbracket\) and \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) a linear system with state space of dimension \(n\). The following are equivalent:_
1. \(\Sigma\) _is jointly_ \(\Delta^{N}\)_-injective and_ \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) _is_ \(\Delta^{N}\)_-injective._
2. \(\forall S,S^{\prime}\in\Delta^{N}\)_,_ \(\operatorname{rank}\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{ \mathcal{S}\cup S^{\prime}}\right]=n+\operatorname{rank}((\mathbf{\Gamma}_{N})_{ \mathcal{S}\cup S^{\prime}})\)_, and_ \(C\mathbf{\Psi}\) _is_ \(\Delta\)_-injective._
3. \(\ker\mathbf{\mathcal{O}}_{N}=0\) _and_ \(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) _is_ \(\Delta^{N}\)_-injective._
Suppose \(\Sigma\) is jointly \(\Delta^{N}\)-injective, then if \(U_{N},U_{N}^{\prime}\in\mathcal{S}(\Delta)\), \(\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}=\mathbf{\mathcal{O}}_{N}\mathbf{ x}_{0}^{\prime}+\mathbf{\Gamma}_{N}U_{N}^{\prime}\implies\mathbf{x}_{0}^{\prime}=\mathbf{x}_{0}\) and \(U_{N}=U_{N}^{\prime}\). It follows that for every \(S,S^{\prime}\in\Delta^{N}\), \(\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{\mathcal{S}\cup S^{ \prime}}\right]\) is full column rank, so \(\operatorname{rank}\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{ \mathcal{S}\cup S^{\prime}}\right]=n+\operatorname{rank}((\mathbf{\Gamma}_{N})_{ \mathcal{S}\cup S^{\prime}})\), and \(\ker(\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}=0\). It follows that every \(((\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}_{i}})_{\mathcal{S}\cup S^{ \prime}_{i}}\) is full rank, so \((\mathbf{\mathcal{O}}_{N-k}\mathbf{\Psi})_{\mathcal{S}\cup S^{\prime}_{i}}\) is full rank \(\forall k\in\llbracket N\rrbracket\). In particular we may take \(k=0\), so \(\forall S,S^{\prime}\in\Delta\), \((\mathbf{\mathcal{O}}_{0}\mathbf{\Psi})_{\mathcal{S}\cup S^{\prime}}=(\mathbf{C}\mathbf{\Psi} )_{\mathcal{S}\cup S^{\prime}}\) is full rank, and therefore \(\mathbf{C}\mathbf{\Psi}\) is \(\Delta\)-injective.
Likewise, if \((\mathbf{C}\mathbf{\Psi})_{\mathcal{S}\cup S^{\prime}}\) is full rank, since \(\mathbf{\Gamma}_{N}\) is block triangular with \(\mathbf{C}\mathbf{\Psi}\) on the diagonal, we have that \((\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}\) is full rank for every \(S,S^{\prime}\in\Delta^{N}\). It follows that if \(\operatorname{rank}\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{ \mathcal{S}\cup S^{\prime}}\right]=n+\operatorname{rank}((\mathbf{\Gamma}_{N})_{ \mathcal{S}\cup S^{\prime}})\), then \(\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{\mathcal{S}\cup S^{ \prime}}\right]\) is full rank, so if \(U_{N}\) is \(\Delta^{N}\)-sparse and \(\mathbf{x}_{0}\) is generic such that \(Y_{N}=\mathbf{\mathcal{O}}_{N}\mathbf{x}_{0}+\mathbf{\Gamma}_{N}U_{N}\), they are unique. Therefore, \(\Sigma\) is jointly \(\Delta^{N}\)-injective. We have thus shown that \((1)\iff(2)\).
Note that from the above, we have \((2)\) if and only if \(\forall S,S^{\prime}\in\Delta^{N}\), \(\ker\left[\mathbf{\mathcal{O}}_{N}\left(\mathbf{\Gamma}_{N}\right)_{\mathcal{S}\cup S^{ \prime}}\right]=0\), which is the case if and only if \(\ker\mathbf{\mathcal{O}}_{N}=0\), \(\ker(\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}=0\), and \(\operatorname{im}\mathbf{\mathcal{O}}_{N}\cap\operatorname{im}(\mathbf{\Gamma}_{N})_{ \mathcal{S}\cup S^{\prime}}=0\). The third point holds iff \(\operatorname{rank}\mathbf{P}_{N}^{\perp}(\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{ \prime}}=\operatorname{rank}(\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}\), so this is again equivalent to \(\ker\mathbf{\mathcal{O}}_{N}=0\) and \(\ker\mathbf{P}_{N}^{\perp}(\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}=\ker(\mathbf{P}_{N}^ {\perp}\mathbf{\Gamma}_{N})_{\mathcal{S}\cup S^{\prime}}=0\). We conclude by noting that this is equivalent to \(\ker\mathbf{\mathcal{O}}_{N}=0\) and \(\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) being \(\Delta^{N}\)-injective.
Having shown the result for uniqueness of solutions/sparse injectivity, we proceed to the problem of recoverability. Recalling that we are only interested in \(\Delta^{N}\)-sparse solutions \(U_{N}\), we could obtain a necessary and sufficient condition from the GNUP if this support pattern is an ASC. Technically, as \(\Delta^{N}\) is a set of tuples of sets, it cannot be a simplicial complex, but if one instead interprets these tuples as disjoint unions, we uncover an ASC structure:
**Lemma 3**: \(\Delta^{N}\) _is a simplicial complex up to bijection._
It is clear that every \((S_{k})_{k\in\llbracket N\rrbracket}\in\Delta^{N}\) can be identified with the set \(\tilde{S}=\bigcup_{k\in\llbracket N\rrbracket}\{k\}\times S_{k}\). Let \(S\in\Delta^{N}\) and suppose \(\tilde{S}^{\prime}\subseteq\tilde{S}\). Then for all \(k\), \(\{k\}\times S^{\prime}_{k}\subseteq\{k\}\times S_{k}\implies S^{\prime}_{k} \subseteq S_{k}\), so \(S^{\prime}_{k}\in\Delta\). Therefore \(S^{\prime}\in\Delta^{N}\).
It is then clear that we can apply the GNUP in this context, to obtain necessary and sufficient conditions on \(\Delta^{N}\)-recovery.
**Theorem 2** (\(\
an analogous condition for \(\Delta^{N}\)-recoverability. Once we have \(\ker\mathbf{{\cal O}}_{N}=0\), each of these conditions is entirely determined by attributes of \(\ker P_{N}^{\perp}\mathbf{{\Gamma}}_{N}\). This space is precisely the subspace of inputs \(U_{N}\) for which there exists an initial condition \(\mathbf{{x}}_{0}\) with \(\mathbf{{\Gamma}}_{N}U_{N}=\mathbf{{\cal O}}_{N}\mathbf{{x}}_{0}\). By constructing lower and upper bounds on \(\operatorname{nsc}(P_{N}^{\perp}\mathbf{{\Gamma}}_{N},\Delta^{N})\) via \(\operatorname{nsc}(\mathbf{{C}}\mathbf{{\Psi}},\Delta)\) and \(\operatorname{nsc}((\mathbf{{C}}\mathbf{{\Psi}})^{-1}\mathbf{{C}}\mathbf{{A}}\ker\mathbf{{C}},\Delta)\), we obtain conditions that are respectively necessary and sufficient for \(\Sigma\) to be input \(\Delta^{N}\)-recoverable.
### _A Necessary Condition_
In [20], it was empirically observed that \(\mu(\mathbf{{C}}\mathbf{{\Psi}})\leq\mu(\mathbf{{ \Gamma}}_{N})\leq\mu(\mathbf{{P}}_{N}^{\perp}\mathbf{{ \Gamma}}_{N})\). The intuition one is tempted to derive from this is that a more incoherent \(\mathbf{{C}}\mathbf{{\Psi}}\) indicates a greater chance of (\(D_{1}\)) succeeding. To concretize this idea, one need look no further than the question of recovering the very last input to a system, left untouched by the system dynamics. Specifically, it is useful to consider the one-to-one correspondence between elements of \(\ker\mathbf{{C}}\mathbf{{\Psi}}\) and inputs \(U_{N}\in\ker\mathbf{{P}}_{N}^{\perp}\mathbf{{\Gamma}}_{N}\) with all zero entries except for the last:
**Lemma 4**: _Suppose \(\boldsymbol{u}\) is an input such that \(\forall k<N-1,\boldsymbol{u}_{k}=0\) and \(\boldsymbol{u}_{N-1}\in\ker\mathbf{C}\mathbf{\Psi}\). Then \(U_{N}\in\ker\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\)._
Suppose \(U_{N}\) is as described. Then, with \(\boldsymbol{x}_{0}=0\), we have \(\boldsymbol{x}_{k}=0\) and hence \(\boldsymbol{y}_{k}=0\) for all \(k<N\). We also have that \(\boldsymbol{y}_{N}=\mathbf{C}\boldsymbol{x}_{N}=\mathbf{C}\mathbf{\Psi}\boldsymbol{u}_{N-1}=0\). Therefore, \(U_{N}\in\ker\mathbf{\Gamma}_{N}\subseteq\ker\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\). This suggests the following natural necessary condition:
**Proposition 3**: \(\Sigma\) _can only be input \(\Delta^{N}\)-recoverable with (\(D_{1}\)) (resp. injective) if \(\mathbf{{C}}\mathbf{{\Psi}}\) satisfies the \(\Delta\)-NUP (resp. is \(\Delta\)-injective)_
The case of injectivity is implied by characterization 2 of Theorem 1. Suppose \(\Sigma\) is \(\Delta^{N}\)-recoverable. Then by Lemma 4, for any input \(\boldsymbol{u}\) such that \(\forall k<N-1\), \(\boldsymbol{u}_{k}=0\) and \(\boldsymbol{u}_{N-1}\in\ker\mathbf{C}\mathbf{\Psi}\), \(U_{N}\) is the unique input component of the solution to (\(D_{1}\)) if \(U_{N}\) is \(\Delta^{N}\)-sparse, so \(\min_{U}\|U_{N}\|_{1}=\min_{\boldsymbol{v}}\|\boldsymbol{v}\|_{1}\) s.t. \(\mathbf{C}\mathbf{\Psi}\boldsymbol{u}_{N-1}=\mathbf{C}\mathbf{\Psi}\boldsymbol{v}\) always recovers \(\Delta\)-sparse \(\boldsymbol{u}_{N-1}\), hence \(\mathbf{C}\mathbf{\Psi}\) satisfies the \(\Delta\)-NUP. Another way to put this is in terms of the nullspace constant:
**Corollary 1**: \(\Sigma\) _can only be input \(\Delta^{N}\)-recoverable if \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta)<0.5\)._ It is clear that if \(\mathbf{C}\mathbf{\Psi}\) does not satisfy the \(\Delta\)-NUP, there will always exist entrywise \(\Delta\)-sparse inputs which cannot be recovered by the problem (\(D_{1}\)). As this will be the case for any \(N\), this is in a sense the tightest necessary condition for this class of problems.
### _A Sufficient Condition_
We will now show that whenever the output of a system can be made uniformly zero by a suitable choice of initial condition, every input entry must satisfy \(\boldsymbol{u}_{k}\in(\mathbf{C}\mathbf{\Psi})^{-1}\mathbf{C}\mathbf{A}\ker\mathbf{C}\):
**Lemma 5**: _Let \(\boldsymbol{u}\) be an input such that \(U_{N}\in\ker\mathbf{{P}}_{N}^{\perp}\mathbf{{\Gamma}}_{N}\). Then \(\forall k<N\), \(\boldsymbol{u}_{k}\in(\mathbf{{C}}\mathbf{{\Psi}})^{-1} \mathbf{{C}}\mathbf{{A}}\ker\mathbf{{C}}\)._
Let \(\boldsymbol{u}\) be as above, and let \(\mathbf{{x}}_{0}\) be such that \(\mathbf{{\cal O}}_{N}\mathbf{{x}}_{0}+\mathbf{{\Gamma} }_{N}U_{N}=Y_{N}=0\). Suppose the claim is false. Then \(\exists k\leq N-2\), \(\mathbf{{C}}\mathbf{{\Psi}}\mathbf{{u}}_{k}\notin \mathbf{{C}}\mathbf{{A}}\ker\mathbf{{C}}\), so if \(\mathbf{{x}}_{k}\in\ker\mathbf{{C}}\), \(\mathbf{{C}}(\mathbf{{A}}\mathbf{{x}}_{k}+\mathbf{{\Psi}}\mathbf{{u}}_{k})=\mathbf{{y}}_{k+1}\neq 0\). Since we assumed \(Y_{N}=0\implies\mathbf{{y}}_{k+1}=0\), this cannot be, so \(\mathbf{{x}}_{k}\notin\ker\mathbf{{C}}\), which gives our contradiction as \(\mathbf{{y}}_{k}=\mathbf{{C}}\mathbf{{x}}_{k}\neq 0\).
A fact not immediately visible is that this subspace arises as the kernel of \(\mathbf{{P}}_{1}^{\perp}\mathbf{{\Gamma}}_{1}\):
**Lemma 6**: \[\ker\mathbf{{P}}_{1}^{\perp}\mathbf{{\Gamma}}_{1}=(\mathbf{{C}}\mathbf{{\Psi}})^{-1}\mathbf{{C}}\mathbf{{A}}\ker\mathbf{{C}}\]
\[\ker\mathbf{{P}}_{1}^{\perp}\mathbf{{\Gamma}}_{1} =\{\boldsymbol{u}:\mathbf{{\Gamma}}_{1}\boldsymbol{u}\in \operatorname{im}\mathbf{{\cal O}}_{1}\}\] \[=\{\boldsymbol{u}:\exists\mathbf{{x}}_{0},\mathbf{ {C}}\mathbf{{x}}_{0}=0\mbox{ and }\mathbf{{C}}\mathbf{{\Psi}}\mathbf{{u}}=\mathbf{{C}}\mathbf{{A}}\mathbf{{x}}_{0}\}\] \[=\{\boldsymbol{u}:\mathbf{{C}}\mathbf{{\Psi}} \mathbf{{u}}\in\mathbf{{C}}\mathbf{{A}}\ker \mathbf{{C}}\}\] \[=(\mathbf{{C}}\mathbf{{\Psi}})^{-1}\mathbf{{C}}\mathbf{{A}}\ker\mathbf{{C}}\]
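This identity is easy to sanity-check numerically on small random systems. The sketch below (plain NumPy; the dimensions and the SVD-based null-space helper are our own illustrative choices) builds \(\mathbf{\mathcal{O}}_{1}\) and \(\mathbf{\Gamma}_{1}\), computes both subspaces, and confirms that they coincide.

```
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis of ker(M) via the SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

rng = np.random.default_rng(0)
n, m, p = 6, 5, 4  # small illustrative dimensions (assumptions, not from the text)

A = rng.normal(size=(n, n)) / np.sqrt(n)
Psi = rng.normal(size=(n, m)) / np.sqrt(n)
C = rng.normal(size=(p, n))

# One-step stacked maps: y_0 = C x_0 and y_1 = C A x_0 + C Psi u_0.
O1 = np.vstack([C, C @ A])
Gamma1 = np.vstack([np.zeros((p, m)), C @ Psi])

# Left-hand side: ker(P1_perp Gamma1), where P1_perp projects onto im(O1)^perp.
P1_perp = np.eye(2 * p) - O1 @ np.linalg.pinv(O1)
lhs = null_space(P1_perp @ Gamma1)

# Right-hand side: {u : C Psi u lies in C A ker(C)}.
CA_kerC = C @ A @ null_space(C)
Q_perp = np.eye(p) - CA_kerC @ np.linalg.pinv(CA_kerC)
rhs = null_space(Q_perp @ (C @ Psi))

same_dim = lhs.shape[1] == rhs.shape[1]
contained = np.allclose(rhs @ (rhs.T @ lhs), lhs, atol=1e-8)
print(same_dim and contained)  # True: the two subspaces agree
```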
This leads us to the following:
**Proposition 4**: \(\Sigma\) _is input \(\Delta^{N}\)-recoverable for every \(N\) if and only if \(\mathbf{{P}}_{1}^{\perp}\mathbf{{\Gamma}}_{1}\) satisfies the \(\Delta\)-NUP._
\(\Sigma\) is input \(\Delta^{N}\)-recoverable for every \(N\) only if \(\Sigma\) is input \(\Delta\)-recoverable; by Theorem 2, we have the forward implication. Since Lemma 5 ensures that every entry \(\boldsymbol{u}_{k}\) of an input \(U_{N}\in\ker\mathbf{P}_{N}^{\perp}\mathbf{\Gamma}_{N}\) satisfies \(\boldsymbol{u}_{k}\in(\mathbf{C}\mathbf{\Psi})^{-1}\mathbf{C}\mathbf{A}\ker\mathbf{C}\), which by Lemma 6 is exactly \(\ker\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}\), the \(\Delta\)-NUP of \(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}\) gives the reverse implication.
e.g. systems that quickly divulge information about their initial conditions may actually result in nullspace constants of these spaces that are further apart. We also note that \((\mathbf{C}\mathbf{\Psi})^{-1}\mathbf{C}\mathbf{A}\ker\mathbf{C}=(\mathbf{C}\mathbf{\Psi})^{+}\mathbf{C}\mathbf{A}\ker\mathbf{C}+\ker\mathbf{C}\mathbf{\Psi}\), so characteristics of the space \((\mathbf{C}\mathbf{\Psi})^{+}\mathbf{C}\mathbf{A}\ker\mathbf{C}\) directly mediate how close the conditions established in this section are to one another. We might then expect the gap between them to scale with \(\dim\ker\mathbf{C}\), and therefore to be small when \(p\) is not much less than \(n\).
## V Numerical Validation
To validate and provide intuition for our results, we perform two types of numerical experiments on ensembles of random LDSs with \(s\)-sparse inputs, i.e. \(\Delta\)-sparse with \(\Delta\in\{\Delta_{s}:s\in\mathbb{N}\}\):
1. We evaluate the ability of (\(D_{1}\)) to jointly recover the initial state and sparse inputs for different system parameters and sparsity levels.
2. We explore the relationship between \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) as a function of system parameters and sparsity levels.
Throughout, we employ the 1-step TSA branch-and-bound algorithm of [24] to compute nullspace constants to a tolerance of \(\pm 0.05\). This level of precision was chosen to permit the estimation of nullspace constants at much larger sparsity levels, and for a statistically significant number of systems, than would be feasible if we were to compute them exactly.
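For intuition about the quantity being computed, a crude Monte-Carlo _lower bound_ on \(\operatorname{nsc}(\mathbf{M},\Delta_{s})\) can be obtained by sampling directions in \(\ker\mathbf{M}\). The sketch below is ours and purely illustrative; it is not the branch-and-bound procedure of [24] used for the reported results, and it generally underestimates the constant.

```
import numpy as np

def nsc_lower_bound(M, s, n_samples=20000, rng=None):
    """Monte-Carlo lower bound on the Delta_s-nullspace constant of M:
    the maximum of ||v_S||_1 / ||v||_1 over sampled v in ker(M), where S
    picks the s largest-magnitude entries of v."""
    rng = rng or np.random.default_rng()
    _, sv, vt = np.linalg.svd(M)
    rank = int(np.sum(sv > 1e-10))
    basis = vt[rank:]                      # rows span ker(M)
    if basis.shape[0] == 0:
        return 0.0
    coeffs = rng.normal(size=(n_samples, basis.shape[0]))
    V = coeffs @ basis                     # random vectors in ker(M)
    abs_V = np.abs(V)
    top_s = -np.sort(-abs_V, axis=1)[:, :s]
    return float(np.max(top_s.sum(axis=1) / abs_V.sum(axis=1)))
```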
### _System Generation and Computing Information_
For a given \(n,m,p\), systems \(\Sigma=(\mathbf{A},\mathbf{\Psi},\mathbf{C})\) were generated randomly, mirroring the strategy of [20]:
* \(\mathbf{A}\) is i.i.d. Gaussian in each entry with variance \(1/n\), s.t. each eigenvalue \(\lambda\) of \(\mathbf{A}\) satisfies \(|\lambda|<0.9\).
* \(\mathbf{\Psi}\) is i.i.d. Gaussian in each entry with variance \(1/n\).
* \(\mathbf{C}\) is i.i.d. Gaussian with unit variance.
For all systems considered, we choose \(m=20\), to ensure tractability of \(\Delta_{s}\)-nullspace constant estimates. We also restrict our attention to \(n\leq m\) and \(p\leq m\), focusing on the usual assumption of "overcompleteness" of \(\mathbf{\Psi}\). We analyze a total of 25173 systems across all parameter ranges. We conducted all experiments and analysis in Python, using CVXPY with the GUROBI solver for implementation of linear programming.
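A minimal NumPy sketch of this generation scheme is given below; note that the rescaling used to enforce \(|\lambda|<0.9\) is our own assumption, since the text above does not specify how the eigenvalue condition is imposed.

```
import numpy as np

def random_system(n, m, p, rng, eig_bound=0.9):
    """Draw a random LDS (A, Psi, C) following the ensemble described above."""
    A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    # Assumption: rescale A so that every eigenvalue satisfies |lambda| < eig_bound.
    radius = np.max(np.abs(np.linalg.eigvals(A)))
    if radius >= eig_bound:
        A *= (eig_bound / radius) * 0.999
    Psi = rng.normal(scale=1.0 / np.sqrt(n), size=(n, m))
    C = rng.normal(scale=1.0, size=(p, n))
    return A, Psi, C

rng = np.random.default_rng(0)
A, Psi, C = random_system(n=16, m=20, p=11, rng=rng)
```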
### _Success of recovery_
The first analysis we present provides numerical support for the claims of necessity and sufficiency of the conditions established in corollaries 1 and 2, that \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})<0.5\) is necessary and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})<0.5\) is sufficient for \(\Sigma\) to be input \(\Delta_{s}^{N}\)-recoverable.
For each system, we choose a random \(s\leq 10\), and simulate 30 random combinations of initial conditions and entrywise \(s\)-sparse inputs of length \(N=n\). As in [20], we sample each entry of the initial conditions uniformly from \([-5,5]\), choose every support \(S_{k}\subseteq\llbracket m\rrbracket\) uniformly without replacement s.t. \(|S_{k}|=s\), and sample each nonzero input entry uniformly on \([-5,5]\). In Table I, we provide the fraction of systems for which there was at least one imperfect joint recovery of the 30 trials conducted as a function of the nullspace constants \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\), which were determined up to being less than \(0.5\), greater than \(0.5\), or in \([0.45,0.55]\). From the table, we see our conditions behave as expected: all systems with \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})<0.5\) exhibit perfect recovery; a majority of systems with \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})>0.5\) exhibit imperfect recovery; and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\geq\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) for all systems. In particular, there were many systems such that, for the sparsity level considered, \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})<0.5\) was not sufficient for perfect input recovery, as \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})>0.5\).
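The joint recovery step (\(D_{1}\)) itself can be prototyped in a few lines of CVXPY. In the sketch below, the stacking of \(\mathbf{\mathcal{O}}_{N}\) and \(\mathbf{\Gamma}_{N}\) (outputs \(\boldsymbol{y}_{0},\dots,\boldsymbol{y}_{N}\), inputs \(\boldsymbol{u}_{0},\dots,\boldsymbol{u}_{N-1}\)) reflects our reading of the model and is not copied from the authors' code.

```
import numpy as np
import cvxpy as cp

def build_maps(A, Psi, C, N):
    """Stack O_N and Gamma_N so that Y_N = O_N x_0 + Gamma_N U_N
    (indexing assumption: outputs y_0..y_N, inputs u_0..u_{N-1})."""
    p, m = C.shape[0], Psi.shape[1]
    powers = [np.linalg.matrix_power(A, k) for k in range(N + 1)]
    O_N = np.vstack([C @ powers[k] for k in range(N + 1)])
    Gamma_N = np.zeros(((N + 1) * p, N * m))
    for i in range(1, N + 1):           # output block y_i
        for j in range(i):              # input u_j affects y_i for j < i
            Gamma_N[i * p:(i + 1) * p, j * m:(j + 1) * m] = C @ powers[i - 1 - j] @ Psi
    return O_N, Gamma_N

def joint_recovery(Y, A, Psi, C, N):
    """(D_1): jointly estimate x_0 and the sparse input sequence by l1-minimization."""
    O_N, Gamma_N = build_maps(A, Psi, C, N)
    x0 = cp.Variable(A.shape[0])
    U = cp.Variable(Gamma_N.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(U)), [O_N @ x0 + Gamma_N @ U == Y])
    prob.solve()   # with a license, solver=cp.GUROBI can be passed as in our experiments
    return x0.value, U.value
```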
Figure 1 visualizes this data in a different way, presenting a joint recovery phase transition plot over \(p\) and \(s\) for \(n\in\{12,16,20\}\) and \(N=n\). The intensity of each pixel indicates the empirical probability of a system with dimensions \(p,n,m\) and sparsity level \(s\) exhibiting imperfect joint recovery. The colormap normalization is in log-scale to better illustrate the case of perfect recovery for all trials, which we plot as white. The red and blue outlined regions indicate \((p,n,m,s)\) such that every system simulated satisfied \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})>0.5\) (red, failure of necessary condition) or \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})<0.5\) (blue, satisfaction of sufficient condition). We observe that there are no white pixels contained in the red outlined regions, supporting the claim of \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})<0.5\) being necessary for joint \(\Delta_{s}^{n}\)-recoverability. The red outlined region also becomes strictly smaller as \(n\) increases, as expected, as this indicates \(\mathbf{C}\mathbf{\Psi}\) with generically higher rank. Likewise, we observe that there are no colored pixels contained in the blue outlined regions, supporting the claim that \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})<0.5\) is sufficient for joint \(\Delta_{s}^{n}\)-recoverability. The blue regions appear to shrink to the right as \(n\) increases, widening the gap in \(s\) between the red and blue outlined regions for smaller \(p\). This reflects the fact that increased \(n\) will generically result in an increase in the dimension of \(\ker\mathbf{C}\), and thus an increase in the dimension of \(\ker\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}\).
### _Analysis of NSC Relationship_
We conclude our analysis by taking a closer look at the relationship between the two main quantities motivated in this work, \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\). Here we do not terminate once determining whether the constant is above or below 0.5, as was done for the previous section, opting instead to compute bounds on these constants for each system
\begin{table}
\begin{tabular}{c|c|c|c} & \(\sigma_{\mathbf{C}\mathbf{\Psi}}<0.5\) & \(\sigma_{\mathbf{C}\mathbf{\Psi}}\sim 0.5\) & \(\sigma_{\mathbf{C}\mathbf{\Psi}}>0.5\) \\ \hline \(\sigma_{\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}}>0.5\) & \(153/800\) & \(34/289\) & \(9068/9866\) \\ \hline \(\sigma_{\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}}\sim 0.5\) & \(0/116\) & \(60/336\) & \(2/10\) \\ \hline \(\sigma_{\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}}<0.5\) & \(0/2100\) & \(0/0\) & \(0/0\) \\ \end{tabular}
\end{table} TABLE I: Number of systems with at least one imperfect joint recovery across 30 trials out of total occurrences of systems, with given ranges of \(\sigma_{\mathbf{C}\mathbf{\Psi}}:=\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\sigma_{\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}}:=\operatorname{nsc}(\mathbf{P}_{1}^{ \perp}\mathbf{\Gamma}_{1},\Delta_{s})\). \(n\in\{12,16,20\}\), \(1\leq p\leq 20\), \(1\leq s\leq 5\) and \(m=20\). \(\sigma\sim 0.5\) if \(|\sigma-0.5|\leq 0.05\) (experimental tolerance).
up to a tolerance of \(\pm 0.05\). In figure 2, for various \(n,m,p,s\) we plot \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) against \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\). Each plotted point represents the midpoint of the computed bounds on these constants for a given \(\Sigma\) and sparsity level \(s\). We see that for every set of parameters considered, the nullspace constants tend to increase with the sparsity level, and that \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\leq\operatorname{nsc}(\mathbf{P}_ {1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) as expected from the fact that \(\ker\mathbf{C}\mathbf{\Psi}\subseteq(\mathbf{C}\mathbf{\Psi})^{+}\mathbf{C}\mathbf{A}\ker\mathbf{C}+\ker \mathbf{C}\mathbf{\Psi}=\ker\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}\).
In the top set of figures we fix \(n=19\) and \(m=20\), and illustrate the trend of \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) as \(p\) increases. For \(p=11\), \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) is quite pessimistic, echoing the wide gap between the red and blue regions of figure 1 for \(n=20\) for moderate \(p\). As \(p\) increases, points approach the diagonal and tend to decrease in magnitude, reflecting the intuitively more favorable recovery properties of a larger number of measurements. When \(p=n\), we see that the two constants become equal; this reflects the fact that \(\mathbf{C}\in\mathbb{R}^{p\times n}\) will be generically full rank and thus \(\ker\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}=(\mathbf{C}\mathbf{\Psi})^{+}\mathbf{C}\mathbf{A}\ker \mathbf{C}+\ker\mathbf{C}\mathbf{\Psi}=\ker\mathbf{C}\mathbf{\Psi}\).
In the bottom set of figures we fix \(p=11\) and \(m=20\), and illustrate the trend of \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) as \(n\) increases. Up to our tolerance, we see again that for \(n=11\), \(n=p\) and so the constants are equal. With the increase in \(n\), we observe that \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) stays roughly fixed, while \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) increases. This is qualitatively similar to the effect of decreasing \(p\), but in that case, _both_ constants were affected. This comes as no surprise, as for \(p\leq n\leq m\) we expect the distribution of \(\mathbf{C}\mathbf{\Psi}\) to be relatively unaffected, but \(\dim\ker\mathbf{C}\) to increase, and thus for \(\dim\ker\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1}\) to increase. Another, more intuitive interpretation is that by increasing \(n\) with \(p\) fixed, there is an increased amount of interference from the initial condition mixed in with the sparse inputs, and so guarantees on the recovery of even the sparse inputs alone will naturally worsen for recovery with a given number of time steps \(N\). The fact that recovery empirically does better across the board for larger \(n\) in figure 1 is explained by the fact that with a larger state space, a larger number of time steps will continue to yield insight about previous inputs and the initial condition.
Overall, these plots support the point that propositions 3 and 4 can actually be fairly tight, provided that \(p\) is not much less than \(n\). Larger \(p\) is in general seen to be better across the board, while increasing \(n\) relative to \(p\) results in less favorable sufficient guarantees. Importantly, they also reinforce the fact that in most, but not all, cases, both \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})\) and \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})\) will be either less than or greater than \(0.5\). The cases for which they disagree are few, indicating potential practical utility for these quantities for determining joint \(\Delta^{N}\)-recovery with (\(D_{1}\)) in applications.
## VI Conclusions and Future Work
We have presented the first necessary and sufficient conditions under which the joint recovery of generic initial conditions and sparse inputs for a linear dynamical system is well-posed and may be carried out via \(\ell_{1}\)-minimization. Leveraging this characterization, we further provide two simple conditions, respectively necessary and sufficient, for joint \(\Delta^{N}\)-recoverability. In contrast to previous work, these conditions have intuitive justifications and can be computationally verified for most systems. Through \(\ell_{1}\)-recovery experiments on random systems, we showed that these conditions are useful indicators of recovery performance.
We have identified several exciting avenues for future research building on the necessary and sufficient conditions introduced here. One direction is to extend these results to general systems with feedthrough terms (nonzero \(\mathbf{D}\) matrix) and broaden the scope of recovery results to incorporate recovery with delay. More generally, we aim to explore notions of strong observability for linear systems with sparse inputs. Indeed, there are many significant results in linear systems theory that we believe admit direct extensions to the case of sparse inputs, and for which necessary and sufficient conditions using \(\ell_{1}\)-minimization and related characterizations may be achieved by building on the generalized nullspace property, just as was illustrated here.
## Acknowledgements
The authors would like to thank Dr. Adam Charles for invaluable discussions during the preparation of this manuscript.
Fig. 1: Empirical probability of imperfect joint recovery of entrywise \(\Delta_{s}\)-sparse signals and generic initial conditions as a function of \(s\) and \(p\) with \(m~{}=~{}20\), \(n\in\{12,16,20\}\), and \(N=n\); white indicates all signals perfectly recovered. Within the red outlines are \((s,p)\) such that all simulated systems satisfied \(\operatorname{nsc}(\mathbf{C}\mathbf{\Psi},\Delta_{s})>0.5\). Within the blue outline are \((s,p)\) such that all simulated systems satisfied \(\operatorname{nsc}(\mathbf{P}_{1}^{\perp}\mathbf{\Gamma}_{1},\Delta_{s})<0.5\). |
2310.19090 | gafro: Geometric Algebra for Robotics | Geometry is a fundamental part of robotics and there have been various
frameworks of representation over the years. Recently, geometric algebra has
gained attention for its property of unifying many of those previous ideas into
one algebra. While there are already efficient open-source implementations of
geometric algebra available, none of them is targeted at robotics applications.
We want to address this shortcoming with our library gafro. This article
presents an overview of the implementation details as well as a tutorial of
gafro, an efficient c++ library targeting robotics applications using geometric
algebra. The library focuses on using conformal geometric algebra. Hence,
various geometric primitives are available for computation as well as rigid
body transformations. The modeling of robotic systems is also an important
aspect of the library. It implements various algorithms for calculating the
kinematics and dynamics of such systems as well as objectives for optimisation
problems. The software stack is completed by python bindings in pygafro and a
ROS interface in gafro_ros. | Tobias Löw, Philip Abbet, Sylvain Calinon | 2023-10-29T17:43:21Z | http://arxiv.org/abs/2310.19090v1 | # _gafro_: Geometric Algebra for Robotics
###### Abstract
Geometry is a fundamental part of robotics and there have been various frameworks of representation over the years. Recently, geometric algebra has gained attention for its property of unifying many of those previous ideas into one algebra. While there are already efficient open-source implementations of geometric algebra available, none of them is targeted at robotics applications. We want to address this shortcoming with our library _gafro_. This article presents an overview of the implementation details as well as a tutorial of _gafro_, an efficient c++ library targeting robotics applications using geometric algebra. The library focuses on using conformal geometric algebra. Hence, various geometric primitives are available for computation as well as rigid body transformations. The modeling of robotic systems is also an important aspect of the library. It implements various algorithms for calculating the kinematics and dynamics of such systems as well as objectives for optimisation problems. The software stack is completed by python bindings in _pygafro_ and a ROS interface in _gafro_ros_.
## I Introduction
Geometric algebra can be considered a high-level mathematical language for geometric reasoning. As such it is very well suited for general problems in robotics.
Although geometric algebra has great potential for modeling, learning and control in robotics, it has not been widely adopted in robotics research. One reason for this is the lack of easy-to-use libraries for robotics applications, while at the same time tools based on matrix algebra are very mature and readily available. We aim to change that by providing a ready-to-use geometric algebra library for robotics that can be used with the most popular programming frameworks, namely c++, python and ROS.
For robot modeling and control it is necessary to compute the kinematics and dynamics of robots. These algorithms are well studied and have been implemented in various software frameworks. In our _gafro_ library we provide an implementation of the geometric algebra variant of these algorithms. It is important to point out that geometric algebra can be used to compute these important quantities that are classically computed using matrix algebra, while also offering a richer toolset, i.e. it also includes tools for geometric reasoning that matrix algebra does not have.
In this article we want to explain the implementation details of our geometric algebra library _gafro_ and give a tutorial on how to use it for common robotics problems such as inverse kinematics and optimal control. Our aim is to make geometric algebra more accessible for robotics research by providing this ready-to-use library. This should result in a wider adoption and facilitate the research on using this powerful framework for robotics.
This article is organized as follows: in Section II we give an overview of the available programming interfaces, in Section III we explain the implementation details of the algebra, in Section IV we show how to model robots using the library, in Section V we compare _gafro_ to other GA and robot modeling libraries and finally in Section VI we demonstrate various applications and give a tutorial on how to use the library. The documentation and the links to all repositories can be found on our website [https://geometric-algebra.tobiloew.ch/gafro](https://geometric-algebra.tobiloew.ch/gafro).
## II Programming Interfaces
In this section we explain the programming interfaces that _gafro_ offers. The main library is written in c++ for which we provide python bindings called _pygafro_ as well as the ROS package _gafro_ros_. All mentioned repositories can be found at [https://gitlab.com/gafro](https://gitlab.com/gafro).
### _c++ Library_
The core implementation of _gafro_ is done in c++20 and relies heavily on templates. Apart from the standard library the only mandatory dependency is the _Eigen1_ library. We use the _yaml-cpp2_ library as an optional dependency for loading of robot descriptions. Several robot descriptions are already available and can be used via the respective classes.
Footnote 1: [https://eigen.tuxfamily.org](https://eigen.tuxfamily.org)
Footnote 2: [https://github.com/beder/yaml-cpp](https://github.com/beder/yaml-cpp)
### _ROS Package_
The Robot Operating System (ROS) is the de facto standard for building robot applications. Hence, we provide a ROS package called _gafro_ros_ to use our library within the ROS framework. It allows passing geometric primitives between nodes via messages. It also enables the visualization of these geometric primitives in Rviz via custom plugins. Furthermore, since most robot descriptions are available in the Universal Robot Description Format (URDF), this package supports loading this file format.
### _Python Bindings_
Since the python programming language is a popular tool for rapid prototyping and is generally more accessible than the c++ language, we are providing language bindings in python to _gafro_ using the _pybind113_ library.
Footnote 3: [https://github.com/pybind/pybind11](https://github.com/pybind/pybind11)
## III Implementation of Conformal Geometric Algebra
In this section we will explain in detail our implementation of conformal geometric algebra (CGA). The aspects that are highlighted are the implementation of a general multivector and the expressions that are acting on it. The library covers several points that were proposed in [1] as a wishlist of geometric algebra implementations. We have designed the library in an object-oriented way, so the classes also reflect the mathematical inheritance relationships. Furthermore, all classes, i.e. all specialized multivectors, are instantiated as different types, which allows them to be distinguished at compile-time for type-safety and have persistent storage. These specialized classes also enable the computation with partial multivectors, i.e. the library exploits the fact that the most commonly used multivectors are sparse and only use certain subspaces of the algebra. We address numerical imprecisions by ensuring that only those elements of the resulting multivectors of expressions that are known to be non-zero are evaluated. We handle the type explosion of binary operators by automatically evaluating partial expressions when the full expressions get too complex.
### _General Multivector_
The core element of computation in geometric algebra is the multivector. Hence, it is very important to think about the design choices when implementing its structure, as this will determine the memory usage and computational performance. The general structure of a multivector in CGA can be seen in Figure 1. It is composed of 32 basis blades, divided into grades zero to five. A general multivector would therefore be quite heavy in terms of memory and computation.
However, important structural aspects of CGA facilitate the design process here: the sparsity of its representations and the fact that the structure of multivector expressions is known at compile-time. Both of these properties mean that we can implement the data vector of a multivector by only storing its known non-zero elements. This is achieved by using a template that takes a list of blade indices as input:
```
// Reconstructed sketch; the exact declaration in gafro may differ.
template <class T, int... index>
class Multivector;
```
### _Algebraic Computations using Expression Templates_
When implementing geometric algebra, there are some core binary expressions that need to be available for general algebraic computations. These operators are listed in Table II.
All expressions are implemented as expression templates. The expression templates determine the result type of the expression at compile time. This is achieved via accompanying evaluation classes that are hidden in the detail namespace. The challenge in the implementation here is the fact that the resulting multivectors only rarely have the same blades as the input operands. Note that the expressions are evaluated in a lazy fashion, which means that the blades are evaluated on demand. This makes it possible to e.g. only evaluate a single blade of the resulting multivector, depending on the requirements.
```
// Reconstructed sketch of the lazy, per-blade evaluation; the exact gafro code may differ.
template <int blade>
T get() const
{
    // only the requested blade of the expression is evaluated
    return static_cast<const Derived &>(*this).template get<blade>();
}
```
A first example of this can be seen in the Sum expression in Figure 3. The corresponding type evaluation class constructs the type of the resulting multivector at compile-time. In the case of a summation this amounts to a simple bitwise OR operation comparing the bitsets of the input multivectors.
The inner and outer products work essentially in the same way and thus the corresponding expressions both inherit from a base Product class, i.e. InnerProduct and OuterProduct. The Product class takes a class structure implementing the corresponding Cayley table as template argument. This Cayley table defines the resulting blades of a blade by blade multiplication under the inner and outer product, respectively. Thus, in the case of CGA, it defines 1024 operations. In order to determine the type of the resulting multivector, we employ fold expressions that allow us to iterate over the blades of both input multivectors at compile-time. In this loop, we obtain the resulting blade per pair of blades using the respective Cayley table and then assemble them into the resulting multivector again using OR operations. Figure 4 shows an example for each of the inner and the outer product.
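The compile-time bookkeeping described above can be mirrored conceptually in a few lines of Python (this illustrates the idea only and is not gafro code): a multivector "type" is a set of blade bitmasks, summation unions the sets, and the outer product combines only disjoint blades.

```
# Conceptual mirror of the compile-time type computation: each blade is a bitmask
# over the 5 basis vectors of CGA, and a multivector "type" is a set of such blades.
def sum_type(a, b):
    return a | b                       # summation: simple union of the blade sets

def outer_type(a, b):
    out = set()
    for ea in a:
        for eb in b:
            if ea & eb == 0:           # e_A ^ e_B is non-zero only for disjoint blades
                out.add(ea | eb)
    return out

vectors = {1 << i for i in range(5)}   # the five grade-1 blades
bivectors = outer_type(vectors, vectors)
print(len(bivectors))                  # 10 grade-2 blades
```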
The geometric product class GeometricProduct also inherits from the base Product class and comes with its own Cayley table. So implementation wise it is the same as the inner and outer products. The main difference is that two blades can result from a blade multiplication, which causes the resulting multivector to potentially have both a lower and a higher grade than the inputs, as can be seen in Figure 4. Furthermore, we have products that are based on the geometric
\begin{table}
\begin{tabular}{l c c} \hline \hline & **operator** & **symbol** \\ \hline addition & + & + \\ subtraction & - & - \\ outer product & \^{} & \(\wedge\) \\ inner product & \(|\) & \(\cdot\) \\ geometric product & * & \\ \hline \hline \end{tabular}
\end{table} TABLE II: Binary operators.
Fig. 4: The resulting multivector of the inner and outer product operations has a different grade than the inputs.
Fig. 3: Summation
\begin{table}
\begin{tabular}{l l l} \hline \hline
**member function** & **unary expression** & **mathematical symbol** \\ \hline reverse & Reverse & \(\widetilde{X}\) \\ inverse & Inverse & \(X^{-1}\) \\ dual & Dual & \(X^{*}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Unary expressions that are implemented as member functions of the Multivector class.
product such as the sandwich product, which is also treated as a binary expression and shown in Figure (b)b.
### _Geometric Primitives_
Since we know the subspaces of all the geometric primitives, we chose to implement them in an object-oriented way by inheriting from the base Multivector class. Hence, the available classes are Vector, DirectionVector, TangentVector, Point, PointPair, Line, Circle, Plane and Sphere. Their corresponding subspaces within the geometric algebra can be seen in Figure 2. Having the geometric primitives as explicit classes allows the implementation of commonly used equations as member functions, which facilitates the usage. For example, the constructors of the geometric primitive classes implement the various ways they can be defined. The explicit classes are meant to facilitate the use and construction, but of course, computation with base multivectors is also possible. This preserves the property of covariant computation within the algebra.
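As a library-independent illustration of the Point primitive, the following NumPy sketch embeds a Euclidean point as \(P=\boldsymbol{x}+\tfrac{1}{2}\|\boldsymbol{x}\|^{2}e_{\infty}+e_{0}\) and checks two standard CGA identities; the basis ordering and metric matrix are our own conventions and say nothing about gafro's internal storage.

```
import numpy as np

# Coordinates on the basis (e1, e2, e3, einf, e0); the inner product in this
# basis is given by G (ei.ej = delta_ij, einf.einf = e0.e0 = 0, einf.e0 = -1).
G = np.array([
    [1, 0, 0,  0,  0],
    [0, 1, 0,  0,  0],
    [0, 0, 1,  0,  0],
    [0, 0, 0,  0, -1],
    [0, 0, 0, -1,  0],
], dtype=float)

def conformal_point(x):
    """Embed a Euclidean point x in R^3 as P = x + 0.5*|x|^2 * einf + e0."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [0.5 * x @ x, 1.0]])

def inner(a, b):
    return a @ G @ b

p = conformal_point([1.0, 2.0, 3.0])
q = conformal_point([0.0, 1.0, -1.0])

print(inner(p, p))            # ~0: conformal points are null vectors
print(-2.0 * inner(p, q))     # squared Euclidean distance between the points (18 here)
```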
### _Rigid Body Transformations_
The rigid body transformations that are currently available are implemented in the classes Rotor, Translator, Motor and Dilator. They all inherit from the base class Versor. Since all three classes are exponential mappings of bivectors they are accompanied by the expressions *Logarithm and *Exponential, respectively. The main method of the rigid body transformations is apply, which implements the sandwich product \(X^{\prime}=MX\widehat{M}\) and ensures type safety. While \(X\) can technically be any multivector, the intended usage is with the geometric primitives that were presented in Section III-C. Hence, in this context, type safety means that \(X^{\prime}\) stays the same geometric primitive as \(X\), e.g. a Point stays a Point. This ensures that the expression only evaluates blades that are part of the geometric primitive, which not only reduces the number of floating point operations, it also deals with numerical imprecisions in the computation that are known to occur in geometric algebra implementations.
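To make the sandwich product \(X^{\prime}=MX\widetilde{M}\) concrete without any library, the sketch below hand-codes the tiny Euclidean algebra \(\mathbb{G}_{2}\) in NumPy and rotates a vector with a rotor; it illustrates the mechanics only and is unrelated to gafro's implementation.

```
import numpy as np

# Multivectors of the 2D geometric algebra G(2) as coefficient arrays on the
# basis (1, e1, e2, e12); gp() implements the full geometric product table.
def gp(a, b):
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return np.array([
        a0 * b0 + a1 * b1 + a2 * b2 - a12 * b12,   # scalar part
        a0 * b1 + a1 * b0 - a2 * b12 + a12 * b2,   # e1 part
        a0 * b2 + a2 * b0 + a1 * b12 - a12 * b1,   # e2 part
        a0 * b12 + a12 * b0 + a1 * b2 - a2 * b1,   # e12 part
    ])

def reverse(a):
    return np.array([a[0], a[1], a[2], -a[3]])

theta = np.pi / 2
rotor = np.array([np.cos(theta / 2), 0.0, 0.0, -np.sin(theta / 2)])  # exp(-e12 * theta / 2)
v = np.array([0.0, 1.0, 0.0, 0.0])                                    # the vector e1

rotated = gp(gp(rotor, v), reverse(rotor))   # sandwich product R v R~
print(rotated)   # ~ [0, 0, 1, 0]: e1 rotated by 90 degrees onto e2
```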
## IV Robot Modeling
The previous sections introduced the features related to the underlying geometric algebra implementation of _gafro_. This section will now introduce the higher level features of the library related to robot modeling that distinguish it from other geometric algebra libraries.
The main aspects of robot modeling are the computation of the kinematics and dynamics of robotic systems. While the forward kinematics and forward/inverse dynamics can be computed using efficient recursive algorithms, the inverse kinematics problem for redundant manipulators is an optimisation problem. Hence, the robot modeling section also covers the necessary tools for solving such optimisation problems.
The base class for the modeling of robotic systems is called System. It implements the main functionality for the computation of the kinematics and dynamics of robotic systems and stores the joints and links. Those are organized in the classes Joint, FixedJoint, RevoluteJoint, PrismaticJoint and Link. A custom robot system can then be created using the functions addJoint and addLink. Note that those functions only add the links and joints to the system, the proper relationships between them need to be specified individually by setting the respective child/parent links/joints. Usually, robotic systems consist of one or more kinematic chains of interests, e.g. the chain from the base link to the end-effector frame for serial manipulators. This concept is implemented in the KinematicChain helper class and can be added to a System via the addKinematicChain function.
The forward and inverse dynamics are implemented in the member functions computeJointAccelerations and computeJointTorques, respectively. These functions are using the conformal geometric algebra versions of the Recursive Newton-Euler Algorithm (RNEA) and Articulated Body Algorithm (ABA).
The Manipulator class that implements a serial manipulator with \(N\) degrees of freedom is a specialization of the base System class. The parameter \(N\) is given as a template argument, which makes it possible to use fixed-size Eigen::Matrix types for the implementation of e.g. the joint configuration vectors and Jacobian matrices. The Manipulator class implements functions that are most commonly of interest when dealing with such types of systems. More specifically, these are functions dealing with the end-effector forward kinematics and Jacobian matrices. Hence, these functions are
* getEEMotor
* getEEAnalyticJacobian
* getEEGeometricJacobian
* getEEFrameJacobian
These functions use the aforementioned KinematicChain class in order to define the end-effector kinematic chain and precompute certain related quantities. Hence, loading a Manipulator from a file additionally requires the name of the end-effector joint.
Fig. 5: The geometric product is a combination of the inner and outer product.
The _gafro_ library includes several predefined serial manipulators that inherit from the Manipulator class, those are implemented in the following classes:
* FrankaEmikaRobot
* UR5
These classes are loading the definition from yaml files and are only available if _yaml-cpp_ is installed on the system. The ROS package contains a program that converts URDF files to the yaml format that is being used by this library.
## V Comparison to Other Libraries
There have been various works that published implementations of geometric algebra. These libraries all have in common that they are meant to be generic geometric algebra implementations focusing on the computational and mathematical aspects of the algebra itself. In contrast to that, our implementation is targeted specifically at robotics applications and thus not only implements the low-level algebraic computations but also features the computation of the kinematics and dynamics of serial manipulators as well as generic cost functions for optimal control.
### _Algebraic Operations Benchmarks_
In order to compare the performance of our _gafro_ library to other geometric algebra libraries we forked the _ga_benchmark5_ repository in order to integrate _gafro_. Our fork can be found at [https://github.com/loewt/ga-benchmark](https://github.com/loewt/ga-benchmark).
Footnote 5: [https://github.com/ga-developers/ga-benchmark](https://github.com/ga-developers/ga-benchmark)
We omitted _TbGAL_ from the plots of the benchmark results, since it is by far the slowest library. The benchmarks show that _gafro_ can compete in terms of performance with _GATL_ and _Versor_, which were previously reported to be the fastest GA libraries. We believe, however, that the API of _gafro_ is a lot more approachable and easy to use. Furthermore, it additionally implements specialized algorithms for robot modeling and optimisation problems.
### _Robotics Algorithms Benchmarks_
Since this library implements robot kinematics and dynamics algorithms, we are comparing and benchmarking _gafro_ against several libraries that are commonly used in robotics applications. The benchmarking repository uses a git pipeline in order to continuously pull the latest changes to the libraries and update the benchmark results. The current benchmarking results on our system can be found in Figure 7. As can be seen, _gafro_ is very competitive when it comes to the computation of the kinematics of a robotic system. The computation of the dynamics, however, especially the forward dynamics, is still slower at this point. The reason for this is an outdated implementation choice in the lower levels of the library. This issue will be addressed and fixed in a future release of _gafro_, which should make the computation of the dynamics also competitive compared to the established libraries.
## VI Applications and Tutorial
In this section we provide some example applications of how the library can be used. For that purpose, we provide
Fig. 6: Benchmarks of different geometric algebra libraries. All operations are computed using conformal geometric algebra.
\begin{table}
\end{table} TABLE III: Comparison of different libraries.
an accompanying repository _gafro_examples_ that explains the usage of _gafro_, _pygafro_ and _gafro_ros_. Note that in the text we are always referencing the c++ files, but the same examples can also be found in python in the corresponding folder. These examples are using the same naming scheme.
### _Geometric Algebra_
Since many potential users of _gafro_ are likely to be unfamiliar with the concept of geometric algebra we are providing some examples on how to do computations using this algebra.
#### Vi-A1 Multivector Operations
In a first example we are showing how to create different general multivectors and use them for algebraic computations. This example can be found in _multivector.cpp_. In particular, here we show the usage of the different products in geometric algebra and how they are used with their corresponding operators that were listed in Table II.
#### Vi-A2 Geometric Primitives
An example on how to use the library to construct geometric primitives is shown in _geometric_primitives.cpp_. Since these geometric primitives are essentially specializations of the base Multivector class all operations that were presented in the example in Section VI-A1 are also valid for the geometric primitives. Hence, we are showing the construction of the geometric primitives in different ways. First, using their algebraic definition based on the outer product and second, using their constructors. Essentially, the constructors implement the algebraic definition, and serve to simplify the usage of the library. But the example should show the equivalence of the construction. Since geometric algebra facilitates geometric reasoning, we are also providing an example of how to projections, reflections and intersections using the geometric primitives, which can be found in _geometric_primitives_incidence.cpp_.
#### Vi-A3 Conformal Transformations
This part of the tutorial is dedicated to showing the usage of the different Versor classes, i.e. it shows how to apply conformal transformations to geometric primitives. There are actually two parts to this example. The first one, which is implemented in _versor_log_exp_map.cpp_, shows how to define the different versors using the exponential mappings and recover the generator bivectors using the corresponding logarithmic mappings. In the second example, which is implemented in _versors_transforming_geometric_primitives.cpp_, we then show how to apply the versors to the geometric primitives.
### _Robot Modeling_
One of the targeted use cases of the _gafro_ library is the modeling of robotic systems. In this part of the tutorial we will show how to do that in practice.
#### Vi-B1 Generic System
A generic system can be created using the System class and its addJoint and addLink functions as mentioned in Section IV. Note that in this case the parent/child relationships between the joints/links need to be handled as well. An easier way is to load the system description from a yaml file, where the library will raise an error message if not all necessary relationships are defined. These two ways of creating a system are shown in the examples _create_system_from_code.cpp_ and _create_system_from_file.cpp_.
#### Vi-B2 Manipulator System
Manipulator systems are specialized robotic systems, which can be seen from the Manipulator class inheriting from the System class. So in order to define a manipulator, a normal system can be created and then moved to the manipulator system with the additional information about the end-effector joint in order to create the kinematic chain. We are again providing the two examples of creating a manipulator from code or from a file in examples _create_manipulator_from_code.cpp_ and _create_manipulator_from_file.cpp_, respectively.
#### Vi-B3 Robot Kinematics/Dynamics
After creating the model of the system and in order to use the library in practical applications, the computation of the kinematics and dynamics of the system is necessary. We show how to compute all relevant quantities using the FrankaEmikaRobot class in the example _franka_emika_kinematics_and_dynamics.cpp_.
#### Vi-B4 Robot Kinematics and Geometric Primitives
In contrast to other libraries, _gafro_ uses geometric algebra for the modeling of robotic systems, which allows the usage of various geometric primitives at a kinematic level. In _franka_emika_geometric_primitives.cpp_ we show an example using the FrankaEmikaRobot of how to move those geometric primitives to the end-effector, which will later be exploited in the formulation of the optimisation problems.
### _Optimisation Problems_
Many problems in important domains of robotics, such as learning and control, can be cast as optimization problems. Hence, in this section we are providing some examples on how _gafro_ can be used to simplify the modeling of optimisation problems using geometric algebra.
#### Vi-C1 Inverse Kinematics
A common example in robotics is the computation of the inverse kinematics. Since we are often dealing with redundant manipulators, this becomes an optimisation problem. We are modeling this problem using the
Fig. 7: Benchmarks of robotics algorithms. The benchmark was run on an AMD Ryzen 7 4800U CPU. All libraries were compiled using _gcc 13.1.1_ with the compiler flags -O3 -march=native. The reference system is the Franka Emika Robot.
Motor class of _gafro_, which represents poses in Euclidean space. An inverse kinematics problem in geometric algebra, formulated as an optimization problem, thus minimizes the error between two motors, which is expressed via the logarithmic map. The corresponding class that implements this cost function is called SingleManipulatorMotorCost. It computes the value of the cost function, as well as its gradient and Hessian, to allow for first- or second-order optimisation. The example for computing the inverse kinematics using the Gauss-Newton algorithm can be found in _inverse_kinematics.cpp_.
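For readers unfamiliar with the optimisation scheme, the following library-independent NumPy sketch applies the same damped Gauss-Newton update to a planar two-link arm with a simple position error; the link lengths, damping and target are arbitrary illustrative assumptions, and the cost is not the motor-based cost used by gafro.

```
import numpy as np

L1, L2 = 1.0, 0.8          # assumed link lengths

def fk(q):
    """End-effector position of a planar 2-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def gauss_newton_ik(target, q0, iters=50, damping=1e-6):
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = fk(q) - target                     # residual
        J = jacobian(q)
        # damped Gauss-Newton step: dq = -(J^T J + damping I)^{-1} J^T e
        dq = -np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
        q += dq
        if np.linalg.norm(e) < 1e-10:
            break
    return q

q_sol = gauss_newton_ik(target=np.array([1.2, 0.8]), q0=[0.3, 0.3])
print(fk(q_sol))   # close to the target
```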
#### Vi-C2 Reaching Geometric Primitives
Geometric algebra extends the cost function to be uniformly applicable across the different geometric primitives. It is implemented in the class SingleManipulatorTarget. The template arguments Tool and Target can be different combinations of geometric primitives. Here, we cast the optimisation problem again as an inverse kinematics problem for simplicity, so we are optimizing for the joint angle configuration in which the end-effector reaches a certain geometric primitive. In previous work, however, we have shown the application of CGA to modeling manipulation tasks in an optimal control framework for model predictive control [16], which can of course be achieved using the same cost function. The different examples are listed in Table IV.
### _ROS Visualization_
The main purpose of the _gafro_ros_ package is visualization of the geometric primitives in Rviz. Hence, we are providing an example that visualizes the Franka Emika robot reaching various geometric primitives. The example is implemented in _visualizing_geometric_primitives.cpp_ and needs to be compiled in a ROS workspace. The Rviz output of the example is shown in Figure 8.
## VII Conclusion
In this article we presented the implementation details as well as a tutorial for our software stack around _gafro_, which is a c++ library that implements conformal geometric algebra for robotics. The software stack also includes python bindings in _pygafro_ as well as a ROS package in _gafro_ros_. Tutorial material and toy examples can be found in _gafro_examples_.
While showing comparable performance for the robot modeling, geometric algebra also offers an easy and intuitive way to model various geometric relationships. Hence it provides a richer toolset than standard matrix algebra without losing any of the existing tools. Our library _gafro_ provides these standard tools for robot modeling and augments them with concepts that are exclusive to geometric algebra. Providing this library that makes geometric algebra easily accessible for robotics research should allow for a wider adoption and facilitate the research on using this powerful framework for robotics.
|
2303.09350 | Unsupervised domain adaptation by learning using privileged information | Successful unsupervised domain adaptation is guaranteed only under strong
assumptions such as covariate shift and overlap between input domains. The
latter is often violated in high-dimensional applications like image
classification which, despite this limitation, continues to serve as
inspiration and benchmark for algorithm development. In this work, we show that
training-time access to side information in the form of auxiliary variables can
help relax restrictions on input variables and increase the sample efficiency
of learning at the cost of collecting a richer variable set. As this
information is assumed available only during training, not in deployment, we
call this problem unsupervised domain adaptation by learning using privileged
information (DALUPI). To solve this problem, we propose a simple two-stage
learning algorithm, inspired by our analysis of the expected error in the
target domain, and a practical end-to-end variant for image classification. We
propose three evaluation tasks based on classification of entities in photos
and anomalies in medical images with different types of available privileged
information (binary attributes and single or multiple regions of interest). We
demonstrate across these tasks that using privileged information in learning
can reduce errors in domain transfer compared to baselines, be robust to
spurious correlations in the source domain, and increase sample efficiency. | Adam Breitholtz, Anton Matsson, Fredrik D. Johansson | 2023-03-16T14:31:50Z | http://arxiv.org/abs/2303.09350v3 | # Unsupervised domain adaptation
###### Abstract
Successful unsupervised domain adaptation (UDA) is guaranteed only under strong assumptions such as covariate shift and overlap between input domains. The latter is often violated in high-dimensional applications such as image classification which, despite this challenge, continues to serve as inspiration and benchmark for algorithm development. In this work, we show that access to side information about examples from the source and target domains can help relax these assumptions and increase sample efficiency in learning, at the cost of collecting a richer variable set. We call this domain adaptation by learning using privileged information (DALUPI). Tailored for this task, we propose a simple two-stage learning algorithm inspired by our analysis and a practical end-to-end algorithm for multi-label image classification. In a suite of experiments, including an application to medical image analysis, we demonstrate that incorporating privileged information in learning can reduce errors in domain transfer compared to classical learning.
## 1 Introduction
Deployment of machine learning (ML) systems relies on generalization from training samples to new instances in a target domain. When these instances differ in distribution from the source of training data, performance tends to degrade and guarantees are often weak. For example, a supervised ML model trained to identify medical conditions in X-ray images from one hospital may work poorly in another if the two hospitals have different equipment or examination protocols (Zech et al., 2018). If no labeled examples are available from the target domain, the so-called _unsupervised domain adaptation_ (UDA) problem (Ben-David et al., 2006), strong assumptions are needed for success.
A common assumption in UDA is that the object of the learning task is identical in source and target domains but that input distributions differ (Shimodaira, 2000). This "covariate shift" assumption is plausible in our X-ray example above; doctors are likely to give the same diagnosis based on X-rays of the same patient from similar but different equipment. Additionally, however, guarantees for consistent domain transfer require either distributional overlap between inputs from source and target domains or known parametric forms of the labeling function (Ben-David and Urner, 2012; Wu et al., 2019; Johansson et al., 2019). Without these, adaptation cannot be verified by statistical means.
Input domain overlap is implausible for our running example and for the high-dimensional tasks that have become standard benchmarks in the community, including image classification (Long et al., 2013; Ganin et al., 2016) and sentence labeling (Orihashi et al., 2020). If hospitals have different X-ray equipment, the probability of observing identical images from source and target domains is zero (Zech et al., 2018). Even when covariate shift and overlap are satisfied, large domain differences can have a dramatic effect on sample complexity (Breitholtz and Johansson, 2022). Despite promising developments (Shen et al., 2022), realistic guarantees for practical domain transfer remain elusive.
Incorporating side information in training has been proposed as a means to improve generalization without domain shift. Through learning using privileged information (LUPI) (Vapnik and Vashist, 2009; Lopez-Paz et al., 2016), algorithms that are given access to auxiliary variables during training, but not in deployment, have been proven to learn from fewer labeled examples compared to algorithms learning without these variables (Karlsson et al., 2021). In X-ray classification, privileged information (PI) can come from graphical annotations or clinical notes made by radiologists, which are unavailable when the system is used (see Figure 1). While PI has been used also in domain adaptation, see e.g., (Sarafianos et al., 2017; Vu et al., 2019), the literature has yet to characterize its benefits in generalizing across domains.
Contributions.In this work, we define _unsupervised domain adaptation by learning using privileged information_ (DALUPI), where unlabeled examples with privileged information, are available from the target domain (Section 2.1). For this problem, we give conditions under which it is possible to identify a model which predicts optimally in the target domain, without assuming statistical overlap between source and target input domains (Section 3). Next, we instantiate this problem setting in multi-class and multi-label image classification and propose practical algorithms for these tasks (Section 4). On image classification tasks with boxes indicating regions of interest as PI, including multi-label diagnosis from chest X-rays, we compare our algorithms to baselines for supervised learning and unsupervised domain adaptation. We find that they perform favorably to the alternatives (Section 5), particularly when input overlap is violated and when training set sizes are small.
## 2 Unsupervised domain adaptation with and without privileged information
In unsupervised domain adaptation (UDA), our goal is to learn a hypothesis \(h\) to predict outcomes (or labels) \(Y\in\mathcal{Y}\) for problem instances represented by input covariates \(X\in\mathcal{X}\), drawn from a target domain with density \(\mathcal{T}(X,Y)\). During training, we have access to labeled samples \((x,y)\) only from a source domain \(\mathcal{S}(X,Y)\) and unlabeled samples \(\tilde{x}\) from \(\mathcal{T}(X)\). As running example, we think of \(\mathcal{S}\) and \(\mathcal{T}\) as radiology departments at two different hospitals, of \(X\) as the X-ray image of a patient, and of \(Y\) as their diagnoses. We aim to minimize prediction errors in terms of the _target risk_\(R_{\mathcal{T}}\) of a hypothesis \(h\in\mathcal{H}\) from a hypothesis set \(\mathcal{H}\), with respect to a loss function \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\), solving
\[\min_{h\in\mathcal{H}}R_{\mathcal{T}}(h),\ \ R_{\mathcal{T}}(h)\coloneqq \mathbb{E}_{X,Y\sim\mathcal{T}}[L(h(X),Y)]. \tag{1}\]
A solution to the UDA problem returns a minimizer of (1) without ever observing labeled samples from \(\mathcal{T}\). However, if \(\mathcal{S}\) and \(\mathcal{T}\) are allowed to differ arbitrarily, finding such a solution cannot be guaranteed (Ben-David and Urner, 2012). Under the assumption of _covariate shift_(Shimodaira, 2000), the labeling function and learning task is the same on both domains, but the covariate distribution differs.
**Assumption 1** (Covariate shift).: _For domains \(\mathcal{S},\mathcal{T}\) on \(\mathcal{X}\times\mathcal{Y}\), we say that covariate shift holds with respect to \(X\) if_
\[\exists x:\mathcal{T}(x)\neq\mathcal{S}(x)\ \ \text{and}\ \ \forall x: \mathcal{T}(Y\mid x)=\mathcal{S}(Y\mid x)\.\]
In our example, covariate shift means that radiologists at either hospital would diagnose two patients with the same X-ray in the same way, but that they may encounter different distributions of patients and images. Learning a good target model from labeled source-domain examples appears feasible under Assumption 1. However, to guarantee consistent learning in general, \(\mathcal{S}(x)\) must also sufficiently _overlap_ the target input domain \(\mathcal{T}(x)\).
**Assumption 2** (Domain overlap).: _A domain \(\mathcal{S}\) overlaps another domain \(\mathcal{T}\) with respect to a variable \(Z\) on \(\mathcal{Z}\) if_
\[\forall z\in\mathcal{Z}:\mathcal{T}(Z=z)>0\implies\mathcal{S}(Z=z)>0\.\]
Covariate shift and domain overlap w.r.t. inputs \(X\) are standard assumptions in UDA, used by most informative guarantees (Zhao et al., 2019). However, overlap is often violated in practice. This is true in widely used benchmark tasks such as MNIST vs MNIST-M (Ganin et al., 2016), where one image domain is in black-and-white and the other in full color, and in the real-world example given by Zech et al. (2018), who found that it was possible to detect which site an X-ray image came from due to differences in protocols. Minimization of generalization bounds based on metrics such as the discrepancy distance (Ben-David et al., 2006) or integral probability metrics (Long et al., 2013) does not rely on overlap, but is not guaranteed to return a target-optimal model either (Johansson et al., 2019). Next, we study how PI can be used to identify such a model.
### UDA with privileged information
In high-dimensional UDA problems, overlap in input covariates \(X\) may be violated due to information that is irrelevant to the learning task. A famous example is background pixels in image object detection (Beery et al., 2018), which may vary across domains and have spurious correlation with the label \(Y\), confusing the learner. Nevertheless, the image may _contain_ information which is both sufficient to make good predictions and supported in both domains. For X-rays, this could be a subset of pixels indicating a medical condition, ignoring parts that merely indicate differences in protocol or equipment (Zech et al., 2018). When a trained system is used, such information is not identified--the whole X-ray is given as input--but can be supplied during training as added supervision in a variable \(W\); for example, a region of interest indicated by a bounding box, akin to the related examples of (Sharmanska et al., 2014). As such, using \(W\) during training is an example of _learning using privileged information_ (LUPI) (Vapnik and Vashist, 2009).
We define _domain adaptation by learning using privileged information_ (DALUPI) as follows. During training, learners observe samples of covariates \(X\), labels \(Y\) and privileged information \(W\in\mathcal{W}\) from \(\mathcal{S}\) in a dataset \(D_{\mathcal{S}}=\{(x_{i},w_{i},y_{i})\}_{i=1}^{m}\), as well as samples of covariates and privileged information from \(\mathcal{T}\), \(D_{\mathcal{T}}=\{(\tilde{x}_{i},\tilde{w}_{i})\}_{i=1}^{n}\). At test time, the trained models only observe covariates \(X\) from \(\mathcal{T}\). Our goal remains to minimize the target risk (1). Figure 1 illustrates several applications with this structure.
In Table 1, we list common learning paradigms for domain adaptation. Supervised learning, SL-S, refers to learning from labeled samples from \(\mathcal{S}\) without privileged information. SL-T refers to supervised learning with (infeasible) oracle access to labeled samples from \(\mathcal{T}\). UDA refers to the setting described in Section 2 and LUPI to traditional learning using privileged information (Vapnik and Vashist, 2009).
## 3 Provable domain adaptation with sufficient privileged information
It is well-known that privileged information can be used to improve sample efficiency (reduce variance) in learning (Vapnik and Izmailov, 2015; Pechyony and Vapnik, 2010; Jung and Johansson, 2022). Next, we will show that privileged information can provide _identifiability_ of the target risk--that it may be computed from the observational distribution--even when overlap is not satisfied in the input variable \(X\).1
Footnote 1: Identifiability of the target risk is related to transportability of causal and statistical parameters (Pearl and Bareinboim, 2011).
To identify \(R_{\mathcal{T}}\) (1) without overlap in \(X\), we require the additional assumption that \(W\) is sufficient for \(Y\) given \(X\).
**Assumption 3** (Sufficiency of privileged information).: _Privileged information \(W\) is sufficient for the outcome \(Y\) given covariates \(X\) if \(Y\perp X\mid W\) in both \(\mathcal{S}\) and \(\mathcal{T}\)._
Assumption 3 is satisfied when \(X\) provides no more information about \(Y\) in the presence of \(W\). In our running example, if \(W\) is the subset of X-ray pixels corresponding to an area indicating a medical problem, the other pixels in \(X\) may be unnecessary to make a good prediction of \(Y\). Moreover, if covariate shift and overlap are satisfied w.r.t. \(W\), i.e., there is some probability of observing the restricted pixel subset in both domains, we have the following proposition.
**Proposition 1**.: _Let Assumptions 1 and 2 be satisfied w.r.t. \(W\) (not necessarily w.r.t. \(X\)) and let Assumption 3 hold as stated. Then, the target risk \(R_{\mathcal{T}}\) is identified and_
\[R_{\mathcal{T}}(h)=\sum_{x}\mathcal{T}(x)\sum_{w}\mathcal{T}(w\mid x)\sum_{y }\mathcal{S}(y\mid w)L(h(x),y)\.\]
Proof.: By definition, \(R_{\mathcal{T}}(h)=\sum_{x,y}\mathcal{T}(x,y)L(h(x),y)\).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Setting & \multicolumn{3}{c}{Observed \(\mathcal{S}\)} & \multicolumn{3}{c}{Observed \(\mathcal{T}\)} & Guarantee \\ & \(x\) & \(w\) & \(y\) & \(\tilde{x}\) & \(\tilde{w}\) & \(\tilde{y}\) & for \(R_{\mathcal{T}}\) \\ \hline SL-T & & & & ✓ & & ✓ & ✓ \\ \hline SL-S & ✓ & & ✓ & & & & \({}^{*}\) \\ UDA & ✓ & & ✓ & ✓ & & & \({}^{*}\) \\ LUPI & ✓ & ✓ & ✓ & & & & \({}^{*}\) \\ DALUPI & (✓) & ✓ & ✓ & ✓ & ✓ & & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of the different settings we consider in this work, what data is assumed to be available during training and if guarantees for identification are known for the setting under the assumptions of Proposition 1. The parentheses around source samples for DALUPI indicate that we need not necessarily observe these for the setting. Note that at test time only \(x\) from \(\mathcal{T}\) is observed. \({}^{*}\)Under the more generous assumption of overlapping support in the input space \(\mathcal{X}\), guarantees exist for all these settings.
Figure 1: Examples of domain adaptation by learning using privileged information (DALUPI). During training, input samples \(X\) and privileged information \(W\) are available from both source and target domains. Labels \(Y\) are only available for inputs from the source domain. At test time, a target sample \(X\) is observed. Top: Chest X-ray images \(X\) together with pathology masks \(W\) and diagnosis label \(Y\). Middle: Tokenizations \(W\) accompany recorded speech \(X\) during learning of a speech recognition system. Bottom: Clinical notes \(W\) add information to a system learning to predict disease progression from patient features \(X\).
We then marginalize over privileged information \(W\),
\[\mathcal{T}(y\mid x) =\mathbb{E}_{\mathcal{T}(W\mid x)}[\mathcal{T}(y\mid W)\mid x]\] \[=\sum_{w:\mathcal{S}(w)>0}\mathcal{T}(w\mid x)\mathcal{S}(y\mid w)\,\]
where the first equality follows by sufficiency and the second by covariate shift and overlap in \(W\). \(\mathcal{T}(x),\mathcal{T}(w\mid x)\) and \(\mathcal{S}(y\mid w)\) are observable through training samples.
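As a rough numerical illustration of this identity (our own sketch, not part of the original analysis), one can build small discrete distributions that satisfy Assumptions 1-3 with respect to \(W\) by construction and check that the factorized expression reproduces the directly computed target risk; all distributions, the hypothesis and the loss below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_w, n_y = 5, 3, 2

# Shared labelling mechanism p(y | w): sufficiency (Y independent of X given W)
# and covariate shift w.r.t. W (S(y|w) = T(y|w)) hold by construction.
p_y_w = rng.dirichlet(np.ones(n_y), size=n_w)      # shape (n_w, n_y)

# Target-domain distributions over X and W | X; the source may differ in these.
t_x = rng.dirichlet(np.ones(n_x))                  # T(x)
t_w_x = rng.dirichlet(np.ones(n_w), size=n_x)      # T(w | x), shape (n_x, n_w)

h = rng.integers(0, n_y, size=n_x)                 # an arbitrary hypothesis x -> y
loss = (h[:, None] != np.arange(n_y)[None, :]).astype(float)  # 0-1 loss table L(h(x), y)

# Direct target risk: marginalise W out of T to get T(y | x), then average the loss.
t_y_x = t_w_x @ p_y_w                              # T(y | x)
direct = np.sum(t_x[:, None] * t_y_x * loss)

# Proposition 1: the same risk written with the source conditional S(y | w),
# which equals T(y | w) because covariate shift holds w.r.t. W.
s_y_w = p_y_w
identified = np.einsum('x,xw,wy,xy->', t_x, t_w_x, s_y_w, loss)

assert np.isclose(direct, identified)
```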
Proposition 1 allows us to relax the standard assumption of overlap in \(X\) at the cost of collecting additional variables. When \(W\) is a subset of the information in \(X\), as in our running example, overlap in \(X\) implies overlap in \(W\), but not vice versa. Moreover, under Assumption 3, if covariate shift holds for \(X\), it holds also for \(W\).
Proposition 1 shows that there are conditions where privileged information allows for identification of a target-optimal hypothesis where identification is not possible without it, e.g., when overlap is violated in \(X\)(Johansson et al., 2019). Intuitively, \(W\) guides the learner toward the information in \(X\) which is relevant for the label. The conditions on \(W\) may be compared to the properties of the representation mapping in Construction 4.1 of (Wu et al., 2019). Finally, we note that Proposition 1 does not require that \(X\) is ever observed in the source domain \(\mathcal{S}\).
### A simple two-stage algorithm and its risk
Under Assumption 3, a natural strategy for predicting the outcome \(Y\) is to first infer the privileged information \(W\) from \(X\) using a function \(\hat{f}\), and then predict \(Y\) using a function \(\hat{g}\) applied to the inferred value, \(\hat{h}=\hat{g}(\hat{f}(X))\). We may find such functions \(\hat{f}\), \(\hat{g}\) by solving two separate empirical risk minimization (ERM) problems, as follows.
\[\hat{f} \coloneqq\operatorname*{arg\,min}_{f\in\mathcal{F}}\hat{R}^{W}_{ \mathcal{T}}(f),\hat{R}^{W}_{\mathcal{T}}(f)=\frac{1}{n}\sum_{i=1}^{n}L(f( \tilde{x}_{i}),\tilde{w}_{i}) \tag{2}\] \[\hat{g} \coloneqq\operatorname*{arg\,min}_{g\in\mathcal{G}}\hat{R}^{Y}_ {\mathcal{S}}(g),\;\hat{R}^{Y}_{\mathcal{S}}(g)=\frac{1}{m}\sum_{i=1}^{m}L(g( w_{i}),y_{i})\]
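The following is a minimal sketch of this two-stage procedure with generic scikit-learn estimators standing in for \(\mathcal{F}\) and \(\mathcal{G}\); the particular model classes, layer sizes, and the assumption that inputs are pre-flattened feature vectors are illustrative choices of ours, not the implementation used later in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

def fit_two_stage(x_target, w_target, w_source, y_source):
    """Fit f: X -> W on unlabeled target pairs and g: W -> Y on labeled source pairs, as in (2)."""
    f_hat = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
    f_hat.fit(x_target, w_target)          # minimises the empirical risk R^W_T(f)

    g_hat = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    g_hat.fit(w_source, y_source)          # minimises the empirical risk R^Y_S(g)

    def h_hat(x_new):
        # Composite hypothesis h = g o f, applied to target-domain inputs at test time.
        return g_hat.predict(f_hat.predict(x_new))

    return h_hat
```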
Hypothesis classes \(\mathcal{F},\mathcal{G}\) may be chosen so that the class \(\mathcal{H}=\{h=g\circ f;(f,g)\in\mathcal{F}\times\mathcal{G}\}\) has a desired form. We can bound the generalization error of an estimator \(\hat{h}=\hat{g}\circ\hat{f}\) of the form above, when \(W\in\mathbb{R}^{d\mathcal{W}}\) and \(L\) is the squared loss, by placing a Lipschitz assumption on the space of prediction functions \(\mathcal{G}\), i.e., \(\forall g\in\mathcal{G},w,w^{\prime}\in\mathcal{W},\;\|g(w)-g(w^{\prime})\|_ {2}\leq M\|w-w^{\prime}\|_{2}\). For this, we first define the \(\rho\)-weighted prediction risk in the source domain,
\[R^{Y,\rho}_{\mathcal{S}}(g)=\mathbb{E}_{\mathcal{S}(W,Y)}[\rho(W)L(g(W),Y)]\]
where \(\rho\) is the density ratio of \(\mathcal{T}\) and \(\mathcal{S}\), \(\rho(w)=\frac{\mathcal{T}(w)}{\mathcal{S}(w)}\). We define the empirical weighted risk \(\hat{R}^{Y,\rho}_{\mathcal{S}}(g)\) analogously to the above. When the density ratio \(\rho\) is not known one might use density estimation (Sugiyama et al., 2012) or probabilistic classifiers to estimate it. We now have the following result which we prove for univariate \(Y\); the result can be generalized to multiple classes with standard methods.
**Proposition 2**.: _Assume that \(\mathcal{G}\) comprises M-Lipschitz mappings from the privileged information space \(\mathcal{W}\subseteq\mathbb{R}^{d_{W}}\) to \(\mathcal{Y}\). Further, assume that both the ground truth privileged information \(W\) and label \(Y\) are deterministic in \(X\) and \(W\), respectively. Let \(\rho\) be the domain density ratio of \(W\) and let Assumptions 1-3 (Covariate shift, Overlap and Sufficiency) hold with respect to \(W\). Further, let \(L\) be the squared Euclidean distance and assume that \(L(\hat{f}(w),Y)\) is uniformly bounded over \(\mathcal{W}\) by a constant \(B\) and let \(d\) and \(d^{\prime}\) be the pseudo-dimensions of \(\mathcal{G}\) and \(\mathcal{F}\), respectively. Assume that there are \(n\) observations each from the source (labeled) and target (unlabeled) domains. Then, for any \(h=f\circ g\in\mathcal{F}\times\mathcal{G}\), with probability at least \(1-\delta\),_
\[\frac{R_{\mathcal{T}}(h)}{2} \leq\hat{R}^{Y,\rho}_{\mathcal{S}}(g)+M^{2}\hat{R}^{W}_{\mathcal{T }}(f)\] \[+\mathcal{O}\left(\sqrt[3]{\frac{d\log\frac{n}{d}}{n}}+\sqrt{ \frac{d^{\prime}\log\frac{n}{d^{\prime}}}{n}}\right)\.\]
Proof sketch.: Decomposing the risk of \(h=g\circ f\), we get
\[R_{\mathcal{T}}(h)=\mathbb{E}_{\mathcal{T}}[(g(f(X))-Y)^{2}]\] \[\leq 2\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}]+2M^{2}\mathbb{E}_{ \mathcal{T}}[\|(f(X)-W)\|^{2}]\] \[=2R^{Y}_{\mathcal{T}}(g)+2M^{2}R^{W}_{\mathcal{T}}(f)=2R^{Y,\rho} _{\mathcal{S}}(g)+2M^{2}R^{W}_{\mathcal{T}}(f)\.\]
The first inequality follows from the relaxed triangle inequality and the Lipschitz property, and the third equality from Overlap and Covariate shift. Treating each component of \(\hat{w}\) as independent, using standard PAC learning results, and application of Theorem 3 from Cortes et al. (2010) along with a union bound argument, we get the stated result. See Appendix C for a more detailed proof.
As a result of Proposition 2, when \(\mathcal{F}\) and \(\mathcal{G}\) contain the ground-truth mappings between \(X\) and \(W\) and between \(W\) and \(Y\), in the limit of infinite samples, minimizers of (2) minimize \(R_{\mathcal{T}}\) as well.
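The weighted risk \(\hat{R}^{Y,\rho}_{\mathcal{S}}\) above requires an estimate of the density ratio \(\rho(w)=\mathcal{T}(w)/\mathcal{S}(w)\). A minimal sketch of the classifier-based estimate mentioned earlier (our illustration; the logistic-regression model and its settings are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(w_source, w_target):
    """Estimate rho(w) = T(w) / S(w) from samples of W drawn in each domain."""
    w_all = np.vstack([w_source, w_target])
    domain = np.r_[np.zeros(len(w_source), dtype=int), np.ones(len(w_target), dtype=int)]

    clf = LogisticRegression(max_iter=1000).fit(w_all, domain)

    def rho(w):
        p_target = clf.predict_proba(w)[:, 1]
        # Bayes' rule: T(w)/S(w) is proportional to P(T|w)/P(S|w), corrected for sample sizes.
        return (p_target / (1.0 - p_target)) * (len(w_source) / len(w_target))

    return rho
```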
## 4 Image classification under domain shift with privileged information
Image classification remains one of the primary benchmark tasks for unsupervised domain adaptation. In our experiments, we consider multi-class and multi-label image classification tasks where privileged information \(W\) highlights regions of interest in the images, related to the labels \(Y\). Specifically, \(W\) is the pixels contained within one or more bounding boxes, represented by pixel coordinates \(T\in\mathbb{R}^{4}\).
In the multi-class case, each image has a single bounding box whose corresponding pixels determine a single label \(Y\) of the image. In the multi-label case, an observation \((x_{i},w_{i},y_{i})\) may contain multiple bounding boxes \(j\in\{1,\dots,J\}\), with coordinates \(t_{ij}\), each of which is assigned a multi-class label \(u_{ij}\). The image label \(y_{i}\) constitutes the set of observed bounding box labels in the image. Next, we propose algorithms that leverage privileged information for improved domain transfer in these two tasks.
### DALUPI-TS: A two-stage algorithm for multi-class image classification
Our first algorithm, which we call DALUPI-TS, conforms to our theoretical analysis and instantiates (2) for the multi-class image classification problem by separately fitting functions \(\hat{f}\in\mathcal{F}\) and \(\hat{g}\in\mathcal{G}\) to infer privileged information \(W\) and predict \(Y\). \(\hat{f}\) takes an input image \(X\) and (i) estimates the location of a single bounding box, (ii) cuts out the corresponding part of the image \(X\), and (iii) resizes it to a set dimension. Steps (ii)-(iii) are hard coded (not learned). The pixels \(W\) defining a region of interest are fed to \(\hat{g}\) which outputs a prediction of the image label \(Y\).
In experiments, we use convolutional neural networks for both \(\hat{f}\) and \(\hat{g}\). The two estimators are trained separately on \((\tilde{x},\tilde{w})\sim\mathcal{T}\) and \((w,y)\sim\mathcal{S}\), respectively, as in (2), and then evaluated on target domain images where the output of \(\hat{f}\) is used for prediction with \(\hat{g}\). We use the mean squared error loss for \(\hat{f}\) and categorical cross-entropy loss for \(\hat{g}\).
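A sketch of how steps (i)-(iii) could look at prediction time, assuming \(\hat{f}\) outputs corner coordinates \((x_{1},y_{1},x_{2},y_{2})\) in pixels and \(\hat{g}\) is a standard image classifier; the crop size and the torchvision helpers are our own illustrative choices rather than the exact implementation:

```python
import torch
import torchvision.transforms.functional as TF

def dalupi_ts_predict(x, f_hat, g_hat, out_size=(64, 64)):
    """x: image tensor (C, H, W); f_hat regresses box corners; g_hat classifies the resized crop."""
    with torch.no_grad():
        # (i) estimate the location of a single bounding box
        x1, y1, x2, y2 = f_hat(x.unsqueeze(0)).squeeze(0).round().int().tolist()
        # (ii) cut out the corresponding part of the image (the inferred privileged information)
        crop = TF.crop(x, top=y1, left=x1, height=max(y2 - y1, 1), width=max(x2 - x1, 1))
        # (iii) resize the crop to a fixed dimension expected by the label predictor
        w_hat = TF.resize(crop, list(out_size))
        return g_hat(w_hat.unsqueeze(0)).softmax(dim=-1)
```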
### DALUPI-E2E: An end-to-end solution for multi-label image classification
For multi-label classification, we adapt Faster R-CNN (Ren et al., 2016), outlined in Figure 2 and described below. Faster R-CNN uses a region proposal network (RPN) to generate region proposals which are fed to a detection network for classification and bounding box regression. This way of solving the task in subsequent steps has similarities with our two-stage algorithm, although Faster R-CNN can be trained end-to-end. We describe small modifications to the training procedure of the original model at the end of this section.
The RPN generates region proposals relative to a fixed number of reference boxes--anchors--centered at the locations of a sliding window moving over convolutional feature maps. Each anchor is assigned a binary label \(p\in\{0,1\}\) based on its overlap with ground-truth bounding boxes; positive anchors are also associated with a ground-truth box with location \(t\). The RPN loss for a single anchor is
\[L^{\text{RPN}}(\hat{p},p,\hat{t},t):=L^{\text{RPN}}_{\text{cls}}(\hat{p},p)+ pL^{\text{RPN}}_{\text{reg}}(\hat{t},t), \tag{3}\]
where \(\hat{t}\) represents the refined location of the anchor and \(\hat{p}\) is the estimated probability that the anchor contains an object. The binary cross-entropy loss and a smooth \(L_{1}\) loss are used for the classification loss \(L^{\text{RPN}}_{\text{cls}}\) and the regression loss \(L^{\text{RPN}}_{\text{reg}}\), respectively. For a mini-batch of images, the total RPN loss is computed based on a subset of all anchors, sampled to have a ratio of up to 1:1 between positive and negative anchors.
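As a compact sketch of (3) for a single anchor (our own paraphrase; anchor assignment and the 1:1 mini-batch sampling are omitted):

```python
import torch.nn.functional as F

def rpn_anchor_loss(p_hat, p, t_hat, t):
    """Eq. (3): p_hat, p are scalar tensors (objectness); t_hat, t are 4-dimensional box tensors."""
    cls = F.binary_cross_entropy(p_hat, p)               # L_cls^RPN
    reg = F.smooth_l1_loss(t_hat, t, reduction='sum')    # L_reg^RPN
    # For negative anchors (p = 0) the regression term is multiplied away.
    return cls + p * reg
```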
A filtered set of region proposals is projected onto the convolutional feature maps. For each proposal, the detection network--Fast R-CNN (Girshick, 2015)--outputs a probability \(\hat{p}(k)\) and a predicted bounding box location \(\hat{t}(k)\) for each class \(k\). Let \(\hat{p}=(\hat{p}(0),\dots,\hat{p}(K))\), where \(\sum_{k}\hat{p}(k)=1\), \(K\) is the number of classes and 0 represents a catch-all background class. For a single proposal with ground-truth coordinates \(t\) and multi-class label \(u\in\{0,\dots,K\}\), the detection loss is
\[L^{\text{det}}(\hat{p},u,\hat{t}_{u},t)=L^{\text{det}}_{\text{cls}}(\hat{p},u )+\mathbf{I}_{u\geq 1}L^{\text{det}}_{\text{reg}}(\hat{t}_{u},t), \tag{4}\]
where \(L^{\text{det}}_{\text{cls}}(\hat{p},u)=-\log\hat{p}(u)\) and \(L^{\text{det}}_{\text{reg}}\) is a smooth \(L_{1}\) loss.
Figure 2: Faster R-CNN Ren et al. (2016). The RoI pooling layer and the classification and regression layers are part of the Fast R-CNN detection network Girshick (2015).
To obtain a probability vector for the entire image, we maximize, for each class \(k\), over the probabilities of all proposals. During training, Faster R-CNN requires that all input images \(x\) come with at least one ground-truth annotation (bounding box) \(w\) and its corresponding label \(u\). To increase sample-efficiency, we enable training the model using non-annotated but labeled samples \((x,y)\) from the source domain and annotated but unlabeled samples \((\tilde{x},\tilde{w})\) from the target domain. In the RPN, no labels are needed, and we simply ignore anchors from non-annotated images when sampling anchors for the loss computation. For the computation of (4), we handle the two cases separately. We assign the label \(u=-1\) to all ground-truth annotations from the target domain and multiply \(L_{\text{cls}}^{\text{det}}\) by the indicator \(\mathbf{I}_{u\geq 0}\). For non-annotated samples \((x,y)\) from the source domain, there are no box-specific coordinates \(t\) or labels \(u\) but only the labels \(y\) for the entire image. In this case, (4) is undefined and we instead compute the binary cross-entropy loss between binarized labels and the probability vector for the entire image.
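A simplified sketch of these modifications (our paraphrase; tensor shapes, the handling of the background class and the exact masking are illustrative assumptions rather than the authors' code):

```python
import torch
import torch.nn.functional as F

def image_level_probs(proposal_probs):
    """proposal_probs: (num_proposals, K+1) class probabilities; take the per-class max over proposals."""
    return proposal_probs.max(dim=0).values

def detection_cls_loss(log_probs, u):
    """Classification term of Eq. (4); u = -1 marks annotated but unlabeled target boxes."""
    if u >= 0:
        return -log_probs[u]          # -log p_hat(u), i.e. I_{u >= 0} = 1
    return torch.zeros(())            # skip the classification loss for target annotations

def weak_image_loss(proposal_probs, y_binary):
    """Non-annotated source images: BCE between binarized image labels and image-level probabilities."""
    probs = image_level_probs(proposal_probs)[1:]   # drop the catch-all background class (our assumption)
    return F.binary_cross_entropy(probs, y_binary)
```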
We train the RPN and the detection network jointly as described in Ren et al. (2016). To extract feature maps, we use a Feature Pyramid Network (Lin et al., 2017) on top of a ResNet-50 architecture (He et al., 2016). We use the modified model--DALUPI-E2E--in the experiments in Sections 5.2 and 5.3. Note that we may also train the model in a LUPI setting, where no information from the target domain is used, in which case we call it LUPI-E2E.
## 5 Experiments
We compare our models, DALUPI-TS and DALUPI-E2E, to baseline algorithms in three image classification tasks where privileged information is available as one or more bounding boxes highlighting regions of interest related to the image label. We use DALUPI-TS in a digit classification task in Section 5.1. DALUPI-E2E is then used in the experiments in Sections 5.2 and 5.3, where we consider classification of entities from the MS-COCO dataset and pathologies in chest X-ray images. LUPI-E2E is included in the entity classification task.
As baselines, we use classifiers trained on labeled examples from the source and target domain, respectively. The latter serves as an oracle comparison since labels are unavailable from the target in our setting. Both models are ResNet-based convolutional neural networks adapted for the different experiments and we refer to them as SL-S and SL-T, respectively. We also compare our models to a domain-adversarial neural network (DANN) Ganin et al. (2016) trained on labeled data from the source domain and unlabeled data from the target domain.
For each experiment setting (task, skew, amount of privileged information, etc.), we train 10 models from each relevant model class using randomly selected hyperparameters. All models are evaluated on a hold-out validation set from the source domain and the best performing model in each class is then evaluated on hold-out test sets from both domains. For SL-T, we use a hold-out validation set from the target domain. We repeat this procedure over 5-10 seeds controlling the data splits and the random number generation in PyTorch, which is used in our implementation. We report 95 % confidence intervals computed by bootstrapping over the seeds. We refer to Appendix A for further details on experiments.
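For reference, a generic percentile bootstrap over per-seed scores (an illustration of the idea, not necessarily the exact procedure used here):

```python
import numpy as np

def bootstrap_ci(per_seed_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Mean test metric with a (1 - alpha) percentile bootstrap interval over seeds."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_seed_scores, dtype=float)
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    boot_means = scores[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)
```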
### Digit classification
We construct a synthetic image dataset, based on the assumptions of Proposition 1, to verify that there are problems where domain adaptation using privileged information guarantees successful transfer but standard UDA does not.
We start with images from CIFAR-10 (Krizhevsky, 2009) which have been upscaled to \(128\times 128\). Then we insert a random \(28\times 28\) digit image, with a label in the range 0-4, from the MNIST dataset (Lecun, 1998) into a random location of the CIFAR-10 image, forming the input image \(X\) (see Figures 3(c)-3(d) for examples). The label \(Y\in\{0,\dots,4\}\) is determined by the MNIST digit. We store the bounding box around the inserted digit image and use the pixels contained within it as privileged information \(W\). This is done until no more digit images remain. The domains are constructed by using CIFAR-10's first five and last five classes as source and target backgrounds, respectively. Both source and target datasets contain 15 298 images each.
To increase the difficulty of the task, we set the colour of the digit to the mean colour of the dataset and make the digit background transparent so that the border of the inserted image is less distinct. This may slightly violate Assumption 2 w.r.t. \(W\) since the backgrounds differ between domains. To understand how successful transfer depends on domain overlap and access to sufficient privileged information, we include a _skew parameter_ \(\epsilon\in[\frac{1}{c},1]\), where \(c=5\) is the number of digit classes, which determines the correlation between digits and backgrounds. For a source image \(i\) with digit label \(Y_{i}\in\{0,...,4\}\), we select a random CIFAR-10 image with class \(B_{i}\in\{0,\dots,4\}\) with probability
\[P(B_{i}=b\mid Y_{i}=y)=\left\{\begin{array}{ll}\epsilon,&\text{if }b=y\\ (1-\epsilon)/(c-1),&\text{otherwise}\end{array}\right..\]
For target images, digits and backgrounds are matched uniformly at random. Note that the choice of \(\epsilon=\frac{1}{c}\) yields a uniform distribution, with \(\epsilon=1\) being equivalent to the background carrying as much signal as the privileged information. We hypothesise that the latter would be the worst possible case, where confusion of the model is likely, leading to poor generalization performance.
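A short sketch of this background-sampling rule (illustrative only; the surrounding image composition and dataset handling are omitted):

```python
import numpy as np

def sample_background_class(y, eps, c=5, rng=None):
    """Draw a CIFAR-10 background class b for a source digit with label y, given skew eps in [1/c, 1]."""
    rng = rng or np.random.default_rng()
    probs = np.full(c, (1.0 - eps) / (c - 1))
    probs[y] = eps                    # P(B = y | Y = y) = eps; uniform when eps = 1/c
    return rng.choice(c, p=probs)
```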
In Figure 3(b), we can indeed observe the conjectured behavior where, as the skew \(\epsilon\) increases, the performance of SL-S decreases substantially on the target domain. Thus, an association with the background might harm generalization performance, as has been observed in previous work where background artifacts in X-ray images confused the resulting classifier.
We can also observe that DANN does not seem to be robust to the increase in correlation between the label and the background. In contrast, DALUPI-TS is unaffected by the increased skew as the subset of pixels only carries some of the background information with it, while containing sufficient information to make good predictions. Interestingly, DALUPI-TS also seems to be as good as or slightly better than the oracle SL-T in this setting. This may be due to improved sample efficiency from using PI.
### Entity classification
We evaluate DALUPI-E2E and LUPI-E2E on an entity classification task based on images from the MS-COCO dataset (Lin et al., 2014). We define the source and target domains as indoor and outdoor images, respectively, and consider four entity classes for the label \(Y\): person, cat, dog, and bird. We extract indoor images by filtering out images from the super categories "indoor" and "appliance" that also contain at least one of the entity classes. Outdoor images are extracted in the same way using the super categories "vehicle" and "outdoor". Images that for some reason occur in both domains (for example an indoor image with a toy car) are removed along with any gray-scale images. We also include 1000 background examples, i.e., images with none of the entities present, in both domains.
In total, there are 5231 images in the source domain and 5719 images in the target domain; the distribution of labels is provided in Appendix A. From these pools we randomly sample 3000, 1000, and 1000 images for training, validation, and testing, respectively. As privileged information \(W\), we use bounding boxes localizing the different entities. We assume that the assumptions of Proposition 1 hold. In particular, it is reasonable to assume that the pixels contained in a box are sufficient for classifying the object.
The purpose of this experiment is to investigate the effect of additional privileged information (PI). We give LUPI-E2E access to all \((x,y)\) samples from the source domain and increase the fraction of inputs for which PI is available, \(n_{\text{PI}}(\mathcal{S})\), from 0 to 1. For DALUPI-E2E, we increase the fraction of \((\tilde{x},\tilde{w})\) samples from the target domain, \(n_{\text{PI}}(\mathcal{T})\), from 0 to 1, while keeping \(n_{\text{PI}}(\mathcal{S})=1\). We train SL-S and SL-T using all available data and increase the fraction of unlabeled target samples DANN sees from 0.2 to 1.
The models' test AUC, averaged over the four entity classes, in the target domain is shown in Figure 4. The performance of LUPI-E2E increases as \(n_{\text{PI}}(\mathcal{S})\) increases, but even with access to all PI in the source domain, LUPI-E2E barely beats SL-S. However, when additional \((\tilde{x},\tilde{w})\) samples from the target domain are added, DALUPI-E2E quickly outperforms SL-S and eventually reaches the performance of SL-T. We note that DANN does not benefit as much from the added unlabeled target samples as DALUPI-E2E does. The large gap between LUPI-E2E and SL-S for \(n_{\text{PI}}(\mathcal{S})=0\) is not too surprising; we do not expect an object detector to work well without bounding box supervision.
Figure 3: Digit classification task. Model performance on source (a) and target (b) domains as a function of association \(\epsilon\) between background and label in \(\mathcal{S}\). As the skew increases, the performance of the source models on the target data deteriorates. The rightmost panels show two example images (top) from the dataset as well as saliency maps (bottom) from SL-S when trained with source domain skew \(\epsilon=0.2\) (c) and \(\epsilon=1\) (d).
### X-ray classification
As a real-world experiment, we study detection of pathologies in chest X-ray images. We use the ChestX-ray8 dataset (Wang et al., 2017) as source domain and the CheXpert dataset (Irvin et al., 2019) as target domain. As privileged information, we use bounding boxes localizing each pathology. For the CheXpert dataset, only pixel-level segmentations of pathologies are available, and we create bounding boxes that tightly enclose the segmentations. It is not obvious that the pixels within such a bounding box are sufficient for classifying the pathology, and we suspect that we may violate the assumptions of Proposition 1 in this experiment. However, as we find below, DALUPI-E2E improves empirical performance compared to baselines for small training sets, thereby demonstrating increased sample efficiency.
We consider the three pathologies that exist in both datasets and for which there are annotated findings: atelectasis (ATL; collapsed lung), cardiomegaly (CM; enlarged heart), and pleural effusion (PE; water around the lung). There are 457 and 118 annotated images in the source and target domain, respectively. We train DALUPI-E2E and DANN using all these images. SL-S is trained with the 457 source images and SL-T with the 118 target images as well as 339 labeled but non-annotated target images. Neither SL-S, SL-T nor DANN use any privileged information. In the annotated images, there are 180/146/153 and 75/66/64 examples of ATL/CM/PE in each domain respectively. Validation and test sets are sampled from the non-annotated images.
In Table 2 we present the per-class AUCs in the target domain. DALUPI-E2E outperforms all baseline models, including the target oracle, in detecting CM. For ATL and PE, it performs similarly to the other feasible models. That SL-T is better at predicting PE is not surprising given that this pathology is most prevalent in the target domain. In Figure 5(a), we show a single-finding image from the target test set with ground-truth label CM. The predicted bounding box of DALUPI-E2E with the highest probability is added to the image. DALUPI-E2E identifies the region of interest and makes a correct classification. The bottom panel shows the saliency map for the ground truth class for SL-S. We see that the gradients are mostly constant, indicating that the model is uncertain. In Figure 5(b), we show AUC for CM for DALUPI-E2E, SL-S, and SL-T trained with additional examples _without_ bounding box annotations. We see that SL-S catches up to the performance of DALUPI-E2E when a large number of labeled examples is provided. These results indicate that identifiability is not the primary obstacle for adaptation, and that PI improves sample efficiency.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & ATL & CM & PE \\ \hline SL-T & 57 (56, 58) & 59 (55, 63) & 79 (78, 80) \\ \hline SL-S & 55 (55, 56) & 61 (58, 64) & 73 (70, 75) \\ DANN & **56 (54, 57)** & 61 (56, 66) & 73 (71, 75) \\ DALUPI-E2E & 55 (55, 56) & **72 (71, 73)** & **74 (72, 76)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: X-ray task. Test AUC for the three pathologies in the target domain for all considered models. Boldface indicates the best-performing feasible model; SL-T uses target labels.
Figure 4: Model performance in the target domain for the entity classification task. The point performance of SL-S and SL-T is extended across the x-axes for visual purposes. DANN is trained with an increasing fraction of target samples \(\tilde{x}\) but uses no PI.
Figure 5: X-ray task. (a): An example image from the target test set with label CM. The red rectangle indicates the bounding box with the highest probability as predicted by DALUPI-E2E with no extra data. The saliency map for CM for the corresponding SL-S is shown below. (b): AUC for CM in the target test set for SL-S, SL-T, and DALUPI-E2E trained with additional \((x,y)\) samples.
## 6 Related work
Learning using privileged information (LUPI) was first introduced by Vapnik and Vashist (2009) for support vector machines (SVMs), and was later extended to empirical risk minimization (Pechyony and Vapnik, 2010). Methods using PI, which is sometimes called hidden information or side information, have since been applied in many diverse settings such as healthcare (Shaikh et al., 2020), finance (Silva et al., 2010), clustering (Feyereisl and Aickelin, 2012) and image recognition (Vu et al., 2019; Hoffman et al., 2016). Related concepts include knowledge distillation (Hinton et al., 2015; Lopez-Paz et al., 2016), where a teacher model trained on additional variables adds supervision to a student model, and weak supervision (Robinson et al., 2020) where so-called weak labels are used to learn embeddings, subsequently used for the task of interest. The use of PI for deep image classification has been investigated by Chen et al. (2017), but only in the regular supervised learning setting. Finally, Sharmanska et al. (2014) used regions of interest in images as privileged information to improve the accuracy of image classifiers, but did not consider domain shift.
Domain adaptation using PI has been considered before with SVMs (Li et al., 2022; Sarafianos et al., 2017). Vu et al. (2019) used scene depth as PI in semantic segmentation using deep neural networks. However, they only use PI from the source domain and they do not provide any theoretical analysis. Motiian (2019) investigates PI and domain adaptation using the information bottleneck method for visual recognition. However, their setting differs from ours in that each observation comprises source-domain and target-domain features, a label and PI.
## 7 Discussion
We have presented our framework DALUPI which leverages privileged information for unsupervised domain adaptation. The framework builds on an alternative set of assumptions which can more realistically be satisfied in high-dimensional problems than classical assumptions of covariate shift and domain overlap, at the cost of collecting a larger variable set for training. Our analysis of this setting inspired practical algorithms for multi-class and multi-label image classification.
Our experiments show tasks where domain adaptation using privileged information is successful while regular adaptation methods fail, in particular when the assumptions of our analysis are satisfied. We also observe empirically that methods using privileged information are more sample-efficient than comparable non-privileged learners, in line with the literature. In fact, DALUPI models occasionally even outperform oracle models trained using target labels due to their sample efficiency. Thus, we recommend considering these methods in small-sample settings.
To avoid assuming that domain overlap is satisfied with respect to input covariates, we require that privileged information is sufficient to determine the label. This can be limiting in problems where sufficiency is difficult to verify. However, in our motivating example of image classification, a domain expert could deliberately choose PI so that sufficiency is reasonably satisfied. In future work, the framework might be applied to a more diverse set of tasks, with different modalities, to investigate if the findings here can be replicated. Using PI may be viewed as "building in" domain knowledge in the structure of the adaptation problem and we see this as a promising approach for further UDA research.
|
2310.09268 | Magnetic structure, excitations and field induced transitions in the
honeycomb lattice $\rm{Er_2Si_2O_7}$ | We investigate the magnetic properties of the monoclinic D-type
$\rm{Er_2Si_2O_7}$ with a distorted honeycomb lattice using powder and single
crystal neutron scattering techniques, as well as single crystal magnetisation
measurements. The powder neutron diffraction shows that below the ordering
temperature, $T_{\rm N}=1.85$ K, the compound forms a ${\bf q}=0$
antiferromagnetic structure with four sublattices. For $H \! \parallel \! a$,
magnetisation measurements reveal a narrow, but clearly visible plateau at one
third of the magnetisation saturation value. The plateau's stabilisation is
accompanied by a significant increase of the magnetic unit cell, as the
magnetic peaks with fractional indices are observed in single crystal neutron
diffraction experiments. At low-temperatures, the inelastic neutron scattering
measurements reveal the presence of low-energy dispersionless excitations.
Their spectrum is sensitive to the applied field, it significantly softens on
the magnetisation plateau, and demonstrates the behaviour expected for a
non-collinear Ising antiferromagnet away from the plateau. | M. Islam, N. d'Ambrumenil, D. D. Khalyavin, P. Manuel, F. Orlandi, J. Ollivier, M. Ciomaga Hatnean, G. Balakrishnan, O. A. Petrenko | 2023-10-13T17:27:35Z | http://arxiv.org/abs/2310.09268v3 | Magnetic structure, excitations and field induced transitions in the honeycomb lattice \(\mathrm{Er_{2}Si_{2}O_{7}}\)
###### Abstract
We investigate the magnetic properties of the monoclinic D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) with a distorted honeycomb lattice using powder and single crystal neutron scattering techniques, as well as single crystal magnetisation measurements. The powder neutron diffraction shows that below the ordering temperature, \(T_{\mathrm{N}}=1.85\) K, the compound forms a \(\mathbf{q}=0\) antiferromagnetic structure with four sublattices. For \(H\parallel a\), magnetisation measurements reveal a narrow, but clearly visible plateau at one third of the magnetisation saturation value. The plateau's stabilisation is accompanied by a significant increase of the magnetic unit cell, as the magnetic peaks with fractional indices are observed in single crystal neutron diffraction experiments. At low-temperatures, the inelastic neutron scattering measurements reveal the presence of low-energy dispersionless excitations. Their spectrum is sensitive to the applied field, it significantly softens on the magnetisation plateau, and demonstrates the behaviour expected for a non-collinear Ising antiferromagnet away from the plateau.
+
Footnote †: Current affiliations: Materials Discovery Laboratory, Department of Materials, ETH Zurich, 8093 Zurich and Laboratory for Multiscale materials eXperiments, Paul Scherrer Institute (PSI), 5232 Villigen, Switzerland
## I Introduction
The family of rare-earth (RE) disilicates \(\mathrm{RE_{2}Si_{2}O_{7}}\) is renowned for its complex polymorphism with at least seven different crystal structures possible depending on the particular rare-earth ion, temperature range and pressure during synthesis [1; 2; 3]. Not much is known about the magnetic properties of this family of compounds, however, in several of the polymorphic crystal structures the magnetic RE ions are arranged into a (distorted) honeycomb lattice which makes them potentially very interesting from the contemporary magnetism point of view, particularly when searching for the experimental realisations of the Kitaev model [4]. \(\mathrm{Yb_{2}Si_{2}O_{7}}\), for example, is considered to be a strongly spin-orbit coupled quantum dimer magnet [5].
Erbium disilicate, \(\mathrm{Er_{2}Si_{2}O_{7}}\), can be formed in three polymorphs, a triclinic \(P\bar{1}\) low temperature phase (labeled as B-type), a monoclinic \(C2/m\) phase (C-type) [6], and a high temperature monoclinic \(P2_{1}/b\) phase (D-type). Here we report on the magnetic properties of the monoclinic D-type phase (see Fig. 1) and focus on its rather unusual in-field behaviour. The initial study of the magnetic properties of the D-phase of \(\mathrm{Er_{2}Si_{2}O_{7}}\), reported by Leask _et al._[7], suggested a four-sublattice antiferromagnetic ground state below the ordering temperature of 1.9 K. This hypothesis has recently received a direct experimental confirmation from powder neutron diffraction measurements [8]. A pronounced Ising-like character of \(\mathrm{Er_{2}Si_{2}O_{7}}\) is evident from the highly anisotropic magnetisation curves which were (at least partially) understood within a fairly simple model backed up by Monte Carlo simulations [7]. However, one experimental observation, a stabilisation of the \(\frac{1}{3}\) magnetisation plateau for a field applied along the \(a\) axis, was at odds with the theoretical predictions for a model with four sublattices in the magnetic unit cell. A possibility of the enlarged magnetic unit cell was briefly mentioned, but dismissed as unlikely [7]. We find that in the narrow region of applied magnetic fields corresponding to the \(\frac{1}{3}\) magnetisation plateau, the magnetic unit cell is indeed increased severalfold, as witnessed by a spectacular transformation of the single crystal neutron diffraction patterns.
The appearance of the \(\frac{1}{3}\) magnetisation plateaux in triangular lattice antiferromagnets, quantum and classical, is a common feature, and is often associated with the stabilisation of the _up-up-down_ (uud) structures in Heisenberg, XY and Ising systems by various mechanisms. In the case of only nearest neighbour (NN) interactions, the honeycomb structure is bipartite and not frustrated; however, the next NN interactions could drive it towards a frustrated regime with an extensive ground state degeneracy. The resulting behaviour is then typical for highly frustrated magnets with complex and fragile ground states sensitive to minute perturbations. In many cases, the symmetry breaking by an external magnetic field is not trivial and the fractional magnetisation plateaux are expected.
Figure 1: Crystal structure of the monoclinic D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) (space group \(P2_{1}/b\)). The bonds between the magnetic \(\mathrm{Er^{3+}}\) ions emphasise the formation of the distorted honeycomb layers stacked along the \(a\) axis. The unit cells are shown in black.
Recent reports suggest, for example, stabilisation of the \(\frac{1}{3}\) and \(\frac{2}{3}\) magnetisation plateaux in a quantum antiferromagnet \(\mathrm{Cu_{2}(pymca)_{3}(ClO_{4})}\) on a distorted honeycomb lattice [9; 10]. For another two-dimensional honeycomb system, \(\mathrm{DyNi_{3}Ga_{9}}\), the plateaux are found at every \(\frac{1}{6}\) of the saturation value [11]. A theoretical study [12] suggests the stabilisation of the \(\frac{1}{3}\) and \(\frac{2}{3}\) plateaux in the XY model with third NN interactions on the honeycomb lattice through the order-by-disorder mechanism. In this paper, we explore the complex magnetisation process in the distorted honeycomb lattice antiferromagnet \(\mathrm{Er_{2}Si_{2}O_{7}}\) with four non-collinear Ising-like classical spins in the unit cell.
## II Experimental procedures
An \(\mathrm{Er_{2}Si_{2}O_{7}}\) crystal boule was prepared by the floating zone method using a two-mirror halogen lamp optical image furnace (NEC SC1MDH-11020, Canon Machinery Incorporated). The growth was performed in air, at ambient pressure and at a growth speed of 12 mm/h. The crystal growth of \(\mathrm{Er_{2}Si_{2}O_{7}}\) is described in more detail elsewhere [13]. Phase purity analysis was carried out on a ground crystal piece of \(\mathrm{Er_{2}Si_{2}O_{7}}\), and the powder X-ray diffraction confirms that the crystallographic structure of the crystal boule is D-type. As previously reported [13; 14], the conventional synthesis by the solid-state method of polycrystalline samples of D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) results in the presence of a small amount of an \(\mathrm{Er_{2}SiO_{5}}\) impurity. To ensure the purity of the \(\mathrm{Er_{2}Si_{2}O_{7}}\) powder specimen used in our experiments, a polycrystalline sample (\(\approx 2\) g) was prepared by crushing a fragment of the crystal boule.
We used a Quantum Design MPMS SQuID magnetometer for the magnetisation measurements in applied magnetic fields of up to 70 kOe. An iQuantum \({}^{3}\)He insert [15] allowed the temperature range explored to be extended down to 0.48 K. In order to estimate the demagnetisation factors, we used a rectangular prism approximation [16].
The WISH time-of-flight diffractometer [17] at the ISIS facility at the Rutherford Appleton Laboratory (STFC, UK) was used for both the powder (PND) and single crystal neutron diffraction experiments [18]. A \({}^{3}\)He absorption refrigerator provided a base temperature of 0.24 K for neutron diffraction measurements. For PND, the data obtained at \(T=10\) K were used as a background to isolate the magnetic signal for each temperature. Rietveld refinements were performed using the FullProf software suite [19]. For the single crystal diffraction measurements, the sample was aligned with its \(a\) axis vertical (parallel to the applied field) for access to the reflections within \(\pm\)\(7.5\) degrees of the horizontal \(b^{\ast}\)-\(c^{\ast}\) scattering plane.
The inelastic neutron scattering (INS) measurements [20] were performed on the IN5 time-of-flight spectrometer [21] at the Institut Laue-Langevin (ILL), Grenoble, France. An incident wavelength of \(\lambda=4.8\) Å (\(E_{i}=3.55\) meV) with an elastic resolution of \(\delta E\simeq 86\)\(\mu\)eV full width at half maximum has been chosen throughout the measurements. For each field and temperature condition, scans of between 4 and 10 minutes have been recorded for every 1\({}^{\circ}\) of crystal rotation. Data have been reduced using Mantid [22] and analysed with the Horace library [23] under Matlab.
## III Results and discussion
### Powder neutron diffraction
Fig. 2(a) shows the development of the magnetic correlations in \(\mathrm{Er_{2}Si_{2}O_{7}}\) upon cooling from 4.0 K to just above the magnetic ordering temperature. At higher temperatures, the scattering signal is broad, but on approaching \(T_{\rm N}=1.85\) K, it becomes more structured.
Figure 2: Temperature evolution of the magnetic neutron diffraction intensity measured on the WISH diffractometer [17] using a powder sample of D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\). Panel (a) shows the development of the diffuse signal above the ordering temperature. Panel (b) combines all the diffraction data between 0.24 and 5.0 K into an intensity colour map. For both panels, the magnetic contribution is obtained by subtracting a 10 K background. The omitted regions on panel (a) correspond to the very strong nuclear Bragg peaks for which the subtraction is imperfect.
The results are consistent with the data presented in [8]. At 1.9 K, two intensity maxima are clearly visible at 1.16 and 1.27 Å\({}^{-1}\), the positions corresponding to the strongest magnetic peaks appearing below \(T_{\rm N}\). Fig. 2(b) emphasises the stabilisation of the long-range magnetic order below \(T_{\rm N}=1.85\) K and the absence of further transitions down to at least 0.24 K.
Refinement of the powder neutron diffraction data taken on WISH at 10 K confirms the monoclinic \(P2_{1}/b\) crystallographic space group with lattice parameters of \(a\) = 4.6932 Å, \(b\) = 5.5586 Å, \(c\) = 10.7942 Å, \(\alpha\) = 90 \({}^{\circ}\), \(\beta\) = 90 \({}^{\circ}\), \(\gamma\) = 96 \({}^{\circ}\). The magnetic contribution to the data at 0.24 K is shown in Fig. 3(a), after the subtraction of the 10 K measurement in the paramagnetic phase. The presence of magnetic intensities on top of the nuclear Bragg peaks indicates a \({\bf q}=0\) propagation vector. The magnetic structure solution was approached based on the space group representation (irrep) theory [24], assuming an irreducible nature of the magnetic order parameter. We focus on the data collected in the detector banks 2 and 9. Systematic testing of the relevant irreps in the refinement of the diffraction pattern revealed that only mGM2- [25; 26] provided good quality fitting (\(R_{\rm magnetic}=10.9\%\)). mGM2- is a one-dimensional irrep entering three times into the decomposition of the reducible representation associated with the Er Wyckoff position \(4e\). The corresponding three basis vectors transform the \(\mu_{a}\), \(\mu_{b}\) and \(\mu_{c}\) components of the ordered moment specified in Table 1. All three components are non-zero, yielding a non-collinear magnetic structure shown in Fig. 3(b) with a total moment of 6.55 \(\mu_{\rm B}\) at \(T=0.24\) K. A magnetic structure with the unit cell containing four magnetic ions at inequivalent positions was refined, with all magnetic contributions found only at the nuclear peak positions and corresponding to the propagation vector \({\mathbf{q}}=0\). A fit with \(R~{}=~{}22.6~{}\%\), \(wR~{}=~{}21.7~{}\%\), and \(R_{Bragg}~{}=~{}0.9~{}\%\) was achieved for \(T=0.24\) K. The temperature dependence of the magnitude of the moment of the Er\({}^{3+}\) ions is shown in the inset. The moment remains practically independent of temperature at 6.55 \(\mu_{\rm B}\) until 1 K, above which it decreases with temperature and drops to zero at 1.9 K.
Fig. 3(b) illustrates the resulting magnetic structure of \(\mathrm{Er_{2}Si_{2}O_{7}}\) after Rietveld refinement. The magnetic ions, shown in green, present a distorted honeycomb arrangement. The unit cell contains four magnetic Er\({}^{3+}\) ions, whose positions are labelled in Fig. 3(b). The Er\({}^{3+}\) moments at positions 1 and 3 (blue arrows), and 2 and 4 (pink arrows) are antiferromagnetically aligned to each other and share an easy magnetisation axis. Table 1 details the positions of the ions and components of the magnetic moments as well as the average moment per ion when the spins are flipped by the applied field. The magnetic structure implies \(P2^{\prime}_{1}/c\) symmetry with the unit cell related to the paramagnetic structure as (-1,0,0), (0,0,-1) and (0,-1,0) (see the supplementary mcif file [27] for details).
### Magnetisation measurements
Fig. 4(a) summarises the low-temperature susceptibility measurements in 0.1 and 1.0 kOe along the \(a\) axis. The magnetic ordering temperature in low fields is seen as a sharp drop in the susceptibility below \(T_{\rm N}=1.85\) K.
Figure 3: (a) Rietveld refinement of the PND data in zero field at 0.24 K. The background subtracted data (red circles) from banks 2 and 9 of the WISH diffractometer [17] are shown together with calculated pattern (black line) and the difference curve (blue line). Green ticks represent the positions of the magnetic Bragg peaks. The inset follows the temperature dependence of the magnitude of the magnetic moment, \(\mu\), of the Er\({}^{3+}\) ions determined by the refinement. (b) Magnetic structure of D-type \({\rm Er_{2}Si_{2}O_{7}}\) obtained from the PND refinement as viewed along the \(a\) axis. Unit cell boundaries are shown in black, with the four constituent magnetic ions labelled from 1 to 4 in green. The positions of the Er\({}^{3+}\) ions and their moments are listed in Table 1.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Ion & \multicolumn{3}{c}{Positions} & \(\mu_{a}\) & \(\mu_{b}\) & \(\mu_{c}\) & Moment \\ & \multicolumn{3}{c}{\(({\mathbf{a}},{\mathbf{b}},{\mathbf{c}})\) basis} & \multicolumn{3}{c}{\([\mu_{\rm B}]\)} & \([\mu_{\rm B}]\) \\ \hline
1 & 0.89 & 0.10 & 0.35 & -5.62 & 2.11 & 2.08 & 6.55 \\
2 & 0.11 & 0.40 & 0.85 & -5.62 & 2.11 & -2.08 & 6.55 \\
3 & 0.11 & 0.90 & 0.65 & -5.62 & -2.11 & -2.08 & 6.55 \\
4 & 0.89 & 0.60 & 0.15 & -5.62 & -2.11 & 2.08 & 6.55 \\ \hline \hline \(\langle{\mathbf{m}}_{a}\rangle\) & & & & 5.62 & -2.11 & 0.00 & 6.20 \\ \(\langle{\mathbf{m}}_{c}\rangle\) & & & & 0.00 & 0.00 & 2.08 & 2.08 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ionic positions and magnetic moments of the four Er\({}^{3+}\) ions, obtained from the refinement of zero-field PND data taken at 0.24 K. The labels \(\mu_{a,b,c}\) denote the components of the magnetic moments along unit vectors in the \({\mathbf{a}}\), \({\mathbf{b}}\) and \({\mathbf{c}}\) directions and have an error of \(<1~{}\%\). The moments \(\langle{\mathbf{m}}_{a}\rangle\) and \(\langle{\mathbf{m}}_{c}\rangle\) are the theoretical average moments per ion expected if ions flip to align as far as possible with an applied field in the \(a\) and \(c\) directions.
No appreciable difference has been noticed in the susceptibility measured in field-cooled and in zero-field-cooled conditions for the applied fields of 0.1 and 1.0 kOe.
Figs. 5 and 6 illustrate the magnetisation process in \(\mathrm{Er_{2}Si_{2}O_{7}}\) for two directions of an applied magnetic field, \(H\parallel a\) and \(H\parallel c\). For both field directions, a rather sharp increase in magnetisation \(M(H)\) is observed in relatively weak fields, 5.3 kOe for the field applied along the \(a\) axis and 3.3 kOe for the field along the \(c\) axis.
For \(H\parallel a\), at the lowest temperature, a clear plateau in magnetisation is visible, extending from 5.3 to 6.0 kOe. Above the plateau, the \(M_{H\parallel a}\) increases again rather sharply reaching 5.9 \(\mu_{\mathrm{B}}\) per \(\mathrm{Er^{3+}}\) ion already in a field of 7 kOe and 6.5 \(\mu_{\mathrm{B}}\) in a field of 20 kOe. This value is in agreement with the total moment of 6.55 \(\mu_{\mathrm{B}}\) obtained from the PND refinements (see Fig. 3). Above 20 kOe, the magnetisation tends to saturate with further field increase up to 70 kOe, resulting in a small increase of \(M_{H\parallel a}\) to 6.9 \(\mu_{\mathrm{B}}\) per \(\mathrm{Er^{3+}}\) ion.
The transitions to and from the magnetisation plateau are temperature sensitive (see inset in Fig. 5): a minimum in the field derivative \(dM/dH\) becomes barely visible at \(T=0.71\) K and disappears completely for \(T=0.93\) and 1.1 K despite the temperature remaining well below \(T_{\rm N}\).
For \(H\parallel c\), the transition at 3.3 kOe is accompanied by a sudden jump of magnetisation from 0.1 to 2.1 \(\mu_{\rm B}\); no signs of a plateau stabilisation could be detected for this direction of an applied field (see inset in Fig. 6). Above the transition field, \(M_{H\parallel c}\) continues to increase significantly, reaching 3.3 \(\mu_{\rm B}\) per \(\mathrm{Er^{3+}}\) ion in a maximum field of 70 kOe. Unlike the case of \(H\parallel a\), the magnetisation process around the critical field for \(H\parallel c\) is not particularly temperature sensitive.
The magnetisation measurement results are in agreement with the previous report [7]. Small variations in the values of the observed critical fields are most likely caused by the uncertainty in the estimates of the demagnetisation factor, as the sample used was not a perfect rectangular prism. The estimated demagnetisation factors of the sample used in the magnetisation measurements are 0.11 and 0.66 (in units of 4\(\pi\)) for \(H\parallel a\) and \(H\parallel c\), respectively [16]. Given a relatively high density of \(\mathrm{Er_{2}Si_{2}O_{7}}\) and large magnetic moments of \(\mathrm{Er^{3+}}\) ions, the demagnetisation field corrections make an appreciable impact on the values of the transition fields.
Figure 4: (a) Temperature dependence of the susceptibility of D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) measured in 0.1 and 1.0 kOe for \(H\parallel a\). (b) Temperature dependence of the susceptibility measured in 1 kOe for \(H\parallel c\). For both field directions, the applied field is quoted without taking into account demagnetisation corrections.
Figure 5: Field dependence of the magnetisation of D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) measured at 0.5 K for \(H\parallel a\). The inset focuses on the transition found at 5.3 kOe and follows the temperature evolution of (top panel) the magnetisation, \(M\), and (bottom panel) its field derivative \(dM/dH\). The applied field is corrected using the demagnetisation factor \(N_{a}=0.11\times 4\pi\).
Figure 6: Field dependence of the magnetisation of D-type \(\mathrm{Er_{2}Si_{2}O_{7}}\) measured at 0.5 K for \(H\parallel c\). The inset shows the transition observed at 3.3 kOe on an expanded scale and shows the field dependence of the magnetisation around the transition at different temperatures. The applied field is corrected using the demagnetisation factor \(N_{c}=0.66\times 4\pi\).
### Single crystal neutron diffraction
We followed the magnetisation process in \(\mathrm{Er_{2}Si_{2}O_{7}}\) for a field applied along the \(a\) axis by performing single crystal neutron diffraction measurements on the WISH diffractometer at ISIS [17]. Above and below the magnetisation plateau region the observed magnetic pattern consists of the \(\mathbf{q}=0\) peaks, while on the plateau, additional signal around the non-integer \((0kl)\) positions becomes clearly visible (see Fig. 7). The magnetic signal consists of a mixture of the resolution-limited Bragg peaks and much broader diffuse scattering features. There are sharp peaks around the positions described with the propagation vector \(\mathbf{q}=(0\frac{1}{2}\frac{1}{2})\) at 6 kOe. The intensity profiles of these peaks become broader along the \(c^{\star}\) direction in 7 kOe. In 8 kOe the signal could be described as a collection of diffuse scattering rods with undulating intensity running along the \(c^{\star}\) direction at \(\frac{1}{2}\)-integer values of \(k\). On further increase of the applied field to 9 kOe the intensities of these rods decrease and they almost disappear at 10 kOe.
The phase at 6 kOe, which we associate with the magnetisation plateau regime, is also characterised by the appearance of the relatively weak resolution-limited peaks at the \(\mathbf{q}=(0\frac{1}{4}\frac{1}{4})\) positions. These peaks are barely visible at 7 kOe and absent in data taken at the other fields. The diffraction pattern in 6 kOe is also characterised by the presence of broad diffuse features elongated along the \(b^{\star}\) direction at the \(\mathbf{q}=(0\frac{1}{3}0)\) positions, which form the undulating intensity rods running along the \(b^{\star}\) axis. These rods are also present in 7 kOe, but in 8 kOe they are transformed into a collection of weak and narrow peaks at \(\mathbf{q}=(0\frac{1}{3}0)\) positions.
The additional non-integer diffuse features appear as sharp peaks with intensities comparable to the main magnetic reflections at the \(\mathbf{q}=0\) positions. We have therefore chosen an appropriate scale (2 % of the intensity of the Bragg reflections at the \(\mathbf{q}=0\) positions) to best illustrate the weaker diffuse signal within the plateau regime in Figs. 7 and 8. At applied fields of 8 and 9 kOe the undulating diffuse features appear to tilt along \(k\) while traversing along \(l\). The direction of the tilt alternates as the Brillouin zone boundaries are crossed with increasing \(l\). A similar undulation of diffuse scattering rods has previously been linked to the presence of an incommensurate state in the honeycomb systems SrHo\({}_{2}\)O\({}_{4}\) and SrEr\({}_{2}\)O\({}_{4}\)[28; 29].
The rather irregular shape of the sample used in the diffraction experiments, a result of natural cleaving of the as-grown boule, means that we cannot accurately calculate the demagnetising factor \(N\), and a comparison between the neutron diffraction and magnetisation features is not straightforward. However, just for estimation purposes, if we were to assume that \(N_{a}\approx 0.22\times 4\pi\), then on the plateau the demagnetising field is around 12% of the applied field (further discussion is given in section IV). Therefore, for the single crystal diffraction measurements, an applied field of 6 kOe corresponds to the left-hand side of the magnetisation plateau, the field of 7 kOe is very close to its right-hand side, while 8 kOe corresponds to the region of rapid magnetisation growth above the plateau. The fields of 9 and 10 kOe would then correspond to a nearly saturated phase.
Fig. 8 compares the magnetic correlations in identical applied fields along the \(a\) axis before (top) and after (bottom) exposure to a significantly higher field of 50 kOe. At all fields within the plateau boundaries, there is an apparent asymmetry when looking at the diffuse scattering patterns in Fig. 8. The difference is most pronounced along \(b^{\star}\), when comparing the diffuse features at \(\mathbf{q}=(0\frac{1}{3}0)\) and \((0\frac{2}{3}0)\). The signal at \((0\frac{2}{3}0)\) is a distinct, sharp Bragg peak-like feature while at \((0\frac{1}{3}0)\), there is a relatively large diffuse scattering feature elongated along \(b^{\star}\). The diffuse scattering signals on the upward and downward field sweep also differ, with well-defined, diagonal lines of intensity traversing through the \(\mathbf{q}=(0\frac{1}{3}0)\) and the surrounding \((0\frac{1}{4}\frac{1}{4})\) peaks appearing on the downward sweep. The resulting star-like shapes are much more pronounced in the patterns taken after the sample was subjected to a high applied field, above the saturation field.
A full set of the diffraction maps measured on WISH can be found in the Supplementary Material [27].
### Inelastic neutron scattering
The measurements of the excitations performed on the IN5 spectrometer at ILL with a single crystal sample of \(\mathrm{Er_{2}Si_{2}O_{7}}\) were taken with the vertical magnetic field applied along the \(a\) axis. The measurements of the elastic signal in the \(b^{\star}\)-\(c^{\star}\) scattering plane allowed for an identification of the magnetic state, as a direct comparison with the magnetisation data is again complicated by the sample's irregular shape and the associated uncertainty of the demagnetising factor. We therefore accompany the INS data with the maps of the elastic intensity observed in the \(b^{\star}\)-\(c^{\star}\) plane. Whenever possible, the diffraction intensity maps have been folded according to the crystal symmetries in order to improve the statistics.
For all the measurements made on IN5, only dispersionless excitations were found with no significant dependence of the excitation energy on the scattering vector. We therefore present our measurements as an average over the rotation of the crystal, producing a powder-average spectrum, obtained by grouping the pixellated data of the detector into (|Q|, E) bins, i.e. Debye-Scherrer cones over the entire detector range.
As the orientation of the crystal only allows for measurement of the \(b^{\star}\)-\(c^{\star}\) plane, this method of averaging cannot represent a true powder average containing the information for all orientations of the crystal with the incident and scattered neutrons spanning all possible crystal symmetry directions. Our case covers the symmetry directions of interest, in which there is no dispersion of the inelastic modes. It should be noted, however, that the Q-dependence of the intensity of these excitations cannot be determined. We therefore present the powder-averaged INS data as colour-coded intensity maps for different values of the applied field, see Fig. 9.
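As an illustration of the (|Q|, E) binning described above, the following sketch groups per-pixel data into Debye-Scherrer-style bins and extracts a constant-\(Q\) cut. The arrays `q_pixel`, `energy_pixel` and `counts_pixel` are hypothetical stand-ins for the reduced detector data, not the actual IN5 reduction output.

```python
import numpy as np

# Hypothetical flattened per-pixel data (|Q| in 1/Angstrom, E in meV, counts);
# in practice these come from the instrument reduction software.
rng = np.random.default_rng(0)
q_pixel = rng.uniform(0.2, 2.5, size=100_000)
energy_pixel = rng.normal(0.58, 0.05, size=100_000)   # fake flat mode near 0.58 meV
counts_pixel = rng.poisson(5.0, size=100_000).astype(float)

# Bin edges for the powder-style (|Q|, E) map.
q_edges = np.linspace(0.2, 2.5, 47)
e_edges = np.linspace(-0.2, 1.2, 141)

# Summed counts and number of contributing pixels per bin.
total, _, _ = np.histogram2d(q_pixel, energy_pixel, bins=[q_edges, e_edges],
                             weights=counts_pixel)
npix, _, _ = np.histogram2d(q_pixel, energy_pixel, bins=[q_edges, e_edges])

# Average intensity per bin (avoiding division by zero for empty bins).
s_qe = np.divide(total, npix, out=np.zeros_like(total), where=npix > 0)

# A constant-Q cut, e.g. |Q| in [1.0, 1.5] 1/Angstrom, as used for Fig. 10(a).
q_centres = 0.5 * (q_edges[:-1] + q_edges[1:])
mask = (q_centres >= 1.0) & (q_centres <= 1.5)
energy_cut = s_qe[mask].mean(axis=0)
```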
In zero field, a high intensity excitation is observed at 0.58 meV, along with a second, lower intensity branch at 0.75 meV, differing from previous powder INS measurements in [8], where two branches were seen at 0.2 meV and 0.6 meV. With increasing field, the higher intensity branch appears to split into two branches at 0.74 meV and 0.43 meV in 2 kOe. The energies of the two branches increase and decrease linearly as the applied field is increased further. By 5 kOe, the energy gap between the branches reaches 0.76 meV. The sample enters the plateau phase at 5.5 kOe, confirmed by the presence of the non-integer peaks in the elastic reciprocal space maps. The transition to a plateau regime is accompanied by the appearance of three excitation branches. A full set of the field-dependent excitation spectra can be found in the Supplementary Material [27].
The field dependence of the excitations has been examined via cuts made along the energy transfer axis in the scattering vector range 1.0 to 1.5 Å\({}^{-1}\) (see Fig. 10(a)). For clarity, the intensity cuts have been plotted separately to follow the excitations both within and outside of the plateau region. The evolution of the excitations with field is plotted in Fig. 10(b), with the area of each point corresponding to the intensity of the excitation branch. The plateau phase is indicated by the dashed lines. While demagnetisation effects prevent a direct comparison to our magnetisation measurements, the plateau region can be defined via a combination of the elastic diffraction maps accompanying the inelastic powder average spectra and examining the development of the excitations. Although remnants of the non-integer peaks are still present in the elastic diffraction maps for 7-9 kOe, their intensities are greatly reduced. The single, high energy excitation present in higher fields shown in orange in Fig. 10 is also present from 7 kOe, implying that the system is no longer in the plateau phase and has entered the fully magnetised state.
Distinct field-induced trends can be seen in the excitations below and above the plateau regime. At low fields, the 0.58 meV excitation present in zero field splits into two pronounced branches. The higher energy branch continues to increase in energy with field while that of the lower energy excitation decreases with field. The field gradients for increasing and decreasing branches are very similar. This can be rationalised via the spin-flipping mechanism in Ising systems for which the energy of reversing the Ising moments is varying linearly with the applied field. At fields above the plateau, a single high-intensity excitation branch is present. With further field increase, the energy of the excitation increases with the same gradient as the lower-field branch, implying that the system is in the fully magnetised state along the applied field. Remnants of the 0.5 meV excitation, shown in pink in Fig. 10(b), are present immediately above the plateau with a reduced intensity, but disappear at higher fields.
The system enters the plateau phase above 5 kOe. In this regime, three intense excitation branches are observed, all varying with the applied field with the same gradient.
Figure 8: Development of the magnetic correlations at 0.24 K in D-type Er\({}_{2}\)Si\({}_{2}\)O\({}_{7}\) in a field from 6 to 9 kOe applied along the \(a\) axis before (top row) and after exposure to a higher field of 50 kOe (bottom row).
Figure 7: Magnetic single crystal neutron diffraction maps of \((0kl)\) plane for D-type Er\({}_{2}\)Si\({}_{2}\)O\({}_{7}\) measured on the WISH diffractometer at \(T=0.24\) K in different fields applied along the \(a\) axis. The magnetic signal is obtained by subtracting the \(T=2\) K background. Only sharp Bragg peaks with integer \(k\) and \(l\) indices are observed in zero field and in applied fields above 10 kOe (see [27]). The scattering patterns in the fields corresponding to magnetisation plateau are dominated by the appearance of the peaks and diffuse scattering features at fractional positions in reciprocal space.
The individual levels of the Er\({}^{3+}\) ions are Kramers doublets with the energy levels of the ion split by a magnetic field. This splitting is characterised by an anisotropic effective \(g\)-tensor. In general, this will be different for different ionic states. Extensive analysis in [7] reported the values of the \(g\)-tensor along the principal axes. The Zeeman energy of each of the Ising-like spins was well-approximated by taking \(g_{z}=13.6\) with \(g_{x}=g_{y}=0\) where \(z\) is the easy-axis direction of each magnetic ion and \(x\) and \(y\) are the two perpendicular directions. For a field applied along the \(a\) direction this would give a Zeeman energy varying as \(\pm 0.077H\) meV (with \(H\) in kOe), illustrated by the slope of the straight lines in Fig. 10.
## IV Modelling magnetic properties
Er\({}_{2}\)Si\({}_{2}\)O\({}_{7}\) is thought to be well described as an Ising-like system. There are two easy magnetisation axes set by the crystal fields. Labelling the ions in the unit cell as in Fig. 3(b), ions 1 and 3 share one easy axis and ions 2 and 4 share the other. Our Rietveld refinement gives the total moment as 6.55 \(\mu_{\rm B}\) with components given in Table 1.
The angles of the average magnetisation direction \(\langle\mathbf{m}_{a}\rangle\) with respect to the \(a\) direction and from the \(ab\)-plane to the \(c\) direction (see Fig. 11) are \(\theta=19.0^{\circ}\) and \(\phi=18.4^{\circ}\) respectively. These values differ slightly from previously reported values \(\theta=28^{\circ},\phi=15^{\circ}\) in [7] and \(\theta=21.3^{\circ},\phi=12.8^{\circ}\) in [8]. The value we obtain for \(\phi\) fits with a natural explanation of the transitions observed in the magnetisation in Fig. 5 and in Fig. 6 as the field increases. The jump to a magnetisation of \(\sim 2\mu_{\rm B}\) per magnetic ion for fields aligned along the \(c\) direction matches the value of \(2.08\)\(\mu_{\rm B}\) of the easy-axis moment resolved along this direction. With the field aligned along the \(a\) direction, the magnetisation rises quickly to \(\gtrsim 6\)\(\mu_{\rm B}\) as the field is increased from \(6.5\) to \(7.5\) kOe. This is close to the moment \(6.20\)\(\mu_{\rm B}\) from our Rietveld refinement we would expect in the fully magnetised phase. The magnetisation in the centre of the intermediate phase at \(0.5\) K, which we estimate as \(2.1\)\(\mu_{\rm B}\) per Er ion, is close to one-third of the theoretical fully magnetised value.
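The consistency of these numbers can be checked with a few lines of arithmetic; the sketch below uses only the angles and moments quoted above, and the small difference from the quoted 2.08 \(\mu_{\rm B}\) reflects rounding of the angles.

```python
import numpy as np

m_total = 6.55            # refined Er moment (mu_B)
theta = np.radians(19.0)  # angle of <m_a> from the a axis
phi = np.radians(18.4)    # angle of the moments out of the ab plane towards c

# Easy-axis moment resolved along c: compare with the ~2 mu_B jump for H || c.
m_c = m_total * np.sin(phi)
print(f"moment resolved along c: {m_c:.2f} mu_B (quoted 2.08 mu_B)")

# One third of the fully magnetised value <m_a> = 6.20 mu_B (from the refinement)
# for H || a: compare with the ~2.1 mu_B per Er estimated on the plateau.
m_a_full = 6.20
print(f"one third of <m_a>: {m_a_full / 3:.2f} mu_B (plateau estimate ~2.1 mu_B)")
```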
The interpretation of these transitions, suggested by the model of [7], is that the moments of the ions 1 and 2 reverse direction to align with the magnetic field. The ground state configuration is assumed to be changing from the one identified as i) in Fig. 11 to either ii) or iii) for the case of a field along the \(a\) direction or the \(c\) direction. (We have labelled the ions as in Fig. 3.) The gradual increase of the moment for increasing fields above the transitions suggests that the easy axis slowly realigns towards the magnetic field with increasing field in both cases. The intermediate phase, with magnetisation of one third of its expected value in the fully magnetised phase (\(6.55\)\(\mu_{B}\)), requires an enlarged unit cell in which there are two-thirds of a spin flip per original unit cell on average.
The excitations seen in our inelastic data support the validity of an Ising model with two easy axes. In any Ising system, all single spin flips are eigenstates of the Hamiltonian. Excitations are therefore local and there should be no dispersion of the excitations. As mentioned above, we found no dispersion for excitations in our angle-resolved data [27] and presented angle-averaged results in Fig. 9, showing the existence of dispersionless bands.
Figure 9: Top: Evolution of the dynamical structure factor \(S(Q,E)\) with field taken on the IN5 spectrometer on a single crystal sample of D-type Er\({}_{2}\)Si\({}_{2}\)O\({}_{7}\) at \(T=0.045\) K. The resulting energy spectra have been averaged over the rotation of the crystal due to the presence of dispersionless excitations only. The sample was rotated around the \(a\) axis \(\sim 43^{\circ}\), parallel to an applied field, \(H\), while the average is taken over the whole detector range. The high intensity feature at \(|Q|=0.35\) Å\({}^{-1}\) is an artefact of the cryomagnet and can be ignored. Bottom: Corresponding symmetrised diffraction maps for the zero-energy transfer in the \((0kl)\) plane.
In the model of [7], the energies of the possible ground states are characterised by the energy of interaction of the moment \(i\) with all moments of type \(j\) in the rest of the lattice. The model assumes that the interactions between ions have the same symmetry as the crystal lattice. For example, ions 1 and 3 share the same easy axis and have the same separations as ions 2 and 4. The interactions between these pairs of ions are taken to be the same, namely \(\beta\). On this basis, the contribution to the ground state energy of a given ion in the unit cell from its interactions with the other ions in the system is given in Table 2.
Assuming that the interactions between Ising spin variables of ions \(i\) and \(j\) are of the form \(J(\mathbf{r}_{i}-\mathbf{r}_{j})S_{i}S_{j}\), where \(\mathbf{r}_{i}\) is the position vector of ion \(i\), the parameters in Table 2 determine the strength of the local field acting on an individual ion. The energy of an excitation is \(\epsilon_{\pm}(H)=-2(\alpha+\beta+\gamma+\delta)\pm g\mu_{\mathrm{B}}B\cos\psi\) in state i), see Fig. 11. Here \(\psi\) is the angle between the magnetic field and the easy axis for the spin. We are defining the effective \(g\)-factor as in [7], so that the Zeeman
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{4}{c}{Ion \(j\), State i)} & \multicolumn{4}{c}{Ion \(j\), State ii)} \\ \cline{2-9} Ion \(i\) & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 \\ \hline \hline
1 & \(\alpha\) & \(\beta\) & \(\delta\) & \(\gamma\) & \(\alpha\) & \(\beta\) & \(-\delta\) & \(-\gamma\) \\
2 & \(\beta\) & \(\alpha\) & \(\gamma\) & \(\delta\) & \(\beta\) & \(\alpha\) & \(-\gamma\) & \(-\delta\) \\
3 & \(\delta\) & \(\gamma\) & \(\alpha\) & \(\beta\) & \(-\delta\) & \(-\gamma\) & \(\alpha\) & \(\beta\) \\
4 & \(\gamma\) & \(\delta\) & \(\beta\) & \(\alpha\) & \(-\gamma\) & \(-\delta\) & \(\beta\) & \(\alpha\) \\ \hline \end{tabular}
\end{table}
Table 2: The parameters giving the contribution to the energy from interactions between an Ising-like spin of an Er ion labelled \(i\) in Fig. 3 with all those labelled \(j\) in the crystal. (We have used the labelling of ions from [8].) The parameters \(\beta\), \(\gamma\) and \(\delta\) were estimated in [7] from the measured phase diagram, while the parameter \(\alpha\) was estimated from spectroscopic measurements. The values quoted were \(\alpha=-0.088(6)\,\mathrm{meV}\), \(\beta=0.015(4)\,\mathrm{meV}\), \(\gamma=-0.144(4)\,\mathrm{meV}\) and \(\delta=-0.050(4)\,\mathrm{meV}\). (In [7] ion 3 is labelled 4 and vice versa.)
Figure 11: Configurations of the moments in possible ground states. a) Perspective view showing the spins 1 and 3 aligned along one easy axis and spins 2 and 4 aligned along the other. This corresponds to the expected ground state in zero applied field. In an applied field we expect spins to flip to maximise their alignment with the field. For a field applied parallel (or near parallel) to \(\mathbf{a}\), we expect spins 1 and 2 to flip. This gives rise to a net magnetisation in the \(\langle\mathbf{m}_{a}\rangle\) direction, which is in the plane perpendicular to the \(c\) direction at an angle \(\theta=19^{\circ}\) to the \(a\) direction. b) The orientation of the moments in the unit cell: i) the zero-field case; ii) the fully magnetised state for a field applied in the \(a\) direction. The expected average moment per ion is \(\langle\mathbf{m}_{a}\rangle\), which is given in Table 1; and iii) the fully magnetised state for a field applied in the \(c\) direction.
Figure 10: Field dependence of the excitation energies of \(\mathrm{Er}_{2}\mathrm{Si}_{2}\mathrm{O}_{7}\) at \(T=0.045\) K. (a) Cuts along the energy transfer axis of the INS spectra shown in Fig. 9 for \(Q=[1.0,1.5]\) Å\({}^{-1}\) for different applied fields. (b) The variation of the excitation energies with applied field. The area of the symbols is proportional to the scattering intensity. The vertical dashed lines are guides to the eye and indicate the boundaries between the phases which we estimate to be at \(H_{1}\approx 5.2\,\mathrm{kOe}\) and \(H_{2}\approx 6.7\,\mathrm{kOe}\). The colouring indicates branches of excitations which follow the variation with the Zeeman energy expected in the three phases: unmagnetised (\(H<H_{1}\)), \(\nicefrac{{1}}{{3}}\)-magnetised (\(H_{1}<H<H_{2}\)) and fully magnetised (\(H_{2}\lesssim H\)). The number of main excitations is in line with expectations in both the unmagnetised and fully magnetised phases. The lines through the points are drawn with gradients given by the effective \(g\)-factor reported in [7] and assuming the excitation energy varies as \(g\mu_{\mathrm{B}}H\), where \(H\) is the applied field. We do not expect the magnetisation and demagnetising field to vary significantly in each phase (see main text).
energy to flip the Ising effective spin-1/2 is \(g\mu_{\rm B}B\cos\psi\).
In state i) there are two excitation energies with a splitting given by the Zeeman term. The magnetisation is small and the field \(B\) is well approximated by the applied field \(H\). In our experiments the field was applied along the \(a\) direction and \(\cos\psi=\cos\phi\cos\theta\). With the parameters quoted above and the value given for \(g\) (the \(g\)-tensor was measured spectroscopically in [7]), the model predicts the excitation energies to be \(\epsilon_{\pm}(H)=0.53\pm 0.077H\,\,{\rm meV}\), with \(H\) in \({\rm kOe}\). This agrees remarkably well with the excitation energies shown in Figure 10 for applied fields \(H<H_{1}\approx 5.2\,{\rm kOe}\). These have \(\epsilon_{+}\approx 0.58+0.077H\) and \(\epsilon_{-}\approx 0.58-0.077H\) meV.
We identify the ground state at applied fields \(H>H_{2}\approx 6.7\,{\rm kOe}\) with spin configuration ii) characterised in Fig. 11. Here the model also predicts the excitation energy: \(\epsilon(H)\approx-2(\alpha+\beta-\gamma-\delta)+g\mu_{\rm B}H\cos\psi+C\) where \(C\) is a constant. The Zeeman energy should vary linearly with the applied field as the magnetisation, and hence the demagnetising field, are expected to be constant within each phase and their effects are denoted by \(C\). There is only one excitation as any spin must flip against the field. This implies that the excitation energy in meV is \(\epsilon(H)\approx-0.24+g\mu_{\rm B}H\cos\psi+C\), with \(H\) in \({\rm kOe}\). The model predicts one dominant excitation. This is close to what we find for the applied fields \(H>H_{2}\) shown in Fig. 10.
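The zero-field parts of these excitation energies follow directly from the coupling constants quoted in the caption of Table 2; the short check below reproduces the 0.53 meV and \(-0.24\) meV values used above (the Zeeman slope itself is not re-derived here, since it also involves the effective \(g\)-factor and the field-easy-axis geometry).

```python
# Coupling constants (meV) from the caption of Table 2 (values estimated in Ref. [7]).
alpha, beta, gamma, delta = -0.088, 0.015, -0.144, -0.050

# State i): local-field part of the spin-flip energy, -2(alpha + beta + gamma + delta).
e_state_i = -2 * (alpha + beta + gamma + delta)
print(f"state i)  zero-field excitation energy: {e_state_i:.2f} meV (quoted 0.53 meV)")

# State ii): local-field part, -2(alpha + beta - gamma - delta).
e_state_ii = -2 * (alpha + beta - gamma - delta)
print(f"state ii) local-field contribution:     {e_state_ii:.2f} meV (quoted -0.24 meV)")

# The field dependence then enters through the Zeeman term +/- g*mu_B*H*cos(psi),
# quoted in the text as +/- 0.077*H meV with H in kOe.
```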
In both states i) and ii), see Fig. 11, there is evidence for weak scattering by other modes. In state i) there is some scattering at energy \(\epsilon\approx 0.75\,{\rm meV}\), which we observed only in zero field. We do not have an explanation for this within the simple model we are using. In state ii) there is weak scattering at \(\epsilon\sim 0.5\,{\rm meV}\), which disappears at higher fields. Again, we cannot explain this within the model.
The magnetic field in the sample will be significantly different from the applied field once the system magnetises. This is because the magnetisation is not parallel to the applied field and because of demagnetising effects. For applied fields \(H\lesssim 10\) kOe, the magnetisation is close to one of three values: 0, \(\langle\mathbf{m}_{a}\rangle/3\) and \(\langle\mathbf{m}_{a}\rangle\). The demagnetising effects are hard to quantify, but are apparent in our results. For example, the magnetisation results shown in Fig. 5 (inset), show the transition into the fully magnetised phase occurs at an applied field \(H\sim 6.5\,{\rm kOe}\) at \(0.5\,{\rm K}\). The same transition appears at the slightly higher applied field \(H_{2}\sim 6.7\,{\rm kOe}\) in the sample studied in inelastic neutron scattering, see Fig 10 (where data were taken at the temperature of \(0.05\,{\rm K}\)). The samples in these two measurements were different so that direct mapping between applied fields for the magnetisation and neutron scattering measurements is not possible on account of the demagnetising effects.
In the case of the sample studied in neutron scattering, its shape was approximately a half-cylinder. The axis of the cylinder was \(27^{\circ}\) away from the \(a\) axis in the \(ac\) plane. None of the applied field, the magnetisation direction \(\langle\mathbf{m}_{a}\rangle\) and the demagnetising field would be aligned with the cylinder axes. For some directions of applied field the resultant field has different projections along the two easy axes, which would lead to a further splitting of the Zeeman split excitations. For the particular case of the field applied parallel to the \(a\) axis, our estimates suggest that this splitting is negligible. (We assumed a uniform demagnetising tensor with component \(0.15\times 4\pi\) along the axis and equal components in the perpendicular directions.)
The identification of the intermediate phase is difficult. We would like to establish for which directions of applied field the phase exists (we have only looked along one direction) as well as find the new ordered state. The diffraction data shown in Figs. 7 and 8 indicate possible Bragg as well as diffuse peaks corresponding to propagation vectors \(\mathbf{q}=[0\frac{1}{3}0]\) and \([0\frac{1}{4}\frac{1}{4}]\). The nature of the phase also seems to vary with applied field. Within the easy-axis model, the magnetisation and diffraction data imply a state with enlarged periodicity in both \(b\) and \(c\) directions and with an average of two-thirds of a spin-flip per each original unit cell. It might be possible to explain the magnetisation with an enlarged cell, for example \(4\times 4\) but not involving a factor 3, and flipping close to two-thirds of a spin per original unit cell. However, this would make it difficult to explain the scattering corresponding to propagation vectors \(\mathbf{q}=[0\frac{1}{3}0]\).
Our model assumes that the magnetisation and, hence, the demagnetising field are constant in each phase including the intermediate phase. We therefore expect only the applied component of the total field to be varying. The variation of the excitation energies with applied field would then still show the linear dependence of the Zeeman energy seen in the unmagnetised and fully magnetised cases. This is close to what we see. The small deviation from this behavior, slightly below the transition at \(H=H_{2}\) (see Figure 10), may also be connected with the variation of the phase seen in the diffraction data.
## V Summary
Magnetic measurements and a range of neutron scattering techniques have been employed to investigate the in-field behaviour of D-type \({\rm Er}_{2}{\rm Si}_{2}{\rm O}_{7}\), an Ising-like antiferromagnet. Powder neutron diffraction data confirm the formation of a four-sublattice \(\mathbf{q}=0\) structure below the ordering temperature of \(T_{\rm N}=1.85\) K with two pairs of collinear antiferromagnetic moments within the crystallographic unit cell. Magnetisation measurements with the field applied along the \(a\) axis of a single crystal of \({\rm Er}_{2}{\rm Si}_{2}{\rm O}_{7}\) reveal the existence of a narrow and temperature-sensitive magnetisation plateau at \(\frac{1}{3}\) of the saturation magnetisation. In-field single crystal neutron diffraction has shown that the magnetic unit cell undergoes a significant increase within the plateau regime, demonstrated by the appearance of scattering signal at non-integer positions described by the propagation vectors \(\mathbf{q}=(0\frac{1}{2}\frac{1}{2})\), \(\mathbf{q}=(0\frac{1}{4}\frac{1}{4})\) and \(\mathbf{q}=(0\frac{1}{3}0)\). The intensity, shape and width of these field-induced features change rapidly with the applied magnetic field within the magnetisation plateau. The diffraction pattern then returns to integer peaks at higher fields, as the system exits the plateau and enters the fully magnetised state.
Inelastic neutron scattering data indicate flat, dispersionless excitations, as expected in an Ising-like system. Below the plateau, two intense excitation branches are observed, one linearly increasing with field and the other decreasing, while above the plateau only one branch is visible. The field gradients for all three branches are well described by the previously determined \(g\)-factor for the \(\mathrm{Er}^{3+}\) magnetic ions. Within the plateau regime, the excitation spectrum corresponds to an increased magnetic unit cell and contains three intense branches. This observation should restrict the number of candidate ground states significantly. Enlarging the magnetic unit cell leads quickly to an increase in the number of inequivalent spin flips and hence the number of excitations, while our data show only three significant excitations.
## VI Acknowledgments
We acknowledge the technical support and expertise of the sample environment teams at ISIS and ILL. The work at the University of Warwick was supported by EPSRC through grants EP/M028771/1 and EP/T005963/1.
|
2310.11427 | Induced Gravitational Waves from Non-attractor Inflation and NANOGrav
data | In this paper, we investigate the scalar-induced gravitational waves in
single-field non-attractor inflation for the Pulsar Timing Arrays data. Our
model comprises three phases of inflation: the first and third phases are
slow-roll inflation, while the second phase is a period of non-attractor
inflation. We analyze the model's predictions for various values of the sound
speed $c_s$ and examine the sharp transitions to the final attractor phase.
Furthermore, we study the model's predictions for NANOGrav observations and
future gravitational wave observations. We also calculate the non-Gaussianity
parameter $f_{NL}$ for the non-attractor setup with a general sound speed and
the sharpness parameter. | Amin Nassiri-Rad, Kosar Asadi | 2023-10-17T17:38:51Z | http://arxiv.org/abs/2310.11427v1 | # Induced Gravitational Waves from Non-attractor Inflation
###### Abstract
In this paper, we investigate the scalar-induced gravitational waves in single-field non-attractor inflation for the Pulsar Timing Arrays data. Our model comprises three phases of inflation: the first and third phases are slow-roll inflation, while the second phase is a period of non-attractor inflation. We analyze the model's predictions for various values of the sound speed \(c_{\rm s}\) and examine the sharp transitions to the final attractor phase. Furthermore, we study the model's predictions for NANOGrav observations and future gravitational wave observations. We also calculate the non-Gaussianity parameter \(f_{NL}\) for the non-attractor setup with a general sound speed and the sharpness parameter.
## 1 Introduction
The detection of gravitational waves (GWs) has indeed revolutionized our understanding of physics on large scales and in non-linear regimes. Recent advancements have led to significant progress in detecting stochastic GWs in the range of 10 nHz, as observed by NANOGrav [1] and other experiments [2, 3, 4]. The NANOGrav data provides valuable insights into the astrophysical origins of GWs, including the merging of binary black holes. Another intriguing possibility is the existence of induced gravitational waves, which are sourced by scalar perturbations on small scales during inflation [5, 6, 7, 8, 9, 10, 11, 12, 13, 14].
To explain the NANOGrav data via cosmological sources, one needs to amplify the scalar perturbations during inflation [16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. The amplification of scalar perturbations is also a mechanism for the production of primordial black holes (PBHs), which may explain dark matter as well [26, 27, 28, 29, 30]. The GW perturbations are then sourced by the square of the scalar perturbations. If the amplitude of the scalar perturbations at small scales is much higher than the corresponding amplitude at CMB scales, then there is a chance that the induced GWs are large enough to explain NANOGrav data.
The Ultra slow-roll (USR) inflation model is a well-known approach to amplify scalar perturbations, which can lead to the formation of PBHs [31, 32, 33, 34, 35, 36, 37, 38, 39]. In this model, the potential is constant, resulting in an exponential decrease in the velocity of the inflaton over time. Consequently, this leads to an increase in the curvature perturbation [40, 41, 42, 43]. It is important to note that the USR model violates the Maldacena consistency condition [44, 45]. Several studies have explored this violation, including [47, 48, 49, 51, 52, 53, 54, 55]. According to [43], the non-Gaussianity parameter in this model is \(f_{NL}=\frac{5}{2}\). In [46] this result is extended by introducing a sharpness parameter that characterizes the final attractor phase.
In an alternative scenario, the perturbations can experience growth during the non-attractor phase, even with an arbitrary sound speed \(c_{s}\)[48, 49]. In this study, we extend the approach proposed in [56] by considering the arbitrary sound speed \(c_{s}\) as an additional degree of freedom. Our objective is to investigate how this model can account for the NANOGrav data. In our model, the Lagrangian of the inflaton is an arbitrary function of the kinetic term \(X\), denoted as \(P(X)\), with a constant potential. The sound speed can have non-trivial effects on the generalization of the results obtained in [56]. Specifically, our model comprises three inflationary phases: two slow-roll phases at the beginning and end, and a non-attractor phase in between. During the slow-roll phases, the sound speed is set equal to unity. In this work, we also explore non-Gaussianity and demonstrate that it depends on both the sharpness parameter \(h\) and the number of e-folds associated with the non-attractor phase. Notably, when \(c_{s}=1\), our results reduce to the standard result obtained in [46]. Furthermore, if the transition to the final attractor phase is very mild, non-Gaussianity approaches zero.
Recent research on ultra slow-roll inflation has focused on the loop effects of this model on CMB scales [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]. The main message conveyed by these works is that loop corrections can become significant and affect the long-wavelength CMB-scale perturbations, depending on the duration of ultra slow-roll inflation. In particular, it is claimed in [63] that if the transition
to the final attractor phase is mild, then loop corrections will become harmless. However, it has also been argued that such loop corrections are absent altogether [59, 60, 69]. This question remains open, and future studies will be needed to investigate the loop effects in our model.
The paper is structured as follows. In Section 2, we present our model and generalize the results of [56] to the case with a general sound speed. We calculate the non-Gaussianity for this model and study its consistency in confrontation with the observations of PBHs. In Section 3, we compare our results with the NANOGrav data and discuss the differences compared to the USR model. Our main focus is to investigate the predictions of the model for various values of \(c_{s}\) and see which ones fit better to the data qualitatively. We conclude in Section 4.
## 2 The Model
In this section, we introduce our model for the scalar-induced gravitational waves as the source of pulsar timing arrays (PTAs) observations. Our setup comprises three stages: two slow-roll inflation stages at the beginning and end, and a non-attractor phase in the middle. This model is a generalization of the work presented in [56], where the middle phase allows for an arbitrary sound speed, \(c_{s}\). Specifically, the Lagrangian in the middle phase is given by
\[\mathcal{L}_{2}=P(X)-V_{0}, \tag{2.1}\]
where \(V_{0}\) is a constant, and \(X=\frac{\dot{\phi}^{2}}{2}\) is the kinetic energy. The sound speed in terms of \(P(X)\) is given by [50]
\[c_{s}^{2}\equiv\frac{P_{,X}}{P_{,X}+2XP_{,XX}}. \tag{2.2}\]
As \(P(X)=X\) in the first and third stages, the sound speed is set to unity (\(c_{s}=1\)) in these phases. In the non-attractor phase, the evolution of \(\phi\) at the background level is given by [49]
\[\ddot{\phi}+3H\dot{\phi}c_{s}^{2}=0. \tag{2.3}\]
In the first slow-roll phase, the mode function in terms of conformal time \(\tau\) is given by
\[\mathcal{R}_{1}=\frac{He^{-ik\tau}(1+ik\tau)}{2\sqrt{k^{3}\epsilon_{i}}}. \tag{2.4}\]
Here \(\epsilon_{i}\) denotes the slow-roll parameter in the first phase, which is assumed to be nearly constant. We also impose the Bunch-Davies initial condition on the mode function in this phase. Throughout this paper, we set \(M_{P}=1\) without loss of generality. In the middle stage, as the sound speed is arbitrary, the mode function is more complicated. Following [75], the mode function in the non-attractor phase evolves as
\[v_{k}^{\prime\prime}+(c_{s}^{2}k^{2}-\frac{(\eta-2s+3)^{2}-1}{4\tau^{2}})v_{k }=0, \tag{2.5}\]
where we define the variable \(v_{k}=z\mathcal{R}_{k}\), with \(z^{2}=\frac{2\epsilon a^{2}}{c_{s}^{2}}\) and \(s=\frac{\dot{c}_{s}}{Hc_{s}}\). Here, \(a\) represents the scale factor. Additionally, we have the first and second slow-roll parameters, \(\epsilon\) and \(\eta\), respectively. They are defined as \(\epsilon=-\frac{\dot{H}}{H^{2}}\) and \(\eta=\frac{\dot{\epsilon}}{H\epsilon}\) (a 'dot' represents differentiation with respect to cosmic time). In the non-attractor phase, we assume that the phase begins at \(\tau=\tau_{i}\) and ends at \(\tau=\tau_{e}\). Based on this assumption, it can be shown that the first and second slow-roll parameters during this phase with a constant potential are expressed as
\[\epsilon_{n}(\tau)=\epsilon_{i}\left(\frac{\tau}{\tau_{i}}\right)^{-\eta}, \tag{2.6}\]
\[\eta=-3(1+c_{s}^{2}). \tag{2.7}\]
The general solution to (2.5) in the non-attractor phase is given as
\[\mathcal{R}_{2}=\frac{iHc_{s}\sqrt{-\tau^{3}}}{2\sqrt{\epsilon_{i}}}\left( \frac{\tau_{i}}{\tau}\right)^{-\frac{\eta}{2}}\bigg{[}\alpha_{k}H_{v}^{(1)}(- c_{s}k\tau)+\beta_{k}H_{v}^{(2)}(-c_{s}k\tau)\bigg{]}, \tag{2.8}\]
where \(\nu=\frac{-3c_{s}^{2}}{2}\). \(\alpha_{k}\) and \(\beta_{k}\) are constants that should be determined by the matching condition at \(\tau=\tau_{i}\). As the sound speed is different in the middle stage the matching conditions are as follows [76]
\[\begin{split} R_{k}(\tau_{i}^{-})&=R_{k}(\tau_{i}^ {+})\\ R_{k}^{\prime}(\tau_{i}^{-})&=\frac{R_{k}^{\prime}( \tau_{i}^{+})}{c_{s}^{2}}.\end{split} \tag{2.9}\]
we can determine the values of \(\alpha_{k}\) and \(\beta_{k}\) as
\[\alpha_{k}=\frac{ie^{-ik\tau_{i}}}{2\sqrt{-k^{3}\tau_{i}^{3}}}\bigg{[}k\tau_{ i}\left(k\tau_{i}-i\right)H_{1+\nu}^{(2)}\left(-kc_{s}\tau_{i}\right)+ic_{s} \left(3+k\tau_{i}\left(k\tau_{i}+3i\right)\right)H_{\nu}^{(2)}\left(-kc_{s} \tau_{i}\right)\bigg{]}, \tag{2.10}\]
\[\beta_{k}=-\frac{ie^{-ik\tau_{i}}}{2\sqrt{-k^{3}\tau_{i}^{3}}}\bigg{[}k\tau_{ i}\left(k\tau_{i}-i\right)H_{1+\nu}^{(1)}\left(-c_{s}k\tau_{i}\right)+ic_{s} \left(3+k\tau_{i}\left(k\tau_{i}+3i\right)\right)H_{\nu}^{(1)}\left(-c_{s}k \tau_{i}\right)\bigg{]}. \tag{2.11}\]
One can determine the mode function at the third stage by utilizing \(\alpha_{k}\) and \(\beta_{k}\). It is possible to verify that when \(c_{s}=1\), the aforementioned expressions reduce to the findings of [63]. The first slow-roll parameter in the final stage is given by [46]
\[\epsilon(\tau)=\epsilon_{e}\left[\frac{h}{6}-\left(\frac{h}{6}+1\right)\frac{ \tau^{3}}{\tau_{e}^{3}}\right]^{2}. \tag{2.12}\]
Here, the parameter \(h\) determines the sharpness of the transition to the final attractor phase. A higher value of \(|h|\) leads to a quicker transition. In our model, \(h\) is defined as [46, 63]
\[h=-6\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}, \tag{2.13}\]
where \(\epsilon_{e}\) represents the slow-roll parameter at the end of the non-attractor phase, and \(\epsilon_{V}=\frac{1}{2}(\frac{V^{\prime}(\phi_{e})}{V(\phi_{e})})^{2}\).
One can use these results to construct the mode function. The general mode function in the final attractor phase is expressed as
\[\mathcal{R}_{3}=\frac{H}{2\sqrt{k^{3}\epsilon(\tau)}}\bigg{[}c_{k}e^{-ik\tau}(1+ ik\tau)+d_{k}e^{ik\tau}(1-ik\tau)\bigg{]} \tag{2.14}\]
The constants \(c_{k}\) and \(d_{k}\) are determined by matching conditions similar to (2.9) at \(\tau=\tau_{e}\). Due to their complexity, we do not provide explicit expressions for them. The power spectrum at the end of inflation can be obtained using the following relation
\[\mathcal{P}_{\mathcal{R}}=\frac{k^{3}}{2\pi^{2}}|\mathcal{R}_{3}|^{2}\Big{|}_ {\tau=0} \tag{2.15}\]
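As a cross-check of the analytic matching, the power spectrum can also be obtained by integrating the mode equation numerically through the three phases. The sketch below (with illustrative parameter values, not those used for Fig. 1) evolves \(\mathcal{R}_{k}\) using \(\mathcal{R}_{k}^{\prime\prime}+2(z^{\prime}/z)\mathcal{R}_{k}^{\prime}+c_{s}^{2}k^{2}\mathcal{R}_{k}=0\), the Bunch-Davies solution (2.4) as initial data at \(\tau_{i}\), the junction conditions (2.9) at both transitions, and the slow-roll profiles (2.6) and (2.12).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed for this sketch, not the values used in the figures).
cs, h = 0.5, -6.0          # sound speed in the non-attractor phase, sharpness parameter
eps_i, H = 1e-2, 4.1e-5    # initial slow-roll parameter and Hubble rate (roughly COBE-normalised)
tau_i = -1.0               # start of the non-attractor phase (arbitrary units)
dN = 2.5                   # duration of the non-attractor phase in e-folds
tau_e = tau_i * np.exp(-dN)
eta2 = -3.0 * (1.0 + cs**2)                  # eq. (2.7)

def dlneps_dtau(tau):
    """d ln(eps)/d tau in the final phase, from eq. (2.12)."""
    f = h / 6.0 - (h / 6.0 + 1.0) * tau**3 / tau_e**3
    df = -3.0 * (h / 6.0 + 1.0) * tau**2 / tau_e**3
    return 2.0 * df / f

def rhs(tau, y, k, phase):
    """y = (Re R, Im R, Re R', Im R');  R'' + 2(z'/z) R' + cs^2 k^2 R = 0."""
    R = y[0] + 1j * y[1]
    dR = y[2] + 1j * y[3]
    if phase == 2:   # non-attractor: z'/z = -1/tau - eta2/(2 tau), constant cs
        zpz, cs2 = -1.0 / tau - eta2 / (2.0 * tau), cs**2
    else:            # final slow-roll phase: cs = 1, eps from eq. (2.12)
        zpz, cs2 = -1.0 / tau + 0.5 * dlneps_dtau(tau), 1.0
    ddR = -2.0 * zpz * dR - cs2 * k**2 * R
    return [dR.real, dR.imag, ddR.real, ddR.imag]

def power_spectrum(k):
    # Phase-1 (Bunch-Davies) mode function, eq. (2.4), evaluated at tau_i.
    R1 = H * np.exp(-1j * k * tau_i) * (1 + 1j * k * tau_i) / (2 * np.sqrt(k**3 * eps_i))
    dR1 = H * np.exp(-1j * k * tau_i) * k**2 * tau_i / (2 * np.sqrt(k**3 * eps_i))
    # Junction at tau_i, eq. (2.9): R continuous, R'(+) = cs^2 R'(-).
    y0 = [R1.real, R1.imag, (cs**2 * dR1).real, (cs**2 * dR1).imag]
    sol2 = solve_ivp(rhs, (tau_i, tau_e), y0, args=(k, 2), rtol=1e-8, atol=1e-12)
    Re, Ie, dRe, dIe = sol2.y[:, -1]
    # Junction at tau_e: R continuous, R'(+) = R'(-)/cs^2.
    y1 = [Re, Ie, dRe / cs**2, dIe / cs**2]
    sol3 = solve_ivp(rhs, (tau_e, 1e-4 * tau_e), y1, args=(k, 3), rtol=1e-8, atol=1e-12)
    Rf = sol3.y[0, -1] + 1j * sol3.y[1, -1]
    return k**3 / (2 * np.pi**2) * abs(Rf) ** 2   # eq. (2.15)

for k in [0.1, 1.0, 10.0, 1.0 / abs(tau_e)]:
    print(f"k = {k:8.2f}   P_R = {power_spectrum(k):.3e}")
```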
The two graphs in Fig. 1 illustrate the power spectrum for various combinations of \(c_{s}\) and \(h\). To calculate the power spectrum, we select \(\tau_{i}\) such that the non-attractor phase commences about 16 e-folds after the CMB mode exits the horizon. Additionally, we employ COBE normalization to fix the power spectrum on CMB scales. More precisely, by imposing the matching conditions one can show that on super-horizon scales
\[\lim_{k\to 0}\frac{k^{3}}{2\pi^{2}}|\mathcal{R}_{3}|^{2}=\frac{H^{2}}{8\pi^{2} \epsilon_{i}} \tag{2.16}\]
The values of \(H\) and \(\epsilon_{i}\) are carefully selected such that equation (2.16) yields a power spectrum \(\mathcal{P}_{\mathcal{R}}\simeq 2\times 10^{-9}\) for \(k_{CMB}=0.05\)Mpc\({}^{-1}\). As depicted in Fig. 1, the power spectrum exhibits similar characteristics to those described in [56], which will be elaborated upon in detail. Let \(\Delta N\) denote the e-fold duration of the non-attractor phase. For \(|\tau_{e}|\ll|\tau_{i}|\) and \(k|\tau_{e}|\ll 1\), it can be demonstrated that
\[\begin{split}\mathcal{P}_{\mathcal{R}}=&\mathcal{P} _{\text{CMB}}\frac{(6-h)^{2}}{h^{2}}e^{6c_{s}^{2}\Delta N}\times\\ &\Bigg{|}(1+ix)\,_{0}F_{1}\left(;\frac{3c_{s}^{2}}{2};-\frac{1}{ 4}x^{2}c_{s}^{2}\right)-\frac{1}{3}(3+x(x+3i))\,_{0}F_{1}\left(;\frac{3c_{s}^{ 2}}{2}+1;-\frac{1}{4}x^{2}c_{s}^{2}\right)\Bigg{|}^{2},\end{split} \tag{2.17}\]
where \(x=-k\tau_{i}\) and \({}_{0}F_{1}\) is the hypergeometric function. From the above equation, we observe that the exponential growth of the power spectrum is modified to \(\exp(6c_{s}^{2}\Delta N)\). This indicates that the peak of the non-attractor phase is shorter compared to the case where \(c_{s}=1\). Additionally, the power spectrum is proportional to \(\frac{(6-h)^{2}}{h^{2}}\). It is worth noting that the scaling behavior of the power spectrum in the attractor phase is identical to that observed in ultra slow-roll inflation. Another interesting feature to study is the behavior of the power spectrum in the limit \(|k\tau_{i}|\ll 1\), which pertains to modes that exited the horizon at early times. Given the power spectrum at the end of inflation, one can observe that the power spectrum at CMB scales is modified by
\[\mathcal{P}_{\mathcal{R}}(k,\tau=0)\simeq\mathcal{P}_{\text{CMB}}\bigg{[}1+ \frac{2x^{2}\left(c_{s}^{2}+1\right)\left(6-h\right)e^{3c_{s}^{2}\Delta N}}{h \left(3c_{s}^{2}+2\right)}\bigg{]}. \tag{2.18}\]
As we see in Fig. 1, the power spectrum decreases considerably at the dip. To estimate the position of the dip, we require the right-hand side of Eq. (2.18) to vanish, which gives
\[k|\tau_{i}|\sim e^{-3c_{s}^{2}\Delta N/2}\sqrt{\frac{h\left(3c_{s}^{2}+2\right)} {2\left(c_{s}^{2}+1\right)\left(h-6\right)}}. \tag{2.19}\]
The above expression also demonstrates that the position of the dip shifts to the right as \(|h|\) increases. One can also examine the behaviour of the power spectrum for modes very deep inside the horizon. For \(1\ll k|\tau_{e}|\) one finds that
\[\mathcal{P}_{\mathcal{R}}\simeq \frac{9H^{2}e^{3\Delta N(1+c_{s}^{2})}}{8\pi^{2}h^{2}c_{s}^{2} \epsilon_{i}}\times\] \[\Bigg{|}\left(c_{s}-1\right)e^{2ikc_{s}\left(\tau_{i}-\tau_{e} \right)}\left(\cos\left(k\tau_{e}\right)+ic_{s}\sin\left(k\tau_{e}\right) \right)+\left(c_{s}+1\right)\left(\cos\left(k\tau_{e}\right)-ic_{s}\sin\left(k \tau_{e}\right)\right)\Bigg{|}^{2}\,. \tag{2.20}\]
The given expression oscillates for large values of k when \(c_{s}\neq 1\). However, if we consider \(c_{s}=1\), the power spectrum remains constant for \(1\ll k|\tau_{e}|\), as depicted in the left panel of Figure 1. In particular, the expression reduces to the following one by setting \(c_{s}=1\)
\[\mathcal{P}_{\mathcal{R}}\simeq\frac{9H^{2}e^{6\Delta N}}{2\pi^{2}h^{2} \epsilon_{i}}\,. \tag{2.21}\]
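For modes with \(k|\tau_{e}|\ll 1\) the super-horizon expression (2.17) and the dip estimate (2.19) can also be evaluated directly; the short sketch below does so for assumed illustrative values of \(c_{s}\), \(h\) and \(\Delta N\).

```python
import numpy as np
from scipy.special import hyp0f1

cs, h, dN = 0.5, -6.0, 2.5     # assumed illustrative values
P_cmb = 2.1e-9                 # CMB-scale amplitude used for the normalisation

def P_super(x):
    """Eq. (2.17): P_R for modes with k|tau_e| << 1, where x = -k*tau_i."""
    b = 1.5 * cs**2
    bracket = ((1 + 1j * x) * hyp0f1(b, -0.25 * x**2 * cs**2)
               - (3 + x * (x + 3j)) / 3.0 * hyp0f1(b + 1, -0.25 * x**2 * cs**2))
    return P_cmb * (6 - h) ** 2 / h**2 * np.exp(6 * cs**2 * dN) * abs(bracket) ** 2

# Dip position estimate, eq. (2.19):
x_dip = np.exp(-1.5 * cs**2 * dN) * np.sqrt(h * (3 * cs**2 + 2)
                                            / (2 * (cs**2 + 1) * (h - 6)))
print(f"estimated dip at k|tau_i| ~ {x_dip:.3f}")

for x in [1.0, 3.0, 10.0, 30.0]:
    print(f"x = k|tau_i| = {x:5.1f}   P_R = {P_super(x):.3e}")
```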
To fit the parameters of the model with PTAs data, we also consider the observed values for PBH formation, as shown in Fig. 2. To begin, we focus on the mass fraction of PBHs, which is given by [71]
\[\beta\simeq\frac{1}{2}\text{Erfc}\bigg{(}\frac{\mathcal{R}_{c}}{\sqrt{2 \mathcal{P}_{\mathcal{R}}}}\bigg{)}. \tag{2.22}\]
Figure 1: Left: The power spectrum for different values of \(c_{s}\) and \(h=-6\). Other parameters are held constant to ensure that the peak for different sound speeds does not grow significantly. Right: The power spectrum for \(c_{s}=\frac{3}{4}\) and different values of \(h\).
Here, \({\cal R}_{c}\) is an \(O(1)\) parameter that denotes the critical value of the curvature perturbation for PBH formation. In this model, we have chosen \({\cal R}_{c}=1.59\). Additionally, the ratio of PBHs to the dark matter density, \(f_{PBH}\), is expressed as [28]
\[f_{\rm PBH}(M_{\rm PBH})\simeq 2.7\times 10^{8}\bigg{(}\frac{M_{\rm PBH}}{M_{ \odot}}\bigg{)}^{-\frac{1}{2}}\beta. \tag{2.23}\]
To determine \(\frac{M_{\rm PBH}}{M_{\odot}}\) in terms of the number of e-folds, one can use the following relation [29]
\[\frac{M_{\rm PBH}}{M_{\odot}}=7.7\times 10^{17}\exp(-2(N_{\rm max}-N_{\rm CMB })). \tag{2.24}\]
In our model, \(N_{\rm max}\) represents the number of e-folds at which the power spectrum reaches its maximum and \(N_{CMB}\) is the number of e-folds at which the CMB mode exits the horizon. For each value of \(c_{s}\), our model has three free parameters: \(h\), \(\tau_{i}\) and \(\Delta N\). We have carefully selected these parameters such that for each \(c_{s}\) the frequency of the peak in the power spectrum is comparable to the frequency of PTAs data. In Fig. 2 we have presented constraints on \(f_{\rm PBH}\) for different values of \(c_{s}\). As the constraints on \(f_{\rm PBH}\) for different parameters were somewhat similar, we have consolidated them into a single figure. When \(h=-1\), the amplitude of oscillations does not decay for large momentum values, as observed in Figure 1. Since the approximation of \(\beta\) in Equation (2.22) may not be valid in this limit, we do not study the effect of mild transitions. Before ending this section, we delve into the study of the bispectrum for non-attractor models with arbitrary sound speed \(c_{s}\) in the next subsection.
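The PBH observables used here follow from composing Eqs. (2.22)-(2.24); the snippet below does this for a few assumed illustrative peak amplitudes. It mainly illustrates how steeply \(f_{\rm PBH}\) depends on the peak of the power spectrum; the numbers are not fits to the data.

```python
import numpy as np
from scipy.special import erfc

R_c = 1.59   # critical curvature perturbation adopted in this work

def f_pbh(P_peak, dN_peak):
    """Compose eqs. (2.22)-(2.24): P_peak is the peak of P_R and
    dN_peak = N_max - N_CMB is the e-fold separation of the peak from the CMB scale."""
    beta = 0.5 * erfc(R_c / np.sqrt(2.0 * P_peak))     # eq. (2.22)
    M_ratio = 7.7e17 * np.exp(-2.0 * dN_peak)          # eq. (2.24), M_PBH / M_sun
    return 2.7e8 * M_ratio ** (-0.5) * beta, M_ratio   # eq. (2.23)

# Assumed illustrative values: peak located ~20 e-folds after the CMB scale.
for P_peak in [1e-2, 3e-2, 5e-2]:
    f, M = f_pbh(P_peak, 20.0)
    print(f"P_peak = {P_peak:.0e}:  M_PBH/M_sun = {M:.2e},  f_PBH = {f:.2e}")
```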
Figure 2: This figure illustrates the behavior of the PBH mass fraction (\(f_{\rm PBH}\)) for \(c_{s}=\frac{3}{4}\) and \(h=-12\). It is worth noting that this behavior is similar for other values of \(c_{s}\) and \(h\), so we have consolidated them into a single figure.
### Bispectrum
In this subsection, we evaluate the bispectrum for the modes that exit the horizon during the intermediate non-attractor phase. To calculate the bispectrum, one can adopt a two-phase inflationary framework, where the first phase corresponds to the non-attractor phase, and the second phase corresponds to the slow-roll phase [63]. Since the mode functions during non-attractor inflation do not exhibit scale invariance, we adopt the following form for the mode function in the first phase
\[\mathcal{R}_{1}(\tau)=\frac{Hc_{s}\sqrt{-\pi\tau^{3}}}{2\sqrt{2\epsilon_{n}(\tau)}}\bigg{[}c_{+}H_{\nu}^{(1)}\left(-c_{s}k\tau\right)+c_{-}H_{\nu}^{(2)}\left(-c_{s}k\tau\right)\bigg{]}. \tag{2.25}\]
The mode function in the second phase can be obtained once the coefficients in (2.25) are specified; we choose
\[c_{\pm}=\frac{1}{2}\left(\frac{k}{\lambda}\right)^{-\frac{3}{2} \left(c_{s}^{2}-1\right)}\left[\left(\frac{k}{\lambda}\right)^{3(c_{s}^{2}-1) }\pm 1\right]\,, \tag{2.26}\]
where \(\lambda\) is an arbitrary constant. It is straightforward to verify that with these choices of coefficients, the mode function satisfies the normalization condition
\[c_{+}^{2}-c_{-}^{2}=1. \tag{2.27}\]
The mode function in the second phase is then given by
\[\mathcal{R}_{2}(\tau)=\frac{H}{2\sqrt{k^{3}\epsilon(\tau)}}\bigg{[}a_{k}e^{- ik\tau}(1+ik\tau)+b_{k}e^{ik\tau}(1-ik\tau)\bigg{]}, \tag{2.28}\]
with \(a_{k}\) and \(b_{k}\) to be determined by the matching conditions. Based on the definition of \(f_{NL}\), we have the following relation
\[\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}} \rangle=(2\pi)^{3}\delta^{3}\bigg{(}\sum_{i}\overrightarrow{k_{i}}\bigg{)} \frac{6}{5}f_{NL}\big{(}P_{k_{1}}P_{k_{2}}+P_{k_{2}}P_{k_{3}}+P_{k_{3}}P_{k_{1 }}\big{)}, \tag{2.29}\]
where \(P_{k}=|R_{k}(\tau=0)|^{2}\). In order to calculate the bispectrum, we require the Hamiltonian up to third order. Additionally, it is more convenient to compute \(f_{\rm NL}\) using the effective field theory approach. It can be shown that the third-order Hamiltonian, neglecting sub-leading gradient interactions, can be expressed as follows [75]
\[H_{3}^{(1)} =-\frac{a^{2}\eta H^{3}\epsilon\pi^{\prime 2}\pi}{c_{s}(\tau)^{2}}\,, \tag{2.30}\] \[H_{3}^{(2)} =\left[1-\frac{1}{c_{s}(\tau)^{2}}\right]aH^{2}\epsilon\pi^{ \prime 3}\,,\] \[H_{3}^{(3)} =2H^{2}a\epsilon\frac{c_{s}^{\prime}(\tau)}{c_{s}^{3}(\tau)}\pi^ {\prime 2}\pi,\]
where \(c_{s}(\tau)\) is the time dependent sound speed. The total third order Hamiltonian is expressed as \(H_{3}=H_{3}^{(1)}+H_{3}^{(2)}+H_{3}^{(3)}\). Here, the prime denotes the derivative with respect to conformal time, and \(\pi\) represents the Goldstone boson, which is related to the curvature perturbation according to the relations established in [44, 73]
\[\mathcal{R}_{k}=-H\pi_{k}+H\pi_{k}\dot{\pi}_{k}+\frac{\dot{H}}{2}\pi_{k}^{2}\,. \tag{2.31}\]
Since our focus is on the three-point correlation function at the end of inflation, the higher-order terms are suppressed within the slow-roll approximation. Therefore, we can utilize the aforementioned relation at the linear order, leading to an approximate expression for the three-point correlation function as follows
\[\left\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}}\right\rangle \simeq-H^{3}\left\langle\pi_{k_{1}}\pi_{k_{2}}\pi_{k_{3}}\right\rangle. \tag{2.32}\]
By employing the in-in formalism, we can express the three-point correlation function as follows
\[\left\langle\pi_{k_{1}}(0)\pi_{k_{2}}(0)\pi_{k_{3}}(0)\right\rangle=-2\,{\rm Im }\int_{\tau_{i}}^{0}d\tau\left\langle H_{3}(\tau)\pi_{k_{1}}(0)\pi_{k_{2}}(0) \pi_{k_{3}}(0)\right\rangle. \tag{2.33}\]
The integral mentioned above can be divided into three parts: \(\tau_{i}<\tau<\tau_{e}\), \(\tau=\tau_{e}\) and \(\tau_{e}<\tau<0\). At \(\tau=\tau_{e}\) the Hamiltonian at the transition time contributes to the three-point function. To compute the contribution of each part, we utilize the mode functions corresponding to the non-attractor phase in the first part, and those associated with the slow-roll phase in the last part. For \(\tau_{i}<\tau<\tau_{e}\) we have the non-attractor phase and \(c_{s}(\tau)=c_{s}\). In this part both \(H_{3}^{(1)}\) and \(H_{3}^{(2)}\) contribute to the three-point function and we have
\[\frac{\left\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}} \right\rangle^{(1)}_{\text{non-att}}}{2\pi^{2}\mathcal{P}_{\mathcal{R}}}=(2\pi )^{3}\delta^{3}\bigg{(}\sum_{i}\overrightarrow{k_{i}}\bigg{)}\frac{-3h\left( c_{s}^{2}+1\right)(h+12)}{2(h-6)^{2}}\left(\frac{1}{k_{1}^{3}}\frac{1}{k_{2}^{3}}+2 \text{ perms}\right), \tag{2.34}\]
\[\frac{\left\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}} \right\rangle^{(2)}_{\text{non-att}}}{2\pi^{2}\mathcal{P}_{\mathcal{R}}}=(2\pi )^{3}\delta^{3}\bigg{(}\sum_{i}\overrightarrow{k_{i}}\bigg{)}\frac{9h^{2}c_{ s}^{2}\left(c_{s}^{2}-1\right)}{4(h-6)^{2}}\left(\frac{1}{k_{1}^{3}}\frac{1}{k_{2}^{3}}+2 \text{ perms}\right), \tag{2.35}\]
with \(\mathcal{P}_{\mathcal{R}}=\lim_{k\to 0}\frac{k^{3}}{2\pi^{2}}|\mathcal{R}_{3}|^{2}\). To clarify, we use the term 'non-att' to refer to the non-attractor phase. On the other hand for \(\tau_{e}<\tau<0\) we have the slow roll phase. In this stage only \(H_{3}^{(1)}\) contributes to the three-point function with \(c_{s}(\tau)=1\). Computing the three-point function in this part yields
\[\frac{\left\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}} \right\rangle^{(1)}_{\text{SR}}}{2\pi^{2}\mathcal{P}_{\mathcal{R}}}=(2\pi)^{3 }\delta^{3}\bigg{(}\sum_{i}\overrightarrow{k_{i}}\bigg{)}\frac{6h(h+6)e^{3 \Delta N\left(c_{s}^{2}-1\right)}}{(h-6)^{2}}\left(\frac{1}{k_{1}^{3}}\frac{1 }{k_{2}^{3}}+2\text{ perms}\right). \tag{2.36}\]
Finally, we calculate the contribution of the transition term. Since the sound speed changes during the transition, we can express \(c_{s}(\tau)^{2}\) as
\[c_{s}(\tau)^{2}=c_{s}^{2}\theta(\tau_{e}-\tau)+\theta(\tau-\tau_{e}), \tag{2.37}\]
with \(\theta(x)\) being the step function. Writing \(H_{3}^{(3)}=H^{2}a\epsilon\frac{(c_{s}(\tau)^{2})^{\prime}}{c_{s}^{2}(\tau)}\pi^{\prime 2}\pi\) and using (2.9) at \(\tau=\tau_{e}\), the above equation gives the contribution of the transition term to the three-point function as
\[\frac{\langle\mathcal{R}_{k_{1}}\mathcal{R}_{k_{2}}\mathcal{R}_{k_{3}} \rangle_{\text{transition}}^{(3)}}{2\pi^{2}\mathcal{P}_{\mathcal{R}}}=(2\pi)^{3} \delta^{3}\bigg{(}\sum_{i}\overrightarrow{k_{i}}\bigg{)}\frac{3h(2h+6)}{(h-6) ^{2}}e^{-3\Delta N(1-c_{s}^{2})}(c_{s}^{2}-1)\left(\frac{1}{k_{1}^{3}}\frac{1 }{k_{2}^{3}}+2\text{ perms}\right). \tag{2.38}\]
Taking into account all the above contributions the non-Gaussianity then reads as
\[f_{NL}=\frac{5h}{2(h-6)^{2}}\left[2\left((h+3)c_{s}^{2}+3\right)e^{-3\Delta N \left(1-c_{s}^{2}\right)}+\frac{(-3hc_{s}^{4}+(h-24)c_{s}^{2}-2(h+12))}{4} \right]\,. \tag{2.39}\]
The expression mentioned above violates the Maldacena consistency condition, which is similar to what happens in ultra slow-roll inflation. It is noteworthy that when \(c_{s}=1\), the expression reduces to the result obtained in [63]. Furthermore, we can analyze the sharp transition limit where \(h\to-\infty\). In this limit, the expression simplifies to
\[f_{NL}=\frac{5}{8}\left[c_{s}^{2}\left(8e^{-3\Delta N\left(1-c_{s}^{2}\right) }+1\right)-3c_{s}^{4}-2\right]\,. \tag{2.40}\]
It should be noted that the aforementioned outcome should not be mistaken for the outcome obtained in [49]. In that study, the non-attractor phase is assumed to have \(\eta=-6\), which results in a non-constant potential. In contrast, in the present analysis, we observe that for \(|h|\ll 1\), \(f_{NL}\) approaches zero, which is similar to the result obtained in the case of ultra slow-roll inflation [63].
According to [72], the Gaussian bound for the PBH constraints is applicable when \(|f_{NL}|\mathcal{P}_{\mathcal{R}}\ll 1\). For instance, with \(h=-6\), \(c_{s}=\frac{1}{4}\), and \(\Delta N=3\), we obtain \(f_{NL}\simeq 0.3\), and the Gaussian approximation constraint is satisfied with \(\mathcal{P}_{\mathcal{R}}=\mathcal{O}(0.01)\). It is worth noting that the non-Gaussianity parameter presented in (2.39) corresponds to a model with two phases. However, if one were to inquire about the non-Gaussianity parameter in a model with three phases, our analysis for this case confirms that \(f_{NL}=\mathcal{O}(\epsilon)\), which is negligible. Therefore, the aforementioned criterion is satisfied once again.
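Equations (2.39) and (2.40) are straightforward to evaluate; the snippet below encodes them and checks that the sharp-transition limit with \(c_{s}=1\) reproduces the USR value \(f_{NL}=5/2\) quoted in the introduction. The remaining printed values are direct evaluations of the formulas for assumed parameter choices, not numbers taken from the text.

```python
import numpy as np

def f_nl(h, cs, dN):
    """Eq. (2.39): f_NL for sharpness h, sound speed cs and non-attractor duration dN."""
    pre = 5.0 * h / (2.0 * (h - 6.0) ** 2)
    term1 = 2.0 * ((h + 3.0) * cs**2 + 3.0) * np.exp(-3.0 * dN * (1.0 - cs**2))
    term2 = (-3.0 * h * cs**4 + (h - 24.0) * cs**2 - 2.0 * (h + 12.0)) / 4.0
    return pre * (term1 + term2)

def f_nl_sharp(cs, dN):
    """Eq. (2.40): the h -> -infinity limit of eq. (2.39)."""
    return 5.0 / 8.0 * (cs**2 * (8.0 * np.exp(-3.0 * dN * (1.0 - cs**2)) + 1.0)
                        - 3.0 * cs**4 - 2.0)

# Sharp transition with cs = 1 must give the USR value f_NL = 5/2.
print(f_nl_sharp(1.0, 3.0))                    # -> 2.5
# Consistency of (2.39) with (2.40) for a very sharp transition:
print(f_nl(-1e6, 0.5, 3.0), f_nl_sharp(0.5, 3.0))
# Milder transitions suppress f_NL (it vanishes as h -> 0):
print(f_nl(-0.1, 0.5, 3.0))
```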
## 3 PTAs observations and the scalar induced gravitational waves
The evolution of gravitational wave perturbations follows the same description as presented in [74]. Specifically, the evolution of the Fourier mode of gravitational waves can be described as follows
\[h_{k}^{\lambda^{\prime\prime}}(\tau)+2\mathcal{H}h_{k}^{\lambda^{\prime}}( \tau)+k^{2}h_{k}^{\lambda}(\tau)=4S_{k}^{\lambda}(\tau), \tag{3.1}\]
Here, \(\mathcal{H}=\frac{a^{\prime}(\tau)}{a(\tau)}\) represents the conformal Hubble parameter, and \(S_{k}^{\lambda}(\tau)\) corresponds to the source term, which is given by
\[S_{k}^{\lambda}=\int\frac{d^{3}q}{(2\pi)^{3}}\epsilon_{ij}^{\lambda}(\hat{k})q^{i} q^{j}\bigg{(}2\Phi_{q}\Phi_{k-q}+(\mathcal{H}^{-1}\Phi_{q}^{\prime}+\Phi_{q})( \mathcal{H}^{-1}\Phi_{k-q}^{\prime}+\Phi_{k-q})\bigg{)}\,. \tag{3.2}\]
The expression for the source term \(S_{k}^{\lambda}(\tau)\) involves the polarization tensor \(\epsilon_{ij}^{\lambda}\). Furthermore, the Bardeen potential \(\Phi_{k}\) is related to the curvature perturbation through the following relation
\[\Phi_{k}=\frac{2}{3}\Gamma(k\tau)\mathcal{R}_{k}, \tag{3.3}\]
where \(\Gamma(k\tau)\) is the transfer function. The power spectrum of gravitational waves, corresponding to the solution of equation (3.1), is given by
\[\mathcal{P}_{h}(k,\tau)=2\int_{0}^{\infty}dt\int_{-1}^{1}dsF(s,t)\mathcal{P}_ {\mathcal{R}}(kv)\mathcal{P}_{\mathcal{R}}(ku), \tag{3.4}\]
where \(u=\frac{s+t+1}{2}\) and \(v=\frac{t-s+1}{2}\). For details of \(F(s,t)\) see [74]. After calculating the power spectrum of gravitational waves, the observed energy density of produced gravitational waves can be obtained as follows
\[\Omega_{GW}h_{0}^{2}=\Omega_{r}h_{0}^{2}\bigg{(}\frac{g_{*}}{g_{*,e}}\bigg{)}^ {\frac{1}{3}}\Omega_{GW,e}(f), \tag{3.5}\]
with \(\Omega_{r}h_{0}^{2}\simeq 4.2\times 10^{-5}\) in which \(h_{0}=H_{0}/(100\,\mathrm{km\,s^{-1}\,Mpc^{-1}})\) and \(\Omega_{r}\) is the current radiation energy density. Also, \(f=ck/(2\pi)\) and the factor \(\big{(}\frac{g_{*}}{g_{*,e}}\big{)}^{\frac{1}{3}}\) accounts for the change in the number of relativistic degrees of freedom from the end of the radiation-dominated era to the present time. In Figures 3 and 4, the results of the model are presented in comparison with the NANOGrav and future data. In these figures, we have fixed the sharpness parameter to a specific value but compared the results for different values of \(c_{s}\). When comparing different values of \(c_{s}\), it is evident that for \(c_{s}=1/4\) and \(c_{s}=1/2\), the model becomes qualitatively more consistent particularly with the first lines of the NANOGrav data, as they cross the violin diagrams, which have a higher probability according to the analysis in [1]. Conversely, \(c_{s}=3/4\) lies outside the violin diagrams for the first lines and does not explain the data as well as the other values of \(c_{s}\). Moreover, as seen in the left panel of Fig. 3, various values of \(c_{s}\) present a slightly different prediction for future data, as \(c_{s}=1/2\) and \(c_{s}=3/4\) lie in a wider range of predictions. However, in Fig. 4, \(c_{s}=1\) lies in a narrower region of future data.
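For orientation, the transfer to present-day quantities in Eq. (3.5) and the relation \(f=ck/(2\pi)\) amount to simple bookkeeping; in the sketch below the degrees-of-freedom factor and the value of \(\Omega_{GW,e}\) are assumed placeholders, since the latter requires the full integral (3.4).

```python
import numpy as np

# Frequency corresponding to a comoving wavenumber k (in Mpc^-1): f = c k / (2 pi).
c_mpc_per_s = 9.716e-15                 # speed of light in Mpc/s
def freq_hz(k_mpc):
    return c_mpc_per_s * k_mpc / (2.0 * np.pi)

# Eq. (3.5): present-day spectrum from the value at emission.
Omega_r_h2 = 4.2e-5
def omega_gw_today(omega_gw_e, g_ratio=0.32):
    # g_ratio stands for (g_*/g_*,e)^(1/3); its precise value depends on when the
    # relevant modes re-enter the horizon, and is taken as an assumed input here
    # (roughly (3.36/106.75)^(1/3) ~ 0.32 for re-entry deep in the radiation era).
    return Omega_r_h2 * g_ratio * omega_gw_e

# The nHz band probed by PTAs corresponds roughly to k ~ 10^6 - 10^7 Mpc^-1:
for k in [1e6, 1e7]:
    print(f"k = {k:.0e} Mpc^-1  ->  f = {freq_hz(k):.2e} Hz")

# Placeholder peak value of Omega_GW at emission, just to show the bookkeeping:
print(f"Omega_GW h0^2 today = {omega_gw_today(1e-3):.2e}")
```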
## 4 Conclusion
The study of stochastic gravitational waves has become a valuable tool for investigating the physics of both the early and late universe. In this particular work, we have focused on comparing the predictions of a non-attractor model for gravitational waves with the NANOGrav data from PTAs.
The non-attractor model considered in this study is characterized by three distinct phases of inflation. The first and third phases correspond to slow-roll inflation, which is a well-established paradigm in inflationary cosmology. However, the second phase is unique as it
Figure 4: Comparison of the model’s results with NANOGrav and future data for \(h=-12\). We observe that \(c_{s}=1/4\) has a better fit, particularly with the first violins, compared to other values of sound speed. Furthermore, other values of \(c_{s}\) can explain the future observations better than \(c_{s}=1\).
Figure 3: Comparison of the model’s results with NANOGrav and future data for \(h=-6\). It can be seen that \(c_{s}=1/4\) has a better fit, particularly with the first violins, compared to other values of the sound speed. Moreover, \(c_{s}=1/2\) and \(c_{s}=3/4\) lie in a wider range of future predictions.
exhibits non-attractor behavior. This non-attractor phase introduces additional dynamics and features into the inflationary process.
By comparing the predictions of the non-attractor model with the NANOGrav data, we aim to assess the model's viability and its ability to explain the observed gravitational wave signals. This analysis provides valuable insights into the physics of the early universe and enhances our understanding of the inflationary dynamics and the generation of primordial gravitational waves. To compare our results with the data from PTAs, we have also taken into account observations of the PBH formation parameter, \(f_{\rm PBH}\). Notably, our findings align well with the constraints imposed by the \(f_{\rm PBH}\) data.
In this study, we have explored a range of values for the sound speed parameter, \(c_{s}\), and the transition parameter, \(h\). By considering these various scenarios, we aim to comprehensively investigate the implications of different parameter choices and their impact on the predicted outcomes. From the plotted results, it is evident that models with \(c_{s}=1/2\) and \(c_{s}=1/4\) exhibit better consistency with the observational data, especially in the first violins. However, it is important to note that these comparisons are currently qualitative, and in order to provide a more precise assessment, a quantitative approach will be pursued in future studies.
Moreover, we have considered the region of parameter space that yields a good prediction for the value of \(f_{\rm PBH}\) in the NANOGrav frequency range. One may also consider cases in which the power spectrum does not grow as much; this may make the model more consistent with the PTA data for other values of \(c_{s}\). On the other hand, one can study the effects of loop corrections and see how they affect the predictions of the model for the PTA data. Another possibility is to consider the case in which the mode functions are not initially in a Bunch-Davies vacuum and see how the predictions change with different choices of vacuum.
To summarize, this study emphasizes that models with lower values of the sound speed parameter \(c_{s}\) exhibit better agreement with the PTA data. Nonetheless, a meticulous and comprehensive data analysis is required to achieve a more thorough assessment of the parameter space within the model and obtain a better fit to the PTA data. Another aspect worth investigating is the use of a broader range of sound speed and transition parameter values to derive more dependable results for comparisons. However, this will require additional computational power. We intend to conduct these analyses in future research endeavors.
In addition to the aforementioned comparisons, we have extended the analytical results presented in [56] to accommodate arbitrary values of the sound speed parameter, \(c_{s}\). Furthermore, we have computed the non-Gaussianity parameter, \(f_{\rm NL}\) in this model as well. As expected, we find that the non-Gaussianity parameter approaches zero as the transition becomes milder. This aligns with the intuition that a smoother transition would lead to a more Gaussian distribution of primordial fluctuations. By generalizing the analytical results and investigating the non-Gaussianity parameter, we gain a deeper understanding of the statistical properties of the primordial perturbations in our model.
**Acknowledgments:** We would like to express our deep gratitude to Hassan Firouzjahi for his valuable discussions and meticulous comments, which greatly contributed to the development of this work. We are also thankful to Alireza Talebian, Mohammad Hossein Namjoo and Sina Hooshangi for their insightful discussions, which have enriched our understanding of the subject matter. K. A. would like to extend special thanks to the University of SISSA and ICTP for their generous hospitality during the conference "Cosmology in Miramare, 2023," where this paper was in progress.
|
2302.11636 | Do We Really Need Complicated Model Architectures For Temporal Networks? | Recurrent neural network (RNN) and self-attention mechanism (SAM) are the de
facto methods to extract spatial-temporal information for temporal graph
learning. Interestingly, we found that although both RNN and SAM could lead to
a good performance, in practice neither of them is always necessary. In this
paper, we propose GraphMixer, a conceptually and technically simple
architecture that consists of three components: (1) a link-encoder that is only
based on multi-layer perceptrons (MLP) to summarize the information from
temporal links, (2) a node-encoder that is only based on neighbor mean-pooling
to summarize node information, and (3) an MLP-based link classifier that
performs link prediction based on the outputs of the encoders. Despite its
simplicity, GraphMixer attains an outstanding performance on temporal link
prediction benchmarks with faster convergence and better generalization
performance. These results motivate us to rethink the importance of simpler
model architecture. | Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi | 2023-02-22T20:24:35Z | http://arxiv.org/abs/2302.11636v1 | # Do We Really Need Complicated Model Architectures For Temporal Networks?
###### Abstract
Recurrent neural network (RNN) and self-attention mechanism (SAM) are the de facto methods to extract spatial-temporal information for temporal graph learning. Interestingly, we found that although both RNN and SAM could lead to a good performance, in practice neither of them is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: 1) a _link-encoder_ that is only based on multi-layer perceptrons (MLP) to summarize the information from temporal links, 2) a _node-encoder_ that is only based on neighbor mean-pooling to summarize node information, and 3) an MLP-based _link classifier_ that performs link prediction based on the outputs of the encoders. Despite its simplicity, GraphMixer attains an outstanding performance on temporal link prediction benchmarks with faster convergence and better generalization performance. These results motivate us to rethink the importance of simpler model architecture. [Code].
## 1 Introduction
In recent years, temporal graph learning has been recognized as an important machine learning problem and has become the cornerstone behind a wealth of high-impact applications Yu et al. (2018); Bui et al. (2021); Kazemi et al. (2020); Zhou et al. (2020); Cong et al. (2021). Temporal link prediction is one of the classic downstream tasks which focuses on predicting the future interactions among nodes. For example, in an ads ranking system, the user-ad clicks can be modeled as a temporal bipartite graph whose nodes represent users and ads, and links are associated with timestamps indicating when users click ads. Link prediction between them can be used to predict whether a user will click an ad. Designing graph learning models that can capture node evolutionary patterns and accurately predict future links is a crucial direction for many real-world recommender systems.
In temporal graph learning, recurrent neural network (RNN) and self-attention mechanism (SAM) have become the de facto standard for temporal graph learning Kumar et al. (2019); Sankar et al. (2020); Xu et al. (2020); Rossi et al. (2020); Wang et al. (2020), and the majority of the existing works focus on designing neural architectures with one of them and additional components to learn representations from raw data. Although powerful, these methods are conceptually and technically complicated with advanced model architectures. It is non-trivial to understand which parts of the model design truly contribute to its success, and whether these components are indispensable. Thus, in this paper, we aim at answering the following two questions:
Q1: Are RNN and SAM always indispensable for temporal graph learning? To answer this question, we propose GraphMixer, a simple architecture based entirely on multi-layer perceptrons (MLPs) and neighbor mean-pooling, which does not utilize any RNN or SAM in its model architecture (Section 3). Despite its simplicity, GraphMixer could obtain outstanding results when comparing it
against baselines that are equipped with RNNs and SAM. In practice, it achieves state-of-the-art performance in terms of different evaluation metrics (e.g., average precision, AUC, Recall@K, and MRR) on real-world temporal graph datasets, with an even smaller number of model parameters and hyper-parameters, and a conceptually simpler input structure and model architecture (Section 4).
Q2: What are the key factors that lead to the success of GraphMixer? We identify three key factors that contribute to the success of GraphMixer:
1 _The simplicity of GraphMixer's input data and neural architecture._ Different from most deep learning methods that focus on designing conceptually complicated data preparation techniques and technically complicated neural architectures, we choose to simplify the neural architecture and utilize conceptually simpler data as input, both of which could lead to better model performance and better generalization (Section 4.4).
2 _A time-encoding function that encodes any timestamp as an easily distinguishable input vector for GraphMixer_. Different from most of the existing methods that propose to learn the time-encoding function from the raw input data, our time-encoding function utilizes conceptually simple features and is fixed during training. Interestingly, we show that our fixed time-encoding function is preferable to the trainable version (used by most previous studies), and could lead to a smoother optimization landscape, a faster convergence speed, and better generalization (Section 4.2);
3 _A link-encoder that could better distinguish temporal sequences._ Different from most existing methods that summarize sequences using SAM, our encoder module is entirely based on MLPs. Interestingly, our encoder can distinguish temporal sequences that cannot be distinguished by SAM, and it could generalize better due to its simpler neural architecture and lower model complexity (Section 4.3).
To this end, we summarize our contributions as follows:
1 We propose a conceptually and technically simple architecture GraphMixer;
2 Even without RNN and SAM, GraphMixer not only outperforms all baselines but also enjoys a faster convergence and better generalization ability;
3 Extensive study identifies three factors that contribute to the success of GraphMixer.
4 Our results could motivate future research to rethink the importance of the conceptually and technically simpler method.
## 2 Preliminary and existing works
**Preliminary.** Figure 1 is an illustration of a temporal graph. Our goal is to predict whether two nodes are connected at a specific timestamp \(t_{0}\) based on all the available temporal graph information before that timestamp. For example, to predict whether \(v_{1},v_{2}\) are connected at \(t_{0}\), we only have access to the graph structure, node features, and link features with timestamps from \(t_{1}\) to \(t_{6}\).
**Related works.** Most of the temporal graph learning methods are conceptually and technically complicated with advanced neural architectures. It is non-trivial to fully understand the algorithm details without looking into their implementations. Therefore, we select the four most representative and most closely related methods to introduce and compare in more detail.
* _JODIE_ Kumar et al. (2019) is an _RNN-based method_. Let us denote \(\mathbf{x}_{i}(t)\) as the embedding of node \(v_{i}\) at time \(t\), \(\mathbf{x}_{ij}^{\text{link}}(t)\) as the link feature between \(v_{i},v_{j}\) at time \(t\), and \(m_{i}\) as the timestamp at which \(v_{i}\) last interacted with another node. JODIE pre-processes and updates the representation of each node via RNNs. More specifically, when an interaction between \(v_{i},v_{j}\) happens at time \(t\), JODIE updates the temporal embedding using the RNN by \(\mathbf{x}_{i}(t)=\texttt{RNN}\left(\mathbf{x}_{i}(m_{j}),\mathbf{x}_{j}(m_{j}),\mathbf{x}_{ij}^{\text{link}}(t),t-m_{i}\right)\). Then, the dynamic embedding of node \(v_{i}\) at time \(t_{0}\) is computed by \(\mathbf{h}_{i}(t_{0})=(1+(t_{0}-m_{i})\mathbf{w})\cdot\mathbf{x}_{i}(m_{i})\). Finally, the prediction on any node pair at time \(t_{0}\) is computed by \(\texttt{MLP}([\mathbf{h}_{i}(t_{0})\mid\mid\mathbf{h}_{j}(t_{0})])\), where \([\cdot\mid\mid\cdot]\) is the concatenation operation and \(\texttt{MLP}(\mathbf{x})\) applies a 2-layer MLP to \(\mathbf{x}\).
Figure 1: (Left) Temporal graph with nodes \(v_{1},\ldots,v_{5}\), per-link timestamps \(t_{1},\ldots,t_{6}\) indicate when two nodes interact. For example, \(v_{1},v_{2}\) interact at \(t_{1},t_{5}\). (Right) Each node has its node features (e.g., \(\mathbf{x}_{1}^{\text{node}}\) for \(v_{1}\)) and each temporal link has its link features (e.g., \(\mathbf{x}_{1,2}^{\text{link}}(t_{1}),\mathbf{x}_{1,2}^{\text{link}}(t_{5})\) are link features between \(v_{1},v_{2}\) at \(t_{1},t_{5}\)). For scenarios without node or link features, we use all-zero vectors instead.
* _DySAT_ Sankar et al. (2020) is a _SAM-based method_. DySAT requires pre-processing the temporal graph into multiple snapshot graphs by first splitting all timestamps into multiple time-slots, then merging all edges within each time-slot. Let \(\mathcal{G}_{t}(\mathcal{V},\mathcal{E}_{t})\) denote the \(t\)-th snapshot graph. To capture spatial information, DySAT first applies a Graph Attention Network (GAT) Velickovic et al. (2018) on each snapshot graph \(\mathcal{G}_{t}\) independently by \(\mathbf{X}(t)=\texttt{GAT}(\mathcal{G}_{t})\). Then, to capture the temporal information for each node, a Transformer is applied to \(\mathbf{x}_{i}(t)=[\mathbf{X}(t)]_{i}\) at different timestamps by \(\mathbf{h}_{i}(t_{k}),\ldots,\mathbf{h}_{i}(t_{0})=\texttt{Transformer}\big{(}\mathbf{x}_{i}(t_{k}),\ldots,\mathbf{x}_{i}(t_{0})\big{)}\). Finally, the prediction on any node pair at time \(t_{0}\) is computed by \(\texttt{MLP}\big{(}[\mathbf{h}_{i}(t_{0})\mid\mid\mathbf{h}_{j}(t_{0})]\big{)}\).
* _TGAT_Xu et al. (2020) is a _SAM-based method_ that could capture the spatial and temporal information simultaneously. TGAT first generates the time augmented feature of node \(i\) at time \(t\) by concatenating the raw feature \(\mathbf{x}_{i}\) with a trainable time encoding \(\mathbf{z}(t)\) of time \(t\), i.e., \(\mathbf{x}_{i}(t)=[\mathbf{x}_{i}\mid\|\mathbf{z}(t)]\) and \(\mathbf{z}(t)=\cos(t\mathbf{w}+\mathbf{b})\). Then, SAM is applied to the time augmented features and produces node representation \(\mathbf{h}_{i}(t_{0})=\texttt{SAM}(\mathbf{x}_{i}(t_{0}),\{\mathbf{x}_{u}(h_{ u})\mid u\in\mathcal{N}_{t_{0}}(i)\})\), where \(\mathcal{N}_{t_{0}}(i)\) denotes the neighbors of node \(i\) at time \(t_{0}\) and \(h_{u}\) denotes the timestamp of the latest interaction of node \(u\). Finally, the prediction on any node pair at time \(t_{0}\) is computed by \(\texttt{MLP}\big{(}[\mathbf{h}_{i}(t_{0})\mid\mathbf{h}_{j}(t_{0})]\big{)}\).
* _TGN_Rossi et al. (2020) is a mixture of _RNN- and SAM-based method_. In practice, TGN first captures the temporal information using RNN (similarly to JODIE), and then applies graph attention convolution to capture the spatial and temporal information jointly (similarly to TGAT).
Besides, we also consider the following temporal graph learning methods as baselines. These methods can be thought of as extensions of the four most representative methods above, but the underlying ideas behind their model designs are conceptually more complicated. _CAWs_ Wang et al. (2020) is a mixture of _RNN- and SAM-based_ methods that proposes to represent network dynamics by extracting temporal network motifs using temporal random walks. CAWs replaces node identities with the hitting counts of the nodes based on a set of sampled walks to establish the correlation between motifs. Then, the extracted motifs are fed into RNNs to encode each walk as a representation, and SAM is used to aggregate the representations of multiple walks into a single vector for downstream tasks. _TGSRec_ Fan et al. (2021) is a _SAM-based method_ that proposes to unify sequential patterns and temporal collaborative signals to improve the quality of recommendation. To achieve this goal, it advances the SAM by adopting a novel collaborative attention, such that SAM can simultaneously capture collaborative signals from both users and items, as well as consider temporal dynamics inside sequential patterns. _APAN_ Wang et al. (2021) is an _RNN-based method_ that proposes to decouple model inference and graph computation to alleviate the damage of the heavy graph query operation to the speed of model inference. More related works are deferred to Appendix B.
## 3 GraphMixer: A conceptually and technically simple method
In this section, we first introduce the neural architecture of GraphMixer in Section 3.1, then explicitly highlight its differences from the baseline methods in Section 3.2.
### Details on GraphMixer: neural architecture and input data
GraphMixer has three modules: _link-encoder_ is designed to summarize the information from temporal links (e.g., link timestamps and link features); _node-encoder_ is designed to summarize the information from nodes (e.g., node features and node identity); _link classifier_ predicts whether a link exists based on the output of the aforementioned two encoders.
**Link-encoder.** The link-encoder is designed to summarize the temporal link information associated with each node, sorted by timestamps, where temporal link information refers to the timestamp and features of each link. For example, in Figure 1, the temporal link information for node \(v_{2}\) is \(\{(t_{1},\mathbf{x}_{1,2}^{\text{link}}(t_{1})),(t_{3},\mathbf{x}_{2,4}^{\text{link}}(t_{3})),(t_{4},\mathbf{x}_{2,4}^{\text{link}}(t_{4})),(t_{5},\mathbf{x}_{1,2}^{\text{link}}(t_{5}))\}\) and for node \(v_{5}\) is \(\{(t_{2},\mathbf{x}_{3,5}^{\text{link}}(t_{2})),(t_{6},\mathbf{x}_{4,5}^{\text{link}}(t_{6}))\}\). In practice, we only keep the top \(K\) most recent temporal links, where \(K\) is a dataset-dependent hyper-parameter. If multiple links have the same timestamp, we simply keep them in the same order as in the input raw data. To summarize the temporal link information, our link-encoder should be able to distinguish different timestamps (achieved by our time-encoding function) and different temporal link information (achieved by the Mixer module).
* _Time-encoding function._ To distinguish different timestamps, we introduce our time-encoding function \(\cos(t\mathbf{\omega})\), which utilizes features \(\mathbf{\omega}=\{\alpha^{-(i-1)/\beta}\}_{i=1}^{d}\) to encode each timestamp into a \(d\)-dimensional vector. More specifically, we first map each \(t\) to a vector with monotonically exponentially decreasing values \(t_{\mathbf{\omega}}\in(0,t]\) along the feature dimension, then use the cosine function to project all values to \(\cos(t_{\mathbf{\omega}})\in[-1,+1]\). The selection of \(\alpha,\beta\) depends on the scale of the maximum timestamp \(t_{\text{max}}\) we wish to encode: in order to distinguish all timestamps, we have to make sure \(t_{\text{max}}\times\alpha^{-(i-1)/\beta}\to 0\) as \(i\to d\). In practice, we found that \(d=100\) and \(\alpha=\beta=\sqrt{d}\) work well for all datasets. Notice that \(\mathbf{\omega}\) is fixed and will not be updated during training (a minimal implementation sketch, together with the Mixer module below, follows this list). As shown in Figure 2(a), the output of this time-encoding function has two main properties that could help GraphMixer distinguish different timestamps: similar timestamps have similar time-encodings (e.g., the plot of \(t_{1},t_{2}\)), and the larger the timestamp, the later the values in the time-encoding converge to \(+1\) (e.g., the plot of \(t_{1},t_{3}\) or \(t_{1},t_{4}\)).
* _Mixer for information summarizing._ We use a 1-layer MLP-mixer Tolstikhin et al. (2021) to summarize the temporal link information. Figure 2(b) is an example of summarizing the temporal link information of node \(v_{2}\). Recall that the temporal link information of node \(v_{2}\) is \(\{(t_{1},\mathbf{x}_{1,2}^{\text{link}}(t_{1})),(t_{3},\mathbf{x}_{2,4}^{\text{link}}(t_{3})),(t_{4},\mathbf{x}_{2,4}^{\text{link}}(t_{4})),(t_{5},\mathbf{x}_{1,2}^{\text{link}}(t_{5}))\}\). We first encode the timestamps with our time-encoding function, then concatenate each encoding with its corresponding link features. For example, we encode \((t_{1},\mathbf{x}_{1,2}^{\text{link}}(t_{1}))\) as \([\cos((t_{0}-t_{1})\mathbf{\omega})\mid\mid\mathbf{x}_{1,2}^{\text{link}}(t_{1})]\), where \(t_{0}\) is the timestamp at which we want to predict whether the link exists. Then, we stack all the outputs into a matrix and zero-pad it to the fixed length \(K\), denoted as \(\mathbf{T}_{2}(t_{0})\). Finally, we use a 1-layer MLP-mixer with mean-pooling to compress \(\mathbf{T}_{2}(t_{0})\) into a single vector \(\mathbf{t}_{2}(t_{0})\). Specifically, the MLP-mixer takes \(\mathbf{T}_{2}(t_{0})\) as input \[\mathbf{T}_{2}(t_{0})\rightarrow\mathbf{H}_{\text{input}},\;\mathbf{H}_{\text{token}}=\mathbf{H}_{\text{input}}+\mathbf{W}_{\text{token}}^{(2)}\text{GeLU}(\mathbf{W}_{\text{token}}^{(1)}\text{LayerNorm}(\mathbf{H}_{\text{input}})),\] \[\mathbf{H}_{\text{channel}}=\mathbf{H}_{\text{token}}+\text{GeLU}(\text{LayerNorm}(\mathbf{H}_{\text{token}})\mathbf{W}_{\text{channel}}^{(1)})\mathbf{W}_{\text{channel}}^{(2)},\] and outputs the temporal encoding \(\mathbf{t}_{2}(t_{0})=\text{Mean}(\mathbf{H}_{\text{channel}})\). Please notice that the zero-padding operation is important for capturing how often a node interacts with other nodes: a node with more zero-padded rows has fewer temporal neighbors. This information is very important in practice according to our experimental observations.
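The following is a minimal PyTorch sketch of the two pieces above, the fixed time-encoding \(\cos(t\mathbf{\omega})\) and the 1-layer MLP-mixer link-encoder with zero-padding and mean-pooling. Dimensions, layer widths, and variable names are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FixedTimeEncoder(nn.Module):
    """cos(t * omega) with omega_i = alpha^{-(i-1)/beta}; omega is frozen (a buffer)."""
    def __init__(self, dim=100):
        super().__init__()
        alpha = beta = dim ** 0.5
        omega = torch.pow(torch.tensor(alpha), -torch.arange(dim, dtype=torch.float32) / beta)
        self.register_buffer("omega", omega)

    def forward(self, t):                # t: (...,) time gaps t0 - t_i
        return torch.cos(t.unsqueeze(-1) * self.omega)

class MixerLinkEncoder(nn.Module):
    """1-layer MLP-mixer over the K most recent (time-encoding || link-feature) rows."""
    def __init__(self, K, link_dim, time_dim=100, token_hidden=64, channel_hidden=128):
        super().__init__()
        C = time_dim + link_dim
        self.time_enc = FixedTimeEncoder(time_dim)
        self.token_norm = nn.LayerNorm(C)
        self.channel_norm = nn.LayerNorm(C)
        self.token_mlp = nn.Sequential(nn.Linear(K, token_hidden), nn.GELU(),
                                       nn.Linear(token_hidden, K))
        self.channel_mlp = nn.Sequential(nn.Linear(C, channel_hidden), nn.GELU(),
                                         nn.Linear(channel_hidden, C))

    def forward(self, delta_t, link_feats, pad_mask):
        # delta_t: (B, K), link_feats: (B, K, link_dim), pad_mask: (B, K) with 0 on padded slots
        H = torch.cat([self.time_enc(delta_t), link_feats], dim=-1)   # T_i(t0), shape (B, K, C)
        H = H * pad_mask.unsqueeze(-1)                                # zero-pad to fixed length K
        # token mixing: mix information across the K temporal links
        H = H + self.token_mlp(self.token_norm(H).transpose(1, 2)).transpose(1, 2)
        # channel mixing: mix information across the feature channels
        H = H + self.channel_mlp(self.channel_norm(H))
        return H.mean(dim=1)                                          # t_i(t0), shape (B, C)

enc = MixerLinkEncoder(K=20, link_dim=172)
out = enc(torch.rand(4, 20), torch.randn(4, 20, 172), torch.ones(4, 20))
print(out.shape)   # torch.Size([4, 272])
```

The resulting vector \(\mathbf{t}_{i}(t_{0})\) is then concatenated with the node-encoder output described next before being passed to the link classifier.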
**Node-encoder.** The node-encoder is designed to capture the node identity and node feature information via neighbor mean-pooling. Let us define the 1-hop neighbors of node \(v_{i}\) with link timestamps between \(t\) and \(t_{0}\) as \(\mathcal{N}(v_{i};t,t_{0})\). For example, in Figure 1, we have \(\mathcal{N}(v_{2};t_{4},t_{0})=\{v_{1},v_{4}\}\) and \(\mathcal{N}(v_{5};t_{4},t_{0})=\{v_{3}\}\). Then, the node-info feature is computed based on the 1-hop neighbors by \(\mathbf{s}_{i}(t_{0})=\mathbf{x}_{i}^{\text{node}}+\text{Mean}\{\mathbf{x}_{j}^{\text{node}}\mid v_{j}\in\mathcal{N}(v_{i};t_{0}-T,t_{0})\}\), where \(T\) is a dataset-dependent hyper-parameter. In practice, we found that 1-hop neighbors are enough to achieve good performance, and we use one-hot node representations for datasets without node features.
**Link classifier.** Link classifier is designed to classify whether a link exists at time \(t_{0}\) using the output of link-encoder \(\mathbf{t}_{i}(t_{0})\) and the output of node-encoder \(\mathbf{s}_{i}(t_{0})\). Let us denote the node \(v_{i}\)'s representation at time \(t_{0}\) as the concatenation of the above two encodings \(\mathbf{h}_{i}(t_{0})=[\mathbf{s}_{i}(t_{0})\mid\mid\mathbf{t}_{i}(t_{0})]\). Then, the prediction on whether an interaction between node \(v_{i},v_{j}\) happens at time \(t_{0}\) is computed by applying a 2-layer MLP model on \([\mathbf{h}_{i}(t_{0})\mid\mid\mathbf{h}_{j}(t_{0})]\), i.e., \(p_{ij}=\texttt{MLP}([\mathbf{h}_{i}(t_{0})\mid\mid\mathbf{h}_{j}(t_{0})])\).
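Continuing the sketch above, the node-encoder and the link classifier reduce to a masked mean-pooling and a 2-layer MLP. Again, the names and widths below are illustrative assumptions rather than the released code.

```python
import torch
import torch.nn as nn

def node_encoder(x_node, neigh_feats, neigh_mask):
    """s_i(t0) = x_i^node + mean of 1-hop neighbor node features within [t0 - T, t0]."""
    denom = neigh_mask.sum(dim=1, keepdim=True).clamp(min=1.0)           # (B, 1)
    neigh_mean = (neigh_feats * neigh_mask.unsqueeze(-1)).sum(dim=1) / denom
    return x_node + neigh_mean                                           # (B, node_dim)

class LinkClassifier(nn.Module):
    """p_ij = MLP([h_i(t0) || h_j(t0)]) with h(t0) = [s(t0) || t(t0)]."""
    def __init__(self, rep_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * rep_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h_i, h_j):
        return self.mlp(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)       # link logits
```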
### Comparison to existing methods
In the following, we highlight some differences between GraphMixer and other methods, which will be explicitly ablation studied in the experiment section (Section 4.4).
**Temporal graph as undirected graph.** Most of the existing works consider temporal graphs as directed graphs with information only flows from the source node (e.g., users in the recommender system) to the destination nodes (e.g., ads in the recommender system). However, we consider the
Figure 2: (a) Time-encoding function that pre-processes a timestamp \(t\) into a vector \(\cos(t_{\mathbf{\omega}})\). The x-axis is the vector dimension and the y-axis is the cosine value. (b) The link-encoder takes the temporal link information of node \(v_{2}\) as input and outputs a vector \(\mathbf{t}_{2}(t_{0})\) that will be used for link prediction.
temporal graph as an undirected graph. By doing so, if two nodes are frequently connected in the last few timestamps, the "most recent 1-hop neighbors" sampled for the two nodes on the "undirected" temporal graph would be similar. In other words, the similarity between the sampled neighbors provides information on whether two nodes are frequently connected in the last few timestamps, which is essential for temporal graph link prediction. Intuitively, if two nodes are frequently connected in the last few timestamps, they are also likely to be connected in the recent future.
**Selection on neighbors.** Existing methods consider either "multi-hop recent neighbors" or "multi-hop uniformly sampled neighbors", whereas we only consider the "1-hop most recent neighbors". For example, TGAT Xu et al. (2020), DySAT Sankar et al. (2020), and TGSRec Fan et al. (2021) consider multi-hop uniformly sampled neighbors; JODIE Kumar et al. (2019), TGN Rossi et al. (2020), and APAN Wang et al. (2021) maintain the historical node interactions via RNN, which can be thought of as multi-hop recent neighbors; CAWs Wang et al. (2020) samples neighbors by random walks, which can also be thought of as multi-hop recent neighbors. Although sampling more neighbors could provide a sufficient amount of information for models to reason about, it could also carry much spurious or noisy information. As a result, more complicated model architectures (e.g., RNN or SAM) are required to extract useful information from the raw data, which could make the model harder to train and could potentially weaken its generalization ability. Instead, we only take the "most recent 1-hop neighbors" into consideration, which is conceptually simpler and enjoys better performance.
## 4 Experiments
**Dataset.** We conduct experiments on five real-world datasets, including the _Reddit_, _Wiki_, _MOOC_, and _LastFM_ datasets that are used in Kumar et al. (2019) and the _GDELT_ dataset1 which is introduced in Zhou et al. (2022). Besides, since _GDELT_ is the only dataset with both node and link features, we create two variants of it to understand the effect of the training data on model performance: _GDELT-e_ removes the link features from _GDELT_ and keeps the node features and link timestamps, and _GDELT-ne_ removes both the node and link features and only keeps the link timestamps. For each dataset, we use the same \(70\%/15\%/15\%\) chronological splits for the train/validation/test sets as existing works. The detailed dataset statistics are summarized in Appendix A.2.
Footnote 1: The _GDELT_ dataset used in our paper is a sub-sampled version because the original dataset is too big to fit into memory for single-machine training. In practice, we keep \(1\) temporal link per \(100\) continuous temporal link.
**Baselines.** We compare against the baselines introduced in Section 2. Besides, we create two variants of GraphMixer to better understand how node- and link-information contribute to our results: GraphMixer-L only uses the link-encoder and GraphMixer-N only uses the node-encoder. We conduct experiments under the transductive learning setting and use average precision for evaluation. The detailed model configuration and the training and evaluation process are summarized in Appendix A.3. Due to the space limit, more experimental results using _Recall@K_, _MRR_, and _AUC_ as the evaluation metrics, as well as comparisons on _wall-clock time_ and _number of parameters_, are deferred to Appendix C.
**Outline.** We first compare GraphMixer with baselines in Section 4.1 then highlight the three key factors that contribute to the success of GraphMixer in Section 4.2, Section 4.3, and Section 4.4.
| Method | Reddit (L, T) | Wiki (L, T) | MOOC (T) | LastFM (T) | GDELT (L, N, T) | GDELT-ne (T) | GDELT-e (N, T) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| JODIE | 99.30 ± 0.01 | 98.81 ± 0.01 | 99.16 ± 0.01 | 67.51 ± 0.87 | 98.27 ± 0.02 | 97.13 ± 0.02 | 96.96 ± 0.02 |
| DySAT | 98.52 ± 0.01 | 96.71 ± 0.02 | 98.82 ± 0.02 | 76.40 ± 0.77 | 98.52 ± 0.02 | 82.47 ± 0.13 | 97.25 ± 0.02 |
| TGAT | 99.66 ± 0.01 | 97.75 ± 0.02 | 98.43 ± 0.01 | 74.71 ± 0.19 | 98.92 ± 0.02 | 84.30 ± 0.10 | 96.96 ± 0.02 |
| TGN | 99.80 ± 0.01 | 99.55 ± 0.01 | 99.62 ± 0.01 | 82.23 ± 0.50 | 98.15 ± 0.02 | 97.13 ± 0.02 | 96.04 ± 0.02 |
| CAWs-mean | 98.43 ± 0.02 | 97.72 ± 0.03 | 62.99 ± 0.87 | 76.35 ± 0.05 | 98.51 ± 0.12 | 69.20 ± 0.10 | 91.72 ± 0.19 |
| CAWs-attn | 98.51 ± 0.02 | 97.95 ± 0.03 | 63.07 ± 0.82 | 76.31 ± 0.10 | 95.06 ± 0.11 | 69.54 ± 0.19 | 91.54 ± 0.22 |
| TGSRec | 95.21 ± 0.08 | 91.64 ± 0.12 | 83.62 ± 0.34 | 76.91 ± 0.87 | 97.03 ± 0.61 | 97.03 ± 0.61 | 97.03 ± 0.61 |
| APAN | 99.24 ± 0.02 | 98.14 ± 0.01 | 98.70 ± 0.08 | 69.39 ± 0.81 | 95.96 ± 0.10 | 97.38 ± 0.23 | 96.77 ± 0.18 |
| GraphMixer-L | 99.84 ± 0.01 | 99.70 ± 0.01 | 99.81 ± 0.01 | 95.50 ± 0.03 | 98.99 ± 0.02 | 96.14 ± 0.02 | 98.99 ± 0.02 |
| GraphMixer-N | 99.24 ± 0.01 | 90.33 ± 0.01 | 97.35 ± 0.02 | 63.80 ± 0.03 | 94.44 ± 0.02 | 96.00 ± 0.02 | 98.81 ± 0.02 |
| GraphMixer | 99.93 ± 0.01 | 99.85 ± 0.01 | 99.91 ± 0.01 | 96.31 ± 0.02 | 98.89 ± 0.02 | 98.39 ± 0.02 | 98.22 ± 0.02 |

Table 1: Comparison on the average precision score for link prediction. GraphMixer uses one-hot node encodings for the datasets without node features (those whose feature list does not include “N”). For each dataset, we indicate the available features in parentheses (“L” link features, “N” node features, and “T” link timestamps). Red is the best score, Blue is the best score excluding GraphMixer and its variants.
### Main empirical results.
**GraphMixer achieves outstanding performance.** We compare the average precision score with baselines in Table 1. We have the following observations: 1 GraphMixer outperforms all baselines on all datasets. The experiment results provide sufficient support for our argument that neither RNN nor SAM is necessary for temporal graph link prediction. 2 According to the performance of GraphMixer-L on datasets that only have link timestamp information (_MOOC_, _LastFM_, and _GDELT-ne_), we know that our time-encoding function can successfully pre-process each timestamp into a meaningful vector. In fact, we will show later in Section 4.2 that our fixed time-encoding function is preferable to the baselines' trainable version. 3 By comparing the performance of GraphMixer-N and GraphMixer on the _Wiki_, _MOOC_, and _LastFM_ datasets, we know that the node-encoder alone is not enough to achieve a good performance. However, it provides useful information that could benefit the link-encoder. 4 By comparing the performance of GraphMixer-N on _GDELT_ and _GDELT-ne_, we observe that using one-hot encodings outperforms using node features. This also shows the importance of node identity information, because the one-hot encoding only captures such information. 5 More complicated methods (e.g., CAWs, TGSRec, and DDGCL) do not perform well when using the default hyper-parameters2, which is understandable because these methods have more components with an excessive amount of hyper-parameters to tune.
Footnote 2: In fact, we tried different hyper-parameters based on their default values, but the results are similar.
**GraphMixer enjoys better convergence and generalization ability.** To better understand the model performance, we take a closer look at the dynamics of the training accuracy and the generalization gap (the absolute difference between training and evaluation score). The results are reported in Figure 3 and Figure 8: 1 The slope of the training curves reflects the expressive power and convergence speed of an algorithm. From the first-row figures, we can observe that GraphMixer always converges to a high average precision score in just a few epochs, and its training curve is very smooth compared to the baselines. Interestingly, we can observe that the baseline methods cannot always fit the training data, and their training curves fluctuate a lot throughout the training process. 2 The generalization gap reflects how well the model could generalize and how stably it performs on unseen data (the smaller the better). From the second-row figures, the generalization-gap curve of GraphMixer is lower and smoother than the baselines', which indicates the generalization power of GraphMixer.
**GraphMixer enjoys a smoother loss landscape.** To understand why _"GraphMixer converges faster and generalizes better, while baselines suffer from training instability and generalize poorly"_, we explore the loss landscape by using the visualization tools introduced in Li et al. (2018). We illustrate the loss landscape in Figure 4 by calculating and visualizing the loss surface along two random directions near the pre-trained optimal parameters. The x- and y-axis indicate how much the optimal solution is stretched along the two random directions, and the optimal point is where the x- and y-axis are zero. 1 From Figure 4(a), 4(d), we see that GraphMixer enjoys a smoother landscape with a flatter surface at the optimal point; the slope becomes steeper when stretching along the two random directions. The steeper slope on the periphery explains why GraphMixer could converge fast, and the flatter surface at the optimal point explains why it could generalize well. 2 Surprisingly, we find from Figure 4(b), 4(c), 4(e), 4(f) that the baselines have a non-smooth landscape with many spikes on their surfaces. This observation provides a sufficient explanation for the training instability and poor generalization of the baselines shown in Figures 3 and 8. Interestingly, as we will show later in Section 4.2, the trainable time-encoding
Figure 3: Comparison on the training set average precision and generalization gap for the first \(100\) training epochs. Results on other datasets can be found in Figure 8.
function in baselines is the key to this non-smooth landscape issue. Replacing it with our fixed time-encoding function could flatten the landscape and boost their model performance.
### On the importance of our fixed time-encoding function.
Existing works (i.e., JODIE, TGAT, and TGN) leverage a trainable time-encoding function \(\mathbf{z}(t)=\cos(t\mathbf{w}^{\top}+\mathbf{b})\) to represent timestamps3. However, we argue that using a trainable time-encoding function could cause instability during training because its gradient \(\frac{\partial\cos(t\mathbf{w}+\mathbf{b})}{\partial\mathbf{w}}=-t\sin(t\mathbf{w}+\mathbf{b})\) scales proportionally to the timestamp, which could lead to training instability and cause the baselines' non-smooth landscape issue shown in Figure 4. As an alternative, we utilize the fixed time-encoding function \(\mathbf{z}(t)=\cos(t\mathbf{\omega})\) with fixed features \(\mathbf{\omega}\) that can capture the relative difference between two timestamps (introduced in Section 3.1). To verify this, we introduce a simple experiment to test whether the time-encoding functions (both our fixed version and the baselines' trainable version) are expressive enough that a simple linear classifier can distinguish the time-encodings of two different timestamps. Specifically, our goal is to classify whether \(t_{1}>t_{2}\) by learning a linear classifier on \([\mathbf{z}(t_{1})\mid\mid\mathbf{z}(t_{2})]\). During training, we randomly generate two timestamps \(t_{1},t_{2}\in[0,10^{6}]\) and ask a fully connected layer to classify whether one timestamp is greater than the other. As shown in Figure 5(a), the trainable time-encoding function (orange curve) suffers from an unstable, exploding gradient (left upper figure) and its performance remains almost the same during the training process (left lower figure). However, our fixed time-encoding function (blue curve) does not have the exploding-gradient issue and can quickly achieve high accuracy within several iterations. Meanwhile, we compare the parameter trajectories of the two models in Figure 5(b). We observe that the change of parameters in the trainable time-encoding function is drastically larger than that in our fixed version. A huge change in the weight parameters could deteriorate the model's performance. Most importantly, by replacing the baselines' trainable time-encoding function with our fixed version, most baselines obtain a smoother optimization landscape (Figure 6) and better model performance (Table 2), which further verifies our argument that our fixed time-encoding function is preferable to the trainable version.
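A minimal sketch of this toy experiment is given below, contrasting the frozen encoding \(\cos(t\mathbf{\omega})\) with a trainable \(\cos(t\mathbf{w}+\mathbf{b})\) on the task of deciding whether \(t_{1}>t_{2}\). Hyper-parameters are illustrative, and the accuracy is read off the last training batch only.

```python
import torch
import torch.nn as nn

d, t_max = 100, 1.0e6
alpha = beta = d ** 0.5
omega = torch.pow(torch.tensor(alpha), -torch.arange(d, dtype=torch.float32) / beta)

def fixed_enc(t):                                  # frozen cos(t * omega)
    return torch.cos(t.unsqueeze(-1) * omega)

class TrainableEnc(nn.Module):                     # cos(t * w + b) with trainable w, b
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(1e-3 * torch.randn(d))
        self.b = nn.Parameter(torch.zeros(d))
    def forward(self, t):
        return torch.cos(t.unsqueeze(-1) * self.w + self.b)

def run(encoder, extra_params, steps=500, batch=256):
    clf = nn.Linear(2 * d, 1)                      # linear classifier on [z(t1) || z(t2)]
    opt = torch.optim.Adam(list(clf.parameters()) + extra_params, lr=1e-3)
    for _ in range(steps):
        t1, t2 = torch.rand(batch) * t_max, torch.rand(batch) * t_max
        logits = clf(torch.cat([encoder(t1), encoder(t2)], dim=-1)).squeeze(-1)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, (t1 > t2).float())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ((logits > 0) == (t1 > t2)).float().mean().item()   # last-batch accuracy

print("fixed time-encoding    :", run(fixed_enc, []))
trainable = TrainableEnc()
print("trainable time-encoding:", run(trainable, list(trainable.parameters())))
```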
| Method | Reddit | Wiki | MOOC | LastFM | GDELT-ne | GDELT-e |
| --- | --- | --- | --- | --- | --- | --- |
| JODIE | 99.30 → 99.76 | 98.81 → 99.00 | 99.16 → 99.17 | 67.51 → 79.89 | 97.13 → 98.23 | 96.96 → 96.96 |
| TGAT | 98.66 → 99.48 | 96.71 → 98.55 | 98.43 → 99.33 | 54.77 → 76.26 | 84.30 → 92.31 | 96.96 → 96.28 |
| TGN | 99.80 → 99.83 | 99.55 → 99.54 | 99.62 → 99.62 | 82.23 → 87.58 | 98.15 → 98.25 | 96.04 → 97.34 |

Table 2: Comparison on the average precision score with a fixed/trainable time-encoding function (TEF). The results before “→” are for the trainable TEF (same as Table 1) and those after “→” are for the fixed TEF.
Figure 4: Comparison on the training loss landscape. Results on other datasets and other baselines can be found in Appendix E.
### On the importance of MLP-mixer in GraphMixer's link-encoder
In this section, we aim to achieve a deeper understanding of the expressive power of the link-encoder by answering the following two questions: "_Can we replace the MLP-mixer in the link-encoder with self-attention?_" and "_Why is the MLP-mixer a good alternative to self-attention?_" To answer these questions, let us first conduct experiments by replacing the MLP-mixer in the link-encoder with full/1-hop self-attention and sum/mean-pooling, where full self-attention is widely used in Transformers and 1-hop self-attention is widely used in graph attention networks. As shown in Table 3, GraphMixer suffers from performance degradation when using self-attention: the best performance is achieved when using the MLP-mixer with zero-padding, while the model performance drops slightly when using self-attention with sum-pooling (rows 2 and 4), and drops significantly when using self-attention with mean-pooling (rows 3 and 5). Self-attention with mean-pooling has weaker performance because it cannot distinguish "temporal sequences with identical link timestamps and features" (e.g., it cannot distinguish \([a_{1},a_{1}]\) from \([a_{1}]\)) and it cannot explicitly capture "the length of temporal sequences" (e.g., it cannot tell whether \([a_{1},a_{2}]\) is longer than \([a_{3}]\)), both of which are very important for GraphMixer to understand how frequently a node interacts with other nodes. We explicitly verify this in Figure 7 by first generating two temporal sequences (with timestamps but without link features), then encoding the timestamps into vectors via the time-encoding function, and asking full self-attention and the MLP-mixer to distinguish them. As shown in Figure 7, self-attention with mean-pooling cannot distinguish two temporal sequences with identical timestamps (because all the self-attention weights are equivalent if the encoded link inputs are identical) and cannot capture the sequence length (because mean-pooling simply averages the inputs and does not take the input size into consideration). However, the MLP-mixer in GraphMixer can distinguish the above two sequences because of zero-padding. Fortunately, the aforementioned two weaknesses can be alleviated by replacing the mean-pooling in temporal self-attention with sum-pooling, which explains why using sum-pooling brings better model performance than mean-pooling. However, since self-attention modules have more parameters and are harder to train, they could generalize poorly when the downstream task is not too complicated.
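The small sketch below illustrates the argument numerically. An assumed single-head attention layer stands in for the temporal SAM, and a naive mean over zero-padded slots stands in for the mixer's zero-padded input (which the mixer then processes further before pooling).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, K = 8, 4
attn = nn.MultiheadAttention(embed_dim=d, num_heads=1, batch_first=True)
a1 = torch.randn(1, 1, d)                              # one encoded temporal link

# identical link encodings, different sequence lengths: [a1] vs. [a1, a1]
seq1, seq2 = a1, a1.repeat(1, 2, 1)
out1 = attn(seq1, seq1, seq1)[0].mean(dim=1)
out2 = attn(seq2, seq2, seq2)[0].mean(dim=1)
print(torch.allclose(out1, out2, atol=1e-6))           # True: mean-pooled SAM cannot tell them apart

# zero-padding both sequences to the fixed length K keeps the length information
pad1 = torch.cat([seq1, torch.zeros(1, K - 1, d)], 1).mean(dim=1)
pad2 = torch.cat([seq2, torch.zeros(1, K - 2, d)], 1).mean(dim=1)
print(torch.allclose(pad1, pad2))                      # False: the zero-padded means differ
```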
### Key factors to the better performance
One of the major factors that contributes to GraphMixer's success is the simplicity of GraphMixer's neural architecture and input data. Using conceptually simple input data that is well aligned with the labels allows a simple neural network model to capture the underlying mapping from the inputs to
Figure 5: (a) Comparison on the _gradient / parameters norm_ and _accuracy_ at each iteration. (b) Comparison on the trajectories of parameter change, where the radius is \(r_{t}=\|\delta_{t}\|/\|\delta_{0}\|\), the angle is \(\theta_{t}=\arccos\langle\delta_{t}/\|\delta_{t}\|_{2},\delta_{0}/\|\delta_{0}\|_{2}\rangle\), and \(\delta_{t}=\mathbf{w}_{t}-\mathbf{w}^{*}\) is the difference between \(\mathbf{w}_{t}\) and the optimal point \(\mathbf{w}^{*}\). The more the model parameters change during training, the larger the semicircle.
Figure 6: Comparison on the training loss landscape with the _fixed time-encoding function_. Results on other datasets and baselines can be found in Appendix F.
their labels, which could lead to a better generalization ability. In the following, we explicitly verify this by comparing the performance of GraphMixer with different input data in Table 4: 1 Recall from Section 3.2 that the "most recent 1-hop neighbors" sampled for the two nodes on the "undirected" temporal graph could provide information on whether two nodes are frequently connected in the last few timestamps, which is essential for temporal graph link prediction. To verify this, we conduct an ablation study comparing the model performance on directed and undirected temporal graphs. As shown in the 1st and 2nd rows of Table 4, changing from the undirected to the directed graph results in a significant performance drop because such information is missing. 2 Recall from Section 3.1 that instead of feeding the raw timestamps to GraphMixer and encoding each timestamp with a trainable time-encoding function, GraphMixer encodes the timestamps via our fixed time-encoding function and feeds the encoded representation to GraphMixer, which reduces the complexity of learning a time-encoding function from the data. This is verified by the 3rd and 4th rows of Table 4, where using the pre-encoded time information gives better performance. 3 Selecting input data that has a similar distribution in the training and evaluation sets could also improve the evaluation performance. For example, using relative timestamps (i.e., each neighbor's timestamp is measured relative to the root node's timestamp \(t_{0}\)) is better than absolute timestamps (e.g., using Unix timestamps) because the absolute timestamps in the evaluation and training sets come from different ranges under chronological splits, but they are very likely to overlap if relative timestamps are used. As shown in the 3rd to 6th rows of Table 4, using relative time information always gives better model performance than using absolute time information. 4 Selecting the most representative neighbors for each node also matters. For example, we found the 1-hop most recently interacted neighbors to be the most representative for link prediction. Switching to either 2-hop neighbors or uniformly sampled neighbors hurts the model performance, as shown in the 7th to 10th rows of Table 4.
## 5 Conclusion
In this paper, we propose a conceptually and technically simple architecture GraphMixer for temporal link prediction. GraphMixer not only outperforms all baselines but also enjoys a faster convergence
| Ablation | Input data | Reddit | Wiki | MOOC | LastFM |
| --- | --- | --- | --- | --- | --- |
| Directed vs. undirected temporal graph | Directed temporal graph | 99.69 | 88.37 | 97.87 | 78.34 |
| | Undirected temporal graph (default) | 99.93 | 99.85 | 99.91 | 96.31 |
| Time information | Relative timestamp \((t_{i}-t_{0})\) | 99.79 | 99.80 | 99.81 | 95.32 |
| | Relative time-encoding \(\cos((t_{i}-t_{0})\mathbf{\omega})\) (default) | 99.93 | 99.85 | 99.91 | 96.31 |
| | Absolute timestamp \(t_{i}\) | 98.90 | 98.23 | 98.73 | 92.25 |
| | Absolute time-encoding \(\cos(t_{i}\mathbf{\omega})\) | 99.52 | 99.13 | 99.74 | 95.28 |
| Neighbor selection | 2-hop most recent neighbors | 99.39 | 98.05 | 99.11 | 89.36 |
| | 1-hop most recent neighbors (default) | 99.93 | 99.85 | 99.91 | 96.31 |
| | 2-hop uniformly sampled neighbors | 97.66 | 92.57 | 98.87 | 65.72 |
| | 1-hop uniformly sampled neighbors | 98.19 | 94.74 | 98.40 | 60.02 |

Table 4: Comparison on the average precision score of GraphMixer with different input data. The rows marked “(default)” are identical to our default setting.
Figure 7: (a) We generate identical timestamp sequences of different lengths, then ask the MLP-mixer and GAT to distinguish whether the generated sequences are identical. (b) We generate random sequences of different lengths, then ask the MLP-mixer and GAT to classify which sequence is longer.
speed and better generalization ability. An extensive study identifies three key factors that contribute to the success of GraphMixer and highlights the importance of simpler neural architecture and input data structure. An interesting future direction, not limited to temporal graph learning, is designing algorithms that could automatically select the best input data and data pre-processing strategies for different downstream tasks.
## Acknowledgements
This work was supported in part by NSF grant 2008398. Majority of this work was completed during Weilin Cong's internship at Meta AI under the mentorship of Si Zhang. We also extend our gratitude to Long Jin for his co-mentorship and for his contribution to the idea of using MLP-Mixer on graphs.
|
2301.08637 | Error bounds for kernel-based approximations of the Koopman operator | We consider the data-driven approximation of the Koopman operator for
stochastic differential equations on reproducing kernel Hilbert spaces (RKHS).
Our focus is on the estimation error if the data are collected from long-term
ergodic simulations. We derive both an exact expression for the variance of the
kernel cross-covariance operator, measured in the Hilbert-Schmidt norm, and
probabilistic bounds for the finite-data estimation error. Moreover, we derive
a bound on the prediction error of observables in the RKHS using a finite
Mercer series expansion. Further, assuming Koopman-invariance of the RKHS, we
provide bounds on the full approximation error. Numerical experiments using the
Ornstein-Uhlenbeck process illustrate our results. | Friedrich Philipp, Manuel Schaller, Karl Worthmann, Sebastian Peitz, Feliks Nüske | 2023-01-20T15:36:45Z | http://arxiv.org/abs/2301.08637v3 | # Error bounds for kernel-based approximations of the Koopman operator
###### Abstract.
We consider the data-driven approximation of the Koopman operator for stochastic differential equations on reproducing kernel Hilbert spaces (RKHS). Our focus is on the estimation error if the data are collected from long-term ergodic simulations. We derive both an exact expression for the variance of the kernel cross-covariance operator, measured in the Hilbert-Schmidt norm, and probabilistic bounds for the finite-data estimation error. Moreover, we derive a bound on the prediction error of observables in the RKHS using a finite Mercer series expansion. Further, assuming Koopman-invariance of the RKHS, we provide bounds on the full approximation error. Numerical experiments using the Ornstein-Uhlenbeck process illustrate our results.
## 1. Introduction
The Koopman operator [23] has become an essential tool in the modeling process of complex dynamical systems based on simulation or measurement data. The philosophy of the Koopman approach is that for a (usually non-linear) dynamical system on a finite-dimensional space, the time-evolution of expectation values of observable functions satisfies a linear differential equation. Hence, after "lifting" the dynamical system into an infinite-dimensional function space of observables, linear methods become available for its analysis. The second step is then to notice that traditional Galerkin approximations of the Koopman operator can be consistently estimated from simulation or measurement data, establishing the fundamental connection between the Koopman approach and modern data science. Koopman methods have found widespread application in system identification [4], control [24, 42, 25, 17, 49], sensor placement [31], molecular dynamics [50, 44, 35, 36, 18, 56], and many other fields. We refer to [19, 33, 5] for comprehensive reviews of the state of the art.
The fundamental numerical method for the Koopman approach is _Extended Dynamic Mode Decomposition_ (EDMD) [54], which allows to learn a Galerkin approximation of the Koopman operator from finite (simulation or measurement) data on a subspace spanned by a finite set of observables, often called dictionary. An appropriate choice of said dictionary is a challenging problem. In light of this issue, representations of the Koopman operator on large approximation spaces have been considered in recent years, including deep neural networks [29, 32], tensor product spaces [21, 37], and _reproducing kernel Hilbert spaces_ (RKHS) [55, 11, 20]. In the work [20] it was shown that by means of the integral operator associated to an RKHS, it is possible to construct a type of Galerkin approximation of the Koopman operator. The central object are (cross-)covariance operators, which can be estimated from data, using only evaluations of the feature map. Due to the relative simplicity of the resulting numerical algorithms on the one hand, and the rich approximation properties of reproducing kernels on the other hand, kernel methods have emerged as a promising candidate to overcome the fundamental problem of dictionary selection.
A key question is the quantification of the estimation error for (compressed) Koopman operators. For finite dictionaries and independent, identically distributed (i.i.d.) samples, error estimates were provided in [26, 38], see also [58] for the ODE case and [49] for an extension to control-affine systems. The
In this paper, we consider a class of _non-degenerate_ operators \(C^{t}_{\mathbb{H}}\) on \(\mathbb{H}\), where \(C^{t}_{\mathbb{H}}\) is a bounded linear operator on \(\mathbb{H}\), and \(\mathbb{H}\) is a bounded linear operator on \(\mathbb{H}\). The operator \(C^{t}_{\mathbb{H}}\) is defined as
\[C^{t}_{\mathbb{H}}=\sum_{k=0}^{\infty}\int(C^{t}_{\mathbb{H}}\psi)(x)k(x,\cdot) \,d\mu(x),\qquad\psi\in\mathbb{H},\]
our bounds for the estimation error of the cross-covariance operator are accurate, and that the corrections we introduced to account for the inter-dependence of the data are indeed required. Concerning the prediction error, we find our theoretical bounds still far too conservative, which reflects the problem of accounting for the effect of inverting the mass matrix in traditional EDMD. This finding indicates that additional research is required on this end.
## 2. Preliminaries
In this section, we provide the required background on stochastic differential equations (Section 2.1), reproducing kernel Hilbert spaces (Section 2.2), Koopman operators (Section 2.3), and their representations on an RKHS (Section 2.4).
### Stochastic differential equations
Let \(\mathcal{X}\subset\mathbb{R}^{d}\) and let a stochastic differential equation (SDE) with drift vector field \(b:\mathcal{X}\to\mathbb{R}^{d}\) and diffusion matrix field \(\sigma:\mathcal{X}\to\mathbb{R}^{d\times d}\) be given, i.e.,
\[dX_{t}=b(X_{t})\,dt+\sigma(X_{t})\,dW_{t}, \tag{2.1}\]
where \(W_{t}\) is \(d\)-dimensional Brownian motion. We assume that both \(b\) and \(\sigma\) are Lipschitz-continuous and that \((1+\|\cdot\|_{2})^{-1}[\|b\|_{2}+\|\sigma\|_{F}]\) is bounded on \(\mathcal{X}\). Then [39, Theorem 5.2.1] guarantees the existence of a unique solution \((X_{t})_{t\geq 0}\) to (2.1).
The solution \((X_{t})_{t\geq 0}\) constitutes a continuous-time Markov process whose transition kernel will be denoted by \(\rho_{t}:\mathcal{X}\times\mathcal{B}_{\mathcal{X}}\to\mathbb{R}\), where \(\mathcal{B}_{\mathcal{X}}\) denotes the Borel \(\sigma\)-algebra on \(\mathcal{X}\). Then \(\rho_{t}(x,\cdot)\) is a probability measure for all \(x\in\mathcal{X}\), and for each \(A\in\mathcal{B}_{\mathcal{X}}\) we have that \(\rho_{t}(\cdot,A)\) is a representative of the conditional probability for \(A\) containing \(X_{t}\) given \(X_{0}=\,\cdot\,\), i.e.,
\[\rho_{t}(x,A)=\mathbb{P}(X_{t}\in A|X_{0}=x)\quad\text{for $\mu$-a.e. $x\in\mathcal{X}$.}\]
Throughout, we will assume the existence of an _invariant_ (Borel) _probability measure_\(\mu\) for the Markov process \((X_{t})_{t\geq 0}\), i.e., we have
\[\int\rho_{t}(x,A)\,d\mu(x)=\mu(A) \tag{2.2}\]
for all \(t\geq 0\).
In addition to being invariant, we will often assume that \(\mu\) is _ergodic_, meaning that for any \(t>0\) every \(\rho_{t}\)-invariant set \(A\) (that is, \(\rho_{t}(x,A)=1\) for all \(x\in A\)) satisfies \(\mu(A)\in\{0,1\}\). In this case, the Birkhoff ergodic theorem [15, Theorem 9.6] (see also (D.1)) and its generalizations apply, and allow us to calculate expectations w.r.t. \(\mu\) using long-time averages over simulation data.
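As an illustration of this setting, consider a scalar Ornstein-Uhlenbeck process, the example used in the numerical experiments later on. The sketch below generates one long Euler-Maruyama trajectory of (2.1) and uses the ergodic theorem to approximate a \(\mu\)-expectation by a time average; step size, horizon, and parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# dX_t = -alpha * X_t dt + sigma dW_t, with invariant measure mu = N(0, sigma^2 / (2*alpha))
alpha, sigma = 1.0, 1.0
dt, n_steps = 1.0e-3, 1_000_000

x = np.empty(n_steps)
x[0] = 0.0
dW = np.sqrt(dt) * rng.standard_normal(n_steps - 1)
for i in range(n_steps - 1):                       # Euler-Maruyama discretization of (2.1)
    x[i + 1] = x[i] - alpha * x[i] * dt + sigma * dW[i]

# Birkhoff: the time average along one ergodic trajectory approximates E_mu[f(X)]
print("time average of X^2:", np.mean(x**2))       # should be close to sigma^2/(2*alpha) = 0.5
```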
We let \(\|\cdot\|_{p}\) denote the \(L^{p}_{\mu}(\mathcal{X})\)-norm, \(1\leq p<\infty\). In the particular case \(p=2\), scalar product and norm on the Hilbert space \(L^{2}_{\mu}(\mathcal{X})\) will be denoted by \(\langle\cdot\,,\cdot\rangle_{\mu}\) and \(\|\cdot\|_{\mu}\), respectively.
Figure 2. Illustration of main results
### Reproducing kernel Hilbert spaces
In what follows, let \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) be a continuous and symmetric positive definite kernel, that is, we have \(k(x,y)=k(y,x)\) for all \(x,y\in\mathcal{X}\) and
\[\sum_{i,j=1}^{m}k(x_{i},x_{j})c_{i}c_{j}\,\geq\,0\]
for all choices of \(x_{1},\ldots,x_{m}\in\mathcal{X}\) and \(c_{1},\ldots,c_{m}\in\mathbb{R}\). It is well known that \(k\) generates a so-called _reproducing kernel Hilbert space_ (RKHS) [1, 6, 40]\((\mathbb{H},\langle\,\cdot\,,\cdot\rangle)\) of continuous functions, such that for \(\psi\in\mathbb{H}\) the _reproducing property_
\[\psi(x)=\langle\psi,\Phi(x)\rangle,\qquad x\in\mathcal{X}, \tag{2.3}\]
holds, where \(\Phi:\mathcal{X}\to\mathbb{H}\) denotes the so-called _feature map_ corresponding to the kernel \(k\), i.e.,
\[\Phi(x)=k(x,\cdot),\qquad x\in\mathcal{X}.\]
In the sequel, we shall denote the norm on \(\mathbb{H}\) by \(\|\cdot\|\) and the kernel diagonal by \(\varphi\):
\[\varphi(x)=k(x,x),\qquad x\in\mathcal{X}.\]
Then for \(x\in\mathcal{X}\) we have
\[\|\Phi(x)\|^{2}=\langle\Phi(x),\Phi(x)\rangle=\langle k(x,\cdot),k(x,\cdot) \rangle=k(x,x)=\varphi(x).\]
We shall frequently make use of the following estimate:
\[|k(x,y)|=|\langle\Phi(x),\Phi(y)\rangle|\leq\|\Phi(x)\|\|\Phi(y)\|=\sqrt{ \varphi(x)\varphi(y)}.\]
In particular, it shows that \(k\) is bounded if and only if its diagonal \(\varphi\) is bounded.
By \(\mathcal{L}_{\mu}^{p}(\mathcal{X})\), \(p\in[1,\infty)\), we denote the space of all _functions_ (not equivalence classes) on \(\mathcal{X}\) with a finite \(p\)-norm \(\|\cdot\|_{p}\). Henceforth, we shall impose the following
**Compatibility Assumptions:**
1. \(\varphi\in\mathcal{L}_{\mu}^{2}(\mathcal{X})\).
2. If \(\psi\in L_{\mu}^{2}(\mathcal{X})\) such that \(\int\!\int k(x,y)\psi(x)\psi(y)\,d\mu(x)\,d\mu(y)=0\), then \(\psi=0\).
3. If \(\psi\in\mathbb{H}\) such that \(\psi(x)=0\) for \(\mu\)-a.e. \(x\in\mathcal{X}\), then \(\psi(x)=0\) for all \(x\in\mathcal{X}\).
Many of the statements in this subsection can also be found in [52, Chapter 4]. However, as we aim to present the contents in a self-contained way, we provide the proofs in Appendix A.
The following lemma explains the meaning of the compatibility assumptions (A1) and (A2).
**Lemma 2.1**.: _Under the assumption that \(\varphi\in\mathcal{L}_{\mu}^{1}(\mathcal{X})\) (in particular, under assumption (A1)), we have that \(\mathbb{H}\subset\mathcal{L}_{\mu}^{2}(\mathcal{X})\) with_
\[\|\psi\|_{\mu}\,\leq\,\sqrt{\|\varphi\|_{1}}\cdot\|\psi\|,\qquad\psi\in \mathbb{H}, \tag{2.4}\]
_and assumption (A2) is equivalent to the density of \(\mathbb{H}\) in \(\mathcal{L}_{\mu}^{2}(\mathcal{X})\)._
We have meticulously distinguished between functions and equivalence classes as there might be distinct functions \(\phi,\psi\in\mathbb{H}\) which are equal \(\mu\)-almost everywhere, i.e., \(\phi=\psi\) in \(L_{\mu}^{2}(\mathcal{X})\). The compatibility assumption (A3) prohibits this situation so that \(\mathbb{H}\) can in fact be seen as a subspace of \(L_{\mu}^{2}(\mathcal{X})\), which is then densely and continuously embedded.
**Remark 2.2**.: (a) Condition (A1) implies \(k\in L^{4}_{\mu\otimes\mu}(\mathcal{X}\times\mathcal{X})\), where \(\mu\otimes\mu\) is the product measure on \(\mathcal{X}\times\mathcal{X}\).
(b) The density of \(\mathbb{H}\) in \(L^{2}_{\mu}(\mathcal{X})\) is strongly related to the term _universality_ in the literature, see [53].
(c) Condition (A3) holds if \(\operatorname{supp}\mu=\mathcal{X}\), cf. [52, Exercise 4.6].
It immediately follows from
\[\int|\psi(x)|\|\Phi(x)\|\,d\mu(x)\leq\|\psi\|_{\mu}\|\varphi\|_{1}^{1/2}, \tag{2.5}\]
for \(\psi\in L^{2}_{\mu}(\mathcal{X})\) that the linear operator \(\mathcal{E}:L^{2}_{\mu}(\mathcal{X})\to\mathbb{H}\), defined by
\[\mathcal{E}\psi:=\int\psi(x)\Phi(x)\,d\mu(x),\qquad\psi\in L^{2}_{\mu}( \mathcal{X}),\]
is well defined (as a Bochner integral in \(\mathbb{H}\)) and bounded with operator norm not larger than \(\|\varphi\|_{1}^{1/2}\).
**Remark 2.3**.: The so-called _kernel mean embedding_\(\mathcal{E}_{k}\), mapping probability measures \(\nu\) on \(\mathcal{X}\) to the RKHS \(\mathbb{H}\), is defined by \(\mathcal{E}_{k}\nu=\int\Phi(x)\,d\nu(x)\), see, e.g., [51]. Hence, we have \(\mathcal{E}\psi=\mathcal{E}_{k}\nu\) with \(d\nu=\psi\,d\mu\).
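In computations, \(\mathcal{E}\psi\) is typically replaced by a sample average over draws from \(\mu\). A minimal sketch of this idea follows; the helper name, the observable, and the kernel are chosen purely for illustration.

```python
import numpy as np

def empirical_embedding(psi, samples, kernel):
    """Return x -> (1/m) sum_k psi(x_k) k(x_k, x), an empirical surrogate for (E psi)(x)."""
    m = len(samples)
    return lambda x: sum(psi(xk) * kernel(xk, x) for xk in samples) / m

# illustration with hypothetical choices: samples from a standard normal, squared observable
rng = np.random.default_rng(2)
samples = rng.normal(size=1000)
Epsi = empirical_embedding(lambda x: x ** 2, samples, lambda a, b: np.exp(-(a - b) ** 2))
print(Epsi(0.0))
```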
Note that the operator \(\mathcal{E}\) is not an embedding in strict mathematical terms. The terminology _embedding_ rather applies to its adjoint \(\mathcal{E}^{*}\). Indeed, the operator \(\mathcal{E}\) enjoys the simple but important property:
\[\langle\mathcal{E}\psi,\eta\rangle=\int\psi(x)\langle\Phi(x),\eta\rangle\,d\mu (x)=\int\psi(x)\eta(x)\,d\mu(x)=\langle\psi,\eta\rangle_{\mu} \tag{2.6}\]
for \(\psi\in L^{2}_{\mu}(\mathcal{X})\) and \(\eta\in\mathbb{H}\). This implies that the adjoint operator \(\mathcal{E}^{*}:\mathbb{H}\to L^{2}_{\mu}(\mathcal{X})\) is the inclusion operator from \(\mathbb{H}\) into \(L^{2}_{\mu}(\mathcal{X})\), i.e.,
\[\mathcal{E}^{*}\eta=\eta,\qquad\eta\in\mathbb{H}. \tag{2.7}\]
We shall further define the covariance operator3
Footnote 3: In what follows, by \(L(\mathcal{H},\mathcal{K})\) we denote the set of all bounded (i.e., continuous) linear operators between Hilbert spaces \(\mathcal{H}\) and \(\mathcal{K}\). As usual, we also set \(L(\mathcal{H}):=L(\mathcal{H},\mathcal{H})\).
\[C_{\mathbb{H}}:=\mathcal{E}\mathcal{E}^{*}\,\in\,L(\mathbb{H}).\]
Recall that a linear operator \(T\in L(\mathcal{H})\) on a Hilbert space \(\mathcal{H}\) is _trace class_ if for some (and hence for each) orthonormal basis \((e_{j})_{j\in\mathbb{N}}\) of \(\mathcal{H}\) we have that \(\sum_{j=1}^{\infty}\langle(T^{*}T)^{1/2}e_{j},e_{j}\rangle<\infty\). A linear operator \(S\in L(\mathcal{H},\mathcal{K})\) between Hilbert spaces \(\mathcal{H}\) and \(\mathcal{K}\) is said to be _Hilbert-Schmidt_[12, Chapter III.9] if \(S^{*}S\) is trace class, i.e., \(\|S\|_{HS}^{2}:=\sum_{j=1}^{\infty}\|Se_{j}\|^{2}<\infty\) for some (and hence for each) orthonormal basis \((e_{j})_{j\in\mathbb{N}}\).
**Lemma 2.4**.: _Let the Compatibility Assumptions (A1)-(A3) be satisfied. Then the following hold._
* _The operator_ \(\mathcal{E}\) _is an injective Hilbert-Schmidt operator with_ \[\|\mathcal{E}\|_{HS}^{2}=\|\varphi\|_{1}.\]
* _The space_ \(\mathbb{H}\) _is densely and compactly embedded in_ \(L^{2}_{\mu}(\mathcal{X})\)_._
* _The operator_ \(C_{\mathbb{H}}\) _is an injective non-negative selfadjoint trace class operator._
The next theorem is due to Mercer and can be found in, e.g., [45]. It shows the existence of a particular orthonormal basis \((e_{j})_{j=1}^{\infty}\) of \(L^{2}_{\mu}(\mathcal{X})\) composed of eigenfunctions of \(\mathcal{E}^{*}\mathcal{E}\), which we shall henceforth call the _Mercer basis_ corresponding to the kernel \(k\). Again for the sake of self-containedness, we give a short proof in Appendix A.
**Theorem 2.5** (Mercer's Theorem).: _There exists an orthonormal basis \((e_{j})_{j=1}^{\infty}\) of \(L^{2}_{\mu}(\mathcal{X})\) consisting of eigenfunctions of \(\mathcal{E}^{*}\mathcal{E}\) with corresponding eigenvalues \(\lambda_{j}>0\) such that \(\sum_{j=1}^{\infty}\lambda_{j}=\|\varphi\|_{1}<\infty\). Furthermore, \((f_{j})_{j=1}^{\infty}\) with \(f_{j}=\sqrt{\lambda_{j}}e_{j}\) constitutes an orthonormal basis of \(\mathbb{H}\) consisting of eigenfunctions of \(C_{\mathbb{H}}\) with corresponding eigenvalues \(\lambda_{j}\). Moreover, for all \(x,y\in\mathcal{X}\),_
\[k(x,y)=\sum_{j}f_{j}(x)f_{j}(y)=\sum_{j}\lambda_{j}e_{j}(x)e_{j}(y),\]
_where the series converges absolutely._
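Numerically, the Mercer eigenpairs are usually approximated through the eigendecomposition of the scaled Gram matrix over samples from \(\mu\) (this is also how the empirical features in Section 5 are obtained). A minimal sketch, with samples from a standard normal standing in for draws from \(\mu\):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=500)                                    # samples standing in for draws from mu
sigma = 1.0
G = np.exp(-(X[:, None] - X[None, :]) ** 2 / sigma ** 2)    # Gram matrix [k(x_k, x_l)]
lam, V = np.linalg.eigh(G / len(X))                         # eigenpairs of (1/m) K_X approximate (lambda_j, e_j)
lam, V = lam[::-1], V[:, ::-1]                              # sort in non-increasing order
print(lam[:5])                                              # leading empirical Mercer eigenvalues
```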
### The Koopman semigroup
The _Koopman semigroup_\((K^{t})_{t\geq 0}\) associated with the SDE (2.1) is defined by
\[(K^{t}\psi)(x)=\mathbb{E}[\psi(X_{t})|X_{0}=x]=\int\psi(y)\,\rho_{t}(x,dy),\]
for \(\psi\in B(\mathcal{X})\), the set of all bounded Borel-measurable functions on \(\mathcal{X}\), and \(\rho_{t}(x,dy)=d\rho_{t}(x,\cdot)(y)\). It is easy to see that the invariance of \(\mu\) is equivalent to the identity
\[\int K^{t}\psi\,d\mu=\int\psi\,d\mu \tag{2.8}\]
for all \(t\geq 0\) and \(\psi\in B(\mathcal{X})\) (which easily extends to functions \(\psi\in L^{1}_{\mu}(\mathcal{X})\), see Proposition 2.7).
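Pointwise, \((K^{t}\psi)(x)\) is a conditional expectation and can therefore be approximated by averaging \(\psi\) over simulations started at \(x\). A minimal sketch follows; the sampler below uses the exactly known Gaussian transition of an Ornstein-Uhlenbeck process (cf. Section 5) purely for illustration.

```python
import numpy as np

def koopman_apply(psi, x, t, sample_transition, n_samples, rng):
    """Monte Carlo estimate of (K^t psi)(x) = E[psi(X_t) | X_0 = x]."""
    return np.mean([psi(sample_transition(x, t, rng)) for _ in range(n_samples)])

def sample_ou(x, t, rng, alpha=1.0):
    """Exact draw from rho_t(x, .) for the OU process dX_t = -alpha X_t dt + dW_t (cf. Section 5)."""
    mean = np.exp(-alpha * t) * x
    var = (1.0 - np.exp(-2.0 * alpha * t)) / (2.0 * alpha)
    return mean + np.sqrt(var) * rng.normal()

print(koopman_apply(lambda y: y ** 2, 1.0, 0.5, sample_ou, 10_000, np.random.default_rng(4)))
```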
**Remark 2.6**.: Note that in the case \(\sigma=0\) the SDE (2.1) reduces to the deterministic ODE \(\dot{x}=b(x)\). Then (2.8) implies \(\int|\psi(\phi(t,x))|^{2}\,d\mu(x)=\int|\psi(x)|^{2}\,d\mu(x)\) for all \(t\geq 0\) and all \(\psi\in B(\mathcal{X})\), where \(\phi(\cdot,x)\) is the solution of the initial value problem \(\dot{y}=b(y)\), \(y(0)=x\). Hence, the composition operator \(K^{t}:\psi\mapsto\psi\circ\phi(t,\cdot)\) is unitary in \(L^{2}_{\mu}(\mathcal{X})\). However, we shall require below (see Theorem 3.1) that \(K^{t}\) has its spectrum in the interior of the unit circle. Therefore, we assume throughout that \(\sigma\neq 0\).
The proofs of the following two propositions can be found in Appendix A.
**Proposition 2.7**.: _For each \(p\in[1,\infty]\) and \(t\geq 0\), \(K^{t}\) extends uniquely to a bounded operator from \(L^{p}_{\mu}(\mathcal{X})\) to itself with operator norm \(\|K^{t}\|_{L^{p}_{\mu}\to L^{p}_{\mu}}\leq 1\)._
By \(C_{b}(\mathcal{X})\) we denote the set of all bounded continuous functions on \(\mathcal{X}\). As the measure \(\mu\) is finite, we have \(C_{b}(\mathcal{X})\subset B(\mathcal{X})\subset L^{p}_{\mu}(\mathcal{X})\) for all \(p\in[1,\infty]\). In fact, \(C_{b}(\mathcal{X})\) is dense in each \(L^{p}_{\mu}(\mathcal{X})\), \(p\in[1,\infty)\), see [48, Theorem 3.14].
**Proposition 2.8**.: \((K^{t})_{t\geq 0}\) _is a \(C_{0}\)-semigroup of contractions in \(L^{p}_{\mu}(\mathcal{X})\) for each \(p\in[1,\infty)\)._
The _infinitesimal generator_ of the \(C_{0}\)-semigroup \((K^{t})_{t\geq 0}\) is the (in general unbounded) operator in \(L^{2}_{\mu}(\mathcal{X})\), defined by
\[\mathcal{L}\psi=L^{2}_{\mu}\mbox{-}\lim_{t\to 0}\frac{K^{t}\psi-\psi}{t}, \tag{2.9}\]
whose domain \(\operatorname{dom}\mathcal{L}\) is the set of all \(\psi\in L^{2}_{\mu}(\mathcal{X})\) for which the above limit exists. By Proposition 2.8 and the Lumer-Phillips theorem (see [28]), the operator \(\mathcal{L}\) is densely defined, closed4, dissipative (i.e., \(\operatorname{Re}\langle\mathcal{L}\psi,\psi\rangle_{\mu}\leq 0\) for all \(\psi\in\operatorname{dom}\mathcal{L}\)), and its spectrum is contained in the closed left half-plane.
Footnote 4: Recall that a linear operator \(T\), defined on a subspace \(\operatorname{dom}T\) of a Hilbert space \(\mathcal{H}\), which maps to a Hilbert space \(\mathcal{K}\), is closed if its graph is closed in \(\mathcal{H}\times\mathcal{K}\).
**Lemma 2.9**.: _The constant function \(\mathbbm{1}\) is contained in \(\operatorname{dom}\mathcal{L}\) and \(\mathcal{L}\mathbbm{1}=0\). Moreover, if \(M:=\operatorname{span}\{\mathbbm{1}\}\subset L^{2}_{\mu}(\mathcal{X})\), then both \(M\) and \(M^{\perp}\) are invariant under \(\mathcal{L}\) and all \(K^{t}\), \(t\geq 0\)._
Proof.: It is easy to see that \(K^{t}\mathbb{1}=\mathbb{1}\) for each \(t\geq 0\) and hence \(\mathbb{1}\in\operatorname{dom}\mathcal{L}\) with \(\mathcal{L}\mathbb{1}=0\). Hence \(K^{t}M\subset M\) for all \(t\geq 0\) and \(\mathcal{L}M\subset M\). Now, if \(\psi\perp\mathbb{1}\), then \(\langle K^{t}\psi,\mathbb{1}\rangle_{\mu}=\int K^{t}\psi\,d\mu=\int\psi\,d\mu= \langle\psi,\mathbb{1}\rangle_{\mu}=0\), which shows that also \(K^{t}M^{\perp}\subset M^{\perp}\). The relation \(\mathcal{L}M^{\perp}\subset M^{\perp}\) follows from (2.9).
### Representation of Koopman Operators on the RKHS
Using the integral operator \(\mathcal{E}\), it is possible to represent the Koopman operator with the aid of a linear operator on \(\mathbb{H}\), which is based on kernel evaluations. This construction mimics the well-known kernel trick used frequently in machine learning. To begin with, for any \(x,y\in\mathcal{X}\) define the rank-one operator \(C_{xy}:\mathbb{H}\to\mathbb{H}\) by
\[C_{xy}\psi:=\langle\psi,\Phi(y)\rangle\Phi(x)=\psi(y)\Phi(x).\]
For \(t\geq 0\) and \(\psi\in\mathbb{H}\) we further define the cross-covariance operator \(C^{t}_{\mathbb{H}}:\mathbb{H}\to\mathbb{H}\) by
\[C^{t}_{\mathbb{H}}\psi:=\int\int C_{xy}\psi\,\rho_{t}(x,dy)\,d\mu(x)=\int(K^{t }\psi)(x)\Phi(x)\,d\mu(x)=\mathcal{E}K^{t}\psi=\mathcal{E}K^{t}\mathcal{E}^{ *}\psi.\]
Thus, we have
\[C^{t}_{\mathbb{H}}=\mathcal{E}K^{t}\mathcal{E}^{*}. \tag{2.10}\]
In other words, the cross-covariance operator \(C^{t}_{\mathbb{H}}\) represents the action of the Koopman semigroup through the lens of the RKHS integral operator \(\mathcal{E}\) (see [20] for details). Being the product of the two Hilbert-Schmidt operators \(\mathcal{E}K^{t}\) and \(\mathcal{E}^{*}\), the operator \(C^{t}_{\mathbb{H}}\) is trace class for all \(t\geq 0\) (cf. [16, p. 521]).
Note that due to \(\rho_{0}(x,\,\cdot\,)=\delta_{x}\), for \(t=0\) this reduces to the already introduced covariance operator
\[\int\int C_{xy}\,\rho_{0}(x,dy)\,d\mu(x)=\int C_{xx}\,d\mu(x)=\mathcal{E} \mathcal{E}^{*}=C_{\mathbb{H}}.\]
The identity (2.10) shows that for all \(\eta,\psi\in\mathbb{H}\) we have
\[\langle\eta,C^{t}_{\mathbb{H}}\psi\rangle=\langle\eta,K^{t}\psi\rangle_{\mu}, \tag{2.11}\]
which shows that the role of \(C^{t}_{\mathbb{H}}\) is analogous to that of the stiffness matrix in a traditional finite-dimensional approximation of the Koopman operator. In this analogy, the covariance operator \(C_{\mathbb{H}}\) plays the role of the mass matrix.
### Empirical estimators
Next, we introduce empirical estimators for \(C^{t}_{\mathbb{H}}\) based on finite data \((x_{k},y_{k})\), \(k=0,\ldots,m-1\). We consider two sampling scenarios for fixed \(t>0\):
1. The \(x_{k}\) are drawn i.i.d. from \(\mu\), and each \(y_{k}\sim\mu\) is obtained from the conditional distribution \(\rho_{t}(x_{k},\cdot)\), i.e., \(y_{k}|(x_{k}=x)\sim\rho_{t}(x,\cdot)\) for \(\mu\)-a.e. \(x\in\mathcal{X}\). For example, \(y_{k}\) can be obtained by simulating the SDE (2.1) starting from \(x_{k}\) until time \(t\).
2. \(\mu\) is ergodic and both \(x_{k}\) and \(y_{k}\) are obtained from a single (usually long-term) simulation of the dynamics \(X_{t}\) at discrete integration time step \(\Delta t>0\), using a sliding-window estimator, i.e., \[x_{0}=X_{0}\sim\mu,\quad x_{k}=X_{k\Delta t},\quad\text{and}\quad y_{k}=X_{k \Delta t+t}.\] Moreover, we assume that there exists a Riesz basis \((\psi_{j})_{j=0}^{\infty}\) of \(L^{2}_{\mu}(\mathcal{X})\) consisting of eigenfunctions of the generator \(\mathcal{L}\) with corresponding eigenvalues \(\mu_{j}\) satisfying \(\sum_{j=0}^{\infty}e^{2(\operatorname{Re}\mu_{j})\Delta t}<\infty\).
**Remark 2.10**.: It easily follows from the discussion in Appendix B that the last assumption on the generator \(\mathcal{L}\) and on the decay of its eigenvalues \(\mu_{j}\) is equivalent to the similarity of \(\mathcal{L}\) to an (unbounded) normal operator \(N\) such that \(e^{N\Delta t}\in L(L^{2}_{\mu}(\mathcal{X}))\) is Hilbert-Schmidt. If the assumption holds with \(\psi_{j}=Se_{j}\), where \((e_{j})\) is an orthonormal basis of \(L^{2}_{\mu}(\mathcal{X})\), the operator \(N\) is given by \(N=\sum_{j}\mu_{j}\langle\,\cdot\,,e_{j}\rangle e_{j}\) with \(\operatorname{dom}N=\{\psi:(\mu_{j}\langle\psi,e_{j}\rangle)\in\ell^{2}\}\) and \(\mathcal{L}=SNS^{-1}\). The condition \(\sum_{j=0}^{\infty}e^{2(\operatorname{Re}\mu_{j})\Delta t}<\infty\) then obviously means that the eigenvalues of \(e^{N\Delta t}\) form an \(\ell^{2}\) sequence.
Recall that the joint distribution of two random variables \(X\) and \(Y\) is given by
\[dP_{X,Y}(x,y)=dP_{Y|X=x}(y)\cdot dP_{X}(x).\]
Set \(X=x_{k}\) and \(Y=y_{k}\). Then, in both cases **(1)** and **(2)**, we have \(P_{X}=\mu\) and
\[P_{Y|X=x}(B)=P(y_{k}\in B|x_{k}=x)=P(X_{t}\in B|X_{0}=x)=\rho_{t}(x,B).\]
In other words, for the joint distribution \(\mu_{0,t}\) of \(x_{k}\) and \(y_{k}\) we have
\[d\mu_{0,t}(x,y)=d\rho_{t}(x,\cdot)(y)\cdot d\mu(x)=\rho_{t}(x,dy)\cdot d\mu(x).\]
More explicitly,
\[\mu_{0,t}(A\times B)=\int_{A}\rho_{t}(x,B)\,d\mu(x).\]
Now, since
\[C_{\mathbb{H}}^{t}=\int\int C_{xy}\,\rho_{t}(x,dy)\,d\mu(x)=\int C_{xy}\,d\mu_{ 0,t}(x,y)=\mathbb{E}\big{[}C_{x_{k},y_{k}}\big{]},\]
for the empirical estimator for \(C_{\mathbb{H}}^{t}\) we choose the expression
\[\widehat{C}_{\mathbb{H}}^{m,t}=\frac{1}{m}\sum_{k=0}^{m-1}C_{x_{k},y_{k}}. \tag{2.12}\]
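Since \(C_{x_{k},y_{k}}\psi=\psi(y_{k})\Phi(x_{k})\), applying (2.12) to \(\psi\in\mathbb{H}\) requires only kernel evaluations. A minimal sketch with hypothetical scalar data pairs and a Gaussian kernel:

```python
import numpy as np

def empirical_cross_covariance_apply(psi, X, Y, kernel):
    """Return x -> (1/m) sum_k psi(y_k) k(x_k, x), i.e. the function (hat C_H^{m,t} psi)(x)."""
    m = len(X)
    return lambda x: sum(psi(yk) * kernel(xk, x) for xk, yk in zip(X, Y)) / m

# illustration with hypothetical scalar data pairs (x_k, y_k) and a Gaussian kernel
rng = np.random.default_rng(5)
X = rng.normal(size=100)
Y = 0.9 * X + 0.1 * rng.normal(size=100)
Cpsi = empirical_cross_covariance_apply(np.sin, X, Y, lambda a, b: np.exp(-(a - b) ** 2))
print(Cpsi(0.0))
```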
## 3. Variance of the Empirical Estimator
In case **(1)**, the law of large numbers [3, Theorem 2.4] and, in case **(2)**, ergodicity [2] ensure the expected behavior
\[\lim_{m\to\infty}\|\widehat{C}_{\mathbb{H}}^{m,t}-C_{\mathbb{H}}^{t}\|_{HS}=0 \qquad\text{a.s.}\]
However, this is a purely qualitative result, and nothing is known a priori on the rate of this convergence. The main result of this section, Theorem 3.1, contains an exact expression for the Hilbert-Schmidt variance of the empirical estimator \(\widehat{C}_{\mathbb{H}}^{m,t}\) based on \(m\) data points, which then yields probabilistic estimates for the expression \(\|\widehat{C}_{\mathbb{H}}^{m,t}-C_{\mathbb{H}}^{t}\|_{HS}\), see Proposition 3.4. Here, our focus is on the estimation from a single ergodic trajectory, i.e., case **(2)** above. While the broader line of reasoning partially resembles that of our previous paper [38], we require additional steps due to the infinite-dimensional setting introduced by the RKHS.
In Theorem 3.1 and its proof, we will be concerned with evolving kernels \(k_{t}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), defined by
\[k_{t}(x,x^{\prime}):=\int\int k(y,y^{\prime})\,\rho_{t}(x,dy)\,\rho_{t}(x^{ \prime},dy^{\prime}).\]
We have
\[k_{t}(x,x^{\prime})=\int\int\langle\Phi(y),\Phi(y^{\prime})\rangle\,\rho_{t}( x,dy)\,\rho_{t}(x^{\prime},dy^{\prime})=\left\langle\int\Phi(y)\,\rho_{t}(x,dy), \int\Phi(y^{\prime})\,\rho_{t}(x^{\prime},dy^{\prime})\right\rangle.\]
The integrals in the last expression are well defined as limits in \(\mathbb{H}\) for \(\mu\)-a.e. \(x,x^{\prime}\in\mathcal{X}\) as
\[\int\int\|\Phi(y)\|\,\rho_{t}(x,dy)\,d\mu(x)=\int\int\sqrt{\varphi(y)}\,\rho_{ t}(x,dy)\,d\mu(x)=\int\sqrt{\varphi(x)}\,d\mu(x)\,\leq\,\|\varphi\|_{1}^{1/2},\]
see (2.8). This shows that \(k_{t}\) is well defined (\((\mu\otimes\mu)\)-a.e.) and that it is a positive definite kernel on its domain. Moreover, \(k_{0}=k\) and
\[|k_{t}(x,x^{\prime})|\leq\int\sqrt{\varphi(y^{\prime})}\int\sqrt{\varphi(y)}\, \rho_{t}(x,dy)\,\rho_{t}(x^{\prime},dy^{\prime})=(K^{t}\sqrt{\varphi})(x)\cdot (K^{t}\sqrt{\varphi})(x^{\prime}).\]
In particular, \(k_{t}\in L^{2}_{\mu\otimes\mu}(\mathcal{X}^{2})\) with \(\|k_{t}\|_{L^{2}_{\mu\otimes\mu}}\leq\|\varphi\|_{1}\). By \(\Phi_{t}\) we denote the corresponding feature map, i.e.,
\[\Phi_{t}(x)=k_{t}(x,\cdot).\]
Note that not necessarily \(\Phi_{t}(x)\in\mathbb{H}\). Finally, we define
\[\Phi_{t,x}:=\Phi(x)\Phi_{t}(x).\]
We are now in the position to formulate our first main result.
**Theorem 3.1**.: _Setting \(z_{k}=(x_{k},y_{k})\), \(k=0,\ldots,m-1\), the Hilbert-Schmidt variance of the empirical estimator can be written as_
\[\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{H}}\|^{2}_{ HS}\big{]}=\frac{1}{m}\left[\mathbb{E}_{0}(t)+2\sum_{k=1}^{m-1}\frac{m-k}{m} \cdot\mathbb{E}\big{[}\langle C_{z_{k}}-C^{t}_{\mathbb{H}},C_{z_{0}}-C^{t}_{ \mathbb{H}}\rangle_{HS}\big{]}\right], \tag{3.1}\]
_where_
\[\mathbb{E}_{0}(t):=\mathbb{E}\big{[}\|C_{z_{0}}-C^{t}_{\mathbb{H}}\|^{2}_{HS} \big{]}=\langle K^{t}\varphi,\varphi\rangle_{\mu}-\langle k,k_{t}\rangle_{L^{2 }_{\mu\otimes\mu}}.\]
_In case_ **(1)**_, \(\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{H}}\|^{2}_{ HS}\big{]}=\frac{1}{m}\mathbb{E}_{0}(t)\), whereas in case_ **(2)** _we have_
\[\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{H}}\|^{2}_{HS}\big{]}=\frac{1}{m}\left[\mathbb{E}_{0}(t)+2\sum_{j=1}^{\infty}\frac{d_{j,t}\,q_{j}}{1-q_{j}}\left(1-\frac{1}{m}\cdot\frac{1-q_{j}^{m}}{1-q_{j}}\right)\right], \tag{3.2}\]
_with_
\[q_{j}=e^{\mu_{j}\Delta t},\quad d_{j,t}=\langle c_{j,t},\psi_{j}\rangle_{\mu},\quad\text{and}\quad c_{j,t}(x)=\langle\Phi_{t,x},\widetilde{\psi}_{j}\rangle_{\mu}.\]
_Here, \((\widetilde{\psi}_{j})_{j=0}^{\infty}\) denotes the Riesz basis of \(L^{2}_{\mu}(\mathcal{X})\) that is biorthogonal to \((\psi_{j})_{j=0}^{\infty}\)._
Before proving Theorem 3.1 in Subsection 3.1 below, let us comment on its statements and draw some conclusions.
**Remark 3.2**.: (a) Note that, by ergodicity of the invariant measure \(\mu\), the generator \(\mathcal{L}\) has no eigenvalues on the imaginary axis, except the simple zero eigenvalue (see Proposition D.1 in the Appendix). In contrast, if we drop the ergodicity assumption, we have
\[\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{H}}\|^{2}_{ HS}\big{]}=\frac{1}{m}\left[\mathbb{E}_{0}(t)+2\sum_{j=\nu_{0}}^{\infty} \frac{d_{j,t}q_{j}}{1-q_{j}}\left(1-\frac{1}{m}\cdot\frac{1-q_{j}^{m}}{1-q_{j} }\right)\right]+\frac{m-1}{m}\sum_{j=1}^{\nu_{0}-1}d_{j,t},\]
where \(\nu_{0}=\#\{j:\mu_{j}\in\frac{2\pi i}{\Delta t}\mathbb{Z}\}\) is the number of eigenvalues of \(\mathcal{L}\) of the form \(\frac{2k\pi i}{\Delta t}\), \(k\in\mathbb{Z}\), counting multiplicities. Obviously, the last term does not decay to zero as \(m\to\infty\) if \(\sum_{j=1}^{\nu_{0}-1}d_{j,t}\neq 0\).
(b) The definition of \(c_{j,t}\) requires \(\Phi_{t,x}\) to be in \(L^{2}_{\mu}(\mathcal{X})\) for \(\mu\)-a.e. \(x\in\mathcal{X}\). This will in fact be proved in Lemma 3.6 below.
In the following, we let
\[\sigma_{m}^{2}:=\mathbb{E}_{0}(t)+2\sum_{j=1}^{\infty}\frac{d_{j,t}q_{j}}{1-q_ {j}}\left(1-\frac{1}{m}\cdot\frac{1-q_{j}^{m}}{1-q_{j}}\right)\qquad\text{and} \qquad\sigma_{\infty}^{2}:=\mathbb{E}_{0}(t)+2\sum_{j=1}^{\infty}\frac{d_{j,t}q _{j}}{1-q_{j}}.\]
Then
\[\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{H}}\|^{2}_{ HS}\big{]}=\frac{\sigma_{m}^{2}}{m}\]
and \(\sigma_{m}^{2}\to\sigma_{\infty}^{2}\) as \(m\to\infty\). Both infinite series converge absolutely as \((q_{j})\in\ell^{2}\) by assumption, and \((d_{j,t})\in\ell^{2}\) as shown in the proof of Theorem 3.1. We can therefore interpret \(\sigma_{\infty}^{2}\) as _asymptotic variance_ of the estimator \(\hat{C}^{m,t}_{\mathbb{H}}\), similar to our previous results in [38, Lemma 6].
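Given truncated sequences \((q_{j})\) and \((d_{j,t})\), both \(\sigma_{m}^{2}\) and \(\sigma_{\infty}^{2}\) are elementary to evaluate. A minimal sketch with placeholder values chosen only for illustration:

```python
import numpy as np

def sigma_sq(E0, d, q, m=None):
    """Evaluate sigma_m^2 (finite m) or sigma_infty^2 (m=None) from truncated sequences (d_{j,t}), (q_j)."""
    d, q = np.asarray(d, dtype=float), np.asarray(q, dtype=float)
    if m is None:
        corr = q / (1.0 - q)
    else:
        corr = q / (1.0 - q) * (1.0 - (1.0 - q ** m) / (m * (1.0 - q)))
    return E0 + 2.0 * np.sum(d * corr)

# placeholder values, for illustration only
print(sigma_sq(E0=0.5, d=[0.10, 0.05], q=[0.9, 0.8], m=1000),
      sigma_sq(E0=0.5, d=[0.10, 0.05], q=[0.9, 0.8]))
```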
An upper bound on the variance can be obtained as follows:
**Corollary 3.3**.: _In case_ **(2)**_, for all \(m\in\mathbb{N}\) we have_
\[\sigma_{m}^{2}\,\leq\,\langle K^{t}\varphi,\varphi\rangle_{\mu}\left[1+\frac{4B} {A\delta_{q}}\|q\|_{\ell^{2}}\right], \tag{3.3}\]
_where \(A\) and \(B\) denote the lower and upper Riesz bounds of \((\psi_{j})\), respectively,_
\[q=(q_{j})_{j=1}^{\infty}\,,\qquad\text{and}\qquad\delta_{q}=\inf_{j\geq 1}|1-q_{ j}|>0.\]
Proof.: First of all, by Lemma 3.6,
\[\mathbb{E}_{0}(t)=\langle K^{t}\varphi,\varphi\rangle_{\mu}-\langle k,k_{t} \rangle_{L^{2}_{\mu\otimes\mu}}\leq\langle K^{t}\varphi,\varphi\rangle_{\mu}.\]
We have \(|1-q_{j}|\geq\delta_{q}\) and \(|q_{j}|\leq 1\) for all \(j\geq 1\) and hence
\[\frac{1}{|1-q_{j}|}\cdot\left|1-\frac{1}{m}\cdot\frac{1-q_{j}^{m}}{1-q_{j}} \right|\leq\frac{1}{\delta_{q}}\left(1+\frac{1}{m}\sum_{k=0}^{m-1}|q_{j}|^{k} \right)\leq\frac{2}{\delta_{q}}.\]
This and (3.7) imply (3.3).
**Proposition 3.4**.: _We have the following probabilistic bound on the estimation error:_
\[\mathbb{P}\big{(}\|C_{\mathbb{H}}^{t}-\widehat{C}_{\mathbb{H}}^{m,t}\|_{HS}> \varepsilon\big{)}\leq\left\{\begin{aligned} &\frac{\sigma_{m}^{2}}{m \varepsilon^{2}},&&\text{in case \bf(2),}\\ &\frac{E_{0}(t)}{m\varepsilon^{2}},&&\text{in case \bf(1),}\\ & 2\,e^{-\frac{m\varepsilon^{2}}{8\|k\|_{\infty}^{2}}},&& \text{in case \bf(1) with bounded kernel.}\end{aligned}\right. \tag{3.4}\]
_In particular, the above also holds upon replacing the left-hand side by \(\mathbb{P}\big{(}\|\mathcal{E}K^{t}\psi-\widehat{C}_{\mathbb{H}}^{m,t}\psi \|>\varepsilon\big{)}\) for \(\psi\in\mathbb{H}\), \(\|\psi\|=1\)._
Proof.: The first two bounds in (3.4) are an immediate consequence of Markov's inequality, applied to the random variable \(\|C_{\mathbb{H}}^{t}-\widehat{C}_{\mathbb{H}}^{m,t}\|_{HS}^{2}\). The third bound follows from \(C_{\mathbb{H}}^{t}-\widehat{C}_{\mathbb{H}}^{m,t}=\frac{1}{m}\sum_{k=0}^{m-1}(C_{\mathbb{H}}^{t}-C_{z_{k}})\), Hoeffding's inequality for Hilbert space-valued random variables [43, Theorem 3.5] (see also [30, Theorem A.5.2]), and (cf. Lemma 3.6 below)
\[\|C_{\mathbb{H}}^{t}-C_{xy}\|_{HS}\leq\|C_{\mathbb{H}}^{t}\|_{HS}+\|C_{xy}\|_ {HS}=\sqrt{\langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}}+\sqrt{\varphi(x) \varphi(y)}\leq 2\|k\|_{\infty},\]
since also \(\|k_{t}\|_{\infty}\leq\|k\|_{\infty}\). The estimate
\[\|\mathcal{E}K^{t}\psi-\widehat{C}_{\mathbb{H}}^{m,t}\psi\|=\|\mathcal{E}K^{t }\mathcal{E}^{*}\psi-\widehat{C}_{\mathbb{H}}^{m,t}\psi\|=\|(C_{\mathbb{H}}^{ t}-\widehat{C}_{\mathbb{H}}^{m,t})\psi\|\leq\|C_{\mathbb{H}}^{t}-\widehat{C}_{ \mathbb{H}}^{m,t}\|_{HS}\]
finally yields the last claim.
**Remark 3.5**.: Under additional assumptions (boundedness of the kernel, mixing, etc.), other concentration inequalities than Markov's, such as, e.g., [3, Theorem 2.12] (\(\alpha\)-mixing) or [46, Theoreme 3.1] (\(\beta\)-mixing), might lead to better estimates than (3.4).
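For planning experiments, the Markov-type bound in (3.4) can be inverted to obtain a sufficient number of data points for a prescribed accuracy and confidence. A minimal sketch, assuming an upper bound on \(\sigma_{m}^{2}\) is available (e.g. from Corollary 3.3):

```python
import math

def required_samples(sigma_sq_bound, eps, delta):
    """Smallest m with sigma_m^2 / (m * eps^2) <= delta, given an upper bound on sigma_m^2."""
    return math.ceil(sigma_sq_bound / (delta * eps ** 2))

print(required_samples(sigma_sq_bound=2.0, eps=0.1, delta=0.1))   # -> 2000
```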
### Proof of Theorem 3.1
**Lemma 3.6**.: _Let \(t\geq 0\). Then \(\Phi_{t,x}\in L^{2}_{\mu}(\mathcal{X})\) for \(\mu\)-a.e. \(x\in\mathcal{X}\) with_
\[\|\Phi_{t,x}\|_{\mu}^{2}\leq\varphi(x)(K^{t}\varphi)(x)\cdot\langle K^{t} \varphi,\varphi\rangle_{\mu}.\]
_Moreover, for every \(t\geq 0\) we have_
\[\|C_{xy}\|_{HS}^{2}=\varphi(x)\varphi(y)\qquad\text{and}\qquad\|C_{\mathbb{H} }^{t}\|_{HS}^{2}=\langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}=\int\int\Phi_ {t,x}(y)\,d\mu(y)\,d\mu(x).\]
Proof.: We estimate
\[|\Phi_{t,x}(x^{\prime})|^{2} =|k(x,x^{\prime})k_{t}(x,x^{\prime})|^{2}\leq\varphi(x)\varphi(x^{ \prime})(K^{t}\sqrt{\varphi})^{2}(x)\cdot(K^{t}\sqrt{\varphi})^{2}(x^{\prime})\] \[\leq\varphi(x)(K^{t}\varphi)(x)\cdot\varphi(x^{\prime})(K^{t} \varphi)(x^{\prime}),\]
where we have applied Jensen's inequality to \((K^{t}\sqrt{\varphi})(x)\). This proves the first inequality. Next, if \((f_{j})\subset\mathbb{H}\) denotes the Mercer basis corresponding to \(k\), then
\[\langle C_{xy},C_{x^{\prime}y^{\prime}}\rangle_{HS}=\sum_{i}\langle C_{xy}f_{i },C_{x^{\prime}y^{\prime}}f_{i}\rangle=\sum_{i}f_{i}(y)f_{i}(y^{\prime})k(x,x ^{\prime})=k(x,x^{\prime})k(y,y^{\prime})\]
This proves \(\|C_{xy}\|_{HS}^{2}=\varphi(x)\varphi(y)\). Moreover, it yields
\[\|C_{\mathbb{H}}^{t}\|_{HS}^{2} =\left\|\int C_{xy}\,d\mu_{0,t}(x,y)\right\|_{HS}^{2}=\int\int k(x,x^{\prime})k(y,y^{\prime})\,d\mu_{0,t}(x,y)\,d\mu_{0,t}(x^{\prime},y^{\prime})\] \[=\int\int k(x,x^{\prime})\left[\int\int k(y,y^{\prime})\,\rho_{t }(x,dy)\,\rho_{t}(x^{\prime},dy^{\prime})\right]\,d\mu(x^{\prime})\,d\mu(x)= \langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}},\]
as claimed.
Proof of Theorem 3.1.: First of all, we have
\[\mathbb{E}\big{[}\|\widehat{C}_{\mathbb{H}}^{m,t}-C_{\mathbb{H}} ^{t}\|_{HS}^{2}\big{]} =\mathbb{E}\Big{[}\Big{\|}\frac{1}{m}\sum_{k=0}^{m-1}(C_{z_{k}}-C _{\mathbb{H}}^{t})\Big{\|}_{HS}^{2}\Big{]}=\mathbb{E}\Big{[}\frac{1}{m^{2}} \sum_{k,\ell=0}^{m-1}\big{\langle}C_{z_{k}}-C_{\mathbb{H}}^{t},C_{z_{\ell}}-C _{\mathbb{H}}^{t}\big{\rangle}_{HS}\Big{]}\] \[=\mathbb{E}\left[\frac{1}{m^{2}}\sum_{k=0}^{m-1}\|C_{z_{k}}-C_{ \mathbb{H}}^{t}\|_{HS}^{2}+\frac{2}{m^{2}}\sum_{k=0}^{m-1}\sum_{\ell=k+1}^{m-1 }\big{\langle}C_{z_{k}}-C_{\mathbb{H}}^{t},C_{z_{\ell}}-C_{\mathbb{H}}^{t} \big{\rangle}_{HS}\right]\] \[=\frac{1}{m}\mathbb{E}\big{[}\|C_{z_{0}}-C_{\mathbb{H}}^{t}\|_{HS }^{2}\big{]}+\frac{2}{m^{2}}\sum_{k=1}^{m-1}(m-k)\mathbb{E}\big{[}\langle C_{z_ {k}}-C_{\mathbb{H}}^{t},C_{z_{0}}-C_{\mathbb{H}}^{t}\rangle_{HS}\big{]}.\]
where we exploited that \(\mathbb{E}[\langle C_{z_{k}}-C_{\mathbb{H}}^{t},C_{z_{\ell}}-C_{\mathbb{H}}^{t} \rangle_{HS}]\) only depends on the difference \(\ell-k\).
Let us compute the first term. Since \(\mathbb{E}[C_{z_{0}}]=C_{\mathbb{H}}^{t}\) and thus \(\mathbb{E}[\langle C_{z_{0}},C_{\mathbb{H}}^{t}\rangle_{HS}]=\|C_{\mathbb{H}}^{ t}\|_{HS}^{2}\),
\[\mathbb{E}\big{[}\|C_{z_{0}}-C_{\mathbb{H}}^{t}\|_{HS}^{2}\big{]}=\mathbb{E} \big{[}\|C_{z_{0}}\|_{HS}^{2}\big{]}-\|C_{\mathbb{H}}^{t}\|_{HS}^{2}.\]
For \(\psi\in\mathbb{H}\) we have
\[\|C_{z_{0}}\psi\|^{2}=\|\psi(y_{0})\Phi(x_{0})\|^{2}=\psi(y_{0})^{2}\varphi(x_ {0}).\]
Using the Mercer basis \((f_{i})\subset\mathbb{H}\) corresponding to \(k\) in \(\mathbb{H}\) (cf. Theorem 2.5), we obtain
\[\mathbb{E}\big{[}\|C_{z_{0}}\|_{HS}^{2}\big{]}=\mathbb{E}\Big{[}\sum_{i}\|C_{ z_{0}}f_{i}\|^{2}\Big{]}=\mathbb{E}\Big{[}\sum_{i}f_{i}(y_{0})^{2}\varphi(x_{0}) \Big{]}=\mathbb{E}[\varphi(x_{0})\varphi(y_{0})].\]
Note that the latter equals (\(\varphi(x)=k(x,x)\) by definition)
\[\mathbb{E}[\varphi(x_{0})\varphi(y_{0})]=\int\varphi(x)\int\varphi(y)\,\rho_{t }(x,dy)\,d\mu(x)=\int\varphi(x)(K^{t}\varphi)(x)\,d\mu(x)=\langle K^{t}\varphi,\varphi\rangle_{\mu}.\]
We obtain
\[\mathbb{E}\big{[}\|C_{z_{0}}-C_{\mathbb{H}}^{t}\|_{HS}^{2}\big{]}=\mathbb{E}[ \varphi(x_{0})\varphi(y_{0})]-\langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}= \langle K^{t}\varphi,\varphi\rangle_{\mu}-\langle k,k_{t}\rangle_{L^{2}_{\mu \otimes\mu}}=E_{0}(t)\]
and thus (3.1).
**Case (1).** In this case, \(z_{k}\) and \(z_{\ell}\) are independent for \(k\neq\ell\), so that
\[\mathbb{E}\big{[}\langle C_{z_{k}}-C_{\mathbb{H}}^{t},C_{z_{\ell}}-C_{ \mathbb{H}}^{t}\rangle_{HS}\big{]}=0.\]
Hence, the statement of the theorem for case **(1)** follows.
**Case (2).** Here, the cross terms do not vanish. In fact,
\[\begin{split}\mathbb{E}\big{[}\langle C_{z_{k}}-C^{t}_{\mathbb{H}}, C_{z_{0}}-C^{t}_{\mathbb{H}}\rangle_{HS}\big{]}&=\mathbb{E}[\langle C_{z_{ k}},C_{z_{0}}\rangle_{HS}]-\|C^{t}_{\mathbb{H}}\|_{HS}^{2}=\mathbb{E}\Big{[}\sum_{i} \langle C_{z_{k}}f_{i},C_{z_{0}}f_{i}\rangle\Big{]}-\|C^{t}_{\mathbb{H}}\|_{HS }^{2}\\ &=\mathbb{E}\Big{[}\Big{(}\sum_{i}f_{i}(y_{k})f_{i}(y_{0})\Big{)} k(x_{k},x_{0})\Big{]}-\|C^{t}_{\mathbb{H}}\|_{HS}^{2}\\ &=\mathbb{E}\big{[}k(y_{k},y_{0})k(x_{k},x_{0})\big{]}-\|C^{t}_{ \mathbb{H}}\|_{HS}^{2}.\end{split}\]
Now,
\[\begin{split}\mathbb{E}\big{[}k(y_{k},y_{0})k(x_{k},x_{0})\big{]}& =\int\int\int\int k(y^{\prime},y)k(x^{\prime},x)\,\rho_{t}(x^{ \prime},dy^{\prime})\,\rho_{k\Delta t}(x,dx^{\prime})\,\rho_{t}(x,dy)\,d\mu(x) \\ &=\int\int k(x,x^{\prime})\left[\int\int k(y,y^{\prime})\,\rho_{t }(x,dy)\,\rho_{t}(x^{\prime},dy^{\prime})\right]\,\rho_{k\Delta t}(x,dx^{ \prime})\,d\mu(x)\\ &=\int\int k(x,x^{\prime})k_{t}(x,x^{\prime})\,\rho_{k\Delta t}(x,dx^{\prime})\,d\mu(x)\\ &=\int\left[\int[\Phi(x)\Phi_{t}(x)](x^{\prime})\,\rho_{k\Delta t }(x,dx^{\prime})\right]\,d\mu(x)\\ &=\int[K^{k\Delta t}\Phi_{t,x}](x)\,d\mu(x).\end{split}\]
Hence,
\[\mathbb{E}\big{[}\langle C_{z_{k}}-C^{t}_{\mathbb{H}},C_{z_{0}}-C^{t}_{ \mathbb{H}}\rangle_{HS}\big{]}=\int(K^{k\Delta t}\Phi_{t,x})(x)\,d\mu(x)- \langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}.\]
Let us now exploit the assumptions on the spectral properties of the generator \(\mathcal{L}\) in case **(2)**. For \(\mu\)-a.e. \(x\in\mathcal{X}\), we have
\[\Phi_{t,x}=\sum_{j=0}^{\infty}c_{j,t}(x)\psi_{j},\]
the series converging in \(L^{2}_{\mu}(\mathcal{X})\). Therefore,
\[K^{s}\Phi_{t,x}=\sum_{j=0}^{\infty}c_{j,t}(x)K^{s}\psi_{j}=\sum_{j=0}^{\infty} c_{j,t}(x)e^{\mu_{j}s}\psi_{j},\]
and thus (for \(k\geq 1\))
\[\int\big{(}K^{k\Delta t}\Phi_{t,x}\big{)}(x)\,d\mu(x)=\int\sum_{j=0}^{\infty} c_{j,t}(x)e^{\mu_{j}k\Delta t}\psi_{j}(x)\,d\mu(x)=\sum_{j=0}^{\infty}d_{j,t} \cdot e^{\mu_{j}k\Delta t}=\sum_{j=0}^{\infty}d_{j,t}\cdot q_{j}^{k}.\]
This series converges absolutely for each \(t\geq 0\) due to our assumption that \(\sum_{j}|q_{j}|^{2}<\infty\) and since, by Lemma 3.6,
\[\begin{split}\sum_{j=0}^{\infty}|d_{j,t}|^{2}\leq B^{2}\sum_{j=0} ^{\infty}\|c_{j,t}\|_{\mu}^{2}&=B^{2}\int\sum_{j=0}^{\infty}| \langle\Phi_{t,x},\widetilde{\psi}_{j}\rangle_{\mu}|^{2}\,d\mu(x)\\ &\leq\frac{B^{2}}{A^{2}}\int\|\Phi_{t,x}\|_{\mu}^{2}\,d\mu(x)\leq \frac{B^{2}}{A^{2}}\langle K^{t}\varphi,\varphi\rangle_{\mu}^{2},\end{split} \tag{3.7}\]
where \(A\) and \(B\) are the Riesz bounds of \((\psi_{j})\).
Without loss of generality, we may assume that \(\mu_{0}=0\) with \(\psi_{0}=\mathbb{1}\) and \(\mu_{1},\ldots,\mu_{\nu_{0}-1}\in\frac{2\pi\mathrm{i}}{\Delta t}\mathbb{Z}\), and \(\psi_{k}\in\mathbb{1}^{\perp}\) for \(k\geq 1\), see Lemma 2.9. The duality relations then imply \(\widetilde{\psi}_{0}=\mathbb{1}\). Now, \(c_{0,t}(x)=\langle\Phi_{t,x},\mathbb{1}\rangle_{\mu}=\int k(x,y)k_{t}(x,y)\,d \mu(y)\) and hence
\[d_{0,t}=\langle c_{0,t},\mathbb{1}\rangle_{\mu}=\int c_{0,t}(x)\,d\mu(x)=\int \int k(x,y)k_{t}(x,y)\,d\mu(y)\,d\mu(x)=\langle k,k_{t}\rangle_{L^{2}_{\mu\otimes \mu}}. \tag{3.8}\]
This implies
\[\mathbb{E}\big{[}\langle C_{z_{k}}-C^{t}_{\mathbb{H}},C_{z_{0}}-C^{t}_{ \mathbb{H}}\rangle_{HS}\big{]}=\sum_{j=0}^{\infty}d_{j,t}\cdot q_{j}^{k}- \langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}=\sum_{j=1}^{\infty}d_{j,t} \cdot q_{j}^{k}\]
and therefore
\[\mathbb{E}\big{[}\|\widehat{C}^{m,t}_{\mathbb{H}}-C^{t}_{\mathbb{ H}}\|_{HS}^{2}\big{]} =\frac{1}{m}\mathbb{E}_{0}(t)+\frac{2}{m}\sum_{j=1}^{\infty}d_{j, t}\sum_{k=1}^{m-1}(1-\tfrac{k}{m})q_{j}^{k}\] \[=\frac{1}{m}\left[\mathbb{E}_{0}(t)+2\sum_{j=1}^{\nu_{0}-1}d_{j, t}\sum_{k=1}^{m-1}(1-\tfrac{k}{m})q_{j}^{k}+2\sum_{j=\nu_{0}}^{\infty}d_{j,t} \sum_{k=1}^{m-1}(1-\tfrac{k}{m})q_{j}^{k}\right].\]
The identity
\[\sum_{k=1}^{m-1}\big{(}1-\tfrac{k}{m}\big{)}q^{k}=\left\{\begin{array}{ll} \frac{q}{1-q}\left(1-\frac{1}{m}\cdot\frac{1-q^{m}}{1-q}\right)&\text{if }q\neq 1\\ \frac{m-1}{2}&\text{if }q=1\end{array}\right.\]
finally yields (3.2).
## 4. Bound on the Koopman prediction error
The kernel cross-covariance operator \(C^{t}_{\mathbb{H}}\) can also be used to approximate the predictive capabilities of the Koopman operator, for observables in \(\mathbb{H}\). Approximating the full Koopman operator involves the inverse of the covariance operator, which becomes an unbounded operator on a dense domain of definition in the infinite-dimensional RKHS case. Moreover, its empirical estimator \(\widehat{C}^{m}_{\mathbb{H}}\) is finite-rank and thus not even injective. While Fukumizu et al. tackle this problem in [10] by means of a regularization procedure, we choose to use pseudo-inverses instead (cf. Remark 4.2). We truncate the action of the Koopman operator using \(N\) terms of the Mercer series expansion and derive a bound for the prediction error for fixed truncation parameter \(N\). While we use ideas similar to those presented in [11], we heavily rely on our new results on the cross-covariance operator, cf. Section 3. Afterwards, we deal with the case of Koopman-invariance of the RKHS [22]. Here, we establish an estimate for the truncation error, which then yields a bound on the deviation from the full Koopman operator. We emphasize that this error bound is extremely useful in comparison to its prior counterparts, which are based on the assumption that the space spanned by a finite number of so-called observables (dictionary) is invariant under the Koopman operator. The latter essentially requires employing only Koopman eigenfunctions as observables, see, e.g., [25, 14].
Let \((e_{j})\) be the Mercer orthonormal basis of \(L^{2}_{\mu}(\mathcal{X})\) corresponding to the kernel \(k\) and let \(\lambda_{j}=\|\mathcal{E}e_{j}\|_{\mu}\) as well as \(f_{j}:=\sqrt{\lambda_{j}}e_{j}\) (cf. Theorem 2.5). We arrange the Mercer eigenvalues in a non-increasing way, i.e.,
\[\lambda_{1}\geq\lambda_{2}\geq\ldots.\]
Let \(\psi\in\mathbb{H}\). Then
\[K^{t}\psi=\sum_{j=1}^{\infty}\langle K^{t}\psi,e_{j}\rangle_{\mu}e_{j}=\sum_{ j=1}^{\infty}\langle C^{t}_{\mathbb{H}}\psi,e_{j}\rangle e_{j}=\sum_{j=1}^{N} \langle C^{t}_{\mathbb{H}}\psi,e_{j}\rangle e_{j}+\sum_{j=N+1}^{\infty}\langle C ^{t}_{\mathbb{H}}\psi,e_{j}\rangle e_{j}. \tag{4.1}\]
### Estimation error
In the next theorem, we estimate the probabilistic error between the first summand
\[K_{N}^{t}\psi=\sum_{j=1}^{N}\langle C_{\mathbb{H}}^{t}\psi,e_{j}\rangle e_{j}, \qquad\psi\in\mathbb{H},\]
and its empirical estimator, which is of the form \(\sum_{j=1}^{N}\langle\widehat{C}_{\mathbb{H}}^{m,t}\psi,\widehat{e}_{j}\rangle \widehat{e}_{j}\) with approximations \(\widehat{e}_{j}\) of the \(e_{j}\).
**Theorem 4.1**.: _Assume that the eigenvalues \(\lambda_{j}\) of \(C_{\mathbb{H}}\) are simple, i.e., \(\lambda_{j}>\lambda_{j+1}\) for all \(j\). Fix an arbitrary \(N\in\mathbb{N}\) and let_
\[\delta_{N}=\min_{j=1,\ldots,N}\frac{\lambda_{j}-\lambda_{j+1}}{2}. \tag{4.2}\]
_Further, let \(\varepsilon\in(0,\delta_{N})\) and \(\delta\in(0,1)\) be arbitrary and fix some5\(m\geq\max\{N,\frac{2\sigma_{m}^{2}}{\varepsilon^{2}\delta}\}\). Let now \(\widehat{\lambda}_{1}\geq\ldots\geq\widehat{\lambda}_{m}\) denote the largest \(m\) eigenvalues of \(\widehat{C}_{\mathbb{H}}^{m}\) in descending order and let \(\widehat{e}_{1},\ldots,\widehat{e}_{m}\) be corresponding eigenfunctions, respectively, such that \(\|\widehat{e}_{j}\|=\widehat{\lambda}_{j}^{-1/2}\) for \(j=1,\ldots,m\). If we define_
Footnote 5: By Corollary 3.3, an amount of at least \(m\geq\max\left\{N,\ \frac{2\|\varphi\|_{\mu}^{2}}{\varepsilon^{2}\delta}\left[1+\frac{4B}{A\delta_{q}}\|q\|_{\ell^{2}}\right]\right\}\) data points suffices.
\[\widehat{K}_{N}^{m,t}\psi=\sum_{j=1}^{N}\langle\widehat{C}_{\mathbb{H}}^{m,t} \psi,\widehat{e}_{j}\rangle\widehat{e}_{j},\qquad\psi\in\mathbb{H}, \tag{4.3}\]
_then, with probability at least \(1-\delta\), we have that_
\[\|K_{N}^{t}-\widehat{K}_{N}^{m,t}\|_{\mathbb{H}\to L_{\mu}^{2}(\mathcal{X})} \leq\,\left[\frac{1}{\sqrt{\lambda_{N}}}+\frac{N+1}{\delta_{N}\lambda_{N}}(1+ \|\varphi\|_{1})\|\varphi\|_{1}^{1/2}\right]\varepsilon. \tag{4.4}\]
_All of the above statements equally apply to case_ **(1)** _upon replacing \(\sigma_{m}^{2}\) by \(\mathbb{E}_{0}(t)\)._
**Remark 4.2**.: (a) If we set \(\widehat{f}_{j}=\widehat{\lambda}_{j}^{1/2}\cdot\widehat{e}_{j}\), then
\[\widehat{C}_{\mathbb{H}}^{m}=\sum_{j=1}^{m}\widehat{\lambda}_{j}\langle\, \cdot\,,\widehat{f}_{j}\rangle\widehat{f}_{j},\]
and thus
\[\sum_{j=1}^{N}\langle\,\cdot\,,\widehat{e}_{j}\rangle\widehat{e}_{j}=\sum_{j=1 }^{N}\frac{1}{\widehat{\lambda}_{j}}\langle\,\cdot\,,\widehat{f}_{j}\rangle \widehat{f}_{j}=(\widehat{C}_{\mathbb{H}}^{m})^{\dagger}\widehat{Q}_{N},\]
where \(\widehat{Q}_{N}=\sum_{j=1}^{N}\langle\,\cdot\,,\widehat{f}_{j}\rangle\widehat{ f}_{j}\) is the orthogonal projector onto the span of the first \(N\) eigenfunctions of \(\widehat{C}_{\mathbb{H}}^{m}\) in \(\mathbb{H}\). Therefore,
\[\widehat{K}_{N}^{m,t}\psi=\sum_{j=1}^{m}\langle\widehat{C}_{\mathbb{H}}^{m,t} \psi,\widehat{e}_{j}\rangle\widehat{e}_{j}=(\widehat{C}_{\mathbb{H}}^{m})^{ \dagger}\widehat{Q}_{N}\widehat{C}_{\mathbb{H}}^{m,t}\psi.\]
In particular, for \(N=m\) we have \(\widehat{K}_{N}^{m,t}=(\widehat{C}_{\mathbb{H}}^{m})^{\dagger}\widehat{C}_{ \mathbb{H}}^{m,t}\), which surely is one of the first canonical choices for an empirical estimator of \(K^{t}\).
(b) The functions \(\widehat{e}_{j}\) have unit length in the empirical \(L_{\mu}^{2}\)-norm:
\[\frac{1}{m}\sum_{k=1}^{m}\widehat{e}_{j}(x_{k})\widehat{e}_{j}(x_{k})=\left\langle \widehat{C}_{\mathbb{H}}^{m}\widehat{e}_{j},\,\widehat{e}_{j}\right\rangle=1.\]
Therefore, projecting onto the first \(N\) empirical Mercer features is the _whitening transformation_ commonly used in traditional EDMD [19].
Proof of Theorem 4.1.: By Proposition 3.4, both events \(\|C_{\mathbb{H}}^{t}-\widehat{C}_{\mathbb{H}}^{m,t}\|_{HS}\leq\varepsilon\) and \(\|C_{\mathbb{H}}-\widehat{C}_{\mathbb{H}}^{m}\|_{HS}\leq\varepsilon\) occur with probability at least \(1-\delta/2\), respectively. Hence, they occur simultaneously with probability at least \(1-\delta\).
In the remainder of this proof we assume that both events occur. Then all the statements deduced in the following hold with probability at least \(1-\delta\).
Let us define the intermediate approximation
\[\widetilde{K}_{N}^{m,t}\psi=\sum_{j=1}^{N}\langle\widehat{C}_{\mathbb{H}}^{m,t }\psi,e_{j}\rangle e_{j},\qquad\psi\in\mathbb{H}.\]
Let \(\psi\in\mathbb{H}\) be arbitrary. Setting \(C:=C_{\mathbb{H}}^{t}-\widehat{C}_{\mathbb{H}}^{m,t}\), we have
\[\|K_{N}^{t}\psi-\widetilde{K}_{N}^{m,t}\psi\|_{\mu}^{2} =\bigg{\|}\sum_{j=1}^{N}\big{\langle}C\psi,e_{j}\big{\rangle}e_{ j}\bigg{\|}_{\mu}^{2}=\sum_{j=1}^{N}\big{|}\big{\langle}C\psi,e_{j}\big{\rangle} \big{|}^{2}=\sum_{j=1}^{N}\big{|}\big{\langle}\psi,C^{*}e_{j}\big{\rangle} \big{|}^{2}\] \[\leq\|\psi\|^{2}\sum_{j=1}^{N}\|C^{*}e_{j}\|^{2}\leq\|\psi\|^{2} \sum_{j=1}^{N}\frac{1}{\lambda_{j}}\|C^{*}f_{j}\|^{2}\leq\frac{\|\psi\|^{2}}{ \lambda_{N}}\sum_{j=1}^{N}\|C^{*}f_{j}\|^{2}\] \[\leq\frac{\|\psi\|^{2}}{\lambda_{N}}\sum_{j=1}^{\infty}\|C^{*}f_{j }\|^{2}=\frac{\|\psi\|^{2}}{\lambda_{N}}\cdot\|C_{\mathbb{H}}^{t}-\widehat{C}_ {\mathbb{H}}^{m,t}\|_{HS}^{2},\]
and thus,
\[\|K_{N}^{t}\psi-\widetilde{K}_{N}^{m,t}\psi\|_{\mu}\,\leq\,\frac{\|\psi\|}{ \sqrt{\lambda_{N}}}\cdot\varepsilon.\]
Next, we aim at estimating the remaining error
\[\widetilde{K}_{N}^{m,t}\psi-\widehat{K}_{N}^{m,t}\psi =\sum_{j=1}^{N}\langle\widehat{C}_{\mathbb{H}}^{m,t}\psi,e_{j} \rangle e_{j}-\sum_{j=1}^{N}\langle\widehat{C}_{\mathbb{H}}^{m,t}\psi,\widehat {e}_{j}\rangle\widehat{e}_{j}\] \[=\sum_{j=1}^{N}\lambda_{j}^{-1}\langle\widehat{C}_{\mathbb{H}}^{ m,t}\psi,f_{j}\rangle f_{j}-\sum_{j=1}^{N}\widehat{\lambda}_{j}^{-1}\langle \widehat{C}_{\mathbb{H}}^{m,t}\psi,\widehat{f}_{j}\rangle\widehat{f}_{j}\] \[=\sum_{j=1}^{N}\lambda_{j}^{-1}\langle f,f_{j}\rangle f_{j}-\sum_ {j=1}^{N}\widehat{\lambda}_{j}^{-1}\langle f,\widehat{f}_{j}\rangle\widehat{f }_{j}\] \[=\sum_{j=1}^{N}\big{[}\lambda_{j}^{-1}P_{j}f-\widehat{\lambda}_{j} ^{-1}\widehat{P}_{j}f\big{]}\] \[=\sum_{j=1}^{N}\lambda_{j}^{-1}(P_{j}-\widehat{P}_{j})f+\sum_{j=1 }^{N}(\lambda_{j}^{-1}-\widehat{\lambda}_{j}^{-1})\widehat{P}_{j}f,\]
where \(f=\widehat{C}_{\mathbb{H}}^{m,t}\psi\),
\[P_{j}f=\langle f,f_{j}\rangle f_{j}\qquad\text{and}\qquad\widehat{P}_{j}f= \langle f,\widehat{f}_{j}\rangle\widehat{f}_{j}.\]
By (2.4), it suffices to estimate the above error in the \(\|\cdot\|\)-norm. By Theorem C.3, the first summand can be estimated as
\[\Big{\|}\sum_{j=1}^{N}\lambda_{j}^{-1}(P_{j}-\widehat{P}_{j})f\Big{\|}\leq\sum _{j=1}^{N}\frac{1}{\lambda_{j}}\|P_{j}-\widehat{P}_{j}\|\|f\|\,\leq\,\frac{N\cdot \|C_{\mathbb{H}}-\widehat{C}_{\mathbb{H}}^{m}\|}{\lambda_{N}\delta_{N}}\,\|f\| \,\leq\,\frac{N}{\lambda_{N}\delta_{N}}\,\|f\|\varepsilon.\]
For the second summand we have
\[\Big{\|}\sum_{j=1}^{N}(\lambda_{j}^{-1}-\widehat{\lambda}_{j}^{-1})\widehat{P}_{j }f\Big{\|}^{2}=\sum_{j=1}^{N}|\lambda_{j}^{-1}-\widehat{\lambda}_{j}^{-1}|^{2} \|\widehat{P}_{j}f\|^{2}=\sum_{j=1}^{N}\frac{|\lambda_{j}-\widehat{\lambda}_{j} |^{2}}{\lambda_{j}^{2}\widehat{\lambda}_{j}^{2}}\|\widehat{P}_{j}f\|^{2}.\]
Now, note that \(\epsilon<\delta_{N}\) by assumption and therefore \(\|C_{\mathbb{H}}-\widehat{C}_{\mathbb{H}}^{m}\|_{HS}\leq\delta_{N}\leq\frac{ \lambda_{N}-\lambda_{N+1}}{2}\leq\frac{\lambda_{N}}{2}\). For \(j=1,\ldots,N\), according to Theorem C.1 this implies
\[\widehat{\lambda}_{j}\,\geq\,\lambda_{j}-|\lambda_{j}-\widehat{\lambda}_{j}| \,\geq\,\lambda_{j}-\|C_{\mathbb{H}}-\widehat{C}_{\mathbb{H}}^{m}\|_{HS}\,\geq \,\lambda_{j}-\tfrac{\lambda_{N}}{2}\,\geq\,\tfrac{\lambda_{j}}{2}.\]
Hence,
\[\Big{\|}\sum_{j=1}^{N}(\lambda_{j}^{-1}-\widehat{\lambda}_{j}^{-1})\widehat{P} _{j}f\Big{\|}^{2}\,\leq\,4\sum_{j=1}^{N}\frac{|\lambda_{j}-\widehat{\lambda}_{ j}|^{2}}{\lambda_{j}^{4}}\|\widehat{P}_{j}f\|^{2}\,\leq\,4\frac{\|C_{\mathbb{H}}- \widehat{C}_{\mathbb{H}}^{m}\|_{HS}^{2}}{\lambda_{N}^{4}}\|\widehat{Q}_{N}f\|^ {2},\]
and thus,
\[\Big{\|}\sum_{j=1}^{N}(\lambda_{j}^{-1}-\widehat{\lambda}_{j}^{-1})\widehat{P} _{j}f\Big{\|}\,\leq\,\frac{2}{\lambda_{N}^{2}}\|f\|\varepsilon\,\leq\,\frac{ 1}{\lambda_{N}\delta_{N}}\|f\|\varepsilon.\]
From
\[\|\widehat{C}_{\mathbb{H}}^{m,t}\|\leq\|\widehat{C}_{\mathbb{H}}^{m,t}-C_{ \mathbb{H}}^{t}\|+\|C_{\mathbb{H}}^{t}\|\leq\|\widehat{C}_{\mathbb{H}}^{m,t}- C_{\mathbb{H}}^{t}\|_{HS}+\|\mathcal{E}K^{t}\mathcal{E}^{*}\|\leq\varepsilon+\| \varphi\|_{1}\]
we conclude
\[\big{\|}\widetilde{K}_{N}^{m,t}\psi-\widehat{K}_{N}^{m,t}\psi\big{\|}\leq\frac {N+1}{\lambda_{N}\delta_{N}}\|\widehat{C}_{\mathbb{H}}^{m,t}\psi\|\varepsilon \,\leq\,\frac{N+1}{\lambda_{N}\delta_{N}}(\varepsilon+\|\varphi\|_{1})\|\psi \|\varepsilon.\]
All together, we obtain (recall (2.4))
\[\|K_{N}^{t}\psi-\widehat{K}_{N}^{m,t}\psi\|_{\mu} \leq\|K_{N}^{t}\psi-\widetilde{K}_{N}^{m,t}\psi\|_{\mu}+\|\varphi \|_{1}^{1/2}\|\widetilde{K}_{N}^{m,t}\psi-\widehat{K}_{N}^{m,t}\psi\|\] \[\leq\frac{\|\psi\|}{\sqrt{\lambda_{N}}}\cdot\varepsilon+\frac{N+1 }{\lambda_{N}\delta_{N}}(\varepsilon+\|\varphi\|_{1})\|\varphi\|_{1}^{1/2}\| \psi\|\varepsilon\] \[=\left[\frac{1}{\sqrt{\lambda_{N}}}+\frac{N+1}{\delta_{N}\lambda _{N}}(1+\|\varphi\|_{1})\|\varphi\|_{1}^{1/2}\right]\varepsilon\cdot\|\psi\|,\]
which implies (4.4).
### Projection error in case of Koopman-invariance of the RKHS
In the preceding subsection, we have seen that the empirical operator \(\widehat{K}_{N}^{m,t}\) can be written as \((\widehat{C}_{\mathbb{H}}^{m})^{\dagger}\widehat{C}_{\mathbb{H}}^{m,t}\) if \(m=N\). In the limit \(m\to\infty\), we would arrive at the operator \(C_{\mathbb{H}}^{-1}C_{\mathbb{H}}^{t}\), which is not even well-defined for all \(\psi\in\mathbb{H}\), in general. However, if the RKHS is invariant under \(K^{t}\), the above operator limit is well-defined as a bounded operator on \(\mathbb{H}\). In this situation we are able to extend Theorem 4.1 to an estimate on the full error made by our empirical estimator.
We start by defining the operator
\[K_{\mathbb{H}}^{t}:=C_{\mathbb{H}}^{-1}C_{\mathbb{H}}^{t}\]
on its natural domain
\[\operatorname{dom}K_{\mathbb{H}}^{t}:=\{\psi\in\mathbb{H}:C_{\mathbb{H}}^{t} \psi\in\operatorname{ran}C_{\mathbb{H}}\}. \tag{4.5}\]
We consider \(K_{\mathbb{H}}^{t}\) as an operator from \(\mathbb{H}\) into itself (with domain of definition in \(\mathbb{H}\)).
**Lemma 4.3**.: _We have_
\[\operatorname{dom}K_{\mathbb{H}}^{t}=\{\psi\in\mathbb{H}:K^{t}\psi\in\mathbb{H }\}, \tag{4.6}\]
_and \(K_{\mathbb{H}}^{t}\) is closed._
Proof.: Note that \(C_{\mathbb{H}}^{t}\psi\in\operatorname{ran}C_{\mathbb{H}}\) if and only if \(\mathcal{E}K^{t}\psi=C_{\mathbb{H}}\phi\) for some \(\phi\in\mathbb{H}\). Since \(C_{\mathbb{H}}\phi=\mathcal{E}\phi\) and \(\ker\mathcal{E}=\{0\}\), the latter is equivalent to \(K^{t}\psi=\phi\in\mathbb{H}\), which proves the representation of the domain. As to the closedness of \(K_{\mathbb{H}}^{t}\), let \((\psi_{n})\subset\operatorname{dom}K_{\mathbb{H}}^{t}\) and \(\psi,\phi\in\mathbb{H}\) such that \(\psi_{n}\to\psi\) in \(\mathbb{H}\) and \(K_{\mathbb{H}}^{t}\psi_{n}\to\phi\) in \(\mathbb{H}\) as \(n\to\infty\). The latter implies \(C_{\mathbb{H}}^{t}\psi_{n}\to C_{\mathbb{H}}\phi\), while the first implies \(C_{\mathbb{H}}^{t}\psi_{n}\to C_{\mathbb{H}}^{t}\psi\) in \(\mathbb{H}\) as \(n\to\infty\), from which we conclude that \(C_{\mathbb{H}}^{t}\psi=C_{\mathbb{H}}\phi\), i.e., \(\psi\in\operatorname{dom}K_{\mathbb{H}}^{t}\) and \(K_{\mathbb{H}}^{t}\psi=\phi\).
If the Koopman operator leaves the RKHS \(\mathbb{H}\) invariant (i.e., \(K^{t}\mathbb{H}\subset\mathbb{H}\)), \(K_{\mathbb{H}}^{t}\) is defined on all of \(\mathbb{H}\). Moreover, since the canonical inclusion map \(\mathcal{E}^{*}:\mathbb{H}\to L^{2}_{\mu}(\mathcal{X})\) is injective, it possesses a (possibly unbounded) inverse on its range \(\mathbb{H}\), and therefore:
\[C_{\mathbb{H}}^{-1}C_{\mathbb{H}}^{t}\phi=C_{\mathbb{H}}^{-1}\mathcal{E}K^{t} \mathcal{E}^{*}\phi=(\mathcal{E}\mathcal{E}^{*})^{-1}\mathcal{E}\mathcal{E}^{ *}(\mathcal{E}^{*})^{-1}K^{t}\mathcal{E}^{*}\phi=(\mathcal{E}^{*})^{-1}K^{t} \mathcal{E}^{*}\phi. \tag{4.7}\]
Remarkably, invariance of \(\mathbb{H}\) under the Koopman operator implies that the left-hand side not only reproduces the Koopman operator on \(\mathbb{H}\), but actually defines a bounded operator.
Parts of the next proposition can be found in [22, Theorem 5.3] and [8, Theorem 1].
**Proposition 4.4**.: _For \(t>0\), the following statements are equivalent:_
1. \(K^{t}\mathbb{H}\subset\mathbb{H}\)_._
2. \(K_{\mathbb{H}}^{t}\in L(\mathbb{H})\)_._
3. \(\operatorname{ran}C_{\mathbb{H}}^{t}\subset\operatorname{ran}C_{\mathbb{H}}\)_._
Proof.: With regard to the two representations (4.5) and (4.6) of the domain, it is immediate that both (i) and (iii) are equivalent to \(\operatorname{dom}K_{\mathbb{H}}^{t}=\mathbb{H}\). The equivalence of the latter to (ii) follows from the closed graph theorem.
Note that if one of (i)-(iii) holds, then \(K_{\mathbb{H}}^{t}=K^{t}|_{\mathbb{H}}\).
**Theorem 4.5**.: _In addition to the assumptions in Theorem 4.1, assume that \(\mathbb{H}\) is invariant under the Koopman operator \(K^{t}\). For fixed \(N\in\mathbb{N}\), let \(\delta_{N}\) be as in (4.2), choose \(\varepsilon\), \(\delta\), and \(m\) as in Theorem 4.1 and define the empirical estimator \(\widehat{K}_{N}^{m,t}\) as in (4.3). Then, with probability at least \(1-\delta\) we have that_
\[\|K^{t}-\widehat{K}_{N}^{m,t}\|_{\mathbb{H}\to L_{\mu}^{2}(\mathcal{X})}\, \leq\,\sqrt{\lambda_{N+1}}\,\|K_{\mathbb{H}}^{t}\|+\left[\frac{1}{\sqrt{ \lambda_{N}}}+\frac{N+1}{\delta_{N}\lambda_{N}}(1+\|\varphi\|_{1})\|\varphi\|_ {1}^{1/2}\right]\varepsilon. \tag{4.8}\]
Proof.: First of all, Theorem 4.1 implies that
\[\|K^{t}-\widehat{K}_{N}^{m,t}\|_{\mathbb{H}\to L_{\mu}^{2}( \mathcal{X})} \leq\|K^{t}-K_{N}^{t}\|_{\mathbb{H}\to L_{\mu}^{2}(\mathcal{X})}+\|K_{N} ^{t}-\widehat{K}_{N}^{m,t}\|_{\mathbb{H}\to L_{\mu}^{2}(\mathcal{X})}\] \[\leq\|K^{t}-K_{N}^{t}\|_{\mathbb{H}\to L_{\mu}^{2}(\mathcal{X})}+ \left[\frac{1}{\sqrt{\lambda_{N}}}+\frac{N+1}{\delta_{N}\lambda_{N}}(1+\| \varphi\|_{1})\|\varphi\|_{1}^{1/2}\right]\varepsilon.\]
Now, for \(\psi\in\mathbb{H}\),
\[\|K^{t}\psi-K_{N}^{t}\psi\|_{\mu}^{2} =\Bigg{\|}\sum_{j=N+1}^{\infty}\langle C_{\mathbb{H}}^{t}\psi,e_{ j}\rangle e_{j}\Bigg{\|}_{\mu}^{2}=\sum_{j=N+1}^{\infty}|\langle C_{ \mathbb{H}}^{t}\psi,e_{j}\rangle|^{2}=\sum_{j=N+1}^{\infty}\frac{1}{\lambda_{j} }|\langle C_{\mathbb{H}}^{t}\psi,f_{j}\rangle|^{2}\] \[=\sum_{j=N+1}^{\infty}\frac{1}{\lambda_{j}}|\langle K_{\mathbb{H} }^{t}\psi,C_{\mathbb{H}}f_{j}\rangle|^{2}=\sum_{j=N+1}^{\infty}\lambda_{j}| \langle K_{\mathbb{H}}^{t}\psi,f_{j}\rangle|^{2}\leq\lambda_{N+1}\|K_{\mathbb{H }}^{t}\psi\|^{2},\]
which proves the theorem.
We have just proved that the projection error \(\|K^{t}\psi-K_{N}^{t}\psi\|_{\mu}\) decays at least as fast as the square roots of the eigenvalues of \(C_{\mathbb{H}}\). Recall that \((\lambda_{j})_{j\in\mathbb{N}}\in\ell^{1}(\mathbb{N})\), since \(C_{\mathbb{H}}\) is trace class with \(\sum_{j=1}^{\infty}\lambda_{j}=\operatorname{Tr}(C_{\mathbb{H}})=\|\mathcal{E} ^{*}\|_{HS}^{2}=\|\varphi\|_{1}\), see Lemma 2.4(c).
## 5. Illustration with the Ornstein-Uhlenbeck process
For the numerical illustration of our results, we consider the Ornstein-Uhlenbeck (OU) process on \(\mathcal{X}=\mathbb{R}\), which is given by the SDE
\[dX_{t}=-\alpha X_{t}\,dt+dW_{t},\]
where \(\alpha>0\).
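Since the OU transition kernel is Gaussian, trajectories, and hence the sliding-window data pairs of case **(2)**, can be sampled exactly. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

def ou_trajectory(alpha, dt, n_steps, rng, x0):
    """Exact simulation of dX_t = -alpha X_t dt + dW_t on a time grid with step dt."""
    a = np.exp(-alpha * dt)
    s = np.sqrt((1.0 - a ** 2) / (2.0 * alpha))     # conditional standard deviation over one step
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        x[n + 1] = a * x[n] + s * rng.normal()
    return x

# sliding-window pairs (x_k, y_k) with lag t = lag_steps * dt, as in sampling scenario (2)
rng = np.random.default_rng(6)
alpha, dt, lag_steps, m = 1.0, 1e-2, 5, 10_000
x0 = rng.normal(scale=np.sqrt(1.0 / (2.0 * alpha)))  # X_0 drawn from the invariant measure mu
traj = ou_trajectory(alpha, dt, m + lag_steps, rng, x0)
X, Y = traj[:m], traj[lag_steps:lag_steps + m]
```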
### Analytical Results
Since all relevant properties of the OU process are available in analytical form, we can exactly calculate all of the terms appearing in our theoretical error bounds. Moreover, we can also compute the exact estimation and prediction errors for finite data in closed form. Let us begin by recapping the analytical results required for our analysis, which can be found in [41].
The invariant measure \(\mu\), and the density of the stochastic transition kernel \(\rho_{t}\), are given by
\[d\mu(x)=\sqrt{\frac{\alpha}{\pi}}e^{-\alpha x^{2}}\,dx\qquad\text{and}\qquad\rho_{t}(x,dy)=\frac{1}{\sqrt{2\pi v_{t}^{2}}}\exp\Big{[}-\frac{(y-e^{-\alpha t}x)^{2}}{2v_{t}^{2}}\Big{]}\,dy,\]
with \(v_{t}^{2}=(1-e^{-2\alpha t})/2\alpha\). The Koopman operators \(K^{t}\) are self-adjoint in \(L^{2}_{\mu}(\mathbb{R})\), their eigenvalues and corresponding eigenfunctions are given by
\[q_{j}=e^{-\alpha jt}\qquad\text{and}\qquad\psi_{j}(x)=\frac{1}{\sqrt{2^{j}\alpha^{j}j!}}H_{j}(\sqrt{2\alpha}\,x),\quad j\in\mathbb{N}_{0},\]
where \(H_{j}\) are the physicist's Hermite polynomials.
We consider the Gaussian radial basis function (RBF) kernel with bandwidth \(\sigma>0\), i.e.,
\[k(x,y)=\exp\left[-\frac{(x-y)^{2}}{\sigma^{2}}\right].\]
Let us quickly verify that this choice of the kernel satisfies the compatibility assumptions (A1)-(A3). Indeed, (A1) is trivial as \(k(x,x)=1\) and (A3) follows easily from the continuity of the functions in \(\mathbb{H}\). To see that \(\mathbb{H}\) is dense in \(L^{2}_{\mu}(\mathbb{R})\) (i.e., (A2)), let \(\psi\in L^{2}_{\mu}(\mathbb{R})\) be such that \(\langle\psi,\Phi(y)\rangle_{\mu}=0\) for all \(y\in\mathbb{R}\). The latter means that \(\phi*\varphi_{\sigma}=0\), where \(\phi(x)=\psi(x)e^{-\alpha x^{2}}\) and \(\varphi_{\sigma}(x)=e^{-x^{2}/\sigma^{2}}\). We apply the Fourier transform and obtain \(\widehat{\phi}\cdot\widehat{\varphi_{\sigma}}=0\). Noting that the Fourier transform of a Gaussian is again a Gaussian, we get \(\widehat{\phi}=0\) and thus \(\psi=0\).
The Mercer eigenvalues and features with respect to the invariant measure \(\mu\) of the OU process, i.e., the eigenvalues and eigenfunctions of the integral operator \(\mathcal{E}^{*}\mathcal{E}\) in \(L^{2}_{\mu}(\mathbb{R})\), are also available in analytical form [9]. They are given by
\[\lambda_{i}=\sqrt{\frac{\alpha}{C_{1}}}\left[\frac{1}{\sigma^{2}C_{1}}\right] ^{i}\qquad\text{and}\qquad\varphi_{i}(x)=\gamma_{i}e^{-\zeta^{2}x^{2}}H_{i} \left(\sqrt{\alpha}\eta x\right),\quad i\in\mathbb{N}_{0},\]
using the following constants:
\[\eta=\left[1+\frac{4}{\alpha\sigma^{2}}\right]^{1/4},\quad\ \ \gamma_{i}=\left[\frac{\eta}{2^{i}\Gamma(i+1)}\right]^{1/2},\quad\ \ \zeta^{2}=\frac{\alpha}{2}(\eta^{2}-1),\quad\ C_{1}=\alpha+\zeta^{2}+\sigma^{-2}.\]
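These quantities are straightforward to evaluate numerically. A minimal sketch using SciPy's physicists' Hermite polynomials (parameter values are illustrative):

```python
import numpy as np
from math import gamma
from scipy.special import eval_hermite

def mercer_eigenpair(i, alpha, sigma):
    """Return (lambda_i, phi_i) for the Gaussian RBF kernel w.r.t. the OU invariant measure."""
    eta = (1.0 + 4.0 / (alpha * sigma ** 2)) ** 0.25
    zeta2 = 0.5 * alpha * (eta ** 2 - 1.0)
    C1 = alpha + zeta2 + sigma ** (-2)
    lam = np.sqrt(alpha / C1) * (1.0 / (sigma ** 2 * C1)) ** i
    gam = np.sqrt(eta / (2.0 ** i * gamma(i + 1)))
    phi = lambda x: gam * np.exp(-zeta2 * x ** 2) * eval_hermite(i, np.sqrt(alpha) * eta * x)
    return lam, phi

lam0, phi0 = mercer_eigenpair(0, alpha=1.0, sigma=0.5)   # illustrative parameter values
print(lam0, phi0(0.0))
```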
With these results, we can compute the variance of the empirical estimator for \(C^{t}_{\mathbb{H}}\) as described in Theorem 3.1. The eigenvalues \(q_{j}\) were already given above. The coefficients \(d_{j,t}\) can be calculated using Mercer's theorem as
\[d_{j,t}=\int\int k(x,x^{\prime})k(y,y^{\prime})\psi_{j}(x)\psi_{j}(x^{\prime} )\,d\mu_{0,t}(x,y)\,d\mu_{0,t}(x^{\prime},y^{\prime})\]
\[=\sum_{k,\ell}\lambda_{k}\lambda_{\ell}\left[\int\varphi_{k}(x) \varphi_{\ell}(y)\psi_{j}(x)\,d\mu_{0,t}(x,y)\right]^{2}.\]
The series needs to be truncated at a finite number of terms, and the integrals can be calculated by numerical integration. Since \(d_{0,t}=\langle k,k_{t}\rangle_{L^{2}_{\mu\otimes\mu}}=\|C^{t}_{\mathbb{H}}\|^{2}_{HS}\) (cf. (3.8)), we have
\[\|C^{t}_{\mathbb{H}}\|^{2}_{HS}=\sum_{k,\ell}\lambda_{k}\lambda_{ \ell}\left[\int\varphi_{k}(x)\varphi_{\ell}(y)\,d\mu_{0,t}(x,y)\right]^{2}, \tag{5.1}\]
so the Hilbert-Schmidt norm of the cross-covariance operator \(C^{t}_{\mathbb{H}}\) can be computed in the same way. Since, for the Gaussian RBF kernel, \(\varphi(x)=k(x,x)=1\) for all \(x\), we find
\[\mathbb{E}_{0}(t)=\left\langle K^{t}\varphi,\,\varphi\right\rangle_{\mu}-\|C ^{t}_{\mathbb{H}}\|^{2}_{HS}=1-\|C^{t}_{\mathbb{H}}\|^{2}_{HS},\]
completing the list of terms required by Theorem 3.1. In addition, we notice that upon replacing either one or two of the integrals in (5.1) by finite-data averages, we can also calculate \(\|\hat{C}^{m,t}_{\mathbb{H}}\|^{2}_{HS}\) and \(\langle C^{t}_{\mathbb{H}},\hat{C}^{m,t}_{\mathbb{H}}\rangle_{HS}\). Therefore, the estimation error for finite data \(\{(x_{k},y_{k})\}^{m}_{k=1}\) can be obtained by simply expanding the inner product
\[\|C^{t}_{\mathbb{H}}-\hat{C}^{m,t}_{\mathbb{H}}\|^{2}_{HS}=\|C^{t}_{\mathbb{H} }\|^{2}_{HS}+\|\hat{C}^{m,t}_{\mathbb{H}}\|^{2}_{HS}-2\langle\hat{C}^{m,t}_{ \mathbb{H}},C^{t}_{\mathbb{H}}\rangle_{HS},\]
allowing us to precisely compare the estimation error to the error bounds obtained in Theorem 3.1.
Besides the estimation error for \(C^{t}_{\mathbb{H}}\), we are also interested in the prediction error, which is bounded according to Theorem 4.1. We will compare these bounds to the actual error \(\|(K^{t}_{N}-\hat{K}^{m,t}_{N})\phi\|_{L^{2}_{\mu}(\mathcal{X})}\), for a specific observable \(\phi\in\mathbb{H}\) and a fixed number \(N\) of Mercer features. For the OU process, it is again beneficial to consider Gaussian observables \(\phi\):
\[\phi(x)=\frac{1}{\sqrt{2\pi\sigma_{0}^{2}}}\exp\left[-\frac{(x-m_{0})^{2}}{2 \sigma_{0}^{2}}\right].\]
Application of the Koopman operator leads to yet another, unnormalized Gaussian observable, which is given by
\[K^{t}\phi(x)=\frac{1}{\sqrt{2\pi\sigma_{t}^{2}}}\exp\left[-\frac{(m_{0}-e^{- \alpha t}x)^{2}}{2\sigma_{t}^{2}}\right], \sigma_{t}^{2}=\sigma_{0}^{2}+v_{t}^{2}.\]
The inner products of \(K^{t}\phi\) with the Mercer eigenfunctions \(\varphi_{i}\) can be evaluated by numerical integration, providing full access to the truncated observable \(K^{t}_{N}\phi\). On the other hand, the empirical approximation \(\hat{K}^{m,t}_{N}\phi\) can be computed directly based on the data. We note that
\[\hat{K}^{m,t}_{N}\phi=\sum_{j=1}^{N}\left\langle\hat{C}^{m,t}_{\mathbb{H}}\phi,\,\hat{e}_{j}\right\rangle\hat{e}_{j}=\frac{1}{m}\sum_{k=1}^{m}\phi(y_{k}) \sum_{j=1}^{N}\left\langle\Phi(x_{k}),\,\hat{e}_{j}\right\rangle\hat{e}_{j}= \frac{1}{m}\sum_{k=1}^{m}\phi(y_{k})\sum_{j=1}^{N}\hat{e}_{j}(x_{k})\hat{e}_{j}.\]
The functions \(\hat{e}_{j}\) can be obtained from the eigenvalue decomposition of the standard kernel Gramian matrix
\[\frac{1}{m}K_{\mathcal{X}}:=\frac{1}{m}\left[k(x_{k},x_{l})\right]^{m}_{k,l=1},\]
as the latter is the matrix representation of the empirical covariance operator \(\hat{C}^{m}_{\mathbb{H}}\) on the subspace \(\operatorname{span}\{\Phi(x_{k})\}^{m}_{k=1}\). If \(\frac{1}{m}K_{\mathcal{X}}=V\Lambda V^{\top}\) is the spectral decomposition of the Gramian, then
\[\hat{e}_{j}=\frac{1}{m^{1/2}\hat{\lambda}_{j}}\sum_{l=1}^{m}V_{lj}\Phi(x_{l})\]
are the correctly normalized eigenfunctions according to Theorem 4.1. Plugging this into the above, we find
\[\hat{K}_{N}^{m,t}\phi(x) =\frac{1}{m}\sum_{k=1}^{m}\phi(y_{k})\sum_{j=1}^{N}\frac{1}{m^{1/2} \hat{\lambda}_{j}}\sum_{l=1}^{m}V_{lj}k(x_{l},x_{k})\frac{1}{m^{1/2}\hat{ \lambda}_{j}}\sum_{r=1}^{m}V_{rj}k(x_{r},x)\] \[=\frac{1}{m}\phi(Y)^{\top}\frac{1}{m}K_{\mathcal{X}}\left[V_{N} \Lambda_{N}^{-2}V_{N}^{\top}\right]K_{\mathcal{X},x}\] \[=\frac{1}{m}\phi(Y)^{\top}V_{N}\Lambda_{N}^{-1}V_{N}^{\top}K_{ \mathcal{X},x},\]
where \(\phi(Y)=[\phi(y_{k})]_{k=1}^{m}\), \(K_{\mathcal{X},x}=[k(x_{k},x)]_{k=1}^{m}\), \(V_{N}=V[I_{N}\ 0_{m-N}]^{\top}\), \(\Lambda_{N}=\mathrm{diag}(\widehat{\lambda}_{j})_{j=1}^{N}\).
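As a concrete illustration of this closed-form expression, the prediction can be assembled directly from data with a few lines of linear algebra. The following NumPy sketch is our own minimal implementation (not code from the paper); it assumes a one-dimensional state space, a Gaussian RBF kernel, and hypothetical function names:

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    # Gaussian RBF kernel matrix k(a_i, b_j) for one-dimensional samples.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def empirical_koopman_prediction(x_data, y_data, x_eval, phi, sigma, N):
    # Evaluate \hat{K}_N^{m,t} phi at the points x_eval from data pairs (x_k, y_k),
    # following (1/m) phi(Y)^T V_N Lambda_N^{-1} V_N^T K_{X,x} derived above.
    m = len(x_data)
    G = rbf_kernel(x_data, x_data, sigma) / m        # (1/m) K_X, the scaled Gramian
    lam, V = np.linalg.eigh(G)                       # spectral decomposition
    idx = np.argsort(lam)[::-1][:N]                  # keep the N largest eigenvalues
    V_N, lam_N = V[:, idx], lam[idx]
    K_Xx = rbf_kernel(x_data, x_eval, sigma)         # kernel evaluations K_{X,x}
    return (phi(y_data) / m) @ V_N @ np.diag(1.0 / lam_N) @ V_N.T @ K_Xx
```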
### Numerical Results
For the actual numerical experiments, we set \(\alpha=1\), choose the elementary integration time step as \(\Delta_{t}=10^{-2}\), and set the lag time to \(t=0.05\). We compute the exact variance \(\mathbb{E}[\|C_{\mathbb{H}}^{t}-\hat{C}_{\mathbb{H}}^{m,t}\|_{HS}^{2}]\) by the expression given in Theorem 3.1, and also the coarser estimate for the variance given in Corollary 3.3. We test three different kernel bandwidths, \(\sigma\in\{0.05,0.1,0.5\}\). All Mercer series are truncated after the first 10 terms for \(\sigma\in\{0.1,0.5\}\), and 20 terms for \(\sigma=0.05\), while Koopman eigenfunction expansions are truncated after 15 terms.
In the first set of experiments, we use Chebyshev's inequality to compute the maximal estimation error \(\|C_{\mathbb{H}}^{t}-\hat{C}_{\mathbb{H}}^{m,t}\|_{HS}\) that can be guaranteed with confidence \(1-\delta=0.9\), for a range of data sizes \(m\) between \(m=20\) and \(m=50{,}000\). As a comparison, we generate \(200\) independent simulations of length \(m+\frac{t}{\Delta_{t}}\), corresponding to the sliding-window estimator with \(m\) data points, for each data size. We then compute the resulting estimation error using the expressions given in the previous section. We extract the \((1-\delta)\)-percentile of the estimation error over all trajectories, i.e., the maximal error that is not exceeded by \(100\cdot(1-\delta)\) percent of the trajectories. In addition, we also use Chebyshev's inequality with the i.i.d. variance \(\frac{1}{m}\mathbb{E}_{0}(t)\) to predict the estimation error. The comparison of these results for all data sizes \(m\) and the different kernel bandwidths is shown in Figure 3. We observe that the bound from Theorem 3.1 is quite accurate, over-estimating the actual error by about a factor of three, and captures the detailed qualitative dependence of the estimation error on \(m\). The coarser bound from Corollary 3.3, however, appears to discard too much information: it over-estimates the error by one to two orders of magnitude, and also does not capture the initial slope for small \(m\). Finally, we note that for the larger kernel bandwidths, the i.i.d. variance is indeed too small, leading to an under-estimation of the error. This observation confirms that it is indeed necessary to take the effect of the correlation between data points into account.
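For reference, the Chebyshev-type step used in these experiments presumably amounts to applying Markov's inequality to the squared Hilbert-Schmidt error: with probability at least \(1-\delta\),

\[\|C^{t}_{\mathbb{H}}-\hat{C}^{m,t}_{\mathbb{H}}\|_{HS}\leq\sqrt{\frac{1}{\delta}\,\mathbb{E}\big[\|C^{t}_{\mathbb{H}}-\hat{C}^{m,t}_{\mathbb{H}}\|_{HS}^{2}\big]},\]

where the variance term is supplied either by the exact expression from Theorem 3.1, the coarser estimate from Corollary 3.3, or the i.i.d. surrogate \(\frac{1}{m}\mathbb{E}_{0}(t)\).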
In a second set of experiments, we test the performance of our theoretical bounds concerning the prediction of expectations for individual observables, obtained in Theorem 4.1. For the same three Gaussian RBF kernels as in the first set of experiments, we consider the observable \(\phi=\varphi_{0}\), i.e., the first Mercer feature, and choose \(N=10\) in the Mercer series expansion \(K_{N}^{t}\phi\) and its empirical approximation \(\hat{K}_{N}^{m,t}\phi\). Note that \(\phi\) is a different observable depending on the bandwidth. Again, we set \(1-\delta=0.9\), and use the bound from Theorem 4.1 to bound the \(L_{\mu}^{2}\)-error between \(K_{N}^{t}\phi\) and \(\hat{K}_{N}^{m,t}\phi\). As a comparison, we compute the actual \(L_{\mu}^{2}\)-error by numerical integration, using the fact that we can evaluate \(K_{N}^{t}\phi\) and \(\hat{K}_{N}^{m,t}\phi\) based on the discussion above. We repeat this procedure 15 times and provide average errors and standard deviations. The results for all three kernels are shown in Figure 4, and we find that our theoretical bounds are much too pessimistic in all cases. This finding highlights our previous observation that bounding the prediction error outside the RKHS still requires more in-depth research.
## 6. Conclusions
We have analyzed the finite-data estimation error for data-driven approximations of the Koopman operator on reproducing kernel Hilbert spaces. More specifically, we have provided an exact expression for the variance of empirical estimators for the cross-covariance operator, if a sliding-window estimator is applied to a long ergodic trajectory of the dynamical system. This setting is relevant for many complex systems, such as molecular dynamics simulations. Our results present a significant improvement over the state of the art, since they concern a setting where the notorious problem of dictionary selection can be circumvented, and therefore no longer depend on the dictionary size. We have also extended the concept of asymptotic variance to an infinite-dimensional approximation space for the Koopman operator. Our numerical study on the Ornstein Uhlenbeck process has shown that, even using a simple mass concentration inequality, accurate bounds on the estimation error can be obtained.
In our second result, we have extended our estimates to a uniform bound on the prediction error for observables in the RKHS. Thereby, we have circumvented dealing with an unbounded inverse of the covariance operator by applying a finite-dimensional truncation of the associated Mercer series. In case of Koopman-invariance of the RKHS, however, we were able to find a bound on the truncation error which then yields estimates for the full approximation error.
Figure 4. Comparison of the theoretical bound on the prediction error \(\|K_{N}^{t}\phi-\hat{K}_{N}^{m,t}\phi\|_{\mu}\), if \(\phi\) is chosen as the first Mercer feature \(\varphi_{0}\), using \(N=10\) in the Mercer series representation. The predicted error is shown in blue, error bars for the actual error obtained from 15 independent data sets are shown in red. Different panels correspond to the same kernel bandwidths as in Figure 3 above.
Figure 3. Probabilistic error estimates for \(C_{\mathbb{H}}^{t}\) associated with the OU process, at lag time \(t=0.05\), and the Gaussian RBF kernel with different bandwidths \(\sigma\in\{0.05,0.1,0.5\}\) (corresponding to left, center and right panels). The blue and green curves show the estimated error using the fine and coarse bounds from Theorem 3.1 and Corollary 3.3, respectively, while the purple curves represent the bound obtained from the i.i.d.-variance \(\frac{1}{m}\mathbb{E}_{0}(t)\). The red curve shows the \(0.9\)-percentile of the estimation error based on \(200\) independent simulations.
Still, the resulting error bounds have proven very conservative in the numerical examples. Therefore, obtaining sharper bounds on the prediction error constitutes a primary goal for future research.
|
2310.01634 | Deep Insights into Noisy Pseudo Labeling on Graph Data | Pseudo labeling (PL) is a wide-applied strategy to enlarge the labeled
dataset by self-annotating the potential samples during the training process.
Several works have shown that it can improve the graph learning model
performance in general. However, we notice that the incorrect labels can be
fatal to the graph training process. Inappropriate PL may result in the
performance degrading, especially on graph data where the noise can propagate.
Surprisingly, the corresponding error is seldom theoretically analyzed in the
literature. In this paper, we aim to give deep insights of PL on graph learning
models. We first present the error analysis of PL strategy by showing that the
error is bounded by the confidence of PL threshold and consistency of
multi-view prediction. Then, we theoretically illustrate the effect of PL on
convergence property. Based on the analysis, we propose a cautious pseudo
labeling methodology in which we pseudo label the samples with highest
confidence and multi-view consistency. Finally, extensive experiments
demonstrate that the proposed strategy improves graph learning process and
outperforms other PL strategies on link prediction and node classification
tasks. | Botao Wang, Jia Li, Yang Liu, Jiashun Cheng, Yu Rong, Wenjia Wang, Fugee Tsung | 2023-10-02T20:57:11Z | http://arxiv.org/abs/2310.01634v1 | # Deep Insights into Noisy Pseudo Labeling
###### Abstract
Pseudo labeling (PL) is a widely applied strategy to enlarge the labeled dataset by self-annotating potential samples during the training process. Several works have shown that it can improve graph learning model performance in general. However, we notice that incorrect labels can be fatal to the graph training process. Inappropriate PL may result in performance degradation, especially on graph data, where the noise can propagate. Surprisingly, the corresponding error is seldom analyzed theoretically in the literature. In this paper, we aim to give deep insights into PL on graph learning models. We first present an error analysis of the PL strategy by showing that the error is bounded by the PL confidence threshold and the consistency of multi-view predictions. Then, we theoretically illustrate the effect of PL on the convergence property. Based on the analysis, we propose a cautious pseudo labeling methodology in which we pseudo label the samples with the highest confidence and multi-view consistency. Finally, extensive experiments demonstrate that the proposed strategy improves the graph learning process and outperforms other PL strategies on link prediction and node classification tasks.
## 1 Introduction
Pseudo Labeling (PL) [27; 13] is one of the most popular self-supervised learning approaches and has been widely used to tackle the label sparsity problem. Its core idea is to enlarge the training set by self-labeling. As most self-labeled samples should be consistent with the ground truth, the enlarged dataset has a larger sample capacity to improve the model generalization. Plenty of studies have shown the effectiveness of PL [7; 26; 16; 5]. However, there is a trade-off between the benefit of PL and the effect of mislabeled samples. When the benefit of PL outweighs the impact of introduced noise, the performance of the base model (i.e., without PL) can be improved. But under non-i.i.d. conditions such as graph data, the introduced noisy labels may propagate among the samples and be amplified, which may degrade the performance of the base model. Although several methods have been proposed to alleviate this issue [31; 32; 14], there is still neither a clear explanation nor a quantification of how pseudo labeling affects graph learning models.
In this study, we attempt to answer questions above. Specifically, we evaluate PL's effect on the prediction error of the base model and the convergence of the empirical loss function. For a graph learning model, the message aggregation process would amplify the noises of incorrect labels introduced by PL. These noises can even accumulate to damage the base model's performance. For
example, in a two-layer graph neural network (GNN) for node classification, the mislabeled nodes can inappropriately reduce the predicted confidence of their 2-hop neighbors. Moreover, in some circumstances, such as the link prediction task, the pseudo labeled samples serve not only as labels but also as model inputs in subsequent iterations. This characteristic further hampers the convergence of the loss function due to the noise added to the adjacency matrix.
To visualize the side effect of PL on graph data, we conduct a toy experiment as shown in Figure 1, where the benefit of PL to a popular link prediction model **GAE**[12] depends on the choice of graph dataset, i.e., PL improves the performance of GAE on Actor, but degrades it on WikiCS. Even worse, due to the incorrect labels it introduces, PL leads to model collapse on Amazon_Photo.
To obtain a stable and consistent result with the PL strategy, it is necessary to quantify the impact of the introduced noisy labels and theoretically analyze how they affect the graph training procedure compared to the procedure without PL. In this paper, we build a theoretical connection between the PL strategy and graph learning models with multi-view augmentations. We prove that the error of the PL predictor is jointly bounded by a confidence threshold and the prediction consistency over different augmentations. Moreover, we theoretically show that the PL strategy affects the convergence property through a covariance term in the optimization objective. Accordingly, we propose the Cautious Pseudo Labeling (CPL) strategy for the graph learning process, which maximizes the confidence threshold by committing only the PL samples with the highest prediction probability. We evaluate CPL on different graph learning tasks, including link prediction and node classification models, where we observe remarkable and consistent improvements over multiple datasets and base models. As shown in Figure 1, compared with the base model GAE, the average AUC improvement of CPL over three datasets is 7.79%, which clearly validates the superiority of our model.
## 2 Model
In this section, we define the general problem of graph learning, including link prediction and node classification tasks. We give the error bound and convergence property analysis associated with the application of PL strategy. Lastly, we propose cautious PL strategy accordingly.
### Problem Definition
In graph learning, given graph \(G=(\mathcal{V},\mathcal{E})\), \(\mathcal{V}=\{v_{i}\}\) is the node set, \(\mathcal{E}=\{(i,j)\}\) is the edge set, and \(|\mathcal{V}|=N\) is the node number. The feature matrix and adjacency matrix are denoted by \(X=[x_{ij}]_{N\times F}\) and \(A=[a_{ij}]_{N\times N}\), respectively, where \((i,j)\in\mathcal{E}\) if and only if \(a_{ij}\neq 0\). A base GNN model \(g\) is trained on the graph \(G\) and the observed labels, and it outputs the probability of the prediction target. In this work, we adopt the PL scheme, which involves a teacher model \(g_{\phi}\) and a student model \(g_{\psi}\). The teacher model calculates the confidence of the unlabeled samples and pseudo labels a subset of samples using the strategy \(\mathcal{T}\). Subsequently, the student model utilizes the enlarged set to fine-tune the base model and becomes the teacher model in the next iteration.
**Node classification task.** The objective of the node classification task is to predict the probabilities of an unlabeled node belonging to different classes \(g:G\rightarrow\mathbb{R}^{N\times M}\), where \(M\) is the number of classes. The original training set comprises the labeled nodes \(\mathcal{V}_{o}\) and their labels \(Y_{o}\). And the model aims to predict the classes of the unlabeled nodes \(\mathcal{V}_{u}\). In the PL scheme, the teacher model calculates the confidence of the unlabeled nodes \(\hat{Y}_{u}\) and assigns pseudo labels to a selected subset of nodes
Figure 1: The performance of GAE, PL and proposed CPL strategy on link prediction w.r.t. Actor, WikiCS and Amazon_Photo dataset. Horizontal axis is #PL samples, and vertical axis is AUC.
\(\{V_{p},Y_{p}|\mathcal{V}_{p}\subset\mathcal{V}_{u},Y_{p}\subset\hat{Y}_{u}\}\). Then the student model undergoes fine-tuning on the enlarged label set \(g_{\psi}(G)\rightarrow\hat{Y}_{o},\hat{Y}_{o}=Y_{o}\cup Y_{p}\), \(\hat{Y}_{o}\) is the enlarged label set.
**Link prediction task.** The prediction targets are the probabilities of edges between unlabeled node pairs. In our scheme, the link prediction model \(g(g_{emb},s)\) consists of an embedding network \(g_{emb}:G\rightarrow\mathbb{R}^{N\times D}\) and a score function \(s:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow\mathbb{R}\) that estimates the probability of the link, where \(D\) is the dimension of the node embeddings. A portion of edges is observed as the training graph \(G=(\mathcal{V},\mathcal{E}_{o}),\mathcal{E}_{o}\subset\mathcal{E}\). The teacher model predicts the confidence of the unobserved edges \(\mathcal{E}_{u}\) and enlarge the observed edge set \(\hat{\mathcal{E}}_{o}=\mathcal{E}_{p}\cup\mathcal{E}_{o}\) for fine-tuning \(g_{\psi}:G(\mathcal{V},\hat{\mathcal{E}}_{o})\rightarrow\mathbb{R}^{N\times N}\).
It is worth mentioning that in the link prediction task, the node embeddings change once pseudo labels are added, even before fine-tuning, because the enlarged edge set is also used as input to the GNN.
The optimization target is formulated as minimizing \(\mathcal{L}=\text{CE}(g_{\psi},Y_{o})/|Y_{o}|\), where \(\text{CE}(\cdot)\) denotes cross entropy, \(Y_{o}\) is the ground truth label of the observed set and represents \(\mathcal{E}_{o}\) for the link prediction task. The overall performance is evaluated by the 0-1 loss: \(\text{Err}(g)=\mathbb{E}[\text{argmax}(g_{\psi})\neq Y]\).
### Pseudo Labeling Error Analysis
We here present the error bound analysis of the graph learning under the PL strategy. To facilitate mathematical treatment, we introduce several assumptions.
#### 2.2.1 Graph perturbation invariant
We assume **graph perturbation invariant** (GPI) property in GNN, which states that the prediction variance is linearly bounded by the difference between the augmented and original inputs.
**Definition 2.1**: _Given a graph \(G\) and its perturbation \(\hat{G}=G(X\odot M_{x},A\odot M_{a})\) by the random feature masks \(M_{x}\in\{1,0\}^{N\times F}\) and adjacency matrix mask \(M_{a}\in\{1,0\}^{N\times N}\) satisfying_
\[\frac{1}{N\cdot F}\|\textbf{1}^{N\times F}-M_{x}\|_{2}^{2}+\frac{1}{N^{2}}\| \textbf{1}^{N\times N}-M_{a}\|_{2}^{2}<\epsilon, \tag{1}\]
_the GNN \(g(\cdot)\) has GPI property if there exists a constant \(C>0\) such that the perturbed prediction confidence satisfies \(\|g(\hat{G})-g(G)\|_{2}^{2}<C\epsilon\). Here, \(\odot\) is element-wise product, and \(\|\cdot\|_{2}\) is the 2-norm of the vector or matrix._
The GPI property guarantees that the variation of the output confidence is linearly bounded by the degree of graph perturbation, analogous to a \(C\)-Lipschitz condition on the GNN.
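To make the perturbation budget in Definition 2.1 concrete, the following sketch (our own illustration, not code from the paper) draws random feature and adjacency masks and evaluates the left-hand side of Eq. 1:

```python
import torch

def perturb_graph(X, A, p_feat=0.05, p_edge=0.05):
    # Randomly mask features/edges and return the perturbed graph together with
    # the perturbation measure from Eq. 1. Purely illustrative.
    N, F = X.shape
    M_x = (torch.rand(N, F) > p_feat).float()   # random feature mask
    M_a = (torch.rand(N, N) > p_edge).float()   # random adjacency mask
    eps = ((1 - M_x) ** 2).sum() / (N * F) + ((1 - M_a) ** 2).sum() / (N ** 2)
    return X * M_x, A * M_a, eps.item()
```

Under GPI, the prediction on any such perturbed graph deviates from \(g(G)\) by at most \(C\epsilon\) in squared norm.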
#### 2.2.2 Additive expansion property
With Definition 2.1, if we have a convex measurable function \(f(y)\) that satisfies the \(C\)-Lipschitz condition, we can find a probability measure on the output prediction \(p_{f}(y)\) that satisfies the **additive expansion property** stated in Proposition 2.2.
**Proposition 2.2**: _Define a local optimal subset as \(U\subset Y\), whose probability is higher than a threshold \(p_{f}(y)>1-q,y\in U\), and its perturbation set \(U_{\epsilon}=\{\hat{y}=g(G):\|\hat{y}-y\|_{2}\leq C\epsilon,y\in U\}\), where \(G\in\{\hat{G}\}\) is the space of the perturbed graph. Then, there exist \(\alpha>1,\eta>0\), s.t. the probability measure \(p_{f}\) satisfies the following additive expansion property on the link prediction task: \(p_{\alpha f}(U_{\epsilon}\setminus U)\geq p_{\alpha f}(U)+\eta\cdot\alpha\)._
The proposition guarantees the continuity of the measurable function in the neighborhood of the local optimal subset \(U\). In practice, \(U\) represents the PL samples, which ideally should be close to the ground truth labels. However, the correctness of the PL samples is discrete, so the continuity condition is hard to satisfy. To address this issue, we can leverage GPI. By applying multi-view augmentations and calculating the average confidence, we can reparameterize the probability measure to be continuous. In this condition, Proposition 2.2 implies that the probability of the neighborhood under the amplified measure is greater than that of the original local optimum. This observation opens up potential opportunities for optimization. For a detailed proof of Proposition 2.2, please refer to Appendix A.
#### 2.2.3 Prediction error measurement
Then, we use the additive expansion property to calculate the error bound of the multi-view GNN model under PL strategy. According to Theorem B.2 in [25], when we use 0-1 loss to measure the error of predictor, the error bound can be evaluated by the following theorem.
**Theorem 2.3**: _Let \(q>0\) be a given threshold. For the GNN in the teacher model \(g_{\phi}\), if its corresponding density measure satisfies additive expansion, the error of the student predictor \(g_{\psi}\) is bounded by_
\[\text{Err}(g)\leq 2(q+\mathcal{A}(g_{\psi})), \tag{2}\]
_where \(\mathcal{A}(g_{\psi})=\mathbb{E}_{Y_{\text{test}}}[\mathbf{I}(\exists g_{\psi}( \hat{G})\neq g_{\psi}(G))]\) measures the inconsistency over differently augmented inputs, \(Y_{\text{test}}\) is the test set for evaluation._
The brief derivation is provided in Appendix B. From Eq. 2, we observe that the prediction error (0-1 loss) is bounded by the confidence threshold \(q\) and the expectation of the prediction inconsistency \(\mathcal{A}\). Intuitively, if the predictions of multi-view augmented inputs tend to be consistent and the threshold \(q\) is smaller, the error bound will be smaller, which implies a more accurate predictor.
If \(q\) is small, the PL threshold probability \(1-q\) approaches 1. This cautious approach to self-training yields a smaller upper bound on the error. When applying random PL with a confidence threshold of \(q=0.5\), the maximum theoretical error rate is 1, which means that there is no guarantee on the performance after applying PL.
The inconsistency term \(\mathcal{A}\) is the expectation of the probability that predictions are inconsistent when input graphs are augmented differently. A small value of \(\mathcal{A}\) indicates consistent prediction across different views. In such cases, we have more confidence in the predictions, leading to a smaller error bound. On the other hand, if the predictions from different views are inconsistent with each other, the model lacks robustness, resulting in a larger error bound.
### Convergence Analysis
In this section, we analyze the influence of the PL strategy on the empirical loss to illustrate its convergence property. First, we assume that optimizing the teacher model will not influence the convergence property of the PL strategy. This assumption stems from the understanding that the teacher model can converge independently without the incorporation of PL.
**Assumption 2.4**: _The loss function defined in Algorithm 1 is non-increasing during the optimization: \(\text{CE}(g_{\psi}^{(t)},\hat{Y}_{o}^{(t+1)})\leq\text{CE}(g_{\phi}^{(t)}, \hat{Y}_{o}^{(t)}).\)_
Then, we show that the PL sample selection strategy \(\mathcal{T}\) influences the covariance term derived from the empirical loss, then affects the convergence property.
\[\mathcal{L}_{\mathcal{T}}^{(t+1)}\leq\beta\text{Cov}\left[\text{ce}\left(g_{ \psi},Y\right),\mathcal{T}\right]+\mathcal{L}_{\mathcal{T}}^{(t)} \tag{3}\]
where \(\beta=|\hat{Y}_{u}|/(|\hat{Y}_{o}|+k)\), \(ce(\cdot)\) is the element-wise cross entropy. The equality is achieved when the model reaches the global optimal under given training set.
The detailed derivation of the Eq.3 is shown in Appendix.C. From Eq.3, we observe that the effect of PL strategy is decoupled and encapsulated in the covariance term. The covariance sign determines whether the empirical loss will increase or decrease in the next iteration of optimization. For instance, when PL samples are randomly selected, \(\mathcal{T}\) would be independent with \(g_{\psi}\). The covariance becomes 0, indicating that the PL does not influence the loss function. However, a carefully designed PL strategy can accelerate the optimization of empirical loss and yield improved convergence properties.
### Cautious Pseudo Labeling
According to Theorem 2.3, setting a higher confidence threshold for pseudo labeling (PL) can lead to a lower prediction error. According to Eq. 3, a negative covariance term accelerates the optimization of the empirical loss and improves the convergence property. To satisfy both requirements, we propose the iterative Cautious Pseudo Labeling (CPL) strategy, which carefully and iteratively pseudo labels the most confident samples to maximize these improvements in prediction accuracy and convergence.
There are a teacher model and a student model in each iteration. First, we calculate multi-view prediction by the teacher model \(g_{\phi}\) as the confidence measure. Although biased, it is a simple and effective measure in many PL strategies. In node classification, the GNN directly outputs the probability as confidence \(g(G)\). For link prediction, the confidence is determined by the inner product similarity of node embedding \(g(G)=\sigma(E^{T}E)\), where \(E=(e_{1},...,e_{N})^{T}=g_{emb}(G)\) is the node embedding, and \(\sigma(\cdot)\) is the sigmoid function.
Then we select PL samples in unobserved set with the highest confidence, enlarging the training set. We iteratively select top-\(k\) confident samples for PL: \(Y_{p}^{(t)}=\mathcal{T}(\hat{Y}_{u}^{t},\overline{g}_{\phi},k)\), where \(\mathcal{T}\in\{0,1\}^{|\hat{Y}_{u}^{(t)}|},\sum\mathcal{T}=k\). \(\overline{g}_{\phi}^{(t)}\) is the averaged confidence of the multi-view augmentation and is equal to \(g_{\phi}^{(t)}\) for the single input. At the \(t\)-th iteration, the selected PL samples are \(Y_{p}^{(t)}\).
Then we update the observed and the unobserved sets. The student model is fine-tuned on the enlarged set and becomes the new teacher model in the next iteration. We take the link prediction task as an example, whose main scheme is shown in Fig. 2. The complete algorithm of CPL is shown in Algorithm 1.
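A minimal sketch of one CPL iteration for node classification is given below; it is our own illustration rather than the paper's code, and `model`, `augment`, and `fine_tune` are hypothetical placeholders (for link prediction, the confidence would instead come from \(\sigma(E^{T}E)\) as described above):

```python
import torch

def cpl_iteration(model, graph, observed_idx, unobserved_idx, labels, k, n_views=3):
    # 1) Teacher: average confidence over multi-view augmented inputs.
    with torch.no_grad():
        probs = torch.stack([model(augment(graph)).softmax(-1)
                             for _ in range(n_views)]).mean(0)
    conf, pseudo = probs[unobserved_idx].max(dim=-1)

    # 2) Cautiously pseudo label the top-k most confident unobserved nodes.
    top = conf.topk(k).indices
    new_idx = unobserved_idx[top]
    labels[new_idx] = pseudo[top]
    q_t = conf[top].min().item()          # lowest confidence among the PL samples

    # 3) Enlarge the observed set, shrink the unobserved set, fine-tune the student.
    observed_idx = torch.cat([observed_idx, new_idx])
    keep = torch.ones(len(unobserved_idx), dtype=torch.bool)
    keep[top] = False
    unobserved_idx = unobserved_idx[keep]
    fine_tune(model, graph, observed_idx, labels)  # student becomes the next teacher
    return observed_idx, unobserved_idx, q_t
```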
The confidence threshold \(q^{(t)}\) recorded in Algorithm 1 is the lowest confidence among the PL samples \(Y_{p}^{(t)}\). At each iteration, we record the lowest confidence in these \(k\) PL samples as \(q^{(t)}\). We update \(q\) to be \(q^{(t)}\) if \(q^{(t)}\) is smaller. Finally, \(q\) serves as the confidence threshold in Eq.2 for error analysis in Theorem 2.3.
The following theorem states the improvement on convergence property under CPL.
**Theorem 2.5**: _Let \(\mathcal{L}_{\mathcal{T}}^{(t)}\) denote the optimization target at the \(t\)-th iteration. The risk is monotonically decreasing under pseudo labeling strategy, i.e.,_
\[\mathcal{L}_{\mathcal{T}}^{(t+1)}\leq\beta\text{Cov}\left[\text{ ce}\left(g_{\psi},Y\right),\mathcal{T}\right]+\mathcal{L}_{\mathcal{T}}^{(t)} \leq\mathcal{L}_{\mathcal{T}}^{(t)}. \tag{4}\]
_Here \(\mathcal{L}_{\mathcal{T}}^{(t+1)}\) is calculated by \(1/|\hat{Y}_{o}^{(t+1)}|\text{CE}(g_{\psi}^{(t)},\hat{Y}_{o}^{(t+1)})\)._
We have demonstrated the first inequality in Eq.3. The second inequality holds because the covariance between the cross-entropy loss and PL strategy is non-positive. On one hand, each view of \(g_{\psi}\) is positively correlated to the ground-truth label \(Y\) due to the consistency regularization. Then, \(g_{\psi}\) exhibits negative correlation with the cross-entropy loss \(\text{ce}(g_{\psi},Y)\). On the other hand, the PL strategy \(\mathcal{T}\) can be seen as a discrete sign function. In CPL, higher confidence samples are more likely to be pseudo-labeled. Therefore, \(\mathcal{T}\) is positively correlated with \(g_{\psi}\). Consequently, \(\mathcal{T}\) is negatively correlated with the cross-entropy term, resulting in a non-positive covariance term \(\text{Cov}(\cdot)\). This guarantees the monotone decreasing behavior of the loss function, as stated in Theorem 2.5. Furthermore, since the loss function is lower bounded, Theorem 2.5 ensures the convergence of Algorithm 1.
Figure 2: A framework illustration of CPL on link prediction: there is a teacher model and a student model that share parameters. The most confident samples in the teacher model's prediction are pseudo labeled. Then the student model is fine-tuned on the enlarged dataset and becomes the teacher model in the next iteration.
**Time complexity.** The computational complexity of CPL depends on the complexity of specific GNN models used. The key operation in CPL is the selection of top-\(k\) values. For node classification, it takes \(O(N\log k)\), while for link prediction, it takes \(O(N^{2}\log k)\). However, since GNN models typically compute probabilities for all samples, the overhead introduced by CPL does not increase the complexity of existing methods significantly. For example, the complexity of GAE [12] for link prediction is \(O(|\mathcal{E}_{o}|D^{2}+N^{2}D)\) (i.e., the complexity of graph convolution operations and inner-products). Integrating CPL into GAE does not increase the complexity since both involve \(O(N^{2})\) operations. In practice, the additional time consumption mainly results from the fine-tuning process.
## 3 Experiments
In this section, we conduct an evaluation to assess the effectiveness of the CPL strategy on both link prediction and node classification tasks. We compare CPL with raw base models as well as other PL strategies. Then, we analyze the impact of CPL capacity, training data ratio, PL strategy, and augmentation methods. Finally, a case study is bridged to the theoretical analysis on convergence and error bound.
### Datasets and Benchmarks
We adopt five publicly available datasets to evaluate the CPL strategy for link prediction, i.e., CiteSeer, Actor, WikiCS, TwitchPT, and Amazon_Photo, and five datasets for node classification, i.e., Cora, CiteSeer, PubMed, Amazon_Photo, and LastFMAsia. Detailed statistics are reported in Table 1.
In the link prediction task, as there are few PL-based methods, we apply the CPL strategy to three popular models: **GAE**[12], **node2vec**[4], and **SEAL**[29]. To reserve sufficient candidate unobserved samples for PL, the dataset is randomly split into 10%, 40%, and 50% for training, validation, and testing.
In the node classification task, we employ CPL on 4 popular base models: **GCN**, **GraphSAGE**, **GAT**, and **APPNP**. CPL is compared with two other PL strategies, namely **M3S**[22] and **DR-GST**[18], using the implementation and parameters provided by the authors 2. We adopt the official split for the citation datasets and a 5%/15%/80% split for the other datasets in node classification.
Footnote 2: [https://github.com/BUPT-GAMMA/DR-GST](https://github.com/BUPT-GAMMA/DR-GST)
We run experiments with 5 random seeds and report the mean and standard deviations of the metrics.
### Performance Comparison
#### 3.2.1 Overall performance comparison
For link prediction, Table 2 lists the AUC and AP of raw baselines and ones with CPL employed. We observe that CPL distinctively increases the performance of baseline models in nearly all cases. Note that the CPL strategy can achieve performance gain under the circumstances of both high and low performance (e.g., the AUC of GAE on Actor improves from 55.34 to 65.58 and the AP of SEAL on CiteSeer improves from 64.38 to 64.94).
For node classification, Table 3 shows the improvement on the base models and the comparison with other PL strategies. The CPL strategy consistently improves the performance of the base models. However, other PL strategies may be ineffective or degrade the base model (e.g., DR-GST on Cora with APPNP, M3S for PubMed with GraphSAGE). CPL also consistently outperforms other PL strategies. The time consumption is also reported in Appendix D.
#### 3.2.2 Impact of pseudo labeling capacity
In the experiment, the number of PL samples in each iteration \(k\) is set from 100 to 1000, and we provide a reference interval for the selection of \(k\). Intuitively, a small \(k\) leads to a reliable result, but too small a \(k\) unnecessarily increases the training time; across the datasets we test, \(k\) does not prominently influence the overall performance. In the experiment, the predicted confidence distribution is concentrated around 1, as there are usually plenty of potential PL samples and \(k\) is much smaller than the total number of unobserved samples. Hence the \(k\)-th highest confidence in the unobserved set does not differ much across choices of \(k\), as shown in Table 4.
#### 3.2.3 Impact of training data ratio
PL enlarges the observed dataset and introduces extra information for training, and the degree of this contribution depends on the amount of training data. We therefore vary the ratio of the observed set from 0.1 to 0.9 of the training set. The variation of AUC and AP on the Amazon_Photo dataset is shown in Fig. 3. The PL method consistently improves the performance of the raw model even when starting from a small training set. It is also worth mentioning that CPL tends to contribute more significantly when the training set is small, as more information is introduced.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Dataset** & **Cora** & **CiteSeer** & **PubMed** & **Actor** & **WikiCS** & **TwitchPT** & **Amazon_Photo** & **LastFMAsia** \\ \hline \# Nodes & 2,078 & 3,327 & 19,717 & 7,600 & 11,701 & 1,912 & 7,650 & 7,624 \\ \# Links & 10,556 & 9,104 & 88,648 & 30,019 & 216,123 & 64,510 & 238,162 & 55,612 \\ \# Features & 1433 & 3,703 & 500 & 932 & 300 & 128 & 745 & 128 \\ \# Classes & 7 & 6 & 3 & 5 & 10 & 2 & 8 & 18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **Model** & **Citeseer** & **Actor** & **WikiCS** & **TwitchPT** & **Amazon_Photo** \\ \hline \multirow{5}{*}{**AUC(\%)**} & GAE & 71.10 \(\pm\) 0.56 & 55.34 \(\pm\) 0.57 & 90.81 \(\pm\) 0.69 & 74.48 \(\pm\) 3.03 & 67.92 \(\pm\) 1.31 \\ & GAE+CPL & **72.45 \(\pm\) 0.24** & **65.58 \(\pm\) 0.14** & **95.56 \(\pm\) 0.24** & **79.67 \(\pm\) 3.77** & **76.30 \(\pm\) 1.84** \\ & node2vec & 52.03 \(\pm\) 0.60 & 53.30 \(\pm\) 0.59 & 88.82 \(\pm\) 0.28 & 79.46 \(\pm\) 0.77 & 89.32 \(\pm\) 0.21 \\ & node2vec+CPL & **55.22 \(\pm\) 1.63** & **65.11 \(\pm\) 2.31** & **91.99 \(\pm\) 0.26** & **84.76 \(\pm\) 3.52** & **89.53 \(\pm\) 0.30** \\ & SEAL & 63.60 \(\pm\) 0.01 & 73.41 \(\pm\) 0.02 & 86.01 \(\pm\) 0.04 & 87.80 \(\pm\) 0.01 & 76.96 \(\pm\) 0.17 \\ & SEAL+CPL & **64.33 \(\pm\) 0.14** & **73.54 \(\pm\) 0.01** & **86.83 \(\pm\) 0.07** & **87.87 \(\pm\) 0.01** & **78.86 \(\pm\) 0.01** \\ \hline \multirow{5}{*}{**AP(\%)**} & GAE & 72.12 \(\pm\) 0.63 & 53.60 \(\pm\) 1.06 & 90.58 \(\pm\) 0.71 & 69.73 \(\pm\) 5.06 & 67.06 \(\pm\) 0.99 \\ & GAE+CPL & **73.54 \(\pm\) 0.20** & **67.65 \(\pm\) 1.06** & **95.58 \(\pm\) 0.29** & **79.09 \(\pm\) 5.48** & **75.52 \(\pm\) 4.23** \\ & node2vec & 52.90 \(\pm\) 0.36 & 55.43 \(\pm\) 0.62 & 92.54 \(\pm\) 0.51 & 83.37 \(\pm\) 0.52 & 91.46 \(\pm\) 0.18 \\ & node2vec+CPL & **56.19 \(\pm\) 1.60** & **68.33 \(\pm\) 2.85** & **93.66 \(\pm\) 0.29** & **85.87 \(\pm\) 2.15** & **91.47 \(\pm\) 0.21** \\ & SEAL & 64.38 \(\pm\) 0.01 & 73.17 \(\pm\) 0.12 & 83.63 \(\pm\) 0.16 & 87.69 \(\pm\) 0.01 & 73.72 \(\pm\) 0.56 \\ & SEAL+CPL & **64.94 \(\pm\) 0.14** & **73.44 \(\pm\) 0.02** & **86.72 \(\pm\) 0.12** & **87.75 \(\pm\) 0.02** & **80.36 \(\pm\) 0.09** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison on link prediction.
#### 3.2.4 Impact of noisy pseudo labeling strategy
As the noise introduced by PL has a large effect on the graph, we compare CPL with the plain PL strategy, in which samples are selected randomly from the unobserved set and their labels are estimated by the teacher model. The variation of AUC on different datasets is shown in Fig. 1. We discover that PL has different effects on the baseline models. On Actor, PL consistently improves the link predictor, but not as much as CPL. On WikiCS, however, PL keeps weakening the AUC, as the introduced noise outweighs the benefit. On Amazon_Photo, PL improves the performance in the first few iterations, but the AUC then suddenly drops from 70% to around 50%, which illustrates that introducing too much noise may degrade the model. As for CPL, although its performance may drop after reaching its best, e.g., on WikiCS, it distinctly improves over the base model and does not lead to failure.
#### 3.2.5 Impact of multi-view augmentation
During training, we apply multi-view augmented graphs as the input of the teacher model, so that the continuity condition in Theorem 2.3 is easier to satisfy. We conduct ablation experiments that compare multi-view augmentation with different single-view augmentation methods, including drop node (Node view), feature mask (Feature view), and DropEdge (Structure view). In each experiment, a single augmentation method is applied three times, and "Random" refers to the random selection of samples during PL. As shown in Fig. 4, all of the single-view augmentation methods perform better than the base model, and their improvements are almost the same. When we use multi-view augmentation, the AUC and AP are further improved. This illustrates that multi-view augmentation contributes to more robust graph learning and tends to yield consistent results, which echoes our theoretical analysis.
### Case Study
The case study on node classification is conducted on LastFMAsia with different baselines. The detailed intermediate variables of the error analysis are listed in Table 5. We record the inconsistency \(\mathcal{A}\) and confidence \(1-q\) during CPL as in Algorithm 1, and the theoretical error bound \(\text{Err}_{th}(g)\) is then calculated by Theorem 2.3. As \(\text{Err}_{th}(g)>\text{Err}_{exp}(g)\), the experimental error is bounded by the theoretical error, implying that it is an effective bound. We also report the error of the PL samples: CPL's PL error is smaller than that of other PL strategies and is consistently lower than \(\text{Err}_{exp}(g)\).
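As a sanity check on how the rows of Table 5 fit together, the theoretical bound is obtained by plugging the recorded quantities into Theorem 2.3; for the GCN column, for example,

\[\text{Err}_{th}(g)=2\big(q+\mathcal{A}(g_{\psi})\big)=2\big((1-0.7763)+0.0669\big)=58.12\%,\]

which indeed dominates the experimental error of 20.08%.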
The case study on link prediction is conducted with GAE on WikiCS. The relationship between the consistency \(1-\mathcal{A}(g)\) and the number of PL samples is shown in Fig. 5. The result shows that more PL samples help to increase the consistency of the prediction. We also compare the optimization targets of PL \(\mathcal{L}_{\mathcal{R}}^{(t)}\) and CPL \(\mathcal{L}_{\mathcal{T}}^{(t)}\) in Fig. 5. We discover that the optimization target of CPL converges faster than that of PL and is consistently smaller. This implies that the covariance term \(\text{Cov}\left[\text{ce},\mathcal{T}\right]\) is negative, as illustrated in Theorem 2.5. Thus, the CPL strategy improves the convergence property of graph learning. A detailed illustration of the error bound verification and knowledge discovery is given in Appendix E.
## 4 Related Works
Pseudo labeling is a popular approach in self-training (ST), aiming to enlarge the training set by self-annotation. Most studies focus on learning a more accurate PL algorithms to avoid noisy samples. Confidence thresholding is a simple yet effective method in ST, where the similarity with ground truth or consistency is used to measure the confidence, ensuring that flawed samples are excluded from the enlarged dataset [9; 20]. [6] uses the perturbation on the hidden states to yield close predictions for similar unlabeled inputs. [7; 15; 26] rely on the consistency of the unsupervised clustering. [30] utilizes adversarial learning to acquire domain-uninformative representations and train a PL classifier. [3; 17] use a contrastive loss to improve the representation learning before PL. Directly modeling noisy labels can also enhance noise tolerance [31], such as using soft pseudo labels [2], or distilling correct information to mitigate overfitting on noise [32].
The confidence measure plays a crucial role in avoiding the overconfident results [11]. [33] constructs confidence regularizers, expecting a smooth prediction with soft-label and preventing infinite entropy minimization. [10] learns a confidence metric based on the generality and unity of its distribution of pseudo loss. [1] uses the distance between the distributions of positive and negative samples as a confidence measure. [24] applies data augmentation on node features, where consistency is used as a confidence measure for classification. The co-training method constructs multi-view classifiers. It adds pseudo labels in each view to provide complementary information for each other, showing better performance than single-view [5].
Some studies on graph data apply the PL strategy to node classification tasks. M3S [21], IFC-GCN[8] utilize the clustering to PL the unlabeled samples. CaGCN-st [23] is a confidence-calibrated model
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{3}{c}{**CiteSeer**} \\
**k** & Raw & 100 & 500 & 2000 \\ \hline
**AUC(\%)** & 71.10 & 72.25 & 72.45 & 71.98 \\
**AP(\%)** & 72.45 & 73.29 & 73.54 & 73.13 \\ \hline \hline \multicolumn{5}{c}{**Amazon\_Photo**} \\
**k** & Raw & 5 & 50 & 500 \\ \hline
**AUC(\%)** & 67.92 & 76.63 & 76.30 & 75.95 \\
**AP(\%)** & 67.06 & 75.70 & 75.52 & 74.97 \\ \hline \hline \end{tabular}
\end{table}
Table 4: CPL with different \(k\).
Figure 5: Case study of the consistency and convergence on WikiCS: Cautious PL (CPL) improves the prediction consistency within the error bound, and converges faster than PL.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **GCN** & **GraphSAGE** & **GAT** & **APPNP** \\ \hline Inconsistency \(\mathcal{A}\) (\%) & 6.69 & 4.01 & 2.96 & 3.13 \\ Confidence \(1-q\) (\%) & 77.63 & 88.35 & 83.00 & 86.86 \\ Theoretical Err\({}_{th}\) (\%) & 58.12 & 31.32 & 39.92 & 32.54 \\ Experimental Err\({}_{exp}\) (\%) & 20.08 & 17.75 & 17.11 & 16.44 \\ PL error (\%) & 7.78 & 6.43 & 8.18 & 6.02 \\ M3S PL error (\%) & 65.63 & 27.00 & 65.00 & 27.49 \\ DR-GST PL error (\%) & 26.31 & 14.10 & 13.38 & 29.09 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Case analysis of CPL on LastFMAsia.
that utilizes low-confidence but high-accuracy samples. DR-GST [18] employs dropout and edge-drop augmentation to conduct information gain inference for selecting PL samples.
## 5 Conclusion
In this study, we provide deep insights into PL strategy by analyzing its impact on prediction error and the convergence properties. We offer theoretical explanations for the effect of PL strategies on graph learning processes, particularly addressing degradation issues. Based on the theoretical analysis, we introduce the CPL strategy, a plug-in and practical technique that can be generally applied to various baseline models. The experiments demonstrate effectiveness and superiority of CPL in link prediction and node classification tasks.
In future work, we plan to explore more reliable confidence measures as the PL criteria, such as informativeness in the multi-view network and prediction uncertainty.
## Acknowledgments and Disclosure of Funding
The research of Tsung was supported in part by the Industrial Informatics and Intelligence (Triple-i) Institute at HKUST(GZ).
The research of Jia Li was supported by NSFC Grant No. 62206067, Tencent AI Lab Rhino-Bird Focused Research Program and Guangzhou-HKUST(GZ) Joint Funding Scheme 2023A03J0673.
|
2307.04383 | A note on idempotent semirings | For a commutative semiring S, by an S-algebra we mean a commutative semiring
A equipped with a homomorphism from S to A. We show that the subvariety of
S-algebras determined by the identities 1+2x=1 and x^2=x is closed under
non-empty colimits. The (known) closedness of the category of Boolean rings and
of the category of distributive lattices under non-empty colimits in the
category of commutative semirings both follow from this general statement. | George Janelidze, Manuela Sobral | 2023-07-10T07:32:38Z | http://arxiv.org/abs/2307.04383v1 | # A note on idempotent semirings
###### Abstract.
For a commutative semiring \(S\), by an \(S\)-algebra we mean a commutative semiring \(A\) equipped with a homomorphism \(S\to A\). We show that the subvariety of \(S\)-algebras determined by the identities \(1+2x=1\) and \(x^{2}=x\) is closed under non-empty colimits. The (known) closedness of the category of Boolean rings and of the category of distributive lattices under non-empty colimits in the category of commutative semirings both follow from this general statement.
Key words and phrases:commutative semiring, non-empty colimit, coreflective subcategory, Boolean algebra, distributive lattice 2010 Mathematics Subject Classification: 16Y60, 18A30, 18A40, 06E20, 06E75, 06D75 Partially supported by the Centre for Mathematics of the University of Coimbra - UID/MAT/00324/2020
## 1. Introduction
Let us recall:
**1.1.** A _commutative semiring_ is an algebraic structure of the form \(S=(S,0,+,1,\cdot)\) in which \((S,0,+)\) and \((S,1,\cdot)\) are commutative monoids with
\[x0=0,\ \ x(y+z)=xy+xz\]
for all \(x,y,z\in S\). Here and below we use standard notational agreements: e.g. \(xy+xz\) means \((x\cdot y)+(x\cdot z)\). The category of commutative semirings will be denoted by \(\mathsf{CSR}\); as expected, its morphisms are semiring homomorphisms, that is, maps \(f:S\to S^{\prime}\) of (commutative) semirings with
\[f(0)=0,\ \ f(s+t)=f(s)+f(t),\ \ f(1)=1,\ \ f(st)=f(s)f(t)\]
for all \(s,t\in S\).
**1.2.** For a (commutative) semiring \(S\), an \(S\)-_module_ (or \(S\)-_semimodule_) is an algebraic structure of the form \(A=(A,0,+,h)\) in which \((A,0,+)\) is a commutative monoid and \(h:S\times A\to A\) is a map written as \((s,a)\mapsto sa\) that has
\[1a=a,\ \ s(ta)=(st)a,\ \ s0=0,\ \ s(a+b)=sa+sb,\ \ 0a=0,\ \ (s+t)a=sa+ta\]
for all \(s,t\in S\) and \(a,b\in A\). The category of \(S\)-modules will be denoted by \(S\mathsf{\mbox{-}mod}\).
**1.3.** For \(S\) above, a _commutative \(S\)-algebra_ is a pair \((A,f)\) in which \(A\) is a commutative semiring, and \(f:S\to A\) is a semiring homomorphism. Accordingly, the category of commutative \(S\)-algebras is just the comma category \((S\downarrow\mathsf{CSR})\). Equivalently, a commutative \(S\)-algebra can be defined as an algebraic structure of the form \(A=(A,0,+,h,1,\cdot)\) in which \(A=(A,0,+,h)\) is an \(S\)-module and \(A=(A,0,+,1,\cdot)\) is a commutative semiring with \(s(ab)=(sa)b\) for all \(s\in S\) and \(a,b\in A\); we will then have \(sa=f(s)a\).
**1.4.** If \(S\) is a commutative ring, then \((S\downarrow\mathsf{CSR})\) is the ordinary category of commutative (unital) \(S\)-algebras. In particular: if \(S=\mathbb{Z}\) is the ring of integers, then \((S\downarrow\mathsf{CSR})\) is the category \(\mathsf{CRings}\) of commutative rings; if \(S=\{0,1\}\) with
\(1+1=0\), then \((S\downarrow\mathsf{CSR})\) is the category \(\mathsf{CRings}_{\mathsf{2}}\) of commutative rings of characteristic \(2\) (=the category of commutative algebras over the two-element field). The category \(\mathsf{CRings}_{\mathsf{2}}\) contains the category \(\mathsf{BRings}\) of Boolean rings (=commutative rings satisfying the identity \(x^{2}=x\)).
**1.5.** If \(S=\{0,1\}\) with \(1+1=1\), then \((S\downarrow\mathsf{CSR})\) is the category \(\mathsf{AICSR}\) of additively idempotent commutative semirings (=commutative semirings satisfying the identity \(2=1\), or, equivalently, the identity \(2x=x\)). This category contains the category \(\mathsf{DLat}\) of distributive lattices.
In this paper we present a general result (Corollary 2.3 in the next section) on commutative \(S\)-algebras, which implies the (known) closedness of the categories of Boolean rings and of distributive lattices under non-empty colimits (or, equivalently, just under binary coproducts) in the category of commutative semirings.
## 2. The general case
Let us first mention an obvious general fact:
**Lemma 2.1**.: _Let \(\mathcal{V}\) be a variety of universal algebras, \(\mathcal{W}\) a subvariety of \(\mathcal{V}\), and \(I\) the initial object of \(\mathcal{W}\) (=the free algebra in \(\mathcal{W}\) on the empty set). If \(\mathcal{W}\approx(I\downarrow\mathcal{W})\) is coreflective in \((I\downarrow\mathcal{V})\), then \(\mathcal{W}\) is closed under non-empty colimits in \(\mathcal{V}\). _
Then we take:
* \(\mathcal{V}=(S\downarrow\mathsf{CSR})\);
* \(\mathcal{W}=(S\downarrow\mathsf{CSR})^{*}\) to be the subvariety of \((S\downarrow\mathsf{CSR})\) consisting of all \(S\)-algebras satisfying the identities \(1+2x=1\) and \(x^{2}=x\). This makes \(I\approx S/E\), where \(E\) is the smallest congruence on \(S\) containing \((1+2s,1)\) and \((s^{2},s)\) for each \(s\in S\).
**Theorem 2.2**.: _Let \(I\approx S/E\) be as above. The variety_
\[(S\downarrow\mathsf{CSR})^{*}\approx(I\downarrow(S\downarrow\mathsf{CSR})^{*})\]
_of commutative \(S\)-algebras satisfying the identities \(1+2x=1\) and \(x^{2}=x\) is a coreflective subcategory of \((I\downarrow\mathsf{CSR})\approx(I\downarrow(S\downarrow\mathsf{CSR}))\)._
Proof.: It suffices to show that, for each \(A\in(I\downarrow\mathcal{V})\), the set
\[A^{\prime}=\{a\in A\mid 1+2a=1\ \&\ a^{2}=a\}\]
is a subalgebra of \(A\), that is, to show the following:
* \(a,b\in A^{\prime}\Rightarrow a+b\in A^{\prime}\);
* for each \(s\in S\), \(a\in A^{\prime}\Rightarrow sa\in A^{\prime}\);
* \(1\in A^{\prime}\);
* \(a,b\in A^{\prime}\Rightarrow ab\in A^{\prime}\);
Indeed, (i): Suppose \(a\) and \(b\) are in \(A^{\prime}\). Then \(1+2(a+b)=1+2a+2b=1+2b=1\) and \((a+b)^{2}=a^{2}+2ab+b^{2}=a+2ab+b=a(1+2b)+b=a+b\).
(iii): \(1+2\cdot 1=1\) since this equality holds in \(I\).
(iv): Suppose \(a\) and \(b\) are in \(A^{\prime}\). Then
\[1+2ab=1+2a+2ab=1+a+a(1+2b)=1+a+a=1+2a=1\]
and \((ab)^{2}=a^{2}b^{2}=ab\).
(ii) follows from (iv) since \(sa=(s1)a\) and \(s1\) is in \(A^{\prime}\) (since \(s1\) is the image of the class of \(s\) under the homomorphism \(I\to A\)).
From Lemma 2.1 and Theorem 2.2 we obtain:
**Corollary 2.3**.: _The variety \((S\downarrow\mathsf{CSR})^{*}\) of commutative \(S\)-algebras satisfying the identities \(1+2x=1\) and \(x^{2}=x\) is closed under non-empty colimits in the variety \((S\downarrow\mathsf{CSR})\) of all commutative \(S\)-algebras. _
Taking \(S\) to be the semiring of natural numbers, we obtain the following special cases of Theorem 2.2 and Corollary 2.3:
**Corollary 2.4**.: _The variety \(\mathsf{CSR}^{*}\) of commutative semirings satisfying the identities \(1+2x=1\) and \(x^{2}=x\) is coreflective in the variety \((\{0,1,2\}\downarrow\mathsf{CSR})\), where \(1+2=1\) in \(\{0,1,2\}\). _
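Concretely, the three-element semiring here is the quotient \(\mathbb{N}/E\) of Theorem 2.2: the generating pair \((1+2\cdot 1,1)\) forces \(3=1\), hence \(n=n+2\) for every \(n\geq 1\), so the congruence classes are \(\{0\}\), the odd numbers, and the positive even numbers, and in the quotient

\[1+2=3=1,\qquad 2+2=2\cdot 2=4=2.\]

One checks directly that this semiring satisfies \(1+2x=1\) and \(x^{2}=x\), so it is the initial object of \(\mathsf{CSR}^{*}\).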
**Corollary 2.5**.: _The variety \(\mathsf{CSR}^{*}\) above is closed under non-empty colimits in \(\mathsf{CSR}\). _
## 3. Boolean rings and distributive lattices
If an object \(A\) of \((\{0,1,2\}\downarrow\mathsf{CSR})\) belongs to \((\{0,1\}\downarrow\mathsf{CSR})\) with \(1+1=0\) in \(\{0,1\}\) making \((\{0,1\}\downarrow\mathsf{CSR})=\mathsf{CRings}_{2}\), then
\[\{a\in A\mid 1+2a=1\ \&\ a^{2}=a\}=\{a\in A\mid 2a=0\ \&\ a^{2}=a\}.\]
But if it is the case with \(1+1=1\) making \((\{0,1\}\downarrow\mathsf{CSR})=\mathsf{AICSR}\), then
\[\{a\in A\mid 1+2a=1\ \&\ a^{2}=a\}=\{a\in A\mid 1+a=1\ \&\ a^{2}=a\}.\]
Therefore we obtain the commutative diagram

\[\begin{array}{ccccc}\mathsf{CRings}_{2}&\longrightarrow&(\{0,1,2\}\downarrow\mathsf{CSR})&\longleftarrow&\mathsf{AICSR}\\ \downarrow&&\downarrow&&\downarrow\\ \mathsf{BRings}&\longrightarrow&\mathsf{CSR}^{*}&\longleftarrow&\mathsf{DLat}\end{array}\]
where the horizontal arrows are inclusion functors while the left-hand and right-hand vertical arrows are the coreflections induced by the coreflection of Corollary 2.4 represented by the middle vertical arrow. Since \(\mathsf{CRings}_{2}\) and \(\mathsf{AICSR}\) both being of the form \((\{0,1\}\downarrow\mathsf{CSR})\) (with different \(1+1\) in \(\{0,1\}\)) are closed in \(\mathsf{CSR}\) under non-empty colimits, we conclude that both \(\mathsf{BRings}\) and \(\mathsf{DLat}\) are also closed in \(\mathsf{CSR}\) under non-empty colimits. That is, as promised in our Introduction, these two known results follow from what we have done in general (in Section 2).
## 4. Two additional remarks
**4.1.** The Reader might ask, what is special about \((S\downarrow\mathsf{CSR})\)? The answer consists of the following observations:
* \((S\downarrow\mathsf{CSR})\) is the category of commutative monoids in the monoidal category \(S\mbox{-}\mathsf{mod}\) having therefore 'good' colimits; indeed, its binary coproducts are given by tensor products.
* The monoidal category structure of \(S\mbox{-}\mathsf{mod}\) is determined by the fact that it is a commutative variety of universal algebras.
* A commutative variety of universal algebras is semi-additive if and only if it is of the form \(S\mbox{-}\mathsf{mod}\) for some commutative semiring \(S\). This immediately follows from the equivalence \(1.\Leftrightarrow 5.\) in Theorem 2.1 of [2], which refers to [1] for the proof.
**4.2.** The coreflectivity of \(\mathsf{DLat}\) in \(\mathsf{AICSR}\) is a 'finitary copy' of the coreflectivity of the category of frames in the category of quantales, see Section C1.1 of [3]: in fact \(A_{f}\) on Page 479 there is the same as our \(\{a\in A\mid 1+a=1\ \&\ a^{2}=a\}\). |
2305.12398 | Language Knowledge-Assisted Representation Learning for Skeleton-Based
Action Recognition | How humans understand and recognize the actions of others is a complex
neuroscientific problem that involves a combination of cognitive mechanisms and
neural networks. Research has shown that humans have brain areas that recognize
actions that process top-down attentional information, such as the
temporoparietal association area. Also, humans have brain regions dedicated to
understanding the minds of others and analyzing their intentions, such as the
medial prefrontal cortex of the temporal lobe. Skeleton-based action
recognition creates mappings for the complex connections between the human
skeleton movement patterns and behaviors. Although existing studies encoded
meaningful node relationships and synthesized action representations for
classification with good results, few of them considered incorporating a priori
knowledge to aid potential representation learning for better performance.
LA-GCN proposes a graph convolution network using large-scale language models
(LLM) knowledge assistance. First, the LLM knowledge is mapped into a priori
global relationship (GPR) topology and a priori category relationship (CPR)
topology between nodes. The GPR guides the generation of new "bone"
representations, aiming to emphasize essential node information from the data
level. The CPR mapping simulates category prior knowledge in human brain
regions, encoded by the PC-AC module and used to add additional
supervision-forcing the model to learn class-distinguishable features. In
addition, to improve information transfer efficiency in topology modeling, we
propose multi-hop attention graph convolution. It aggregates each node's
k-order neighbor simultaneously to speed up model convergence. LA-GCN reaches
state-of-the-art on NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets. | Haojun Xu, Yan Gao, Zheng Hui, Jie Li, Xinbo Gao | 2023-05-21T08:29:16Z | http://arxiv.org/abs/2305.12398v1 | # Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition
###### Abstract
How humans understand and recognize the actions of others is a complex neuroscientific problem that involves a combination of cognitive mechanisms and neural networks. Research has shown that humans have brain areas that recognize actions that process top-down attentional information, such as the temporoparietal association area. Also, humans have brain regions dedicated to understanding the minds of others and analyzing their intentions, such as the medial prefrontal cortex of the temporal lobe. Skeleton-based action recognition creates mappings for the complex connections between the human skeleton movement patterns and behaviors. Although existing studies encoded meaningful node relationships and synthesized action representations for classification with good results, few of them considered incorporating a priori knowledge to aid potential representation learning for better performance. LA-GCN proposes a graph convolution network using large-scale language model (LLM) knowledge assistance. First, the LLM knowledge is mapped into a priori global relationship (GPR) topology and a priori category relationship (CPR) topology between nodes. The GPR guides the generation of new "bone" representations, aiming to emphasize essential node information from the data level. The CPR mapping simulates category prior knowledge in human brain regions, encoded by the PC-AC module and used to add additional supervision, forcing the model to learn class-distinguishable features. In addition, to improve information transfer efficiency in topology modeling, we propose multi-hop attention graph convolution. It aggregates each node's k-order neighbors simultaneously to speed up model convergence. LA-GCN reaches state-of-the-art on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
Skeleton-based action recognition, GCN, a priori knowledge assistance, efficiency of information interaction.
## 1 Introduction
Human action recognition, one of the core tasks of video understanding, is a classification task that observes and infers agent behavior from a third-person perspective [1]. This task can be widely used in human-computer interaction [2], video surveillance [3], healthcare [4], and short entertainment videos [5]. Among these, skeleton-based action recognition [6, 7, 8, 9, 10, 11, 12, 13, 14, 15] is widespread because of its robustness to various environmental noises in the video and its high compactness, which helps the model focus. Graph convolutional network (GCN) based methods [13, 16] have made revolutionary advances in this field.
The human musculoskeletal system allows body parts to move and thus perform different actions [17, 18]. The skeleton-based data modality conforms to the human anatomy [16], thus making GCN learning more interpretable. It contains only 2D or 3D coordinates of the main joints of the human body, allowing the model to recognize human movements by reasoning about the skeleton sequences. In addition, the skeleton modality is more privacy-friendly than other modalities.
This paper introduces a language model knowledge-assisted graph convolutional network (LA-GCN) to enhance skeleton-based action recognition. Inspired by current cognitive neuroscience research [19, 20, 21] and benefiting from the development of Large-scale Language Model (LLM) [22, 23, 24], LA-GCN uses a large-scale textual knowledge base to simulate the brain regions that the human brain uses to accomplish behavioral prediction to help GCN network make predictions.
As shown in the upper part of Fig. 1, when observing the
Fig. 1: Schematic of LA-GCN concept. The top half of this figure shows two brain activity processes when humans perform action recognition. The bottom half shows the proposed multi-task learning process. The knowledge of the language model is divided into global information and category information to simulate the a priori knowledge used in human reasoning to aid the model. The encoder infers the correlation between joints and thus refines the topology using contextual information.
behavior of others, the temporoparietal junction area in the brain is stimulated to activate the corresponding brain area associated with the current action, and the prediction of that action is accomplished by reasoning based on a priori knowledge and goals [21]. Much research [6, 7, 8, 12, 16, 25, 26, 27] has tried to model essential joint information to enhance topology learning, but it still needs action-related a priori information to assist. Therefore, the proposed LA-GCN has two parts, the global prior relationship topology (GPR Graph) between nodes and the category prior relationship topology (CPR Graph) between nodes obtained from the LLM to assist the GCN in action reasoning. First, the GPR Graph is used to guide the generation of a new skeleton modal to assume the function of modeling critical information from the data level and using this modal as the input to the "neuronal cluster" GCN for feature aggregation. Then, the CPR Graph is used to compose an a priori consistency-assisted classification (PC-AC) module to help model learning based on features with enhanced semantic relationships. The framework of our proposed approach is shown at the bottom of Fig. 1 for the "make a phone call" action.
Our GPR-Graph in LA-GCN contains relatively well-established meaningful inter-node relationships, even if two nodes are spatially distant. Specifically, we use the text encoder of an LLM to extract node features for each joint and establish correlations by finding class centers for all joints. We adopt BERT [22], which is widely used for text feature extraction, as our LLM. The GPR-Graph generates new skeleton data for the GCN by preserving the key "bones" according to a selection rule. The global prior connections of the new skeleton representation can reduce the difficulty of topology modeling and yield differentiated feature representations.
CPR Graph is a mapping of LLM's prior knowledge for action classes. Meanwhile, our PC-AC module aims to simulate how humans think and reason with a priori knowledge. Therefore, the PC-AC module encodes the CPR Graph as a category template topology to add additional supervision for GCN. It is suitable for solving some challenging process-like action classification problems, such as "reading" and "writing" which are similar in node relationships.
In addition, we propose a new feature aggregation method, a multi-hop attention graph convolution (MHA-GC) block, for improving the efficiency of message passing between nodes. When GC performs normal messaging, feature aggregation of a node is contributed by the directly connected nodes, which are called one-hop neighbors. However, as the depth of the GCN increases, the limitations of one-hop messaging increase layer by layer, leading to message delays and semantic over-smoothing that are detrimental to constructing inter-node contexts [28, 29, 30]. For this reason, we use multi-hop attention in a single GC layer to establish remote node relationships. Specifically, we spread the computation of attention from each node to the nodes indirectly connected to it, and the attention is represented using the distance between node features. As a result, MHA-GC can strengthen the semantic relationships between nodes with connectivity and accelerate model convergence.
We evaluated LA-GCN on three skeleton-based action recognition datasets: NTU RGB+D 60 [31], NTU RGB+D 120 [32], and NW-UCLA [33]. The accuracy on the cross-subject splits of the first two benchmarks is 93.5% and 90.7%, respectively, and on NW-UCLA we reach 97.6% top-1 accuracy. The experiments show that the proposed LA-GCN outperforms the state-of-the-art techniques. Our main contributions are summarized as follows:
* An LA-GCN is proposed to use the prior knowledge of LLM to assist the GCN for skeleton-based action recognition.
* A new skeleton representation method is proposed for the GCN model ensemble. The GPR Graph with global information is used in this method to reduce the difficulty of topology modeling.
* An auxiliary supervised module PC-AC with class information encoding is proposed to improve the recognition rate of similar actions.
* A new multi-hop attention feature aggregation method, MHA-GC, is proposed to improve the model's information transfer efficiency and accelerate model convergence.
## 2 Related Work
### _Topology Construction_
Early skeleton-based action recognition methods include CNN-based [34, 35, 36] and RNN-based [25, 37, 38, 39] methods, but these do not explore the skeletal data structure adequately. Topology modeling is the design focus of GCN-based methods [6, 7, 8, 12, 16, 25, 27]. The pioneer of graph convolution, ST-GCN [16], predefines the topology based on the human structure as the input to GCNs. The topology is fixed in both the training and testing phases. Based on this, multi-scale graph building is introduced to GCNs for multi-level joint relationship modeling [11]. There are limitations in the inference performance of static methods due to their inability to follow the network co-optimization. Some works [6, 9, 13] augment topology learning with self-attention mechanisms to model the correlation between two joints given the corresponding features. However, topology learned from local embeddings alone still cannot fully satisfy the GCN's need for high-quality node relationship modeling. Dynamic GCN [12] uses contextual features of all joints learned together to obtain global correlations. To make the features used for modeling more differentiated, CTR-GCN [8] designs channel-specific topology maps to explore more possibilities for feature learning in different channels. Shift-GCN [40] removes the limitations of predefined topological graphs and uses shifting operators for inter-joint feature fusion, learning node relationships implicitly. Since actions change in real time, the dynamic approach is more advantageous and generalizes better than the static approach. In this paper, our LA-GCN topology modeling belongs to the dynamic mode.
### _Language Model in Skeleton-Based Action Recognition_
The development of natural language processing (NLP) gave birth to BERT [22], a pre-trained representation model with a bidirectional transformer encoder. It offers a solution that solves NLP tasks by fine-tuning pre-trained LLMs. However, this solution is not efficient and can only be adapted to one task at a time. In order to use pre-trained LLMs more efficiently, prompt learning (PL) emerged as a technique capable of adapting different tasks to large models. Specifically, the PL technique adapts the model to a new downstream task by adding specific textual parameters to the input of LLMs based on the definition of the new task, significantly increasing the efficiency of knowledge utilization in LLMs. Meanwhile, related approaches such as CLIP [24] and [41] have successfully applied PL to learning computer
vision (CV) downstream tasks and demonstrated powerful graphical and textual representation capabilities. This brings light to the CV field to step into a new phase of visual text.
For the action recognition task, ActionCLIP [42] uses the CLIP training scheme for video action recognition and adds a transformer layer to guarantee the temporal modeling of video data. In the construction of PL templates, ActionCLIP directly uses class labels as input text to construct cues such as "[action] of video," "carry out [action] of person," and so on. LST [43] uses LLM for skeleton-text multimodal representation learning in the skeleton-based action recognition task. The PL technique in LST is mainly used as a skeleton text pair building, i.e., using the prompt to allow detailed descriptions generated by GPT3 [23] for each class of skeleton actions. Inspired by human cognitive reasoning processes in cognitive neuroscience, our approach uses LLM knowledge to model human knowledge of brain regions in action reasoning. PL is used to construct topological maps for assisted learning, which contain fine-grained semantic relationships between human actions and joints.
## 3 Preliminaries
The GCN family of methods [6, 8, 11, 12, 13, 16] models the human skeleton in space and time. Precisely, these methods follow the work of [16] by first converting the skeleton data into a spatiotemporal graph \(G=(V,E)\), where \(V\) denotes joints and \(E\) denotes edges, i.e., the physical connections between joints, represented by the graph's adjacency matrix \(A\). Spatial modeling aggregates node features based on the adjacency matrix and uses them to represent the relationships between joints in the skeleton graph. Temporal modeling is then used to model the motion patterns of the joints.
Expressed symbolically, the skeleton-based action recognition task is a mapping \(\mathbb{R}^{N\times T\times d}\rightarrow\mathbb{R}^{p}\), where there are \(N\) joints and \(T\) frames, and the dimension of each joint coordinate is \(d\). The GCN finally outputs the probability that the action belongs to each class, with \(p\) classes in total. For each frame, the graph convolution can be expressed as:
\[F^{l+1}=\sum_{s\in S}\tilde{A_{s}}F^{l}W_{s}, \tag{1}\]
where \(F^{l}\in\mathbb{R}^{N\times C_{1}}\) denotes the input feature with \(C_{1}\) channels and \(F^{l+1}\in\mathbb{R}^{N\times C_{2}}\) denotes the output feature with \(C_{2}\) channels. \(S\) defines the neighborhood of each joint node, and \(S=\{\text{root},\text{centripetal},\text{centrifugal}\}\), where the root is the node itself, the centripetal group contains nodes near the skeleton center, and the centrifugal group contains nodes far from the center. \(\tilde{A_{s}}=\Lambda_{s}^{-\frac{1}{2}}A_{s}\Lambda_{s}^{-\frac{1}{2}}\) is the normalized adjacency matrix, with \(A\in\{0,1\}^{N\times N}\), and the diagonal matrix \(\Lambda_{s}\) in \(\tilde{A_{s}}\) is defined as \(\Lambda_{s}^{ii}=\sum_{j}(A_{s}^{ij})+\alpha\) to prevent empty rows, where \(\alpha\) is a small positive number, say \(0.001\). \(W_{s}\in\mathbb{R}^{1\times 1\times C_{1}\times C_{2}}\) is the weight for each subset \(s\). In the temporal dimension \(T\), most GCN-based methods apply a regular 1D convolution.
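To make Eq. (1) concrete, the following is a minimal per-frame PyTorch sketch of this spatial graph convolution. It is an illustration rather than the released implementation; the class name and the assumption that the subset adjacency matrices \(\tilde{A_{s}}\) are already normalized are ours.

```python
import torch
import torch.nn as nn

class SpatialGC(nn.Module):
    """Per-frame spatial graph convolution of Eq. (1).

    A_subsets: (S, V, V) tensor of already-normalized adjacency matrices,
    one per neighborhood subset (root, centripetal, centrifugal).
    """
    def __init__(self, in_channels, out_channels, A_subsets):
        super().__init__()
        self.register_buffer("A", A_subsets)
        # one 1x1 projection W_s per subset s
        self.W = nn.ModuleList([
            nn.Linear(in_channels, out_channels, bias=False)
            for _ in range(A_subsets.shape[0])
        ])

    def forward(self, x):                # x: (batch, V, C1)
        out = 0.0
        for s, W_s in enumerate(self.W):
            # out[b, u, :] += sum_v A_s[u, v] * (x W_s)[b, v, :]
            out = out + torch.einsum("uv,bvc->buc", self.A[s], W_s(x))
        return out                       # (batch, V, C2)
```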
## 4 LLM Guided Topology Assistant Graph Convolution Network
LA-GCN is a novel learning framework that utilizes LLM model knowledge for reasoning about the actions of a given skeleton sequence. We first integrate a priori knowledge into a multimodal skeleton representation (Sec. 4.1 and 4.2). For better feature aggregation of the prior representation, we also introduce a neural architecture (Sec. 4.3) and a loss of category-based prior knowledge for multi-task learning (Sec. 4.4). Finally, the overall learning scheme is given. 1
Footnote 1: Note that all symbols used in this section are summarized in Table XI of the Appendix.
### _A Global Prior Relation Graph Generated by LLM_
We will first extract text features for each class of action labels and all the joints using the large-scale pre-trained network BERT. The pre-training loss of BERT [22] consists of two tasks: completing masked tokens and, given two input sentences, predicting whether the latter follows the former. The output of the last layer of the model is used for the completion of the blank, and the output of the Pooler part is used for the next sentence prediction.
As shown in Fig. 2(a), BERT first tokenizes the input sentences and then adds the [CLS] and [SEP] tokens at the beginning and end of each sentence, respectively. The tokens in the sentence are the indexes of the corresponding words in the vocabulary. After getting these indexes, we transform them into continuous vectors through the embedding layer. The embedding layer can be regarded as a lexicon-sized vector library, and the embedding process looks up the vectors according to these indexes. After N transformer layers, the output of the last layer of the model is obtained: the features of each token and the feature of [CLS] after the Pooler. Since the features after the Pooler contain the semantics of the whole sentence, they can generally be used directly for tasks such as text classification. Therefore, in this paper, the features after the Pooler are also selected as the features of our skeleton nodes. Given \(M\) action categories and \(N\) human nodes, our specific process of extracting features for each node is shown in Fig. 2(b). We feed text containing the class name and joint name, e.g., "[joint] function in [class]", into the text encoder of the LLM to get the corresponding output features \(\text{C}\in\mathbb{R}^{N\times C}\) for all joint tokens of each action category, where \(C=256\) is the feature dimension of node feature J.
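A hedged sketch of this feature-extraction step, using the Hugging Face `transformers` implementation of BERT, is given below. The checkpoint name, the example joint and class names, and the 768-dimensional pooler output (mapping to the paper's \(C=256\) would require an additional projection, which is not specified here) are assumptions for illustration.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

joint_names = ["head", "neck", "left hand", "right foot"]   # ... all N joint names
class_names = ["drink water", "make a phone call"]           # ... all M action names

@torch.no_grad()
def pooler_feature(joint, action):
    # prompt pattern "[J] function in [C]" from Sec. 4.1
    enc = tokenizer(f"{joint} function in {action}", return_tensors="pt")
    return bert(**enc).pooler_output.squeeze(0)   # sentence-level feature after the Pooler

# J[m, n] holds the text feature of joint n under action class m
J = torch.stack([
    torch.stack([pooler_feature(j, c) for j in joint_names])
    for c in class_names
])                                                # (M, N, 768)
```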
Then, we want to obtain a global inter-node relationship topology graph GPR-Graph containing the semantic knowledge in LLM to guide the skeleton representation. As shown in the left part of Fig. 3, to obtain the GPR Graph, we first have to find the centroids (CoCLS) of the text features \(\text{J}\in\mathbb{R}^{1\times C}\) of each node on the class dimension. Specifically, the features \(\text{J}_{i}\in\mathbb{R}^{cls\times C},i\in(1,N)\), and \(cls=M\) of the same node on different categories are averaged to obtain the category centroid vector \(\text{J}^{CoCLS}\in\mathbb{R}^{N\times C}\), and \(\text{J}_{i}^{CoCLS}=\frac{1}{M}\sum_{j=1}^{M}\text{J}_{j}^{Cj}\). Then correlations are calculated between nodes, and the similarity measure chooses the Euclidean distance. The final GPR Graph
Fig. 2: Extraction of text features. Subfigure (a) is Bert’s architecture. (b) Our method uses the learned text encoder to extract text features by embedding the names of classes [C] and the names of all joints [J] of the target dataset.
\(\in\mathbb{R}^{N\times N}\) with global prior information is obtained, corresponding to the distance between the node class centers.
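The class-center computation and the resulting GPR Graph can be sketched as follows. How the raw distance matrix is subsequently normalized or converted into weights on the skeleton data is not fully specified in the text, so using the distances directly is an assumption.

```python
import torch

def gpr_graph(J):
    """Build the GPR Graph from per-class joint text features (Sec. 4.1).

    J: (M, N, C) pooler features of N joints under M action classes.
    Returns an (N, N) matrix of Euclidean distances between the class-center
    features J^CoCLS of every pair of joints.
    """
    centers = J.mean(dim=0)                     # J^CoCLS: (N, C) class centers
    return torch.cdist(centers, centers, p=2)   # pairwise Euclidean distances
```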
### _A Priori Skeleton Modal Representation_
In this section, we introduce a form of skeleton representation generated using a GPR Graph called a priori skeleton modal representation. Consistent with previous work [6, 8, 11, 13], we train our model to complete inference using multiple modal representations. The prior representation primarily utilizes the relative positions of joints for complementary learning, i.e., representing the data as joints and bones. In this case, the bone feature is a transformation of the joint feature: it is obtained by subtracting the starting node from the ending node according to the physical connection [16]. In detail, the joint-bone relationship at the moment \(t\) can be expressed as:
\[\widetilde{X}_{t}=(I-B)X_{t}, \tag{2}\]
where \(B\in\mathbb{R}^{N\times N}\) denotes the bone matrix containing the position relationship between the source and target joints, which is a binary matrix with \(B_{ij}=1\) if the \(i\)-th joint is the source of the \(j\)-th joint and otherwise \(B_{ij}=0\), where the row corresponding to the base point is a zero vector. This processing significantly improves the performance of action recognition, which means that bone representations with considerable differences from the joints can learn more differentiated features to complement. In addition, because of the high weight of physical bone topology in relationship modeling, reasonable skeleton construction is critical to learning inter-node relationships.
We consider the difference between nodes as "bone" data for additional representation. For instance, the NTU datasets [31, 32] include 25 nodes, and if we regard the difference vector between any two nodes as a "bone", there are \(C_{25}^{2}=300\) "bones." According to the nature of the bone matrix \(B\), if we want to construct a matrix that is different from the bone matrix but has the same characteristics, we need to transform the problem into an "alignment problem": pick one "bone" connected to each joint except the base point, without repeating any "bone." In other words, we constrain the in-degree of each joint.
We extracted the "bone" data of all samples in the NTU+D 60 [31] training set.2 After calculation, the bone links in the physical bone matrix [16] have a small sum of standard deviations. Based on this, we define the "bone" selection function as \(g(\cdot)\), the new "bones" are represented as \(\widetilde{B}=g_{min}(B_{std})\), where \(B_{std}\) is the sum of the standard deviations of the selected bones and \(g_{min}\) means that the links in the smallest \(B_{std}\) is chosen as the new "bone" representation. Specifically, given a bone vector \(\tilde{x_{t}}\) represented in Eq. 2, its skeleton standard deviation (std) is the average of the std of three coordinates in the time dimension: \(b_{std}=mean(\sigma_{x}(\tilde{x_{t}}),\sigma_{y}(\tilde{x_{t}}),\sigma_{z}( \tilde{x_{t}}))\). We visualize the selected \(\widetilde{B}\) in Fig. 4, while other designs are given for comparison.
Footnote 2: the average standard deviation of all 300 “bones” extracted from the training set is 0.1073.
Nearly half of the bones in \(\widetilde{B}\) in Fig. 4(b) are consistent with the original bone matrix but focus more on some complicated relationships, such as fingertips, toe tips, and hip joints, and have symmetry.3 Further, we use the inter-node distance of the GPR Graph in Sec. 4.1 to weight the bone vector and extract the bones with the minimum std summation as the new skeleton representation. Our a priori modal representation has two main advantages: (1) It captures interactions between remote nodes that are not directly connected. The interaction may help recognize some actions. (2) It contains context-dependence on class properties.
Footnote 3: The summation of standard deviation from (a) to (d) in Fig. 4 are: 0.8265, 0.6977, 2.2312, and 4.1997.
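A greedy sketch of this selection rule is given below. Whether it reproduces the authors' exact non-repetition constraint, and the index chosen for the base joint, are assumptions; in practice the statistics would be accumulated over the whole training set rather than one sequence.

```python
import torch

def select_prior_bones(X, gpr, base_joint=1):
    """Greedy sketch of the minimum-std "bone" selection of Sec. 4.2.

    X:   (T, V, 3) joint coordinates of one sequence
    gpr: (V, V) GPR-Graph distances used to weight candidate bone vectors
    Returns one incoming (source, target) "bone" per non-base joint.
    """
    T, V, _ = X.shape
    bones = []
    for tgt in range(V):
        if tgt == base_joint:
            continue
        best = None
        for src in range(V):
            if src == tgt:
                continue
            vec = gpr[src, tgt] * (X[:, tgt] - X[:, src])   # weighted "bone" over time
            b_std = vec.std(dim=0).mean().item()            # mean std of the x, y, z coordinates
            if best is None or b_std < best[1]:
                best = ((src, tgt), b_std)
        bones.append(best[0])
    return bones
```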
### _Feature Aggregation_
We introduce a multi-hop attention mechanism to model the context dependence of joints, adding further information about neighboring nodes in the topology learning process. This is considering enhancing the learning of long-distance relationships present in the a priori skeletal modal representation. Fig. 5 provides an overview of the encoder-classifier structure of LA-GCN.
#### 4.3.1 More Information for Learnable Topology
When relying solely on the physical topology of the skeleton to aggregate features, information learning suffers from latency and dilution effects [16]. As the relationships between joints change with the execution of actions, relationship learning between nodes that are not naturally connected is affected by node spacing. These two issues also affect each other in the aggregation process of
Fig. 4: Multimodal representation of the skeleton. The arrows depict the “bone” vector from the source to the target joint. (a) is the bone matrix [16]. (b), (c) and (d) is our minimum, mean, and maximum std summation matrix, respectively. It was found that nearly half of the skeletons in (b) are consistent with (a) but contain more detailed relationships, and (c) implicitly contains information about the connections between body parts, such as hands, torso and hands, torso and legs, and legs. Where (b) is used as our new “bone” representation.
Fig. 3: Summarize our approach to generate prior topologies. GPR Graph is obtained by computing the class centers of the joints and computing correlations between the node feature of each action to obtain the CPR Graph.
raw GC. For illustration, two remote nodes have just established a connection at some point, and they may be diluted or lost after averaging and non-linear transformation operations in the neighborhood (see Eq.1). The information delay will be further aggravated.
Some approaches [6, 8] use an attention mechanism to guide the learning of the internal topology to complement the node relations adaptively. However, this is not optimal because it contains only first-order neighbor information in the semantic space. The neighborhood information indicated by the semantic space should be learned sufficiently, which is especially true for our a priori modal representation with contextual information. As in video understanding tasks, where the network is allowed to focus first on extracting intra-frame features and then on fusing inter-frame features [44], here we want to allow the nodes to learn more fully before integrating. Therefore, we propose an architecture that uses a multi-hop attention mechanism to capture each node's neighborhood to improve the efficiency of information exchange and speed up the convergence of the GCN.
#### 4.3.2 Pre-preparation of The Input Data
Before the skeleton sequence X is fed into the encoder for feature aggregation, we need to make two preparations: one is to weight X using the GPR Graph as mentioned in Sec. 4.2, and the other is to add the position embedding (PE) after X has been linearly transformed to obtain the embedding representation of the nodes. The \(PE\) is the embedding that contains the abstract position of each node in all nodes \(V\). This gives us the initial feature representation:
\[F^{(0)}=\text{X}_{GPR}W_{0}+PE, \tag{3}\]
where \(W_{0}\) is the parameter matrix, \(F^{(0)}\in\mathbb{R}^{N\times C\times T\times V}\), and \(PE\in\mathbb{R}^{C\times V}\). If using the raw skeleton modal, the \(\text{X}_{GPR}\) in Eq. 3 is changed to X.
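A minimal sketch of Eq. (3) is shown below; treating the position embedding as a learnable parameter is an assumption, since the text does not state whether \(PE\) is learned or fixed.

```python
import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    """Eq. (3): lift the (GPR-weighted) skeleton to C channels and add a
    position embedding over the joints."""
    def __init__(self, in_channels, embed_channels, num_joints):
        super().__init__()
        self.lift = nn.Linear(in_channels, embed_channels, bias=False)   # W_0
        self.pe = nn.Parameter(torch.zeros(num_joints, embed_channels))  # PE

    def forward(self, x):          # x: (batch, T, V, C_in), either X_GPR or raw X
        return self.lift(x) + self.pe                                    # (batch, T, V, C)
```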
#### 4.3.3 Encoder Structure
The base block of the encoder consists of two sub-blocks MHA-GC and MS-TC responsible for the spatial and temporal modeling, respectively. As shown in Fig. 5, the output features \(F^{l}\) of the base block are obtained by adding the hidden layer features output by the two submodules using BN normalization with the upper layer feature \(F^{l-1}\) after residual connection.
**MS-TC** We use the commonly used multi-scale temporal encoding module [6, 8, 11] to model skeletal sequences with different time lengths. The module contains two dilated convolutions with different kernel settings and a max pooling layer. The output is obtained by adding the skip connections with the \(1\times 1\) convolution.
**MHA-GC** We propose a new multi-hop attention graph convolution module for inter-node relationship modeling called MHA-GC. Computing multi-hop attention [45] for complementary neighborhood node information, MHA-GC first computes the first-order attention on all nodes \(\tilde{A}^{l}\) for multi-hop diffusion.
As shown in Fig. 5, at each layer \(l\), the feature \(F^{l}\) is passed through two branches containing the transformation equation P, consisting of a pooling layer and a \(1\times 1\) convolution, respectively. Feature vectors \(M,N\in\mathbb{R}^{N\times R\times V}\) are obtained after P, where \(R\) is the reduced-to-feature dimension. Any pair of vertices \((v_{i},v_{j})\) in \(M\) and \(N\) is computed separately to obtain the first-order neighborhood information \(\tilde{A}^{l}=\bigcup_{i,j=1}^{v}\sigma(M_{i}-N_{j})\), where \(\sigma\) is the activation function, \(v\) indicates all nodes, and \(\tilde{A}^{l}_{ij}\) denotes the messages aggregation from node j to node i.
We combine the shared topology \(\tilde{A}^{l}\in\mathbb{R}^{V\times V}\) with the linear mapping of learnable attention \(\tilde{A}^{l}\in\mathbb{R}^{R\times V\times V}\) to obtain the refined attention:
\[\tilde{A}^{l}=\tilde{A}^{l}+\gamma\tilde{A}^{l}W_{3}, \tag{4}\]
where \(\gamma\) and \(W_{3}\) are the refinement weight parameters and mapping layer weights, respectively. The refinement method of \(\tilde{A}^{l}\) and the generation method of the GPR Graph are based on calculating feature distance between nodes to ensure the consistency of feature semantics.
Then, the attention diffusion module calculates the attention between indirectly connected node pairs through a diffusion process based on the first-order attention matrix. Our multi-hop attention is computed as follows:
\[\bar{\mathcal{A}}=\sum_{i=0}^{k}\omega_{i}\tilde{A}^{i}, \tag{5}\]
Fig. 5: LA-GCN architecture. The model consists of a main branch and an auxiliary branch containing an encoder and a classifier. The details of each sub-block structure are shown in the gray box. The symbols \(N\), \(C\), \(T\), and \(V\) in this sub-block indicate the batch size, the number of input/output channels, the number of video frames, and the number of joints, respectively. We use the GPR Graph to generate a skeleton representation \(\text{X}_{GPR}\) as input for learning the topology. This allows the encoder to capture the inter-node context directly with global prior semantic information. The PC-AC guides our neural structure to learn conditional representations based on the class prior node relations.
where \(\omega_{i}=\beta(1-\beta)^{i}\), \(\beta\in(0,1]\) is the decay factor of \(\bar{A}\), and \(\omega_{i}>\omega_{i+1}\); \(\bar{A}^{i}\) is the power of the matrix \(\bar{A}\). The original GC layer passes information in a one-hop attention weighting pattern (see Eq. 1). The power matrix \(\bar{A}^{i}\) gives node information paths of length i, increasing the receptive field for associating relations between nodes. The implementation of \(\omega\) is based on inductive bias, which determines the weight of the contribution of node \(j\) acting on each node \(i\) on the path, and the larger the number of hops, the smaller the weight. The overlap of node aggregation on the same path also alleviates the information dilution effect of the original GC layer aggregating two remote nodes.
Finally, we define the feature aggregation function of the \(\bar{\mathcal{A}}\)-based MHA-GC as
\[F^{l+1}=\sigma(\bar{\mathcal{A}}^{l}F^{l}W_{4}^{l}), \tag{6}\]
where \(\sigma\) denotes the activation function ReLU [46] and \(W_{4}\) is the weights of the output layer.
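The following condensed PyTorch sketch ties Eqs. (4)-(6) together. The temporal average pooling used to form the two branches, the tanh activation for the pairwise feature differences, and the exact shape of the \(W_{3}\) mapping are our assumptions based on common practice in related GCNs, not the released code.

```python
import torch
import torch.nn as nn

class MHAGC(nn.Module):
    """Sketch of the MHA-GC spatial block (Eqs. 4-6)."""
    def __init__(self, in_channels, out_channels, rel_channels=8, k_hops=4, beta=0.5):
        super().__init__()
        self.pm = nn.Conv2d(in_channels, rel_channels, 1)   # branch producing M
        self.pn = nn.Conv2d(in_channels, rel_channels, 1)   # branch producing N
        self.w3 = nn.Conv2d(rel_channels, 1, 1)             # W_3: R channels -> one V x V map
        self.w4 = nn.Conv2d(in_channels, out_channels, 1)   # W_4 in Eq. (6)
        self.gamma = nn.Parameter(torch.zeros(1))            # refinement weight in Eq. (4)
        self.k_hops, self.beta = k_hops, beta

    def forward(self, f, shared_a):        # f: (B, C, T, V), shared_a: (V, V)
        m = self.pm(f).mean(dim=2)         # (B, R, V), temporal average pooling
        n = self.pn(f).mean(dim=2)
        rel = torch.tanh(m.unsqueeze(-1) - n.unsqueeze(-2))     # pairwise differences: (B, R, V, V)
        a1 = shared_a + self.gamma * self.w3(rel).squeeze(1)    # Eq. (4): refined attention (B, V, V)
        # Eq. (5): multi-hop diffusion with weights w_i = beta * (1 - beta)^i
        attn = torch.zeros_like(a1)
        a_pow = torch.eye(a1.shape[-1], device=f.device).expand(a1.shape[0], -1, -1)
        for i in range(self.k_hops + 1):
            attn = attn + self.beta * (1 - self.beta) ** i * a_pow
            a_pow = a_pow @ a1                                  # next power of the attention matrix
        # Eq. (6): aggregate joint features with the multi-hop attention
        out = torch.einsum("buv,bctv->bctu", attn, f)
        return torch.relu(self.w4(out))
```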
### _A Priori Consistency-Assisted Classification Module_
The goal of our a priori consistency-assisted classification (PC-AC) module is to design a class relationship topology graph containing a priori node relationships related to the predicted actions to help the GCN perform node feature aggregation. Specifically, the PC-AC module adds additional branch co-supervision to the GCN at training time, forcing each feature that passes through a branch containing a specific category topology to be predicted for that class.
As shown in Fig. 3, we generate the category topology graph T-C using the text features constructed in Sec. 4.1. For each action, the text features of N nodes are combined two by two to calculate the similarity. We refer to T-C as a class topology exemplar and assume that they contain the node relationships that should be present to identify an action.
Next, to achieve our goal, LA-GCN will train the main and PC-AC auxiliary branches in a standard multi-task learning setup. We represent the entire multi-task model with a parameter \(\theta\) as a function \(f_{\theta}(x)\) and the input is X. This parameter \(\theta\) will be updated by the common loss of the primary and auxiliary branches. The input features are obtained using a hard parameter-sharing approach [47]. Specifically, the category prediction of the auxiliary branch uses a set of features shared by the main branch \(\theta_{n}\), as shown in Fig. 5, where n is the selected main branch layer. After the final feature layer, we applied a task-specific layer to output the corresponding predictions, with the softmax classifier used in both branches. In this case, the classification head of the main branch of the model consists of an average global pool, and the classification head of the secondary branch consists of a fully connected layer and an average pooling layer.
We denote the main prediction as \(f_{\theta}{}^{pri}(x)\), the auxiliary prediction as \(f_{\theta}{}^{aux}(x)\), and the ground truth label as \(\hat{y}\). Given the corresponding layer features \(F_{n}^{l}\), and the category topology T-C, the cross-entropy loss in terms of the auxiliary branch is defined as:
\[L_{aug}=-\sum_{k}y_{k}\cdot log(\hat{y_{k}}), \tag{7}\]
where \(y_{k}\) is the one-hot vector containing the accurate category \(c\) of the sample X, and \(\hat{y_{k}}=\frac{e^{z_{i}}}{\sum_{k}e^{z_{k}}}\), logit \(z\) is the output of feature \(Z_{p}\) containing the category prior after our auxiliary classification head, while \(Z_{p}\in\mathbb{R}^{N\times C\times cls\times V}\) is obtained by multiplying \(F_{n}^{l}\in\mathbb{R}^{N\times C\times 1\times V}\) and T-C \(\in\mathbb{R}^{cls\times V\times V}\) in Fig. 5. The PC-AC module aims to maximize the likelihood that the output logit \(z_{i}\) belongs to category \(c\). For the primary supervision of the predicted action labels, logit is then the output of the main branch classification head. Ultimately, to update the parameters \(\theta\) of LA-GCN, we define the multi-task objective as:
\[\underset{\theta}{\text{arg min}}\ (L(f_{\theta}{}^{pri}(x),\hat{y})+\lambda L(f_{ \theta_{n}}{}^{aux}(x),\hat{y})), \tag{8}\]
where \(\lambda\) is the hyper-parameter used to adjust the \(L_{aug}\) weight. When testing, LA-GCN will drop the auxiliary branch to use the main branch for prediction.
Forcing the features to be correctly classified through the PC-AC module is a relatively complex task, and it plays a role in regularization to some extent. During testing, the PC-AC module is removed without affecting the inference speed of the model. Meanwhile, T-C is a fully connected graph, which contains relatively well-established inter-node relationships for each class. This property of T-C allows the PC-AC module to alleviate the over-smoothing problem [28, 29, 30, 45] common to GCN networks. Class-specific semantic information in T-C also improved the recognition rate of similar process actions, as shown in Fig. 6: "reading" and "writing" improved by 9.16% and 8.46%, respectively.
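A compact sketch of the auxiliary branch and the multi-task objective of Eqs. (7)-(8) is given below; the ordering of the fully connected layer and the pooling inside the auxiliary classification head is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCACHead(nn.Module):
    """Sketch of the PC-AC auxiliary branch (Sec. 4.4).

    tc: (cls, V, V) class-exemplar topologies T-C built from the LLM features.
    """
    def __init__(self, tc, channels):
        super().__init__()
        self.register_buffer("tc", tc)
        self.fc = nn.Linear(channels, 1)

    def forward(self, feat):                                  # feat: (B, C, 1, V) shared feature
        z = torch.einsum("bcov,kvw->bckw", feat, self.tc)     # Z_p: (B, C, cls, V)
        z = z.mean(dim=-1).transpose(1, 2)                    # average pooling over joints -> (B, cls, C)
        return self.fc(z).squeeze(-1)                         # class logits: (B, cls)

def multitask_loss(pri_logits, aux_logits, y, lam=0.2):
    """Eq. (8): main cross-entropy plus the lambda-weighted auxiliary loss."""
    return F.cross_entropy(pri_logits, y) + lam * F.cross_entropy(aux_logits, y)
```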
## 5 Experiments
### _Datasets_
**NTU RGB+D.** The dataset [31] contains 56,880 skeleton action sequences and has 60 classes, which can be categorized into daily, health-related, and interactive behaviors. All action data were captured simultaneously by three Microsoft Kinect v2 cameras from different angles. Two evaluation schemes are presented in this paper: 1) cross-subject (X-sub), where the training set is the data collected from 20 subjects and the data from the remaining 20 is the test set; 2) cross-view (X-view), using the data captured by cameras 2 and 3 as the training set and the data captured by camera 1 as the test set.
**NTU RGB+D 120.** NTU RGB+D 120 [32] is the most commonly used extension of NTU RGB+D. It has 120 classes, 106 subjects, and 114,480 skeleton sequences. All of these actions were acquired by three cameras. Two benchmarks were introduced to evaluate model performance on NTU RGB+D 120: 1) cross-subject (X-sub), which, as in NTU RGB+D, splits the subjects into two groups of 53 volunteers each; 2) cross-setup (X-set), in which data are acquired in different configurations. The training set is the even-configuration data, and the test set is the odd-configuration data.
**NW-UCLA.** The dataset [33] contains ten basic human actions and 1494 video clips from 10 actors. All data were obtained from three Kinect cameras captured simultaneously. Following the evaluation protocol introduced in [33], the data from the first two cameras are used as the training set, and the data from the last camera as the test set.
### _Training_
All experiments used the PyTorch deep learning framework [48] on 2\(\times\) NVIDIA RTX 3090 GPUs. We used SGD with Nesterov momentum (0.9) as the optimizer, with a weight decay of 0.0004. Training ran for 110 epochs in total, with the first five epochs used with a warm-up strategy [49] to stabilize the training process. The initial learning rate is 0.1 and is multiplied by 0.1 at epochs 90 and 100. On NTU RGB+D and NTU RGB+D 120, the batch size is 200, and the data are processed as in [50, 8]. For NW-UCLA, the batch size was 64, and the same data preprocessing method as in [40] was used. The above configuration was used for all subsequent experiments unless explicitly stated otherwise. The source code of LA-GCN is publicly available at [https://github.com/damNull/LAGCN](https://github.com/damNull/LAGCN).
**Overall Architecture.** The encoder consists of nine basic blocks with 64-64-64-128-128-128-256-256-256 channels. The time dimension is reduced to half-time in blocks four and six. The skeleton sequence is first subjected to a feature transformation to obtain an embedding representation of the nodes, and the transformation uses a fully connected layer. This representation is then passed through the spatial and temporal modules to extract the sequence features.
### _Results_
Many state-of-the-art methods use a multi-stream fusion strategy. Specifically, the final results of the experiments fuse four modes (4s), namely, joint, bone, joint motion, and bone motion. They are the original joint coordinates, the vector obtained by differencing the coordinates of the joints with physical connections, the differentiation of the joint in the time dimension, and the differentiation of the bone in the time dimension, respectively. After training a model for each data stream, the softmax scores of each data stream are summed up as the final score during inference. For a fair comparison, we used the same setup as in [6, 8, 12, 40].
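The late-fusion step described above amounts to summing the per-stream softmax scores; a minimal sketch (with equal stream weights, as the text implies) is:

```python
import torch

def ensemble_predict(stream_logits):
    """Late fusion for the multi-stream results: softmax scores of every
    trained stream (joint, bone, their motions, new "bone" streams) are
    summed and the argmax is taken as the final prediction."""
    score = sum(torch.softmax(s, dim=-1) for s in stream_logits)
    return score.argmax(dim=-1)
```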
Our models are compared with state-of-the-art methods on the datasets NTU RGB+D [31], NTU RGB+D 120 [32], and NW-UCLA [33], and the experimental results are displayed in Tables I, II, and III. Our approach achieves state-of-the-art performance on the three datasets under almost all evaluation benchmarks. We use the proposed new "bone" representation for the 6s integration. The bones used for training 5s and 6s were selected from the proposed prompt 2 (p2) and prompt 5 (p5), respectively (see Appendix). Compared to the joint stream (86.5%), our method improved by 3.2%, 3.4%, and 4.2% for 2s, 4s, and 6s, respectively. On NTU RGB+D 120, under the same settings, our 4s fusion model is 0.6% and 1.0% higher than CTR-GCN 4s [8] and 0.5% higher than InfoGCN 6s [6] on both benchmarks. Our 6s ensemble is 1.1% and 1.3% higher than InfoGCN 6s. Notably, our method is the first to effectively aid topological modeling using the knowledge of large-scale pre-trained models.
### _Ablation Study_
In this section, we analyze different configurations and ablation studies for each component of LA-GCN on the X-sub benchmark of the NTU RGB + D120 [32] dataset.
#### 5.4.1 Effectiveness of PC-AC
The validation of the PC-AC module in Sec. 4.4 includes the overall improvement due to \(L_{aug}\) and the performance of different generated prompts for the class exemplar topology T-C graphs. The model accuracy steadily improves when \(L_{aug}\) is added, as shown in Table IV. There are differences between the T-C graphs obtained from different textual cues, and we design the prompt text p as
\begin{table}
\begin{tabular}{c||c||c} \hline Method & X-Sub & X-Set \\ \hline Part-Aware LSTM [31] & 26.3 & 25.5 \\ ST-LSTM [37] & 55.7 & 57.9 \\ RotClips-MTCNN [36] & 62.2 & 61.8 \\ ST-GCN [16] & 70.7 & 73.2 \\
2s-AGCN [13] & 82.9 & 84.9 \\ SGN [50] & 82.9 & 84.9 \\ ST-TR [53] & 85.1 & 87.1 \\ Shift-GCN [40] & 85.9 & 87.6 \\ MS-G3D [11] & 86.9 & 88.4 \\ Dynamic-GCN [12] & 87.3 & 88.6 \\ MST-GCN [56] & 87.5 & 88.8 \\ EfficientGCN [7] & 88.7 & 88.9 \\ CTR-GCN [8] & 88.9 & 90.6 \\ Info-GCN [6] & 89.4 & 90.7 \\ \hline LA-GCN (Joint) & 86.5 & 88.0 \\ LA-GCN (Joint+Bone) & 89.7 & 90.9 \\ LA-GCN (4 ensemble) & 89.9 & 91.3 \\ LA-GCN (6 ensemble) & **90.7** & **91.8** \\ \hline \end{tabular}
\end{table} TABLE II: The comparison of Top-1 accuracy (%) on the NTU RGB+D 120 [32] benchmark.
\begin{table}
\begin{tabular}{c||c} \hline Methods & Top1 \\ \hline Lie Group [57] & 74.2 \\ Actionlet ensemble [58] & 76.0 \\ HBRNN+L [59] & 78.5 \\ Ensemble TS-LSTM [38] & 89.2 \\ \hline AGC-LSTM [39] & 93.3 \\
4s Shift-GCN [40] & 94.6 \\ DC-GCN+ADG [54] & 95.3 \\ CTR-GCN [8] & 96.5 \\ InfoGCN [6] & 97.0 \\ \hline LA-GCN (ours) & **97.6** \\ \hline \end{tabular}
\end{table} TABLE III: The comparison of Top-1 accuracy (%) on the NW-UCLA [33] benchmark.
\begin{table}
\begin{tabular}{c||c} \hline Methods & Top1 \\ \hline \begin{tabular}{c} w/o \(L_{aug}\) \\ \(L_{total}\) \\ \end{tabular} &
\begin{tabular}{c} 84.9 \\ \(\mathbf{86.1}^{\uparrow\uparrow\uparrow\downarrow}\) \\ \end{tabular} \\ \hline p1: [\(\mathcal{I}\)] function in [C]. & \(85.6^{\uparrow\uparrow\uparrow\uparrow}\) \\ p2: What happens to [\(\mathcal{I}\)] when a person is [C]? & \(85.8^{\uparrow\uparrow\uparrow\uparrow}\) \\ p3: What will [\(\mathcal{I}\)] act like when [C]? & \(\mathbf{86.1}^{\uparrow\uparrow\downarrow}\) \\ p4: When [C][\(\mathcal{I}\)] of human body. & \(85.5^{\uparrow\uparrow\uparrow}\) \\ p5: When [C] what will [\(\mathcal{I}\)] act like? & \(85.7^{\uparrow\uparrow\uparrow}\) \\ p6: When a person is [C], [\(\mathcal{I}\)] is in motion. & \(85.5^{\uparrow\uparrow\uparrow}\) \\ \hline \end{tabular}
\end{table} TABLE IV: (i) The comparison of accuracy without and with PC-AC, where the \(\lambda\) of \(L_{aug}\) is 0.2. (ii) Contribution of different text prompts.
a complete sentence containing action names [C] and joints [J]. For example, p1 emphasizes the function of nodes, and p2 describes the change of node states, as demonstrated in Appendix Table VII. The best result was obtained by p3: "What will [J] act like when [C]" with a 1.2% improvement.
We compare action representations based on trained models with and without PC-AC by principal component analysis (PCA [60]), as shown in Fig. 6. Compared to the case without \(L_{aug}\) loss, the potential representations learned with the aid of the category prior present a class-conditional distribution with a more apparent separation and perform better in the subspace differentiation of process-similar actions. In addition, it makes the intra-class features more convergent.
#### 5.4.2 Improved Interaction Between Neighbor Nodes.
Based on the above baseline, we design two schemes in Table V to replace the original \(1\)-\(hop\) GC. The first one uses MHA-GC for all blocks, and the other replaces only the \(1\)-\(hop\) GC in the first layer. Then, we select \(k\) on top of that. We observe that the message passing performance of multi-hop GC for both schemes is strictly better than that of \(1\)-\(hop\) GC. However, scheme I suffers from memory overflow at \(3\)-\(hop\). Due to the relatively simple topology of the skeleton, replacing every block is somewhat redundant. The best performance in scheme II is 86.5% at \(k=4\). We visualize the MHA in Fig. 7. The multi-hop learnable topology greatly improves the description of the node relationships. The loss curve on the right side shows that MHA-GC can accelerate the model convergence.
**Comparison of Complexity with Other Models** As shown in Fig. 5, with \(\omega_{0}=\beta\) and \(\bar{A}^{0}=I\), \(\bar{\mathcal{A}}F^{l}\) approximates the implementation below as the number of hops \(k\rightarrow\infty\):
\[F^{k+1}=(1-\beta)\bar{\mathcal{A}}F^{k}+\beta F^{l}, \tag{9}\]
where \(0\leq k<K\). If \(k\rightarrow\infty\), then \(\lim_{k\rightarrow\infty}\sum_{i=0}^{k}\omega_{i}=1\) and \(\omega_{i}>0\), and the proposition \(\lim_{K\rightarrow\infty}F^{K}=\bar{\mathcal{A}}F^{l}\) holds. The proof is given in the Appendix. This approximation indicates that the model complexity is of the same order of magnitude as previous attention-based SOTA approaches [6, 11, 54]. We compare the model and computational complexity with these methods on NTU RGB+D 120 [32]. As shown in Table VI, our model balances complexity and final accuracy well. Our model is 1.4% and 1.6% more accurate than the previous SOTA InfoGCN and CTR-GCN while having the same level of GFLOPs.
## 6 Limitations
Despite the good results of the proposed LA-GCN in experiments, the attempts to acquire and apply LLM prior knowledge information are more oriented towards manual design. Further integration of language model assistance with self-supervised learning should be exciting. In addition, our skeleton modal representation is obtained based on the statistics of the existing dataset. It would also be interesting to see if new observations and conclusions can be made on more categories of datasets. Finally, expanding LA-GCN to in-the-wild settings is also worth thinking about.
## 7 Conclusion
We present LA-GCN, a representation learning framework aided by prior knowledge of language models. It builds on the cognitive neuroscience of human action recognition, i.e., the recognition process requires multiple brain regions to work together. LA-GCN accomplishes processes of encoding critical information and prediction based on the category a priori knowledge assistance. We introduce a new "bone" multimodal representation guided by global a priori information for model integration. Further, we propose a new multi-hop attention-based graph convolution module, MHA-GC, and demonstrate that it can effectively improve information interaction efficiency for modeling learnable topologies. The "bone" representation and MHA-GC are jointly used to ensure crucial information encoding. Ultimately, our model exhibits state-of-the-art performance on three popular datasets for skeleton-based action recognition.
## Appendix A Structure Approximation Proposition
As described in Sec. 5.4.2, the structure in the dashed box of the MHA-GC sub-block in Fig. 5 can be approximated by our k-hop structure under certain conditions. The equation implemented in the dashed box is \(F^{k+1}=(1-\beta)\bar{A}F^{k}+\beta F^{l}\), where \(k\) is the number of attention diffusion iterations, and \(F^{k}\) and \(F^{k+1}\) are the states of \(F^{l}\) at the k-th and (k+1)-th hop.
**Proposition 1**.: \(lim_{K\rightarrow\infty}F^{K}=\bar{\mathcal{A}}F^{l}\)
\begin{table}
\begin{tabular}{c||c||c||c} \hline \hline Methods & X-Sub & GFLOPs & Param (M) \\ \hline DC-GCN [54] & 84.0 & 1.83 & 3.37 \\ MS-G3D [11] & 84.9 & 5.22 & 3.22 \\ CTR-GCN [8] & 84.9 & 1.97 & 1.46 \\ InfoGCN [6] & 85.1 & 1.84 & 1.57 \\ \hline Ours & **86.5** & **1.76** & **1.46** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Comparison of computational and model complexity of the state-of-the-arts on NTU RGB+D 120 dataset.
Fig. 6: The left and middle subplots show the PCA projections of potential representations with and without the auxiliary loss \(L_{aug}\) of the PC-AC. The five action classes visualized were randomly selected from NTU RGB+D 120. The right subplot shows the visualization of actions with a greater than 3% change in accuracy.
\begin{table}
\begin{tabular}{c||c} \hline \hline Methods & Top1 \\ \hline \(\bar{A}\) & 86.1 \\ \(\bar{A}\) & **86.5** \\ \hline \(1\)-\(hop:A\) & 86.1 \\ \(2\)-\(hop\) & 86.3 \\ \(3\)-\(hop\) & OOM \\ \hline \(2\)-\(hop^{1st}\) & 86.1 \\ \(3\)-\(hop^{1st}\) & 86.2 \\ \(4\)-\(hop^{1st}\) & **86.5** \\ \(5\)-\(hop^{1st}\) & 85.9 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Ablation of feature aggregation on NTU RGB+D 120 X-Sub: attention refinement, replacing the GC in all blocks with multi-hop GC (scheme I), and replacing only the first layer (scheme II, marked \(1st\)) with different hop numbers \(k\).
Proof.: Let \(E^{0}=F^{l}\), then Eq. 9 becomes:
\[E^{k+1}=(1-\beta)\bar{A}E^{k}+\beta E^{0}. \tag{10}\]
Let \(K>0\) be the hop number, we approximate \(\hat{F}^{l}\) by \(E^{K}\). By recursion, we can obtain
\[\begin{split} E^{K}&=(1-\beta)\bar{A}E^{K-1}+\beta E ^{0}\\ &=(1-\beta)\bar{A}((1-\beta)\bar{A}E^{K-2}+\beta E^{0})+\beta E^{ 0}\\ &=(1-\beta)^{2}\bar{A}^{2}E^{K-2}+\beta(1-\beta)\bar{A}E^{0}+ \beta E^{0}\\ &=(1-\beta)^{3}\bar{A}^{3}E^{K-3}+\beta(1-\beta)^{2}\bar{A}^{2}E ^{0}\\ &+\beta(1-\beta)\bar{A}E^{0}+\beta E^{0}\\ &\cdots\\ &=((1-\beta)^{K}\bar{A}^{K}+\beta\sum_{i=0}^{K-1}(1-\beta)^{i} \bar{A}^{i})E^{0}.\end{split} \tag{11}\]
That is \(E^{K}=((1-\beta)^{K}\bar{A}^{K}+\beta\sum_{i=0}^{K-1}(1-\beta)^{i}\bar{A}^{i} )F^{l}\). As \(\beta\in(0,1]\) and \(\bar{A}^{K}_{i,j}\in(0,1]\), when \(K\rightarrow\infty\), the term \((1-\beta)^{K}\bar{A}^{K}\to 0\).
Thus, \(lim_{K\rightarrow\infty}E^{K}=(\sum_{i=0}^{K-1}\beta(1-\beta)^{i}\bar{A}^{i})F ^{l}=\bar{\mathcal{A}}F^{l}\).
The skeleton sequence contains a fixed number of edges \(\varepsilon\), and by the above approximation, we conclude that the complexity of multi-hop attention does not differ much from that of one-hop attention. The complexity in both cases can be expressed as \(O(|\varepsilon|)\), where the number of hops \(K\) is the factor that determines the complexity of multiple hops. A better representation can be obtained for skeletal data with \(1\leq K\leq 4\), as shown in Sec. 5.4.2. Also, if the structural complexity of the graph increases, then \(K\) needs to increase as well.
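Proposition 1 can also be checked numerically; the following small script (with arbitrarily chosen sizes and \(\beta\)) confirms that iterating Eq. (9) converges to the closed-form multi-hop operator.

```python
import torch

# Numerical check of Proposition 1: iterating Eq. (9) converges to the
# closed-form multi-hop operator as K grows.
torch.manual_seed(0)
V, beta, K = 25, 0.5, 60
A = torch.softmax(torch.randn(V, V), dim=-1)   # a row-stochastic one-hop attention matrix
F0 = torch.randn(V, 8)

E = F0.clone()
for _ in range(K):                             # Eq. (9) iteration
    E = (1 - beta) * A @ E + beta * F0

A_multi = sum(beta * (1 - beta) ** i * torch.matrix_power(A, i) for i in range(K))
print(torch.allclose(E, A_multi @ F0, atol=1e-5))   # -> True
```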
## Appendix B Mathematical Explanation of Effectiveness for MHA
In addition to the intuitive conclusion from Table V that the expressiveness of multiple hops is better than that of one hop, we can also perform a spectral analysis [61] of \(\bar{A}\) and \(\bar{\mathcal{A}}\) to obtain relevant proof.
The relationship between our multi-hop and one-hop attention is given in Eq. 5: \(\bar{\mathcal{A}}=\sum_{i=0}^{k}\omega_{i}\bar{A}^{i}\), \(\omega_{i}=\beta(1-\beta)^{i}\), and \(\bar{A}^{i}\) is the power matrix. In spectral analysis, the Jordan decomposition of graph attention \(\bar{A}\) is \(\bar{A}=U\bar{\Lambda}U^{-1}\), where the \(i\)-th column of \(U\) is the eigenvector \(u_{i}\) of \(\bar{A}\), \(\bar{\Lambda}\) is the diagonal matrix, and each element in \(\bar{\Lambda}\) is the eigenvalue corresponding to \(u_{i}\), \(\bar{\Lambda}_{ii}=\lambda_{i}\). Then we have:
\[\bar{\mathcal{A}}=\sum_{l=0}^{K}\omega_{l}\bar{A}^{l}=\sum_{l=0}^{K}\omega_{l} (U\bar{\Lambda}U^{-1})^{l} \tag{12}\]
The eigenvectors of the power matrix \(\bar{A}^{n}\) are the same as those of \(\bar{A}\), since \(\bar{A}^{n}=(U\bar{\Lambda}U^{-1})(U\bar{\Lambda}U^{-1})\cdots(U\bar{\Lambda}U^{-1})=U\bar{\Lambda}^{n}U^{-1}\). In addition, the eigenvectors of \(\bar{A}+\bar{A}^{2}\) are the same as those of \(\bar{A}\). By recursion, the summation of \(\bar{A}^{l},l\in[0,K]\) has the same eigenvectors as \(\bar{A}\), so we have:
\[\bar{\mathcal{A}}=U(\sum_{l=0}^{K}\omega_{l}\bar{\Lambda}^{l})U^{-1}. \tag{13}\]
Therefore, by the same reasoning, the eigenvectors of \(\bar{\mathcal{A}}\) and \(\bar{A}\) are also the same.
**Lemma 1**.: _Let \(\bar{\lambda}_{i}\) and \(\lambda_{i}\) be the \(i\)-th eigenvalues of \(\bar{\mathcal{A}}\) and \(\bar{A}\), respectively. Then, the eigenvalue relationship function is_
\[\bar{\lambda}_{i}=\sum_{l=0}^{\infty}\omega_{l}\lambda_{i}^{\,l}=\sum_{l=0}^{ \infty}\beta(1-\beta)^{l}\lambda_{i}^{\,l}=\frac{\beta}{1-(1-\beta)\lambda_{i}} \tag{14}\]
Proof.: \(\bar{L}=I-Q^{-\frac{1}{2}}\bar{A}Q^{-\frac{1}{2}}\) is the symmetric normalized Laplacian of the skeleton graph \(G\), where \(Q=diag([q_{1},q_{2},\cdots,q_{N}])\) with \(q_{i}=\sum_{j}\bar{A}_{ij}\) is the degree matrix. Since \(\bar{A}\) is the one-hop attention matrix of \(G\), if \(\bar{A}\) is generated by softmax, then \(q_{i}=1\) and thus \(Q=I\). Therefore, \(\bar{L}=I-\bar{A}\), and the eigenvalue of \(\bar{L}\) is \(\tilde{\lambda}_{i}=1-\lambda_{i}\). Also, the work [62] proves that the value domain of the eigenvalues \(\tilde{\lambda}_{i}\) of a symmetric normalized Laplacian matrix is \([0,2]\). Thus, \(-1\leq\lambda_{i}\leq 1\) and \(\beta\in(0,1)\), so we have \(|(1-\beta)\lambda_{i}|\leq(1-\beta)<1\). When \(K\rightarrow\infty\), \(((1-\beta)\lambda_{i})^{K}\to 0\) and \(\bar{\lambda}_{i}=\lim_{K\rightarrow\infty}\sum_{l=0}^{K}\beta(1-\beta)^{l}\lambda_{i}^{\,l}\). Let \(R=\sum_{l=0}^{K}\beta(1-\beta)^{l}\lambda_{i}^{\,l}\); it is calculated as follows
\[\begin{split} R&=\beta+\beta(1-\beta)\lambda_{i}+ \beta(1-\beta)^{2}\lambda_{i}^{\,2}+\cdots+\beta(1-\beta)^{K}\lambda_{i}^{\,K} \\ &=\beta(1+(1-\beta)\lambda_{i}+((1-\beta)\lambda_{i})^{2}+\cdots+( (1-\beta)\lambda_{i})^{K})\\ &=\beta\frac{1-((1-\beta)\lambda_{i})^{K}}{1-(1-\beta)\lambda_{i} }.\end{split} \tag{15}\]
We get \(\bar{\lambda}_{i}=\lim_{K\rightarrow\infty}R=\lim_{K\rightarrow\infty}\frac{\beta(1-((1-\beta)\lambda_{i})^{K})}{1-(1-\beta)\lambda_{i}}=\frac{\beta}{1-(1-\beta)\lambda_{i}}\).
Fig. 7: The left and middle subplots visualize the attention and its matrices of two actions, “clapping” and “kicking,” with and without MHA-GC. The action selection focuses on the upper and lower body, respectively. Each action has a sampling interval of 20 frames. The larger the attention value, the larger the circle’s radius centered on the joint. The right subplot shows the visualization of the loss function before and after adding MHA-GC.
The above proof is under the condition that \(\bar{A}\) is obtained by softmax, but here, we use the feature distance to compute \(\bar{A}\) to maintain semantic consistency with the GPR Graph (Sec. 4.3). For this case, we inspect the degree matrix \(Q\) and find that its eigenvalues are less than 0.3. Thus, the eigenvalues \(\lambda_{Qi}\) of \(Q^{-1/2}\) are greater than 1, and \(-1\leq\lambda_{Qi}\lambda_{i}\lambda_{Qi}\leq 1\). Since \(\tilde{\lambda}_{i}\in[0,2]\), \(|\lambda_{Qi}\lambda_{i}\lambda_{Qi}|\leq 1\), and since \(\lambda_{Qi}\geq 1\), we have \(|\lambda_{i}|\leq 1\). The condition \(|(1-\beta)\lambda_{i}|\leq(1-\beta)<1\) still holds.
We further obtain the eigenvalue relations for the normalized laplacian graph of \(\bar{\mathcal{A}}\) and \(\bar{A}\) according to Eq. 14:
\[\frac{\bar{\lambda}_{i}^{G}}{\lambda_{i}^{G}} =\frac{1-\bar{\lambda}_{i}}{\lambda_{i}^{G}}=\frac{1-\frac{\beta} {1-(1-\beta)\lambda_{i}}}{\lambda_{i}^{G}}=\frac{1-\frac{\beta}{1-(1-\beta)(1- \bar{\lambda}_{i}^{G})}}{\lambda_{i}^{G}} \tag{16}\] \[=\frac{1}{\frac{\beta}{1-\beta}+\lambda_{i}^{G}}.\]
If \(\lambda_{i}^{G}\) is large, the multi-hop Laplacian eigenvalue \(\bar{\lambda}_{i}^{G}\) is smaller than the one-hop \(\lambda_{i}^{G}\). Eq. 16 shows that multi-hop attention introduces smaller eigenvalues. The smaller \(\beta\) is, the more pronounced the effect. It has been shown in spectral analysis that low-frequency signals correspond to the topological information of the graph, while larger eigenvalues correspond more to noise [63, 64]. Therefore, multi-hop learned representations can better describe the inter-node relationships.
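The damping effect of Eq. (16) can be illustrated with a few lines of NumPy; the sampled eigenvalues are arbitrary.

```python
import numpy as np

# Eq. (16): the ratio of multi-hop to one-hop Laplacian eigenvalues is
# 1 / (beta/(1-beta) + lambda^G); it shrinks as lambda^G grows, so the
# large (noisier) eigenvalues are suppressed relative to the small ones.
lam_G = np.array([0.5, 1.0, 1.5, 2.0])       # one-hop Laplacian eigenvalues in [0, 2]
for beta in (0.1, 0.5):
    print(beta, np.round(1.0 / (beta / (1 - beta) + lam_G), 3))
```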
In addition, we visualize the joint attention of MHA on more actions in Fig. 8, including "drinking," "playing with phone table," "touching the head," "writing," "brushing teeth," "reach into pocket," "jump up," "pick up," "sitting down," and "staggering." Some actions focus on the upper body, such as the head and hands. Some actions focus on the lower body, such as the hip, knee, and foot. Our MHA can correctly show the strength of the relationship between these critical parts in different actions. The relationship between the left hand and the head was stronger for "brushing" than for "drinking." In the two actions of "writing" and "playing with phone tablet," MHA paid more vital attention to the "spine" during writing. In addition, MHA can effectively describe the remote interaction between hands and legs in "jump up" and between neck, shoulders, and feet in "staggering."
## Appendix C Supplemental Experimental Results
### _Ablations on NTU RGB+D 120_
**Weighting Factors for \(L_{aug}\) in Total Loss.** We perform a grid search for the coefficients \(\lambda\) of \(L_{aug}\) and the prompt that generates T-C in PC-AC. As shown in Table VII, \(\lambda=0.2\) performs best for the class exemplars T-C generated using p3. Prompt p3 searches for possible state features of each node as the action is performed and calculates the correlation of features between nodes in T-C. The design has the same goal as the skeleton-based action recognition task and validates the importance of node correlation modeling for correct classification.
**Input and Output Configuration of The PC-AC Module.** PC-AC achieves the best results when the feature \(\theta_{3}\) of the last stage of the main branch is selected as the input; the experiments are shown in Table VIII (left). Our model consists of nine base blocks. These nine base blocks are divided into three stages based on the reduction of the feature dimension. \(\theta_{3}\), \(\theta_{2}\), and \(\theta_{1}\) correspond to the output features of the ninth, sixth, and third blocks. In addition to the input feature selection, we further compared different pooling methods used to reduce the feature dimensionality and obtain the classification logits. Table VIII (right) shows that average pooling performs best. The averaged relationships between joints are more meaningful as the per-category representation.
**Multi-stream Ensemble.** We integrated the skeletal modal representations generated by different prompts to observe the final
\begin{table}
\begin{tabular}{c||c||c||c||c} \hline \(\lambda\) & 0.1 & 0.2 & 0.3 & 0.5 \\ \hline p1: [J] function in [C] & 85.0 & 85.6 & 85.3 & 84.8 \\ p2: What happens to [J] when a person is [C]? & 85.1 & 85.8 & 85.5 & 85.1 \\ p3: What will [J] act like when [C]? & 85.3 & **86.1** & 85.6 & 85.3 \\ p4: When [C]/J human body. & 85.0 & 85.5 & 85.0 & 84.8 \\ p5: When [C]/J what will [J] act like? & 85.2 & 85.7 & 85.2 & 84.9 \\ p6: When a person is [C], [J] is in motion. & 85.1 & 85.5 & 85.2 & 85.0 \\ \hline \end{tabular}
\end{table} TABLE VII: \(\lambda\) selection of PC-AC with different text prompts T-C.
Fig. 8: More action visualization for MHA, using different colors for different actions. Each action has a sampling interval of 20 frames. The larger the attention value, the larger the circle’s radius centered on the joint.
performance of the multi-stream integration. We give the classification accuracy of the skeleton representation \(B_{p*}\) generated by each prompt and the ensemble results for 2s, 4s, 5s, and 6s in Table IX. The 2s in Table IX represent the integrated joint and bone models, the 4s represent the integrated 2s and their motion modalities, the 5s represent the integrated 4s and arbitrary \(B_{p*}\) modalities, and the 6s represent the integrated 4s and the two best-performing prompts \(B_{p2}\) and \(B_{p5}\) in the 5s. In Table IX, we can see that the models trained with \(B_{p*}\) have a corresponding improvement in accuracy. Choosing the two best-performing \(B_{p*}\) for integration can lead to higher performance.
### _Multi-stream Ensemble on All Datasets_
The experimental results of the multi-stream integration of our method on different validation classifications of the three benchmark datasets are shown in Table X. All 6s ensembles were done by the best-performing skeleton modalities \(B_{p2}\) and \(B_{p5}\) on NTU RGB+D 120 X-Sub. It can be observed that the integration performance of the skeleton modalities generated by these two prompts is robust on all datasets. Our method uses the sum of standard deviations as the criterion for selecting a new skeleton representation and requires that the newly selected "bone" be similar to the original skeleton representation. Moreover, the preserved bones are guided by a priori information. Although the model learned from the new skeleton representation does not perform as well as the original bone model, its good performance in ensemble experiments demonstrates that the variability of the new "bone" can effectively complement feature learning.
We give all the symbols used in Sec. 4 in Table XI to make them easier to reference.
|
2302.01425 | Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective | The top-k operator returns a sparse vector, where the non-zero values
correspond to the k largest values of the input. Unfortunately, because it is a
discontinuous function, it is difficult to incorporate in neural networks
trained end-to-end with backpropagation. Recent works have considered
differentiable relaxations, based either on regularization or perturbation
techniques. However, to date, no approach is fully differentiable and sparse.
In this paper, we propose new differentiable and sparse top-k operators. We
view the top-k operator as a linear program over the permutahedron, the convex
hull of permutations. We then introduce a p-norm regularization term to smooth
out the operator, and show that its computation can be reduced to isotonic
optimization. Our framework is significantly more general than the existing one
and allows for example to express top-k operators that select values in
magnitude. On the algorithmic side, in addition to pool adjacent violator (PAV)
algorithms, we propose a new GPU/TPU-friendly Dykstra algorithm to solve
isotonic optimization problems. We successfully use our operators to prune
weights in neural networks, to fine-tune vision transformers, and as a router
in sparse mixture of experts. | Michael E. Sander, Joan Puigcerver, Josip Djolonga, Gabriel Peyré, Mathieu Blondel | 2023-02-02T21:32:13Z | http://arxiv.org/abs/2302.01425v3 | # Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
###### Abstract
The top-\(k\) operator returns a \(k\)-sparse vector, where the non-zero values correspond to the \(k\) largest values of the input. Unfortunately, because it is a discontinuous function, it is difficult to incorporate in neural networks trained end-to-end with backpropagation. Recent works have considered differentiable relaxations, based either on regularization or perturbation techniques. However, to date, no approach is fully differentiable _and_ sparse. In this paper, we propose new differentiable _and_ sparse top-\(k\) operators. We view the top-\(k\) operator as a linear program over the permutahedron, the convex hull of permutations. We then introduce a \(p\)-norm regularization term to smooth out the operator, and show that its computation can be reduced to isotonic optimization. Our framework is significantly more general than the existing one and allows for example to express top-\(k\) operators that select values _in magnitude_. On the algorithmic side, in addition to pool adjacent violator (PAV) algorithms, we propose a new GPU/TPU-friendly Dykstra algorithm to solve isotonic optimization problems. We successfully use our operators to prune weights in neural networks, to fine-tune vision transformers, and as a router in sparse mixture of experts.
## 1 Introduction
Finding the top-\(k\) values and their corresponding indices in a vector is a widely used building block in modern neural networks. For instance, in sparse mixture of experts (MoEs) (Shazeer et al., 2017; Fedus et al., 2022), a top-\(k\) router maps each token to a selection of \(k\) experts (or each expert to a selection of \(k\) tokens). In beam search for sequence decoding (Wiseman and Rush, 2016), a beam of \(k\) possible output sequences is maintained and updated at each decoding step. For pruning neural networks, the top-\(k\) operator can be used to sparsify a neural network, by removing weights with the smallest magnitude (Han et al., 2015; Frankle and Carbin, 2018). Finally, top-\(k\) accuracy (e.g., top-3 or top-5) is frequently used to evaluate the performance of neural networks at inference time.
However, the top-\(k\) operator is a discontinuous piecewise affine function with derivatives either undefined or constant (the related top-\(k\)_mask_ operator, which returns a binary encoding of the indices corresponding to the top-\(k\) values, has null derivatives). This makes it hard to use in a neural network trained with gradient backpropagation. Recent works have considered differentiable relaxations, based either on regularization or perturbation techniques (see SS2 for a review). However, to date, no approach is differentiable everywhere _and_ sparse. Sparsity is crucial in neural networks
that require _conditional_ computation. This is for instance the case in sparse mixture of experts, where the top-\(k\) operator is used to "route" tokens to selected experts. Without sparsity, all experts would need to process all tokens, leading to high computational cost. This is also the case when selecting weights with highest magnitude in neural networks: the non-selected weights should be exactly 0 in order to optimize the computational and memory costs.
In this work, we propose novel differentiable everywhere _and_ sparse top-\(k\) operators (see Figure 1). We build upon the framework of Blondel et al. (2020), which casts sorting and ranking as linear programs over the permutahedron, and uses a reduction to isotonic optimization. We significantly generalize that framework in several ways. Specifically, we make the following contributions:
* After reviewing related work in SS2 and background in SS3, we introduce our generalized framework in SS4. We introduce a new nonlinearity \(\varphi\), allowing us to express new operators (such as the top-\(k\) in magnitude). In doing so, we also establish new connections between so-called \(k\)-support norms and the permutahedron.
* We introduce a regularization term to obtain a relaxed top-\(k\) operator. In particular, using \(p\)-norm regularization, we obtain the first differentiable everywhere _and_ sparse top-\(k\) operator (Figure 3).
* In SS5, we derive pool adjacent violator (PAV) algorithms for solving isotonic optimization when using \(\varphi\) and/or when using \(p\)-norm regularization. We show that the Jacobian of our operator can be computed in closed form.
* As a GPU/TPU friendly alternative to PAV, we propose a Dykstra algorithm to solve isotonic optimization, which is easy to vectorize in the case \(p=2\).
* In SS6, we present three applications of our operators. First, we use them to prune weights in a multilayer perceptron during training and show that they lead to better accuracy than with a hard top-\(k\). Second, we define top-\(k\) losses to fine-tune vision transformers (ViTs) and obtain better top-\(k\) accuracy than with the cross-entropy loss. Finally, we use our operators as a router in vision mixture of experts, and show that they outperform the hard top-\(k\) router.
Figure 1: **Illustration of our differentiable and sparse top-k mask.** For \(k=2\), we consider \(\mathbf{\theta}(s)=(3,1,-1+s,s)\in\mathbb{R}^{4}\) and plot \(\text{topkmask}(\mathbf{\theta}(s))_{2}+\text{topkmask}(\mathbf{\theta}(s))_{3}\) as a function of \(s\). We compare the hard version (no regularization) with our proposed operator using \(p\)-norm regularization: \(p=2\) leads to differentiable operator with kinks; \(p=4/3\), leads to a smooth operator. Both operators are **sparse**: they are exactly 0 for some values of \(s\).
Related work
Differentiable loss functions.Several works have proposed a differentiable loss function as a surrogate for a discrete, discontinuous metric. For example, loss functions have been proposed for top-\(k\) accuracy (Lapin et al., 2015, 2016; Berrada et al., 2018; Petersen et al., 2022) and various ranking metrics (Chapelle and Wu, 2010; Adams and Zemel, 2011; Rolinek et al., 2020).
Differentiable operators.In a different line of research, which is the main focus of this work, a differentiable operator is proposed, which can be used either as an intermediate layer in a neural network or as final output, fed into an arbitrary loss function. For example, Amos et al. (2019) and Qian et al. (2022) proposed a smooth (but not sparse) top-\(k\) operator based on binary entropy regularization. Related projections on the capped simplex were proposed (Martins and Kreutzer, 2017; Malaviya et al., 2018; Blondel, 2019) in different contexts. These works use ad-hoc algorithms, while we use a reduction to isotonic optimization. Cuturi et al. (2019) proposed a relaxation of the sorting and ranking operators based on entropy-regularized optimal transport and used it to obtain a differentiable (but again not sparse) top-\(k\) operator. Its computation relies on Sinkhorn's algorithm (Sinkhorn, 1967; Cuturi, 2013), which in addition makes it potentially slow to compute and differentiate. A similar approach was proposed by Xie et al. (2020).
The closest work to ours is that of Blondel et al. (2020), in which sorting and ranking are cast as linear programs over the permutahedron. To make these operators differentiable, regularization is introduced in the formulation and it is shown that the resulting operators can be computed via isotonic optimization in \(O(n\log n)\) time. Unfortunately, the proposed operators still include kinks: they are not differentiable everywhere. The question of how to construct a differentiable everywhere _and_ sparse relaxation with \(O(n\log n)\) time complexity is therefore still open. In this work, we manage to do so by using \(p\)-norm regularization. Furthermore, by introducing a new nonlinearity \(\varphi\), we significantly generalize the framework of Blondel et al. (2020), allowing us for instance to express a new top-\(k\) operator _in magnitude_. We introduce a new GPU/TPU friendly Dykstra algorithm as an alternative to PAV.
Instead of introducing regularization, another technique relies on perturbation (Berthet et al., 2020). This technique has been used to obtain a differentiable (but still _not sparse_) top-k for image patch selection (Cordonnier et al., 2021).
Pruning weights with small magnitude.Many recent works focus on neural network pruning, where parameters are removed to significantly reduce the size of a model. See Blalock et al. (2020) for a recent survey. A simple yet popular method for pruning neural networks is by global magnitude pruning Collins and Kohli (2014); Han et al. (2015): weights with lowest absolute value are set to 0. While most of the pruning techniques are performed after the model is fully trained Blalock et al. (2020), some works prune periodically during training Gale et al. (2019). However, to the best of our knowledge, pruning by magnitude is not done in a differentiable fashion. In this work, we empirically show that pruning weights with a differentiable top-\(k\) operator in magnitude during training leads to faster convergence and better accuracy than with a "hard" one.
Top-k operator for mixture of experts.Sparse mixture of experts models (MoEs) Shazeer et al. (2017) are a class of deep learning models where only a small proportion of the model, known as experts, is activated, depending on its input. Therefore, sparse MoEs are able to increase the number of parameters without increasing the time complexity of the model. Sparse MoEs have achieved great empirical successes in computer vision Riquelme et al. (2021); Zhou et al. (2022) as well as natural language processing Shazeer et al. (2017); Lewis et al. (2021); Fedus et al. (2021). At the heart of the sparse MoE model is its routing mechanism, which determines which inputs (or tokens) are assigned to which experts. In the sparse mixture of experts literature, some works have recently proposed new top-\(k\) operators in the routing module. Hazimeh et al. (2021) proposed a binary encoding formulation
to select non-zero weights. However, their formulation does not approximate the true top-\(k\) operator and sparsity is only supported at inference time, not during training. Liu et al. (2022) proposed an optimal transport formulation supporting \(k\)-sparsity constraints and used it for sparse mixture of experts. In this work, we propose to replace the hard top-\(k\) router, which is a discontinuous function, by our smooth relaxation.
## 3 Background
Notation.We denote a permutation of \([n]\) by \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\) and its inverse by \(\sigma^{-1}\). When seen as a vector, we denote it \(\mathbf{\sigma}\). We denote the set of all \(n!\) permutations by \(\Sigma\). Given a vector \(\mathbf{x}\in\mathbb{R}^{n}\), we denote the version of \(\mathbf{x}\) permuted according to \(\sigma\) by \(\mathbf{x}_{\sigma}\coloneqq(x_{\sigma_{1}},\ldots,x_{\sigma_{n}})\). We denote the \(i\)-th largest value of \(\mathbf{x}\in\mathbb{R}^{n}\) by \(x_{[i]}\). Without loss of generality, we always sort values in descending order. The conjugate of \(f(\mathbf{x})\) is denoted by \(f^{*}(\mathbf{y})\coloneqq\sup_{\mathbf{x}\in\mathbb{R}^{n}}\langle\mathbf{x},\mathbf{y} \rangle-f(\mathbf{x})\).
Review of operators.We denote the **argsort** operator as the permutation \(\mathbf{\sigma}\) sorting \(\mathbf{x}\in\mathbb{R}^{n}\), i.e.,
\[\operatorname{argsort}(\mathbf{x})\coloneqq\mathbf{\sigma},\quad\text{where}\quad x _{\sigma_{1}}\geq\cdots\geq x_{\sigma_{n}}.\]
We denote the **sort** operator as the values of \(\mathbf{x}\in\mathbb{R}^{n}\) in sorted order, i.e.,
\[\operatorname{sort}(\mathbf{x})\coloneqq\mathbf{x}_{\sigma},\quad\text{where}\quad \mathbf{\sigma}=\operatorname{argsort}(\mathbf{x}).\]
The value \([\operatorname{sort}(\mathbf{x})]_{i}\) is also known as the \(i\)-th order statistic. We denote the **rank** operator as the function returning the positions of the vector \(\mathbf{x}\in\mathbb{R}^{n}\) in the sorted vector. It is formally equal to the argsort's inverse permutation:
\[\operatorname{rank}(\mathbf{x})\coloneqq\mathbf{\sigma}^{-1},\quad\text{where}\quad \mathbf{\sigma}=\operatorname{argsort}(\mathbf{x}).\]
Smaller rank \([\operatorname{rank}(\mathbf{x})]_{i}\) means that \(x_{i}\) has higher value. The **top-k mask** operator returns a bit-vector encoding whether each value \(x_{i}\) is within the top-\(k\) values or not:
\[[\operatorname{topkmask}(\mathbf{x})]_{i}\coloneqq\begin{cases}1,&\text{if }[ \operatorname{rank}(\mathbf{x})]_{i}\leq k\\ 0,&\text{otherwise}.\end{cases}.\]
The **top-k** operator returns the values themselves if they are within the top-\(k\) values or \(0\) otherwise, i.e.,
\[\operatorname{topk}(\mathbf{x})\coloneqq\mathbf{x}\circ\operatorname{topkmask}(\mathbf{x}),\]
where \(\circ\) denotes element-wise multiplication. The **top-k in magnitude** operator is defined similarly as
\[\operatorname{topkmag}(\mathbf{x})\coloneqq\mathbf{x}\circ\operatorname{topkmask}(| \mathbf{x}|).\]
To illustrate, if \(x_{3}\geq x_{1}\geq x_{2}\) and \(|x_{2}|\geq|x_{3}|\geq|x_{1}|\), then
* \(\operatorname{argsort}(\mathbf{x})=(3,1,2)\)
* \(\operatorname{sort}(\mathbf{x})=(x_{3},x_{1},x_{2})\)
* \(\operatorname{rank}(\mathbf{x})=(2,3,1)\)
* \(\operatorname{topkmask}(\mathbf{x})=(1,0,1)\)
* \(\operatorname{topk}(\mathbf{x})=(x_{1},0,x_{3})\),
* \(\operatorname{topkmag}(\mathbf{x})=(0,x_{2},x_{3})\),
where in the last three, we used \(k=2\).
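To make these definitions concrete, here is a minimal NumPy sketch of the operators (our own illustration, not the authors' JAX implementation). The concrete input \(\mathbf{x}=(1,-3,2)\) satisfies the ordering assumed in the worked example above, so the printed outputs reproduce it.

```python
import numpy as np

def argsort_desc(x):
    # permutation sigma such that x[sigma[0]] >= x[sigma[1]] >= ...
    return np.argsort(-x)

def rank(x):
    # rank[i] = 1-indexed position of x[i] in the descending sort
    sigma = argsort_desc(x)
    r = np.empty_like(sigma)
    r[sigma] = np.arange(1, len(x) + 1)
    return r

def topkmask(x, k):
    return (rank(x) <= k).astype(x.dtype)

def topk(x, k):
    return x * topkmask(x, k)

def topkmag(x, k):
    return x * topkmask(np.abs(x), k)

x = np.array([1.0, -3.0, 2.0])   # x3 >= x1 >= x2 and |x2| >= |x3| >= |x1|
print(argsort_desc(x) + 1)       # (3, 1, 2)
print(rank(x))                   # (2, 3, 1)
print(topkmask(x, 2))            # (1, 0, 1)
print(topk(x, 2))                # (x1, 0, x3) = (1, 0, 2)
print(topkmag(x, 2))             # (0, x2, x3) = (0, -3, 2)
```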
Permutahedron.The permutahedron associated with a vector \(\mathbf{w}\in\mathbb{R}^{n}\), a well-known object in combinatorics (Bowman, 1972; Ziegler, 2012), is the convex hull of the permutations of \(\mathbf{w}\), i.e.,
\[P(\mathbf{w})\coloneqq\operatorname{conv}(\{\mathbf{w}_{\sigma}\colon\sigma\in\Sigma\}) \subset\mathbb{R}^{n}.\]
We define the linear maximization oracles (LMO) associated with \(P(\mathbf{w})\) by
\[f(\mathbf{x},\mathbf{w}) \coloneqq\max_{\mathbf{y}\in P(\mathbf{w})}\langle\mathbf{x},\mathbf{y}\rangle \tag{1}\] \[\mathbf{y}(\mathbf{x},\mathbf{w}) \coloneqq\operatorname*{argmax}_{\mathbf{y}\in P(\mathbf{w})}\ \langle\mathbf{x},\mathbf{y}\rangle=\nabla_{1}f(\mathbf{x},\mathbf{w}),\]
where \(\nabla_{1}f(\mathbf{x},\mathbf{w})\) is technically a subgradient of \(f\) w.r.t. \(\mathbf{x}\). The LMO can be computed in \(O(n\log n)\) time. Indeed, the calculation of the LMO reduces to a sorting operation, as shown in the following known proposition.
**Proposition 1**.: _(Linear maximization oracles)_
_If \(w_{1}\geq\dots\geq w_{n}\) (if not, sort \(\mathbf{w}\)), then_
\[f(\mathbf{x},\mathbf{w})=\sum_{i=1}^{n}w_{i}x_{[i]}\quad\text{and}\quad\mathbf{y}(\mathbf{x}, \mathbf{w})=\mathbf{w}_{\operatorname{rank}(\mathbf{x})}.\]
A proof is included for completeness in Appendix A.1.
LP formulations.Let us denote the reversing permutation by \(\mathbf{\rho}\coloneqq(n,n-1,\dots,1)\). Blondel et al. (2020) showed that the sort and rank operators can be formulated as linear programs (LP) over the permutahedron:
\[\operatorname{sort}(\mathbf{x}) =\mathbf{y}(\mathbf{\rho},\mathbf{x})=\operatorname*{argmax}_{\mathbf{y}\in P( \mathbf{x})}\langle\mathbf{\rho},\mathbf{y}\rangle\] \[\operatorname{rank}(\mathbf{x}) =\mathbf{y}(-\mathbf{x},\mathbf{\rho})=\operatorname*{argmax}_{\mathbf{y}\in P( \mathbf{\rho})}\langle-\mathbf{x},\mathbf{y}\rangle.\]
In the latter expression, the minus sign is due to the fact that we use the convention that smaller rank indicates higher value (i.e., the maximum value has rank 1).
Although not mentioned by Blondel et al. (2020), it is also easy to express the top-\(k\) mask operator as an LP
\[\operatorname{topkmask}(\mathbf{x})=\mathbf{y}(\mathbf{x},\mathbf{1}_{k})=\operatorname*{ argmax}_{\mathbf{y}\in P(\mathbf{1}_{k})}\langle\mathbf{x},\mathbf{y}\rangle, \tag{2}\]
where \(\mathbf{1}_{k}\coloneqq(\underbrace{1,...,1}_{k},\underbrace{0,\dots,0}_{n-k})\). For this choice of \(\mathbf{w}\), the permutahedron enjoys a particularly simple expression
\[P(\mathbf{1}_{k})=\{\mathbf{y}\in\mathbb{R}^{n}\colon\langle\mathbf{y},\mathbf{1}\rangle=k, \mathbf{y}\in[0,1]^{n}\}\]
and \(\frac{1}{k}P(\mathbf{1}_{k})\) is known as the capped simplex (Warmuth and Kuzmin, 2008; Blondel et al., 2020). This is illustrated in Figure 2. To obtain relaxed operators, Blondel et al. (2020) proposed to introduce regularization in (1) (see "recovering the previous framework" in the next section) and used a reduction to isotonic optimization.
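As a quick sanity check of Proposition 1 and of the LP formulation (2), the following sketch (again our own illustration, reusing only standard NumPy) evaluates the linear maximization oracle by sorting and verifies that \(\mathbf{y}(\mathbf{x},\mathbf{1}_{k})\) reproduces the hard top-\(k\) mask; the specific input vector is arbitrary.

```python
import numpy as np

def lmo(x, w):
    """Proposition 1: with w sorted in descending order,
    f(x, w) = sum_i w_i x_[i]  and  y(x, w) = w permuted by rank(x)."""
    w = np.sort(w)[::-1]                       # ensure w_1 >= ... >= w_n
    sigma = np.argsort(-x)                     # descending argsort of x
    rank = np.empty_like(sigma)
    rank[sigma] = np.arange(len(x))
    f = np.dot(w, x[sigma])                    # sum_i w_i x_[i]
    y = w[rank]                                # w_{rank(x)}
    return f, y

x = np.array([0.3, -1.2, 2.0, 0.7])
k = 2
w = np.zeros_like(x); w[:k] = 1.0              # w = 1_k
f, y = lmo(x, w)
print(y)   # hard top-k mask of x: (0., 0., 1., 1.)
print(f)   # sum of the k largest entries: 2.0 + 0.7 = 2.7
```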
## 4 Proposed generalized framework
In this section, we generalize the framework of Blondel et al. (2020) by adding an optional nonlinearity \(\mathbf{x}^{\prime}=\varphi(\mathbf{x})\). In addition to the operators covered by the previous framework, this allows us to directly express the top-\(k\) in magnitude operator, which was not possible before.
Introducing a mapping \(\varphi\).Consider a mapping \(\varphi(\mathbf{x})\coloneqq(\phi(x_{1}),\dots,\phi(x_{n}))\). Given \(\mathbf{x}\in\mathbb{R}^{n}\) and \(\mathbf{w}\in\mathbb{R}^{n}\), we define
\[f_{\varphi}(\mathbf{x},\mathbf{w}) \coloneqq f(\varphi(\mathbf{x}),\mathbf{w})\] \[\mathbf{y}_{\varphi}(\mathbf{x},\mathbf{w}) \coloneqq\nabla_{1}f_{\varphi}(\mathbf{x},\mathbf{w}).\]
When \(\varphi(\mathbf{x})=\mathbf{x}\) (identity mapping), we clearly recover the existing framework, i.e., \(f_{\varphi}(\mathbf{x},\mathbf{w})=f(\mathbf{x},\mathbf{w})\) and \(\mathbf{y}_{\varphi}(\mathbf{x},\mathbf{w})=\mathbf{y}(\mathbf{x},\mathbf{w})\).
When \(\varphi(\mathbf{x})\neq\mathbf{x}\), our framework starts to differ from the previous one, since \(\varphi\) affects differentiation. By the chain rule and Danskin's theorem (1966), we get
\[\mathbf{y}_{\varphi}(\mathbf{x},\mathbf{w}) \coloneqq\nabla_{1}f_{\varphi}(\mathbf{x},\mathbf{w}) \tag{3}\] \[=\partial\varphi(\mathbf{x})^{\top}\mathbf{y}(\varphi(\mathbf{x}),\mathbf{w})\] \[=(\phi^{\prime}(x_{1}),\dots,\phi^{\prime}(x_{n}))\circ\mathbf{y}( \varphi(\mathbf{x}),\mathbf{w}),\]
where \(\partial\varphi(\mathbf{x})\in\mathbb{R}^{n\times n}\) denotes the Jacobian of \(\varphi(\mathbf{x})\) and \(\mathbf{y}(\mathbf{x},\mathbf{w})\) is given by Proposition 1.
Top-k in magnitude.As we emphasized, one advantage of our proposed generalization is that we can express the top-\(k\) in magnitude operator. Indeed, with \(\phi(x)=\frac{1}{2}x^{2}\), we can see from (2) and (3) that we have for all \(\mathbf{x}\in\mathbb{R}^{n}\)
\[\text{topkmag}(\mathbf{x})=\mathbf{y}_{\varphi}(\mathbf{x},\mathbf{1}_{k})=\nabla_{1}f_{ \varphi}(\mathbf{x},\mathbf{1}_{k}). \tag{4}\]
Obviously, for all \(\mathbf{x}\in\mathbb{R}^{n}_{+}\), we also have \(\text{topkmag}(\mathbf{x})=\text{topk}(\mathbf{x})\). Top-k in magnitude is useful for pruning weights with small magnitude in a neural network, as we demonstrate in our experiments in SS6.
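For illustration only, identity (4) can be checked numerically: with \(\phi(x)=\frac{1}{2}x^{2}\), Equation (3) multiplies the LMO output by \(\phi^{\prime}(x_{i})=x_{i}\), which recovers the hard top-\(k\) in magnitude. The NumPy sketch below is our own check, not the paper's code, and the test vector is arbitrary.

```python
import numpy as np

def hard_topk_mask(x, k):
    # 1 on the k largest entries of x, 0 elsewhere
    mask = np.zeros_like(x)
    mask[np.argsort(-x)[:k]] = 1.0
    return mask

def y_phi(x, k):
    # Equation (3) with phi(x) = x^2/2: phi'(x) * y(phi(x), 1_k),
    # where y(phi(x), 1_k) is the top-k mask of phi(x) by Equation (2)
    phi_x = 0.5 * x**2
    return x * hard_topk_mask(phi_x, k)

x = np.array([1.0, -3.0, 2.0, 0.5])
k = 2
print(y_phi(x, k))                          # [ 0. -3.  2.  0.]
print(x * hard_topk_mask(np.abs(x), k))     # same: hard top-k in magnitude
```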
Introducing regularization.We now explain how to make our generalized operator differentiable. We introduce convex regularization \(R:\mathbb{R}^{n}\to\mathbb{R}\) in the dual space:
\[f^{*}_{\varphi,R}(\mathbf{y},\mathbf{w})\coloneqq f^{*}_{\varphi}(\mathbf{y},\mathbf{w})+R( \mathbf{y}),\]
where \(f^{*}_{\varphi}\) is the conjugate of \(f_{\varphi}\) in the first argument. Going back to the primal space, we obtain a new relaxed operator.
Figure 2: The permutahedron \(P(\mathbf{w})\) is a polytope whose vertices are permutations of \(\mathbf{w}\). Depending, on the choice of \(\mathbf{w}\), it can express several known polytopes. When \(\mathbf{w}=(1,0,0)\), \(P(\mathbf{w})\) is the probability simplex (light beige), which corresponds to the top-1 setting. When \(\mathbf{w}=(\frac{1}{2},\frac{1}{2},0)\), \(P(\mathbf{w})\) is the capped probability simplex (blue), which corresponds to the top-\(k\) setting (here, with \(k=2\)). When \(\mathbf{w}=(\frac{2}{3},\frac{1}{3},0)\), \(P(\mathbf{w})\) is an hexagon, which corresponds to the partial ranking setting (gray).
**Proposition 2**.: _(Relaxed operator)_
_Let \(R\colon\mathbb{R}^{n}\to\mathbb{R}\) be a convex regularizer. Then_
\[f_{\varphi,R}(\mathbf{x},\mathbf{w}) \coloneqq\max_{\mathbf{y}\in\mathbb{R}^{n}}\langle\mathbf{y},\mathbf{x} \rangle-f_{\varphi}^{*}(\mathbf{y},\mathbf{w})-R(\mathbf{y})\] \[=\min_{\mathbf{u}\in\mathbb{R}^{n}}R^{*}(\mathbf{x}-\mathbf{u})+f_{\varphi}( \mathbf{u},\mathbf{w})\] \[\mathbf{y}_{\varphi,R}(\mathbf{x},\mathbf{w}) \coloneqq\mathbf{y}^{*}=\nabla R^{*}(\mathbf{x}-\mathbf{u}^{*})=\nabla_{1}f_{ \varphi,R}(\mathbf{x},\mathbf{w}).\]
A proof is given in Appendix A.2. The mapping \(\varphi\) also affects conjugacy. We have the following proposition.
**Proposition 3**.: _(Conjugate of \(f_{\varphi}\) in the first argument)_
_If \(\phi\) is convex and \(\mathbf{w}\in\mathbb{R}^{n}_{+}\), then_
\[f_{\varphi}^{*}(\mathbf{y},\mathbf{w})=\min_{\mathbf{z}\in P(\mathbf{w})}D_{\phi^{*}}(\mathbf{y},\mathbf{z}),\]
_where \(D_{f}(\mathbf{y},\mathbf{z})\coloneqq\sum_{i=1}^{n}z_{i}f(y_{i}/z_{i})\)._
A proof is given in Appendix A.3. The function \((y_{i},z_{i})\mapsto z_{i}\phi^{*}(y_{i}/z_{i})\) is known as the perspective of \(\phi^{*}\) and is jointly convex when \(z_{i}>0\). The function \(D_{f}(\mathbf{y},\mathbf{z})\) is known as the \(f\)-divergence between \(\mathbf{y}\) and \(\mathbf{z}\). Therefore, \(f_{\varphi}^{*}(\mathbf{y},\mathbf{w})\) can be seen as the minimum "distance" between \(\mathbf{y}\) and \(P(\mathbf{w})\) in the \(\phi^{*}\)-divergence sense.
Recovering the previous framework.If \(\phi(x)=x\), then
\[\phi^{*}(y_{i}/z_{i})=\begin{cases}0,&\text{if }y_{i}=z_{i}\\ \infty,&\text{otherwise}\end{cases}.\]
This implies that \(f_{\varphi}^{*}(\mathbf{y},\mathbf{w})=f^{*}(\mathbf{y},\mathbf{w})=\delta_{P(\mathbf{w})}(\mathbf{y})\), the indicator function of \(P(\mathbf{w})\), which is \(0\) if \(\mathbf{y}\in P(\mathbf{w})\) and \(\infty\) otherwise. In this case, we therefore obtain
\[f_{\varphi,R}(\mathbf{x},\mathbf{w})=\max_{\mathbf{y}\in P(\mathbf{w})}\langle\mathbf{y},\mathbf{x} \rangle-R(\mathbf{y}),\]
which is exactly the relaxation of Blondel et al. (2020).
Differentiable and sparse top-k operators.To obtain a relaxed top-\(k\) operator with our framework, we simply replace \(f_{\varphi}\) with \(f_{\varphi,R}\) and \(\mathbf{y}_{\varphi}\) with \(\mathbf{y}_{\varphi,R}\) in (4) to define
\[\text{topkmag}_{R}(\mathbf{x})\coloneqq\mathbf{y}_{\varphi,R}(\mathbf{x},\mathbf{1}_{k})= \nabla_{1}f_{\varphi,R}(\mathbf{x},\mathbf{1}_{k}).\]
A relaxed top-\(k\) mask can be defined in a similar way, but using \(\phi(x)=x\) instead of \(\phi(x)=\frac{1}{2}x^{2}\). For the regularization \(R\), we propose to use \(p\)-norms to the power \(p\):
\[R(\mathbf{y})=\frac{1}{p}\|\mathbf{y}\|_{p}^{p}\coloneqq\frac{1}{p}\sum_{i=1}^{n}|y_{i }|^{p}.\]
The choice \(p=2\) used in previous works leads to sparse outputs but is not differentiable everywhere. Any \(p\) between \(1\) (excluded) and \(2\) (excluded) leads to differentiable _and_ sparse outputs. We propose to use \(p=4/3\), which is more convenient numerically. This is illustrated in Figure 3.
Connection with \(k\)-support and OWL norms.When \(\phi(x)=\frac{1}{2}x^{2}\) and \(\mathbf{w}=\mathbf{1}_{k}\), we obtain
\[f_{\varphi}^{*}(\mathbf{y},\mathbf{w})=\frac{1}{2}\min_{\mathbf{z}\in[0,1]^{n}}\sum_{i=1}^{n }\frac{y_{i}^{2}}{z_{i}}\quad\text{s.t.}\quad\langle\mathbf{z},\mathbf{1}\rangle=k,\]
which is known as the squared k-support norm (Argyriou et al., 2012; McDonald et al., 2014; Eriksson et al., 2015). Our formulation is a generalization of the squared \(k\)-support norm, as it supports other choices of \(\mathbf{w}\) and \(\phi\). For instance, we use it to define a new notion of \(k\)-support negentropy in Appendix B.1. When \(\varphi(\mathbf{x})=|\mathbf{x}|\), we recover the ordered weighted lasso (OWL) norm (Zeng and Figueiredo, 2014) as
\[f_{\varphi}(\mathbf{x},\mathbf{w})=\sum_{i=1}^{n}w_{i}|x|_{[i]}.\]
With that choice of \(\varphi\), it is easy to see from (3) that (4) becomes a _signed_ top-\(k\) mask. Note that, interestingly, \(k\)-support and OWL norms are not defined in the same space.
Biconjugate interpretation.Let us define the set of \(k\)-sparse vectors, which is nonconvex, as \(S_{k}\coloneqq\{\mathbf{x}\in\mathbb{R}^{n}\colon\|\mathbf{x}\|_{0}\leq k\}\), where \(\|\mathbf{x}\|_{0}\) is the number of non-zero elements in \(\mathbf{x}\). We saw that if \(\phi(x)=\frac{1}{2}x^{2}\) then \(f_{\varphi}^{*}(\mathbf{y},\mathbf{1}_{k})\) is the squared \(k\)-support norm. It is known to be the biconjugate (i.e., the tightest convex relaxation) of the squared \(L_{2}\) norm restricted to \(S_{k}\) (Eriksson et al., 2015; Liu et al., 2022). We now prove a more general result: \(f_{\varphi}^{*}(\mathbf{y},\mathbf{1}_{k})\) is the biconjugate of \(\sum_{i=1}^{n}\phi^{*}(y_{i})\) restricted to \(S_{k}\).
**Proposition 4**.: _(Biconjugate interpretation)_
_Let \(\Phi_{k}(\mathbf{y})\coloneqq\sum_{i=1}^{n}\phi^{*}(y_{i})+\delta_{S_{k}}(\mathbf{y})\). Suppose that \(\phi\) is convex. Then the biconjugate of \(\Phi_{k}\) is given by_
\[\Phi_{k}^{**}(\mathbf{y})=f_{\varphi}^{*}(\mathbf{y},\mathbf{1}_{k})\]
See Appendix A.4 for a proof.
Figure 3: **Example of our relaxed top-k operators. We take \(\phi(x)=\frac{1}{2}x^{2}\) and \(k=2\). For an input \(\mathbf{x}=(x_{1},x_{2},\frac{1}{2},1)\), we plot \(y_{\varphi,R}(\mathbf{x},1_{k})_{1}+y_{\varphi,R}(\mathbf{x},1_{k})_{2}\) for \(R=0\) (left), \(R=\frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}\) (center) and \(R=\frac{\lambda}{p}\|\mathbf{x}\|_{p}^{p}\) with \(p=\frac{4}{3}\) (right). We take \(\lambda=0.3\). While no regularization leads to a discontinuous mapping, the \(2\)-norm regularization leads to continuity and almost-everywhere differentiability, and the \(\frac{4}{3}\)-norm regularization provides a continuously differentiable mapping. We emphasize that, although in the left plot the graph looks connected, it is actually a discontinuous function. Note that our relaxed operators are sparse as they are exactly \(0\) in the center.**
Algorithms
In this section, we propose efficient algorithms for computing our operators. We first show that the calculation of our relaxed operator reduces to isotonic optimization.
Reduction to isotonic optimization.We now show how to compute \(\mathbf{u}^{\star}\) in Proposition 2 by reduction to isotonic optimization, from which \(\mathbf{y}^{\star}\) can then be recovered by \(\mathbf{y}^{\star}=\nabla R^{*}(\mathbf{x}-\mathbf{u}^{\star})\). We first recall the case \(\varphi(\mathbf{x})=\mathbf{x}\), which was already proved in existing works (Lim and Wright, 2016; Blondel et al., 2020).
**Proposition 5**.: _(Reduction, \(\varphi(\mathbf{x})=\mathbf{x}\) case)_
_Suppose that \(R(\mathbf{y})=\sum_{i=1}^{n}r(y_{i})\). Let \(\sigma\) be the permutation sorting \(\mathbf{x}\), \(\mathbf{s}\coloneqq\mathbf{x}_{\sigma}\) and_
\[\mathbf{v}^{\star}=\operatorname*{argmin}_{v_{1}\geq\cdots\geq v_{n}}R^{*}(\mathbf{s} -\mathbf{v})+f(\mathbf{v},\mathbf{w}).\]
_Then \(\mathbf{u}^{\star}\) from Proposition 2 is given by \(\mathbf{u}^{\star}=\mathbf{v}^{\star}_{\sigma^{-1}}\)._
The set \(\{\mathbf{v}\in\mathbb{R}^{n}\colon v_{1}\geq\cdots\geq v_{n}\}\) is called the monotone cone. Next, we show that a similar result is possible when \(\phi(\mathbf{x})\) and \(r^{*}\) are both even functions (sign-invariant) and increasing on \(\mathbb{R}_{+}\).
**Proposition 6**.: _(Reduction, \(\varphi(\mathbf{x})=\varphi(-\mathbf{x})\) case)_
_Suppose that \(R(\mathbf{y})=\sum_{i=1}^{n}r(y_{i})\) and \(\varphi(\mathbf{x})=(\phi(x_{1}),\dots,\phi(x_{n}))\). Assume \(\phi\) and \(r^{*}\) are both even functions (sign-invariant) and increasing on \(\mathbb{R}_{+}\). Let \(\sigma\) be the permutation sorting \(|\mathbf{x}|\), \(\mathbf{s}\coloneqq|\mathbf{x}|_{\sigma}\) and_
\[\mathbf{v}^{\star}=\operatorname*{argmin}_{v_{1}\geq\cdots\geq v_{n}\geq 0}R^{*}(\mathbf{s} -\mathbf{v})+f_{\varphi}(\mathbf{v},\mathbf{w}).\]
_Then, \(\mathbf{u}^{\star}\) (Proposition 2) is equal to \(\operatorname*{sign}(\mathbf{x})\circ\mathbf{v}^{\star}_{\sigma^{-1}}\)._
See Appendix A.6 for a proof. Less general results are proved in (Zeng and Figueiredo, 2014; Eriksson et al., 2015) for specific cases of \(\phi\) and \(R\). The set \(\{v\in\mathbb{R}^{n}\colon v_{1}\geq\cdots\geq v_{n}\geq 0\}\) is called the non-negative monotone cone. In practice, the additional non-negativity constraint is easy to handle: we can solve the isotonic optimization problem without it and truncate the solution if it is not non-negative (Nemeth and Nemeth, 2012).
Pool adjacent violator (PAV) algorithms.Under the conditions of Proposition 6, assuming \(\mathbf{v}\) and \(\mathbf{w}\) are both sorted, we have from Proposition 1 that
\[f(\mathbf{v},\mathbf{w})=\sum_{i=1}^{n}w_{i}v_{i},\quad f_{\varphi}(\mathbf{v},\mathbf{w})= \sum_{i=1}^{n}w_{i}\phi(v_{i}).\]
We then get that the problems in Proposition 5 and 6 are coordinate-wise separable:
\[\mathbf{v}^{\star}=\operatorname*{argmin}_{v_{1}\geq\cdots\geq v_{n}}\sum_{i=1}^{ n}h_{i}(v_{i}), \tag{5}\]
for \(h_{i}(v_{i})=r^{*}(s_{i}-v_{i})+w_{i}\phi(v_{i})\). Such problems can be solved in \(O(n)\) time using the pool adjacent violator (PAV) algorithm (Best et al., 2000). This algorithm works by partitioning the set \([n]\) into disjoint sets \((B_{1},\cdots B_{m})\), starting from \(m=n\) and \(B_{i}=\{i\}\), and by merging these sets until the
isotonic condition is met. A pseudo-code is available for completeness in Appendix B.2. At its core, PAV simply needs a routine to solve the "pooling" subproblem
\[\gamma_{B}^{\star}=\operatorname*{argmin}_{\gamma\in\mathbb{R}}\sum_{i\in B}h_{ i}(\gamma) \tag{6}\]
for any \(B\subseteq[n]\). Once the optimal partition \((B_{1},\cdots B_{m})\) is identified, we have that
\[\mathbf{v}^{\star}=(\underbrace{\gamma_{B_{1}}^{\star},\ldots,\gamma_{B_{1}}^{ \star}}_{|B_{1}|},\cdots,\underbrace{\gamma_{B_{m}}^{\star},\ldots,\gamma_{B_{ m}}^{\star}}_{|B_{m}|})\in\mathbb{R}^{n}.\]
Because Proposition 5 and 6 require to obtain the sorting permutation \(\sigma\) beforehand, the total time complexity for our operators is \(O(n\log n)\).
Example.Suppose \(\phi(x)=\frac{1}{2}x^{2}\) and \(R=\frac{\lambda}{2}\|.\|_{2}^{2}\), where \(\lambda>0\). Since \(R^{*}=\frac{1}{2\lambda}\|.\|^{2}\), this gives \(h_{i}(v_{i})=\frac{1}{2}(w_{i}v_{i}^{2}+\frac{1}{\lambda}(s_{i}-v_{i})^{2})\). The solution of the sub-problem is then
\[\gamma_{B}^{\star}=\frac{\sum_{i\in B}s_{i}}{\sum_{i\in B}(\lambda w_{i}+1)}.\]
Using this formula, when \(\lambda\) is small enough, we can upper-bound the error between the hard and relaxed operators: \(\|\text{topkmag}_{R}(\mathbf{x})-\text{topkmag}(\mathbf{x})\|_{\infty}\leq\lambda\| \mathbf{x}\|_{\infty}\). See Appendix A.6 for details and for the case \(p=\frac{4}{3}\).
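The following is a minimal sketch (our own, not the authors' released code) of the PAV recursion for exactly this example, i.e. \(\phi(x)=\frac{1}{2}x^{2}\) and \(R=\frac{\lambda}{2}\|\cdot\|_{2}^{2}\), using the closed-form pooling solution \(\gamma_{B}^{\star}=\sum_{i\in B}s_{i}/\sum_{i\in B}(\lambda w_{i}+1)\). Combined with Proposition 6 it yields a relaxed top-\(k\)-in-magnitude operator; the choice \(\lambda=0.1\) and the test vector are arbitrary.

```python
import numpy as np

def pav_quadratic(s, w, lam):
    """Solve min_{v1 >= ... >= vn} sum_i [ w_i v_i^2 / 2 + (s_i - v_i)^2 / (2 lam) ]
    by pool-adjacent-violators; each block B is solved in closed form:
    gamma_B = sum(s_B) / sum(lam * w_B + 1)."""
    num, den, size, val = [], [], [], []
    for si, wi in zip(s, w):
        num.append(si); den.append(lam * wi + 1.0); size.append(1)
        val.append(num[-1] / den[-1])
        # merge adjacent blocks while the decreasing order is violated
        while len(val) > 1 and val[-2] < val[-1]:
            n2, n1 = num.pop(), num.pop()
            d2, d1 = den.pop(), den.pop()
            c2, c1 = size.pop(), size.pop()
            val.pop(); val.pop()
            num.append(n1 + n2); den.append(d1 + d2); size.append(c1 + c2)
            val.append(num[-1] / den[-1])
    return np.concatenate([np.full(c, g) for c, g in zip(size, val)])

def relaxed_topk_mag(x, k, lam=0.1):
    """Relaxed top-k in magnitude via Proposition 6: sort |x|, run PAV, map back."""
    sigma = np.argsort(-np.abs(x))
    s = np.abs(x)[sigma]
    w = np.zeros_like(x); w[:k] = 1.0           # w = 1_k, already sorted
    v = np.maximum(pav_quadratic(s, w, lam), 0.0)   # non-negativity by truncation
    u = np.empty_like(x); u[sigma] = v          # undo the sort
    u *= np.sign(x)
    return (x - u) / lam                        # y* = grad R*(x - u*)

x = np.array([1.0, -3.0, 2.0, 0.5])
print(relaxed_topk_mag(x, k=2, lam=0.1))        # ~ [0, -2.73, 1.82, 0]: sparse output
```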
Dykstra's alternating projection algorithm.The PAV algorithm returns an exact solution of (5) in \(O(n)\) time. Unfortunately, it relies on element-wise dynamical array assignments, which makes it potentially slow on GPUs and TPUs. We propose an alternative to obtain faster computations. Our key insight is that by defining
\[C_{1} \coloneqq\{\mathbf{v}\in\mathbb{R}^{n}\colon v_{1}\geq v_{2},v_{3} \geq v_{4},\ldots\}\] \[C_{2} \coloneqq\{\mathbf{v}\in\mathbb{R}^{n}\colon v_{2}\geq v_{3},v_{4} \geq v_{5},\ldots\},\]
one has \(\{\mathbf{v}\in\mathbb{R}^{n}\colon v_{1}\geq\cdots\geq v_{n}\}=C_{1}\cap C_{2}.\) We can therefore rewrite (5) as
\[\mathbf{v}^{\star}=\operatorname*{argmin}_{\mathbf{v}\in C_{1}\cap C_{2}}\sum_{i=1}^{ n}h_{i}(v_{i}).\]
In the case \(R=\frac{1}{2}\|.\|^{2}\) and \(\phi(x)=x\) or \(\frac{1}{2}x^{2}\), this reduces to a Euclidean projection onto \(C_{1}\cap C_{2}\), for which we can use Dykstra's celebrated projection algorithm (Boyle and Dykstra, 1986; Combettes and Pesquet, 2011).
**Algorithm 1**.: _(Dykstra's projection algorithm)_
_Starting from \(\mathbf{v}^{0}=\mathbf{s}\), \(\mathbf{p}^{0}=\mathbf{q}^{0}=\mathbf{0}\), Dykstra's algorithm iterates_
\[\mathbf{y}^{k} =\operatorname*{argmin}_{\mathbf{y}\in C_{1}}R^{*}(\mathbf{v}^{k}+\mathbf{p}^ {k}-\mathbf{y})+f_{\varphi}(\mathbf{y},\mathbf{w})\] \[\mathbf{p}^{k+1} =\mathbf{v}^{k}+\mathbf{p}^{k}-\mathbf{y}^{k}\] \[\mathbf{v}^{k+1} =\operatorname*{argmin}_{\mathbf{v}\in C_{2}}R^{*}(\mathbf{y}^{k}+\mathbf{q}^ {k}-\mathbf{v})+f_{\varphi}(\mathbf{v},\mathbf{w})\] \[\mathbf{q}^{k+1} =\mathbf{y}^{k}+\mathbf{q}^{k}-\mathbf{v}^{k+1}.\]
Since \(R^{*}=\frac{1}{2}\|.\|^{2}\) and \(\phi(x)=x\) or \(\frac{1}{2}x^{2}\), each argmin calculation corresponds to a Euclidean projection onto \(C_{1}\) or \(C_{2}\), which can be computed in closed form. Therefore, \(\mathbf{v}^{k}\) provably converges to \(\mathbf{v}^{\star}\). Each iteration of Dykstra's algorithm is learning-rate free, has linear time complexity and can be efficiently written as a matrix-vector product using a mask, which makes it particularly appealing for GPUs and TPUs. Interestingly, we find that when \(\mathbf{w}=\mathbf{1}_{k}\), Dykstra's algorithm converges surprisingly fast to the exact solution. We validate that Dykstra leads to a faster runtime than PAV in Figure 4. We use 100 iterations of Dykstra, and verify that we obtain the same output as PAV.
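To illustrate the \(C_{1}\cap C_{2}\) splitting (a sketch of ours, not the authors' implementation), note that in the special case \(\phi(x)=x\) and \(R=\frac{\lambda}{2}\|\cdot\|_{2}^{2}\) the isotonic subproblem amounts to the Euclidean projection of \(\mathbf{s}-\lambda\mathbf{w}\) onto the monotone cone, which Dykstra's iterations compute with simple vectorized pairwise averaging; here \(\lambda=0.1\), 100 iterations, and the input vector are arbitrary choices.

```python
import numpy as np

def project_pairs(v, start):
    """Euclidean projection onto {v : v[i] >= v[i+1] for i = start, start+2, ...}:
    each violated disjoint pair is replaced by its average (closed form)."""
    v = v.copy()
    i = np.arange(start, len(v) - 1, 2)
    bad = v[i] < v[i + 1]
    avg = 0.5 * (v[i] + v[i + 1])
    v[i] = np.where(bad, avg, v[i])
    v[i + 1] = np.where(bad, avg, v[i + 1])
    return v

def dykstra_monotone(z, n_iter=100):
    """Euclidean projection of z onto {v : v_1 >= ... >= v_n} = C1 ∩ C2 (Algorithm 1)."""
    v, p, q = z.copy(), np.zeros_like(z), np.zeros_like(z)
    for _ in range(n_iter):
        y = project_pairs(v + p, start=0)   # projection onto C1
        p = v + p - y
        v = project_pairs(y + q, start=1)   # projection onto C2
        q = y + q - v
    return v

# relaxed top-k mask: s = x sorted descending, w = 1_k, lambda = 0.1
x = np.array([0.3, -1.2, 2.0, 0.7]); k, lam = 2, 0.1
sigma = np.argsort(-x); s = x[sigma]
w = np.zeros_like(x); w[:k] = 1.0
v = dykstra_monotone(s - lam * w)
u = np.empty_like(x); u[sigma] = v             # undo the sort
print((x - u) / lam)                           # close to the hard top-k mask [0, 0, 1, 1]
```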
For the general non-Euclidean \(R^{*}\) case, i.e., \(p\neq 2\), we can use block coordinate ascent in the dual of (5). We can divide the dual variables into two blocks, corresponding to \(C_{1}\) and \(C_{2}\); see Appendix A.8 for details. In fact, it is known that in the Euclidean case, Dykstra's algorithm in the primal and block coordinate ascent in the dual are equivalent (Tibshirani, 2017). Therefore, although a vectorized implementation could be challenging for \(p\neq 2\), block coordinate ascent can be seen as an elegant way to generalize Dykstra's algorithm to this setting. Convergence is guaranteed as long as each \(h_{i}\) is strictly convex.
Differentiation.The Jacobian of the solution of the isotonic optimization problems can be expressed in closed form for any \(p\)-norm regularization.
**Proposition 7**.: _(Differentiation)_
_Let \(\mathbf{v}^{\star}=(\gamma_{B_{1}}^{\star},\dots,\gamma_{B_{1}}^{\star}\cdots, \gamma_{B_{m}}^{\star},\dots,\gamma_{B_{m}}^{\star})\) be the optimal solution of the isotonic optimization problem with \(R=\frac{1}{p}\|.\|_{p}^{p}\) and \(p>0\). Then one has that \(\mathbf{v}^{\star}\) is differentiable with respect to \(\mathbf{s}=(s_{1},\dots,s_{n})\). Furthermore, for any \(r\in\{1,\dots,m\}\) and \(i\in B_{r}\),_
\[\frac{\partial\gamma_{B_{r}}^{\star}}{\partial s_{i}}=\begin{cases}\frac{| \gamma_{B_{r}}^{\star}-s_{i}|^{q-2}}{\sum_{j\in B_{r}}|\gamma_{B_{r}}^{\star}-s _{j}|^{q-2}},&\text{if }\phi(x)=x\\ \\ \frac{(q-1)|\gamma_{B}^{\star}-s_{i}|^{q-2}}{\sum_{j\in B}(q-1)|\gamma_{B}^{ \star}-s_{j}|^{q-2}+w_{j}},&\text{if }\phi(x)=\frac{1}{2}x^{2},\end{cases}\]
_where \(q\) is such that \(\frac{1}{p}+\frac{1}{q}=1\). When \(i\notin B_{r}\), one simply has \(\frac{\partial\gamma_{B_{r}}^{\star}}{\partial s_{i}}=0\)._
One then has \(\partial v_{j}^{\star}/\partial s_{i}=\partial\gamma_{B_{r_{j}}}^{\star}/ \partial s_{i}\) where \(r_{j}\) is such that \(v_{j}^{\star}=\gamma_{B_{r_{j}}}^{\star}\). See Appendix A.7 for a proof. Thanks to Proposition 7, we do not need to solve a linear system to compute the Jacobian of the solution \(\mathbf{v}^{\star}\), in contrast to implicit differentiation of general optimization problems (Blondel et al., 2021). In practice, this means that we do not need to do the automatic differentiation of the iterates of PAV or Dykstra's projection algorithm to obtain the gradient of a scalar loss function in which our operator is incorporated. In particular, we do not need to store the iterates of Dykstra's projection algorithm in memory. Along with our proposed algorithms, we implement the corresponding Jacobian vector product routines in JAX, using Proposition 7.
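For the Euclidean case \(p=q=2\) with \(\phi(x)=\frac{1}{2}x^{2}\), Proposition 7 simplifies to \(\partial\gamma_{B_{r}}^{\star}/\partial s_{i}=1/\sum_{j\in B_{r}}(1+w_{j})\) for \(i\in B_{r}\), so the Jacobian is block-wise constant and a JVP is just a block average. The NumPy sketch below is our own illustration of such a hand-written JVP; the regularization weight \(\lambda\) is an extra assumption of ours that reduces to the proposition's statement when \(\lambda=1\), and the block structure is assumed to be supplied by the PAV solver.

```python
import numpy as np

def jvp_blocks(s_dot, w, blocks, lam=1.0):
    """Jacobian-vector product (dv*/ds) @ s_dot for p = 2 and phi(x) = x^2/2.
    `blocks` is the list of index arrays (B_1, ..., B_m) returned by PAV.
    Within a block B, d gamma_B / d s_i = 1 / sum_{j in B}(1 + lam * w_j),
    which matches Proposition 7 when lam = 1."""
    v_dot = np.empty_like(s_dot)
    for B in blocks:
        v_dot[B] = np.sum(s_dot[B]) / np.sum(1.0 + lam * w[B])
    return v_dot

# toy example: two blocks {0,1} and {2}, with w = 1_k for k = 2
blocks = [np.array([0, 1]), np.array([2])]
w = np.array([1.0, 1.0, 0.0])
print(jvp_blocks(np.array([1.0, 0.0, 0.0]), w, blocks))  # [0.25, 0.25, 0.]
```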
Figure 4: **Runtime comparison** for computing the top-\(k\) on a TPU using PAV or Dykstra, as a function of the dimension \(n\). For each \(n\), we set \(k=\lceil n/10\rceil\).
Experiments
We now demonstrate the applicability of our top-k operators through experiments. We will open-source our JAX (Bradbury et al., 2018) implementation upon publication. See Appendix C for additional experimental details.
Weight pruning in neural networks.We experimentally validate the advantage of using a smoothed top-k for weight pruning in neural networks. We use a multilayer perceptron (MLP) with 2 hidden layers and with ReLU activation. The widths of the layers are respectively 784, 32, and 32, followed by a linear classification head of width 10. More precisely, our model takes as input an image \(\mathbf{a}\in\mathbb{R}^{784}\) and outputs
\[\mathbf{x}=W_{3}\sigma(W_{2}\sigma(W_{1}\mathbf{a}+\mathbf{b}_{1})+\mathbf{b}_{2})+\mathbf{b}_{3} \quad(\text{logits}),\]
where \(W_{1}\in\mathbb{R}^{32\times 784}\), \(\mathbf{b}_{1}\in\mathbb{R}^{32}\), \(W_{2}\in\mathbb{R}^{32\times 32}\), \(\mathbf{b}_{2}\in\mathbb{R}^{32}\), \(W_{3}\in\mathbb{R}^{10\times 32}\), \(\mathbf{b}_{3}\in\mathbb{R}^{10}\) and \(\sigma\) is a ReLU. In order to perform weight pruning we parametrize each \(W_{i}\) as
\[W_{i}=\text{topkmag}_{R}(W_{i}^{\prime})\]
and learn \(W_{i}^{\prime}\) instead of learning \(W_{i}\) directly. The output is then fed into a cross-entropy loss. We compare the performance of the model when applying a hard vs differentiable top-k operator to keep only 10% of the coefficients. For the differentiable top-k, we use a regularization \(R(\mathbf{y})=\frac{\lambda}{2}\|\mathbf{y}\|^{2}\) with \(\lambda=10^{-4}\). We find that the model trained with the differentiable top-k trains significantly faster than the one trained with the hard top-k. We also verify that our relaxed top-k maintains the 10% rate of non-zero weights. Results on MNIST are displayed in Figure 5.
Smooth top-k loss.To train a neural network on a classification task, one typically minimizes the cross-entropy loss, whereas the performance of the network is evaluated using a top-\(k\) test accuracy. There is therefore a mismatch between the loss used at train time and the metric used at evaluation time. Cuturi et al. (2019) proposed to replace the cross-entropy loss with a differentiable top-\(k\) loss.
Figure 5: **Test error** with respect to training time when training a MLP on MNIST. We compare the baseline (grey curve) with the case where 90% of the weights are set to 0 by magnitude pruning, using a differentiable top-\(k\) (red curve) or a hard one (blue curve).
In the same spirit, we propose to finetune a ViT-B/16 (Dosovitskiy et al., 2020) pretrained on the ImageNet21k dataset on CIFAR 100 (Krizhevsky et al., 2009) using a smooth and sparse top-\(k\) loss instead of the cross-entropy loss. We use a Fenchel-Young loss (Blondel et al., 2020). It takes as input the vector \(\mathbf{a}\) and parameters \(\theta\) of a neural network \(g_{\theta}\):
\[\mathbf{x} =g_{\theta}(\mathbf{a})\quad(\text{logits})\] \[\ell(\mathbf{x},\mathbf{t}) =f_{\varphi,R}(\mathbf{x},\mathbf{1}_{k})-\langle\mathbf{x},\mathbf{t}\rangle,\]
where \(f_{\varphi,R}(\mathbf{x},\mathbf{1}_{k})\) is given by Proposition 2 and \(\mathbf{t}\) is a one-hot encoding of the class of \(\mathbf{a}\). We set \(\phi(x)=x\) as we want a top-\(k\) mask. We consider \(p\) norm regularizations for \(R\), where \(p=2\) or \(p=4/3\). We take \(k=3\). We use the exact same training procedure as described in Dosovitskiy et al. (2020) and use the corresponding pretrained ViT model B/16, and train our model for 100 steps. Results are reported in Figure 6. We find that the ViT finetuned with the smooth top-3 loss outperforms the one finetuned with the cross-entropy loss in terms of top-\(k\) error, for various \(k\).
Sparse MoEs.Finally, we demonstrate the applicability of our proposed smooth top-\(k\) operators on a large-scale classification task using vision sparse mixture of experts (V-MoE) (Riquelme et al., 2021). Vision transformers (ViTs) are made of a succession of self-attention layers and MLP layers. The idea of V-MoEs is to replace MLPs in ViTs by a sparsely-gated mixture of MLPs called experts. This way, only some of the experts are activated by a given patch token. At the heart of the token-expert assignment lies a routing mechanism which performs a top-\(k\) operation on gate values. We focus on the MoE with expert choice routing framework (Zhou et al., 2022), where each expert is assigned to \(k\) tokens. We train a S/32 variant of the V-MoE model, with \(32\times 32\) patches on the JFT-300M dataset (Sun et al., 2017), a dataset with more than 305 million images. Our model has 32 experts, each assigned to \(k=28\) tokens selected among \(n=400\) at each MoE layer. We compare the validation accuracy when using the baseline (hard top-\(k\)) with our relaxed operator. We use \(p=2\) and Dykstra's projection algorithm, as we found it was the fastest method on TPU. We find that our approach improves validation performance. Results are displayed in Figure 7.
Figure 6: **Validation top-k accuracy** when fine-tuning a ViT-B/16 on CIFAR 100 using either the cross-entropy loss or our smooth top-3 loss for training.
Conclusion.In this work, we proposed a generalized framework to obtain fast, differentiable and sparse top-\(k\) and top-\(k\) masks operators, including operators that select values in magnitude. Thanks to a reduction to isotonic optimization, we showed that these operators can be computed using either the Pool Adjacent Violators (PAV) algorithm or Dykstra's projection algorithm, the latter being faster on TPU hardware. We successfully demonstrated the usefulness of our operators for weight pruning, top-\(k\) losses and as routers in vision sparse mixture of experts.
Acknowledgments.We thank Vincent Roulet and Joelle Barral for feedback on a draft of this paper. We thank Felipe Llinares-Lopez for helpful feedbacks regarding the experiments, as well as Fabian Pedregosa for fruitful mathematical discussions.
|
2308.14715 | Coalescent processes emerging from large deviations | The classical model for the genealogies of a neutrally evolving population in
a fixed environment is due to Kingman. Kingman's coalescent process, which
produces a binary tree, universally emerges from many microscopic models in
which the variance in the number of offspring is finite. It is understood that
power-law offspring distributions with infinite variance can result in a very
different type of coalescent structure with merging of more than two lineages.
Here we investigate the regime where the variance of the offspring distribution
is finite but comparable to the population size. This is achieved by studying a
model in which the log offspring sizes have a stretched exponential form. Such
offspring distributions are motivated by biology, where they emerge from a toy
model of growth in a heterogenous environment, but also mathematics and
statistical physics, where limit theorems and phase transitions for sums over
random exponentials have received considerable attention due to their
appearance in the partition function of Derrida's Random Energy Model (REM). We
find that the limit coalescent is a $\beta$-coalescent -- a previously studied
model emerging from evolutionary dynamics models with heavy-tailed offspring
distributions. We also discuss the connection to previous results on the REM. | Ethan Levien | 2023-08-28T17:15:54Z | http://arxiv.org/abs/2308.14715v3 | # Coalescent processes emerging from large deviations
###### Abstract
The classical model for the genealogies of a neutrally evolving population in a fixed environment is due to Kingman. Kingman's coalescent process, which produces a binary tree, universally emerges from many microscopic models in which the variance in the number of offspring is finite. It is understood that power-law offspring distributions with infinite variance can result in a very different type of coalescent structure with merging of more than two lineages. Here we investigate the regime where the variance of the offspring distribution is finite but comparable to the population size. This is achieved by studying a model in which the log offspring sizes have a stretched exponential form. Such offspring distributions are motivated by biology, where they emerge from a toy model of growth in a heterogeneous environment, but also mathematics and statistical physics, where limit theorems and phase transitions for sums over random exponentials have received considerable attention due to their appearance in the partition function of Derrida's Random Energy Model (REM). We find that the limit coalescent is a \(\beta\)-coalescent - a previously studied model emerging from evolutionary dynamics models with heavy-tailed offspring distributions. We also discuss the connection to previous results on the REM.
## 1 Introduction
Evolution is, in large part, shaped by a tension between two opposing "forces": neutral genetic drift and selection. Neutral genetic drift refers to random changes in the genetic composition of a population due to chance events in the deaths and reproduction of individuals. Selection, in contrast, results from a deterministic bias towards fitter individuals. A major objective of evolutionary biology is to determine the relative impact of these forces. In order to achieve this we need to understand how different microscopic mechanisms manifest in macroscopic observables, such as the frequency of a genotype, the site frequency spectrum, or the shape of the genealogical tree.
Much of our understanding of the interplay between genetic drift and selection comes from the Wright-Fisher model [17, 18], or its counterpart with overlapping generations, the Moran model. In both models, the source of noise is the random sampling of individuals from generation to generation. In the large population size limit this sampling noise is well approximated by a Gaussian with a variance which is inversely proportional to \(N\), the population size. In neutral evolution without mutation, genetic drift is the sole source of changes in the composition of the population and \(N\) sets the characteristic time-scale of the evolutionary dynamics.
In real populations, the true population size is rarely related in a simple way to the variance in genotype frequency, as predicted by the Wright-Fisher model. Therefore, one usually thinks of \(N\) as an _effective population size_ measuring the overall strength of genetic drift, rather than the literal number of cells. This has motivated the question: What determines the effective population size? Of particular interest is the question of how non-genetic variation between individuals shapes the effective population size [10, 11]. J.H. Gillespie was one of the first to address this question by considering a model in which the number of offspring from each individual is a random variable having finite variance. He showed that in this context the effective population size is obtained by scaling the true population size by the variance in offspring number [12, 13].
We now have a more general and mathematically rigorous understanding of this problem which is based on the more general Cannings model [14]. Beginning with \(N\) labeled cells, trajectories of the Cannings model are constructed by generating an exchangeable random vector \((\nu_{1,k},\ldots,\nu_{N,k})\) for each \(k\in\mathbb{N}\) such that \(\sum_{i=1}^{N}\nu_{i,k}=N\). The entry \(\nu_{i,k}\) represents the number of offspring in generation \(k\) which descend from individual \(i\) in the \((k-1)\)th generation. We will henceforth omit the subscript \(k\) and it should be assumed that all quantities associated with a generation are implicitly dependent on \(k\). The term "generation" does not refer to the lifetime of an organism, but rather an epoch during which the population may undergo many cycles of reproduction. Inspired by refs [15, 16, 17], we will focus on the particular case where \(\nu_{i}\) is obtained by generating \(U_{i}\) offspring for each individual in a given generation, and then sampling the individuals in the offspring pool (without replacement) to obtain the next generation. In this case, \(\nu_{i}\) follows a hypergeometric distribution in which \(N\) draws are taken from a pool of size \(S_{N}=\sum_{i=1}^{N}U_{i}\) containing \(U_{i}\) successes. Therefore
\[\mathbb{P}(\nu_{i}=n)=\frac{1}{\binom{S_{N}}{N}}\binom{U_{i}}{n}\binom{S_{N}- U_{i}}{N-n}. \tag{1}\]
This formulation also bears a close resemblance to the model studied by Gillespie [12] and is an example of a Generalized Wright-Fisher model defined in ref [18].
Results on the limit behavior of the neutral Cannings models concern not only the dynamics of genotype frequencies, but also the genealogical tree, or _coalescent process_, obtained by following a subset of lineages backwards in time. These two structures are shown in Figure 1. For both the frequency dynamics and the coalescent there is a functional Central Limit Theorem (CLT) and Generalized Central Limit Theorem (GCLT) which mirror the CLT and GCLT for the iid sum representing the total offspring pool \(S_{N}\). These theorems tell us about the behavior of the genotype frequencies and genealogies over large times in large populations. Here,
large means on the order of the _coalescent time_, \(T_{N}\), defined as the average number of generations we must travel backwards in time to find a common ancestor for two randomly selected individuals. The coalescent time is equivalent to the effective population size in some cases, but is more general in the sense that the definition does not require a mapping to the WF model.
In the Cannings model we can easily express the coalescent time in terms of moments of \(\nu_{i}\) as follows. Note that the number of generations to the most recent common ancestor of two randomly selected cells follows a geometric distribution with parameter \(c_{N}=T_{N}^{-1}\). That is, \(c_{N}\) is the probability that two randomly selected individuals in one generation share an ancestor in the previous generation. Conditioned on \(\{\nu_{i}\}\), the chance for two individuals to both be descendants of the first individual in the previous generation is \((\nu_{1}/N)(\nu_{1}-1)/(N-1)\). Multiplying by \(N\) and averaging over all possible \(\{\nu_{i}\}_{i=1}^{N}\) gives
\[c_{N}=\frac{\mathbb{E}\left[\nu_{1}(\nu_{1}-1)\right]}{N-1}. \tag{2}\]
Given the distribution of \(U_{i}\), it is not straightforward to compute \(c_{N}\) because it has a nonlinear dependence on the sum \(S_{N}\).
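To make Equations (1) and (2) concrete, the following sketch (a toy illustration with an arbitrary offspring law, not taken from the paper) simulates one Cannings generation by pooled sampling without replacement and estimates \(c_{N}=\mathbb{E}[\nu_{1}(\nu_{1}-1)]/(N-1)\) by Monte Carlo, using exchangeability to track only \(\nu_{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def cannings_generation(N, offspring_sampler):
    """One generation: each of the N parents produces U_i offspring, then the next
    generation is drawn by sampling N individuals without replacement from the pool,
    so that nu follows the multivariate hypergeometric law of Equation (1)."""
    U = offspring_sampler(N)                        # integer offspring numbers
    return rng.multivariate_hypergeometric(U, N)    # nu_1, ..., nu_N

# toy offspring law (illustrative only): geometric offspring numbers, at least 1
sampler = lambda N: rng.geometric(p=0.3, size=N)

N, reps = 100, 2000
nu1 = np.array([cannings_generation(N, sampler)[0] for _ in range(reps)])
c_N = np.mean(nu1 * (nu1 - 1)) / (N - 1)            # Equation (2)
print(c_N, 1.0 / c_N)                               # coalescence prob. and T_N
```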
The CLT for the allele frequencies in the Cannings model describes the limit behavior of the fraction of individuals carrying a particular label at generation \(k\), denoted \(X_{k}^{N}\)[11, 10]. Provided the variance of \(U_{i}\) is finite, \(X_{[tT_{N}]}^{N}\) converges (in a weak sense) to a solution of the Ito SDE
\[dX_{t}=\sqrt{X_{t}(1-X_{t})}dW_{t}, \tag{3}\]
which will henceforth be referred to as the Wright-Fisher Diffusion (WFD). The CLT for the coalescent, due to Kingman, concerns the limiting behavior of a stochastic process \(\Psi_{n,k}^{N}\) defined on partitions of \([n]=\{1,\ldots,n\}\) as follows. The initial condition \(\Psi_{n,0}^{N}\) is the partition into singletons, while for \(k>0\) indices \(i,j\in[n]\) are in the same partition if and only if individuals the \(i\)th and \(j\)th individuals in the original sample share an ancestor \(k\) generation in the past (see Figure 1). Kingman proved that \(\Psi_{n,[T_{N}]}^{N}\) converges weakly (technically in the Skorohod topology - see refs [13, 14]) to a process \(\Psi_{n,t}\) for which two lineages merge at a rate \(1\). The limit as \(n\to\infty\) is well-defined and referred to as the Kingman coalescent. We will slightly abuse terminology and refer to the finite \(n\) process and the Kingman coalescent as well. Similar results exist for other microscopic models (e.g. the Moran process) and a large body of work focuses on understanding how the microscopic details of the process shape the coalescent time, or effective population size [10, 11, 12, 13, 14, 15].
The WFD/Kingman models are not always sufficient to capture the dynamics of real evolution. This is due to, for example, _multiple merger_ coalescents appearing in experimental data [12, 13]. Roughly speaking, in multiple merger coalescents there is a non-negligible probability of observing more than two individuals sharing a common ancestor in a single unit of time (e.g. a generation in the model) on time-scales of order \(T_{N}\). The forward processes describing the genotype frequencies associated with these coalescent processes - known as \(\Lambda\)-Fleming Viot processes - are non-diffusive processes which, in general, have discontinuous sample paths. In ref [14], Schweinsberg explored the question of whether such genealogies can emerge from neutral evolution by studying the Cannings model with power-law distributions, which resulted in a GCLT for the Cannings model. A number of studies have since investigated the role of power-law tailed offspring distributions in generating multiple merger coalescents and non-diffusive allele frequency fluctuations [12, 13]. In these studies, multiple merger coalescents emerge when the variation in offspring is effectively infinite (in the sense that the variance is undefined). While such offspring distributions do emerge, e.g. from variability in dormancy times [15, 16] and rare mutations [10], power-law offspring distributions do not necessarily model the influence of variation that is finite, but comparable to the population size.
In this paper, in order to understand the role of large but finite offspring variability in neutral evolution, we investigate the limit processes which emerge when the population size and offspring variability are simultaneously taken to be large. The specific scaling between population size and offspring variability is inspired by the thermodynamic limit of the Random Energy Model (REM) from disordered systems. This makes a connection between coalescent theory and the REM which is distinct from the one made by Bolthausen and Sznitman in [1]. Our main result (Theorem 2) says that the limit process emerging from the genealogies of the Cannings model under this scaling limit is the same as the coalescent emerging from power-law offspring.
Figure 1: (left) A simulation of the Cannings model. Squares indicate individuals at the beginning of each generation and the protruding lines are their offspring. The thick red lines indicate an example of a genealogy, or coalescent, obtained from a sample of the final population of labeled cells. For example, \(\Psi_{5,1}^{N}=\{\{1\},\{2,3\},\{4,5\}\}\). (right) A larger simulation of a continuous time coalescent process which is obtained in the limit of the model on the left.
### Organization of this paper
This paper is organized as follows. In Section 2 we describe the model under consideration, which is a particular instance of the Cannings model. Motivated by microbial population dynamics, we measure variation on an exponential scale. In Section 3 we review some background on coalescent theory and what is known about this model. In particular, we discuss the GCLT of [12] which emerges in the limit where the exponent of the power law is 1, and we provide heuristic derivations of previous results. In Section 4 we present the main result, which concerns the limiting coalescent when both the population size and offspring variation are taken to be large. In Section 5 we discuss connections to the REM.
## 2 Model
As biological motivation, consider a population that is subject to successive cycles of growth and dilution in a heterogeneous environment. An example is a microbial pathogen, such as _Mycobacterium tuberculosis_, which is passed through a sequence of different hosts [10, 13]. We will use \(\tau_{D}\) to denote the duration of time spent in each growth cycle and \(N\) to denote the bottleneck size, e.g. the number of cells which inoculate an infection in a new host. The initial population of \(N\) cells is dispersed throughout the environment so that each cell occupies a distinct location, or niche. To capture variability in the environment, we assign a random per-capita growth rate, \(g_{i}\), to each niche. The final progeny size from the \(i\)th cell is \(U_{i}=e^{\tau_{D}g_{i}}\). Technically \(U_{i}\) should take values in \(\mathbb{N}\), however we will soon take a limit where the distinction is not relevant and so we can think of \(U_{i}\) as continuous variables. After this period of growth the new population of cells is then mixed together and diluted back to a population of size \(N\), which forms the next generation, e.g. by inoculating a new host. The mixing corresponds to cells which are dispersed throughout a host being passed through a physical bottleneck.
We set \(\tau_{D}=1\) and assume \(g_{i}=g_{0}+\zeta\Phi_{i}\) where \(\{\Phi_{i}\}\) are iid random variables with mean zero. With this notation the total number of offspring in a generation is
\[S_{N}=e^{g_{0}}\sum_{i=1}^{N}e^{\zeta\Phi_{i}}. \tag{4}\]
The term \(e^{g_{0}}\) will play no role since it is the statistics of the ratio,
\[\frac{U_{i}}{S_{N}}=\frac{e^{\zeta\Phi_{i}}}{S_{N}}, \tag{5}\]
which determine the limit process. The random variables 4 and 5 have a rich history in statistical physics due to their appearance in the Random Energy model (REM) introduced by Derrida [14]. In the REM - a disordered, mean-field approximation to certain spin glass models - these are related to the partition function and (finite) Gibbs measure, respectively. Under the scaling assumption that we later impose \(\zeta\) is closely related to the system-size and \(-\Phi_{i}\) is related to the internal energy of configuration \(i\). The connection to the REM is elaborated on in Section 5. The sum 4 also appears in certain statistical estimators of large deviation rate functions [15, 1], although the connection will not be discussed.
Inspired by previous work on the REM (see e.g. [10]), we assume the growth rate distribution has the form
\[W(\phi)\equiv-\ln P(\Phi_{i}>\phi)\sim W^{*}(\phi)=\frac{1}{q}\phi^{q},\quad q\geq 1 \tag{6}\]
as \(\phi\to\infty\) where \(W(\phi)\) is smooth and bounded as \(\phi\to 0\). Here and throughout this paper \(g\sim f\) means that \(f/g\to 1\). Using Equation 6 allows us to interpolate between power law (\(q=1\)) and lognormal (\(q=2\)) offspring distributions. We will additionally assume that
\[\mathbb{E}[U_{i}]>1, \tag{7}\]
meaning the underlying branching process is super-critical.
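For concreteness, the following minimal Python sketch (not part of the analysis above) simulates a single growth-dilution cycle, assuming the tail form in Equation 6 holds exactly for all \(\phi\geq 0\) and approximating the dilution step by multinomial sampling of the \(N\) surviving cells from the offspring pool; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phi(n, q, rng):
    # P(Phi > phi) = exp(-phi^q / q); assumes Equation 6 holds exactly for phi >= 0.
    # (Any centering of Phi only shifts g_0, which cancels in the weights below.)
    e = rng.exponential(1.0, size=n)
    return (q * e) ** (1.0 / q)

def cannings_generation(N, zeta, q, rng):
    """One growth-dilution cycle: returns the offspring counts nu_i of the N parents."""
    phi = sample_phi(N, q, rng)
    U = np.exp(zeta * phi)          # clone sizes after growth (Equation 4 up to e^{g_0})
    p = U / U.sum()                 # statistical weights U_i / S_N (Equation 5)
    # dilution back to N cells, approximated here by multinomial sampling of parents
    return rng.multinomial(N, p)

nu = cannings_generation(N=10_000, zeta=3.0, q=2.0, rng=rng)
print("largest family:", nu.max(), "  estimate of E[(nu_1)_2]:", np.mean(nu * (nu - 1.0)))
```

Iterating `cannings_generation` while tracking which parent each surviving cell came from yields a realization of the discrete coalescent \(\Psi_{n,k}^{N}\) illustrated in Figure 1.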
## 3 Background on coalescent theory
### The \(\Lambda\)-coalescent
The coalescent processes which emerge as limits of the Cannings model have been characterized in ref [15]. We will give a brief overview of these results. Recall from the introduction that \(\Psi_{n,k}^{N}\) represents the discrete coalescent from the Cannings model associated with a sample of size \(n\in\mathbb{N}\). This process takes values in the partitions of \([n]\), with each block representing the individuals from the original sample which share an ancestor by generation \(k\). The limit coalescent, obtained by looking at the discrete time coalescent on the time-scale \(T_{N}\) (that is, the limit of \(\Psi_{n,\lfloor tT_{N}\rfloor}^{N}\)) as \(N\to\infty\), is a continuous-time Markov process on the partitions of \([n]\), denoted \(\Psi_{n,t}\), where the possible transitions are those which combine existing blocks. We can obtain this process as follows.
Conditioned on \(\{\nu_{i}\}_{i=1}^{N}\), the chance that a particular group of \(k\) individuals in the current generation all descended from the first individual in the previous generation is \((\nu_{1})_{k}/(N)_{k}\), where
\[(n)_{k}=n(n-1)\cdots(n-k+1). \tag{8}\]
More generally, conditioned on \(\{\nu_{i}\}_{i=1}^{N}\) the chance that \(r\) groups of sizes \(k_{1},\ldots,k_{r}\) each descended from individuals \(1,\ldots,r\) in the previous generation is
\[\frac{1}{(N)_{\sum_{i=1}^{r}k_{i}}}\prod_{i=1}^{r}(\nu_{i})_{k_{i}}. \tag{9}\]
Now suppose that among \(n=\sum_{i=1}^{r}k_{i}+s\) lineages, \(\sum_{i=1}^{r}k_{i}\) merge into \(r\) lineages by collapsing into groups of sizes \(k_{1},\ldots,k_{r}\) while \(s\) lineages remain unchanged. Such an event is called an \((n,k_{1},\ldots,k_{r};s)\)-collision. The rate of these collisions in the limit process \(\Psi_{n,t}\) will be denoted \(\lambda_{n,k_{1},\ldots,k_{r};s}\). The collision rates uniquely determine a continuous-time coalescent process \(\Psi_{n,t}\) for \(n>2\). Moreover, it can be shown that such rates are obtained from the Cannings model by the limit
\[\lambda_{n,k_{1},\ldots,k_{r};0}=\lim_{N\to\infty}T_{N}N^{r-n}\mathbb{E}\left[\prod_{i=1}^{r}(\nu_{i})_{k_{i}}\right]. \tag{10}\]
In particular, if this limit exists and \(T_{N}\to\infty\), the coalescent of the Cannings model \(\Psi_{n,k}^{N}\) converges in the large \(N\) limit to \(\Psi_{n,t}\). The rates for \(s>0\) can be obtained via a certain recursive relation discussed in ref [13].
Of particular relevance for our results are those coalescents for which there are multiple mergers, but not simultaneous multiple mergers. In such coalescents, known as \(\Lambda\)-coalescents, \(\lambda_{n,k_{1},\ldots,k_{r};s}=0\) unless \(r=1\). Therefore all the nonzero rates have the form
\[\lambda_{n,k}=\lim_{N\to\infty}T_{N}N^{1-n}\mathbb{E}\left[(\nu_{1})_{k}\right] \tag{11}\]
Any rates defined in this way will satisfy the consistency condition from ref [14]
\[\lambda_{n,k}-\lambda_{n+1,k}=\lambda_{n+1,k+1}. \tag{12}\]
The intuition behind Equation 12 is as follows: The difference between the rate to see \(k\) mergers in a sample of size \(n\) and \(n+1\) is entirely caused by mergers of \(k+1\) lineages in the sample of \(n+1\), since these look like \(k\) mergers when restricted to the sample of size \(n\). Pitman also proved that any triangular array \(\{\lambda_{n,k}\}_{n=1,\ldots,\infty,k=1,\ldots,n}\) satisfying Equation 12 has a representation in terms of a positive measure \(\Lambda:[0,1]\to\mathbb{R}_{\geq 0}\) via the relation
\[\lambda_{n,k}=\int_{0}^{1}z^{k-2}(1-z)^{n-k}\Lambda(dz). \tag{13}\]
Thus, there is a duality between coalescent processes and positive measures on the unit interval.
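As a quick numerical illustration of this duality (not part of the original argument), one can evaluate Equation 13 for a chosen measure — here a Beta density, the case relevant below — and verify the consistency relation of Equation 12:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

def merge_rate(n, k, density):
    # lambda_{n,k} = int_0^1 z^{k-2} (1-z)^{n-k} Lambda(dz)  (Equation 13)
    val, _ = quad(lambda z: z ** (k - 2) * (1 - z) ** (n - k) * density(z), 0.0, 1.0)
    return val

# example measure: Lambda = Beta(2 - a, a), the case appearing later in Theorem 1
a = 1.5
density = beta_dist(2 - a, a).pdf

n, k = 6, 3
lhs = merge_rate(n, k, density) - merge_rate(n + 1, k, density)
rhs = merge_rate(n + 1, k + 1, density)
print(lhs, rhs)   # the two values agree, illustrating Equation 12
```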
A similar argument can be used to obtain a simpler criterion for convergence to a \(\Lambda\)-coalescent. In particular, if the rate of two simultaneous pair mergers, \(\lambda_{n,2,2;s}\), is zero then there are no simultaneous multiple mergers, since these would contribute to this rate. Therefore, if \(T_{N}\to\infty\) and
\[\lim_{N\to\infty}T_{N}\frac{\mathbb{E}[(\nu_{1})_{2}(\nu_{2})_{2}]}{N^{2}}=0 \tag{14}\]
the limit process of the genealogy \(\Psi_{n,k}^{N}\) is a \(\Lambda\)-coalescent [10]. The following proposition follows from Propositions 1 and 3 of ref [13].
**Proposition 1**.: _If \(T_{N}\to\infty\) and Equation 14 is satisfied, then \(\Psi_{n,\lfloor tT_{N}\rfloor}^{N}\) converges1 to a \(\Lambda\)-coalescent \(\Psi_{t}\) with merge rates \(\lambda_{n,k}\) given by Equation 11 for \(k=n\) and Equation 12 otherwise._
Footnote 1: Here and for the remainder of the paper, “converges” means convergence in the Skorohod topology or finite dimensional distributions. We refer to early work, e.g. [10], for technical details concerning the convergence.
We briefly mention the corresponding limit process for the genotype frequencies, called the \(\Lambda\)-Fleming-Viot (\(\Lambda\)-FV) process. At rate \(\lambda^{-2}\Lambda(d\lambda)\), select a random individual in the population and replace a fraction \(\lambda\) of the population with individuals of the same genotype. In other words, \(\Lambda\) determines the intensity of different sized jumps in genotype frequency. The scaling by \(\lambda^{-2}\) ensures that infinitesimally small jumps happen more often and make comparable contributions to the overall change in genotype frequency as rare, but highly impactful, large jumps. It can be shown that as \(\Lambda\) approaches a delta function at \(0\), the \(\Lambda\)-FV process converges to the WFD. In contrast, when \(\Lambda\) is some smooth function on \([0,1]\), the process is an infinite superposition of jump processes, each corresponding to a different jump size \(\lambda\). This is indeed the case for the particular \(\Lambda\)-FV processes emerging as limits of the Cannings model with infinite offspring variance.
### The power law limit (\(q=1\))
We now return to the Cannings model with offspring distributions given by Equation 6 and look at what happens when \(q=1\). In this case, \(\Phi_{i}\) have exponential tails and the offspring sizes, \(U_{i}\), have power law tails
\[P(U_{i}>u)=P\left(\Phi_{i}>\frac{1}{\zeta}\ln u\right)\sim Cu^{-\alpha_{1}} \tag{15}\]
for \(\alpha_{1}=1/\zeta\) and some constant \(C>0\). The large \(N\) limit of the Cannings model with offspring distributions of the form in Equation 15 is covered by the GCLT for the Cannings model, which is the topic of ref [13]. This result was also derived heuristically using the forward genotype frequency process in [10]. The following theorem comes from [13, Theorem 4].
**Theorem 1**.: _Assuming Equation 15 holds, then as \(N\to\infty\) we have_
* _When_ \(2\leq\alpha_{1}\)_,_ \(\Psi_{n,\lfloor tT_{N}\rfloor}^{N}\) _converges to the Kingman coalescent._
* _For_ \(1\leq\alpha_{1}<2\)_,_ \(\Psi_{n,\lfloor tT_{N}\rfloor}^{N}\) _converges to a_ \(\Lambda\) _coalescent with_ \(\Lambda\) _given by a_ \(\beta\) _distribution_ \(\mathrm{Beta}(2-\alpha_{1},\alpha_{1})\)_._
_It follows from Equation 13 that the merge rates are_
\[\lambda_{n,k}=\frac{B(k-\alpha_{1},n-k+\alpha_{1})}{B(2-\alpha_{1},\alpha_{1})} \tag{16}\]
_where_ \(B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)\) _is the beta function._
* _For_ \(\alpha_{1}<1\)_, lineages will coalesce in_ \(O(N^{0})\) _and hence there is no continuous time limit process for any rescaling of time. We refer to_ [13] _for a detailed description of this process._
This theorem mirrors the GCLT for the corresponding sum \(S_{N}\). In the regular GCLT, the two critical points correspond to a breakdown of the CLT (\(\alpha_{1}=2\)) and LLN (\(\alpha_{1}=1\)) for this sum. In the GCLT for the Cannings model, these correspond to the appearance of multiple mergers and the disappearance of any continuous time limit process, respectively. Note that the \(\Lambda\)-Fleming-Viot process corresponding to the merge rates 16 is well-defined even though the jump intensity diverges (due to the factor of \(\lambda^{-2}\)), because the positive and negative jumps cancel and the process behaves more like a Lévy flight than a diffusion [10, 1].
### Heuristic derivation of \(\beta\)-coalescent
Here we provide a heuristic derivation of the \(\beta\)-coalescent from the Cannings model with power law offspring distributions. Unless otherwise noted, \(1<\alpha_{1}<2\). These results were rigorously derived in [14], but our derivation is based on slightly different conditions.
To begin, note that we can replace \(\nu_{1}\) with \(N(U_{1}/S_{N})\) when computing averages (this is justified by Lemma 3) and therefore
\[\mathbb{E}\left[(\nu_{1})_{k}\right]\sim\mathbb{E}\left[\nu_{1}^{k}\right] \sim N^{k}\mathbb{E}\left[\left(\frac{U_{1}}{S_{N}}\right)^{k}\right]. \tag{17}\]
By the law of large numbers, \(S_{N}\approx U_{1}+N\mu\) with \(\mu=\mathbb{E}[U_{1}]\) (this is justified by Lemma 6). Let \(f_{U}(u)\) be the density of \(U_{i}\). Replacing sums with integrals (this is justified rigorously by [14, Lemma 12]) leads to
\[N^{k}\mathbb{E}\left[\left(\frac{U_{1}}{S_{N}}\right)^{k}\right] \sim N^{k}\int_{1}^{\infty}\frac{u^{k}}{(u+N\mu)^{k}}f_{U}(u)du \tag{18}\] \[=\mu N^{k+1}\int_{(N\mu)^{-1}}^{\infty}\frac{z^{k}}{(z+1)^{k}}f_{U}(zN\mu)dz \tag{19}\]
where we have changed variables \(z=u/(N\mu)\). Since the variance is infinite, this integral is dominated by the tail, so we can make the approximation \(f_{U}(u)\approx C\alpha_{1}u^{-1-\alpha_{1}}\). It follows that
\[\int_{(N\mu)^{-1}}^{\infty}\frac{z^{k}}{(z+1)^{k}}f_{U}(zN\mu)dz\sim C\alpha_{1}(N\mu)^{-1-\alpha_{1}}\int_{(N\mu)^{-1}}^{\infty}\frac{z^{k-1-\alpha_{1}}}{(z+1)^{k}}dz \tag{20}\]
Provided \(1<\alpha_{1}<k\) the integral converges as \(N\to\infty\) and
\[\lim_{N\to\infty}\int_{(N\mu)^{-1}}^{\infty}\frac{z^{k-1-\alpha_{1}}}{(z+1)^{k}}dz=\frac{\Gamma(k-\alpha_{1})\Gamma(\alpha_{1})}{\Gamma(k)}=B(k-\alpha_{1},\alpha_{1}). \tag{21}\]
Therefore
\[\mathbb{E}\left[(\nu_{1})_{k}\right]\sim N^{k-\alpha_{1}}C\alpha_{1}B(k- \alpha_{1},\alpha_{1})\mu^{-\alpha_{1}} \tag{22}\]
where \(B(a,b)\) is the beta function defined in Theorem 1. Interestingly, this calculation is closely related to Derrida's calculation of the replica overlap in the REM and related models [13].
Equation 20 is consistent with the intuition that the expectation \(\mathbb{E}\left[(\nu_{1})_{k}\right]\) is dominated by the events \(U_{i}>N\), since
\[N^{k}\mathbb{P}(U_{i}>N)\approx CN^{k-\alpha_{1}}. \tag{23}\]
Using Equation 22 with \(k=2\), we have
\[T_{N}^{-1}\approx N\mathbb{P}(U_{i}>N)\sim C\alpha_{1}B(2-\alpha_{1},\alpha_{ 1})\mu^{-\alpha_{1}}N^{1-\alpha_{1}}. \tag{24}\]
Moreover, Equations 22 and 11 also imply
\[\lambda_{n,n}=\frac{B(n-\alpha_{1},\alpha_{1})}{B(2-\alpha_{1},\alpha_{1})}. \tag{25}\]
These are consistent with Equation 16 and, due to the consistency condition 12, uniquely determine all the merge rates.
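The scaling in Equation 22 can also be probed directly by simulation. The sketch below (illustrative only) draws classical Pareto offspring with tail index \(\alpha_{1}\in(1,2)\), estimates \(N^{2}\mathbb{E}[(U_{1}/S_{N})^{2}]\), and compares it with the prediction \(C\alpha_{1}B(2-\alpha_{1},\alpha_{1})\mu^{-\alpha_{1}}N^{2-\alpha_{1}}\); because the moment is dominated by rare large clones, the Monte Carlo estimate is noisy and only rough agreement should be expected.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
alpha1, k = 1.5, 2
C, mu = 1.0, alpha1 / (alpha1 - 1.0)          # Pareto tail P(U > u) = u^{-alpha1}, mean mu

def B(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def empirical(N, reps=2000):
    U = 1.0 + rng.pareto(alpha1, size=(reps, N))   # classical Pareto on [1, inf)
    return N ** k * np.mean((U[:, 0] / U.sum(axis=1)) ** k)

for N in (100, 1000, 5000):
    predicted = C * alpha1 * B(k - alpha1, alpha1) * mu ** (-alpha1) * N ** (k - alpha1)
    print(N, round(empirical(N), 3), round(predicted, 3))
```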
The verification of Equation 14 uses the fact that \(U_{1}^{2}\vee(N\mu)^{2}\leq S_{N}^{2}\approx(U_{1}+N\mu)^{2}\) to bound the expectation of the product \(\nu_{1}^{2}\nu_{2}^{2}\)[14, Lemma 15] (we use the standard notation \(x\lor y=\max\{x,y\}\)). Therefore,
\[\mathbb{E}[\nu_{1}^{2}\nu_{2}^{2}] \sim N^{4}\mathbb{E}\left[\frac{U_{1}^{2}U_{2}^{2}}{S_{N}^{4}} \right]\leq N^{4}\mathbb{E}\left[\frac{U_{1}^{2}U_{2}^{2}}{(U_{1}^{2}\vee( \mu N)^{2})(U_{2}^{2}\vee(\mu N)^{2})}\right] \tag{26}\] \[=N^{4}\mathbb{E}\left[\frac{U_{1}^{2}}{U_{1}^{2}\vee(\mu N)^{2} }\right]^{2}\leq N^{4}\mathbb{E}\left[\left(\frac{U_{1}}{U_{1}+\mu N}\right)^{ 2}\right]^{2}. \tag{27}\]
By Equation 20, the expectation in the last term is of order \(\mathbb{P}(U_{1}>N)\approx CN^{-\alpha_{1}}\), therefore \(\mathbb{E}[\nu_{1}^{2}\nu_{2}^{2}]\) is of order \(N^{4-2\alpha_{1}}\). It follows from Equation 24 that when \(1<\alpha_{1}<2\)
\[T_{N}\frac{\mathbb{E}[\nu_{1}^{2}\nu_{2}^{2}]}{N^{2}}\to 0. \tag{28}\]
By Proposition 1 the limit process is a \(\beta\)-coalescent.
When \(\alpha_{1}>k\), the integral in Equation 20 diverges in the large \(N\) limit. In this case, the behavior of the integral is no longer dominated by the tail, so the replacement of \(f\) with a power-law in passing to Equation 20 is not valid. Instead, we can look at the behavior for very small \(z\), which corresponds to offspring sizes much smaller than the expected value of \(S_{N}\). In particular,
\[\mathbb{E}\left[\nu_{1}^{k}\right]\approx\mu^{-k}\int_{1}^{\infty}u^{k}p(u)du= \frac{\mathbb{E}[U_{1}^{k}]}{\mu^{k}} \tag{29}\]
This leads to \(T_{N}=O(N)\) when \(\alpha_{1}>2\), and therefore multiple mergers are not possible. In this case we obtain the Kingman coalescent consistent with the classical theory of Gillespie. Note that the Theorem in [14] also characterizes the limit process at the critical point \(\alpha_{1}=2\). A different treatment is required for this case since the variance is infinite.
## 4 Limit coalescent for large but finite offspring variation
### Main result
In the remainder of this article we consider \(q>1\). We reiterate that in this case the variance is finite for all \(\zeta\), and therefore in the large \(N\) limit (with \(\zeta\) fixed) the convergence is to the WFD/Kingman Coalescent. For any fixed \(N\), if \(\zeta\) is large enough the WFD is of course going to be a very poor approximation. In order to better understand exactly what happens when \(\zeta\) is large, but finite, we seek a scaling of \(N\) with \(\zeta\) such that there is a well-defined limit process as \(\zeta\to\infty\). The appropriate scaling is related to the thermodynamic limit of the REM [11, 12], a connection that is discussed in Section 5.
We start with some definitions. Set \(\mu_{\zeta}=\mathbb{E}[e^{\zeta\Phi_{i}}]\) and \(A_{\zeta}=\mathbb{E}[S_{\zeta}]=N_{\zeta}\mu_{\zeta}\). Although the large \(N\) and large \(\zeta\) limits are equivalent, we will switch from using the subscript \(N\) to \(\zeta\). Define the _cumulant generating function_
\[r_{\zeta}=\ln\mu_{\zeta}. \tag{30}\]
The relationship between \(r_{\zeta}\) and \(W^{*}(z)\) is crucial for our analysis. Using the Laplace method [10], it can be shown that \(r_{\zeta}\) is asymptotically the convex conjugate of \(W^{*}\) and therefore these functions are related according to the Legendre transform [10]:
\[r_{\zeta}\sim r_{\zeta}^{*}=\sup_{z\geq 0}\{z\zeta-W^{*}(z)\}. \tag{31}\]
Using Equation 6 one finds
\[r_{\zeta}^{*}=\frac{1}{q^{\prime}}\zeta^{q^{\prime}} \tag{32}\]
where \(q^{\prime}\) is the dual exponent
\[q^{\prime}(q)=\frac{q}{q-1}. \tag{33}\]
The equivalence between Equations 32 and 6 is a special case of Kasahara-de Bruijn's exponential Tauberian Theorem [11].
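The asymptotics in Equations 31-32 are easy to check numerically. The sketch below (an illustration only, assuming the tail form of Equation 6 holds exactly so that the density of \(\Phi\) is \(\phi^{q-1}e^{-\phi^{q}/q}\)) computes \(r_{\zeta}=\ln\mathbb{E}[e^{\zeta\Phi}]\) by quadrature and compares it with \(r_{\zeta}^{*}=\zeta^{q^{\prime}}/q^{\prime}\); the two agree at leading order, with the expected lower-order corrections from the Laplace method.

```python
import numpy as np

def log_mgf(zeta, q, phi_max=200.0, n=400_000):
    # ln E[exp(zeta*Phi)] for the density f(phi) = phi^{q-1} exp(-phi^q/q), phi >= 0
    phi = np.linspace(1e-8, phi_max, n)
    log_integrand = zeta * phi + (q - 1) * np.log(phi) - phi ** q / q
    m = log_integrand.max()                          # log-sum-exp trick to avoid overflow
    return m + np.log(np.trapz(np.exp(log_integrand - m), phi))

q = 2.0
qp = q / (q - 1.0)                                   # dual exponent q'
for zeta in (5.0, 10.0, 20.0):
    print(zeta, round(log_mgf(zeta, q), 2), round(zeta ** qp / qp, 2))
```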
A consequence of 32 is that the logarithms of all the moments of \(U_{i}\) grow as \(\zeta^{q^{\prime}}\), since
\[\mathbb{E}[U_{i}^{m}]=\mathbb{E}[e^{m\zeta\Phi}]=e^{r_{m\zeta}}\sim e^{m^{q^{\prime}}r_{\zeta}^{*}}. \tag{34}\]
Intuitively, if \(\ln N_{\zeta}\) grows slower than \(\zeta^{q^{\prime}}\) then \(N_{\zeta}\) cannot keep up with the variation as \(\zeta\) increases and no continuous time limit process will exist. This is the idea behind the scaling assumption
\[N_{\zeta}=e^{\tau r_{\zeta}^{*}} \tag{35}\]
where \(\tau\) is a control parameter closely related to temperature in the Random Energy Model. Limit theorems for the sum \(S_{\zeta}=S_{N_{\zeta}}\) (Equation 4) under this scaling assumption are proved in ref [1]. The authors show that, much like in the GCLT, \(S_{\zeta}\) has two critical points, one where the CLT breaks down and one where the LLN breaks down. The following LLN ([1, Theorem 2.1]) is particularly relevant for the results below, while we elaborate on the connection between our results and theirs in Section 5.
**Lemma 1**.: _Assume_
\[\tau>\frac{1}{q-1}. \tag{36}\]
_Then for all \(\varepsilon>0\),_
\[\lim_{\zeta\to\infty}\mathbb{P}\left(\left|\frac{S_{\zeta}}{A_{\zeta}}-1 \right|>\varepsilon\right)=0. \tag{37}\]
By analogy with the case \(q=1\), when variation is large, yet small enough that the LLN holds, we expect that there is a continuous-time limit coalescent for which the collision rates can be deduced from the tail of \(\nu_{1}\). The essence of our argument is that when we evaluate the integral
\[\int_{0}^{\infty}\frac{z^{k}}{(z+1)^{k}}f(zA_{\zeta})dz \tag{38}\]
which appears when computing \(\mathbb{E}[\nu_{1}^{k}]\), we can make the approximation
\[W^{*}\left(\frac{\ln A_{\zeta}z}{\zeta}\right) \approx W^{*}\left(\frac{\ln A_{\zeta}}{\zeta}\right)+\frac{\ln z}{ \zeta}(W^{*})^{\prime}\left(\frac{\ln A_{\zeta}}{\zeta}\right) \tag{39}\] \[=\frac{(1+\tau)^{q}}{q(q^{\prime})^{q}}\zeta^{(q^{\prime}-1)q}+ \ln z\left(\frac{1+\tau}{q^{\prime}}\right)^{q/q^{\prime}}\zeta^{(q^{\prime} -1)(q-1)-1}. \tag{40}\]
By Equation 35 and the relation
\[(q-1)(q^{\prime}-1)=1, \tag{41}\]
the coefficient of \(\ln z\) is independent of \(\zeta\), hence
\[\mathbb{P}(e^{\zeta\Phi_{i}}>zA_{\zeta})\sim e^{-W^{*}(\ln A_{\zeta}/\zeta)}z^{-\alpha_{\Phi}(\tau)}\approx\mathbb{P}(e^{\zeta\Phi_{i}}>A_{\zeta})z^{-\alpha_{\Phi}(\tau)}, \tag{42}\]
where
\[\alpha_{\Phi}(\tau)=(W^{*})^{\prime}\left((1+\tau)r_{1}^{*}\right)=\left( \frac{1+\tau}{q^{\prime}}\right)^{q/q^{\prime}}. \tag{43}\]
The parameter called \(\alpha_{1}\) in the previous section can be understood as the limit \(\lim_{q\to 1}\alpha_{\Phi}(\tau)\) and when there is no ambiguity, we will simply write \(\alpha=\alpha_{\Phi}(\tau)\). Notice that the critical point \(\tau_{c}=1/(q-1)\) from Lemma 1 corresponds to \(\alpha=1\).
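In practice \(\alpha_{\Phi}(\tau)\) is easy to tabulate, and inverting Equation 43 gives the values of \(\tau\) that bound the multiple-merger band \(1<\alpha<2\) shown later in Figure 2; a small illustrative sketch:

```python
def q_dual(q):
    return q / (q - 1.0)

def alpha_tau(tau, q):
    # Equation 43: alpha(tau) = ((1 + tau)/q')^{q/q'}
    qp = q_dual(q)
    return ((1.0 + tau) / qp) ** (q / qp)

def tau_of_alpha(alpha, q):
    # inverse of Equation 43: the tau at which alpha(tau) equals a given alpha
    qp = q_dual(q)
    return qp * alpha ** (qp - 1.0) - 1.0

for q in (1.5, 2.0, 3.0):
    # tau at alpha = 1 coincides with tau_c = 1/(q-1) from Lemma 1
    print(q, tau_of_alpha(1.0, q), tau_of_alpha(2.0, q), 1.0 / (q - 1.0))
```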
This argument suggests something like Equation 22 should hold. Indeed, we have the following Lemma.
**Lemma 2**.: _For \(1<\alpha<k\), \(k\in\mathbb{N}\) we have_
\[\mathbb{E}[(\nu_{1})_{k}]\sim\alpha B(\alpha,k-\alpha)N(\tau)^{k} \mathbb{P}(e^{\zeta\Phi_{1}}>A_{\zeta}) \tag{44}\]
_as \(\zeta\to\infty\)._
The proof is given in Section 4.2.
Based on Lemma 2, we expect a version of Theorem 1 ([14, Theorem 4]) to hold, where the limit coalescent is a \(\beta\)-coalescent for \(1<\alpha(\tau)<2\). When \(\alpha(\tau)\geq 2\), the expectation \(\mathbb{E}[\nu_{1}^{k}]\) is no longer dominated by the tail and instead (similar to Equation 29),
\[T_{\zeta}^{-1}\approx\frac{N_{\zeta}}{A_{\zeta}^{2}}\mathbb{E}[e^{2\zeta\Phi_{1}}]\sim e^{-(\tau+2-2^{q^{\prime}})r_{\zeta}^{*}}. \tag{45}\]
We can summarize the results for offspring distributions of the form 6 by the following Theorem, which is proved in Section 4.2.
**Theorem 2**.: _Assume Equations 6 (offspring distribution) and 35 (scaling assumption) hold. For \(1<\alpha<2\), the discrete coalescent \(\Psi_{n,\lfloor tT_{\zeta}\rfloor}^{\zeta}\) converges to the \(\beta\)-coalescent as \(\zeta\to\infty\). For \(\alpha>2\), \(\Psi_{n,\lfloor tT_{\zeta}\rfloor}^{\zeta}\) converges to the Kingman coalescent._
In Figure 2, the region \(1<\alpha<2\) is plotted for various values of \(q\). It is by moving upwards through these regions that we obtain \(\beta\)-coalescents. As expected, for larger \(q\), the region becomes more tilted towards the right, since a larger value of \(\zeta\) is needed to obtain a continuous-time limit process for the same \(N\).
The regime \(\alpha(\tau)\leq 1\) requires a different treatment not covered by this result, but seems to be closely related to calculations of the Gibbs measure for the REM at low temperature, as we discuss in Section 5.
### Proofs
Proof of Lemma 2.: As with Equation 18, the idea of the proof is that most of the contribution from the average comes from the event \(\nu_{1}>A_{\zeta}\). Let \(f\) denote the density of \(\Phi\); hence
\[\int_{\phi}^{\infty}f(x)dx=e^{-W(\phi)}. \tag{46}\]
Define
\[\eta_{\zeta}(z)=\frac{1}{\zeta}\ln(A_{\zeta}z). \tag{47}\]
By Lemma 6 and Lemma 4 in the Appendix A,
\[\mathbb{E}[\nu_{1}^{k}] \sim N^{k}\int_{0}^{\infty}\frac{e^{k\zeta\Phi}A_{\zeta}^{-k}}{(e^{\zeta\Phi}A_{\zeta}^{-1}+1)^{k}}f(\Phi)d\Phi \tag{48}\] \[=e^{k\tau r_{\zeta}^{*}}\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1}}{(z+1)^{k}}\frac{1}{\zeta}W^{\prime}(\eta_{\zeta}(z))\,e^{-W(\eta_{\zeta}(z))}dz \tag{49}\]
where we have changed variables \(z=u/A_{\zeta}\). This is the step which breaks down for \(\alpha\leq 1\), since Lemma 6 uses Lemma 1.
Now define \(R_{\zeta}(z)\) and \(K_{\zeta}(z)\) by
\[W\left(\eta_{\zeta}(z)\right) =W\left(\eta_{\zeta}(1)\right)+\hat{\alpha}_{\zeta}\ln z+R_{\zeta }\left(z\right) \tag{50}\] \[W^{\prime}\left(\eta_{\zeta}(z)\right) =\zeta\hat{\alpha}_{\zeta}+\zeta K_{\zeta}(z) \tag{51}\]
where
\[\hat{\alpha}_{\zeta}=\zeta^{-1}W^{\prime}\left(\eta_{\zeta}(1)\right). \tag{52}\]
Observe that \(\hat{\alpha}_{\zeta}\sim\alpha(\tau)\) and by Equation 6 there is a constant \(C^{\prime}\) such that
\[K_{\zeta}(z)\leq\frac{C^{\prime}\ln z}{\zeta} \tag{53}\]
for large enough \(\zeta\).
Figure 2: (left) The region \(1<\alpha<2\) in \(\ln N-\zeta\) space for different values of \(q\). We can see how Theorem 1 emerges from the limit \(q\to 1\). In this limit, \(\zeta\) need not grow with \(N\) to obtain a limit process other than the Kingman coalescent. (right) a diagram of the idea behind the definition of \(\alpha\).
Because \(R_{\zeta}(z)>0\) for large enough \(\zeta\) for all \(z\),
\[\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}\,e^{-W(\eta_{\zeta}(z))}dz \tag{54}\] \[\leq e^{-W(\eta_{\zeta}(1))}\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1-\hat{\alpha}_{\zeta}}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}dz. \tag{55}\]
Assume \(\zeta\) is large enough that \(|\hat{\alpha}_{\zeta}-\alpha|<\varepsilon\). Then
\[\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1-\hat{\alpha}_{\zeta}}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}dz\leq\int_{0}^{\infty}\frac{z^{k-1-\alpha(\tau)+\varepsilon}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}dz. \tag{56}\]
As \(\zeta\to\infty\) and \(\varepsilon\to 0\), using that the logarithmic term \(K_{\zeta}(z)\) vanishes,
\[\int_{0}^{\infty}\frac{z^{k-1-\alpha+\varepsilon}}{(z+1)^{k}} \frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}dz \simeq\alpha\int_{0}^{\infty}\frac{z^{k-1-\alpha}}{(z+1)^{k}}dz \tag{57}\] \[=\alpha\frac{\Gamma(\alpha)\Gamma(k-\alpha)}{\Gamma(k)}=\alpha B( \alpha,k-\alpha). \tag{58}\]
Similarly, for large enough \(\zeta\) and any \(L\)
\[\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1}}{(z+1)^{k}}\frac{W^{ \prime}(\eta_{\zeta}(z))}{\zeta}e^{-W(\eta_{\zeta}(z))}dz \tag{59}\] \[\geq e^{-R_{\zeta}(L)-W(\eta_{\zeta}(1))}\int_{A_{\zeta}^{-1}}^{ L}\frac{z^{k-1-\alpha-\varepsilon}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z)) }{\zeta}dz\] (60) \[\sim e^{-R_{\zeta}(L)-W(\eta_{\zeta}(1))}\alpha\int_{A_{\zeta}^{ -1}}^{L}\frac{z^{k-1-\alpha}}{(z+1)^{k}}dz \tag{61}\]
We can expand \(R_{\zeta}\) as
\[R_{\zeta}(L_{\zeta})=\frac{1}{2}\left(\frac{\ln L_{\zeta}}{\zeta}\right)^{2}W ^{\prime\prime}\left(\frac{\ln A_{\zeta}}{\zeta}\right)+\frac{1}{6}\left( \frac{\ln L_{\zeta}}{\zeta}\right)^{3}W^{\prime\prime\prime}\left(\frac{\ln A _{\zeta}}{\zeta}\right)+\cdots. \tag{62}\]
By the definition of \(W^{*}\) (Equation 6),
\[\frac{d^{n}}{dz^{n}}W^{*}(z)=(q-1)\cdots(q-n+1)z^{q-n} \tag{63}\]
which means the coefficient of \(\ln L_{\zeta}/\zeta\) grow as a power law in \(\zeta\) with exponent
\[(q-n)(q^{\prime}-1)=\frac{q-n}{q-1}. \tag{64}\]
Now set \(L_{\zeta}=Be^{\zeta^{q^{\prime}/4}}\) for some constant \(B\). Then the \(\zeta\) dependence of the \(n\)th term is
\[n\left(\frac{q^{\prime}}{4}-1\right)+(q^{\prime}-1)(q-n)=\frac{4-3n}{4}q^{\prime}, \tag{65}\]
so in the large \(\zeta\) limit \(R_{\zeta}(L_{\zeta})\) will vanish. In summary,
\[\int_{A_{\zeta}^{-1}}^{\infty}\frac{z^{k-1}}{(z+1)^{k}}\frac{W^{\prime}(\eta_{\zeta}(z))}{\zeta}e^{-W(\eta_{\zeta}(z))}dz\sim e^{-W(\eta_{\zeta}(1))}\alpha B(\alpha,k-\alpha). \tag{66}\]
Finally, note that
\[e^{-W(\eta_{\zeta}(1))}=\mathbb{P}\left(e^{\zeta\Phi_{1}}>A_{\zeta}\right), \tag{67}\]
so
\[\mathbb{E}[(\nu_{1})_{k}]\sim\alpha B(\alpha,k-\alpha)N(\tau)^{k}\mathbb{P}(e^{ \zeta\Phi_{1}}>A_{\zeta}). \tag{68}\]
If \(\alpha>k\) the integral derived above diverges and the bounds are no longer useful. The divergence comes from the very small values of \(z\) (meaning small \(\Phi\) relative to \(A_{\zeta}\)) when we make the linear approximation. In this case, we have
\[\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+A_{\zeta})^{k}}\right] \sim\frac{1}{A_{\zeta}^{k}}\mathbb{E}[e^{k\zeta\Phi_{1}}] \tag{69}\] \[\sim e^{-\left(k(1+\tau)-k^{q^{\prime}}\right)r_{\zeta}^{*}}. \tag{70}\]
The result follows from the fact that \(\left(k(1+\tau)-k^{q^{\prime}}\right)r_{1}^{*}\leq W^{*}((1+\tau)r_{1}^{*})\). In fact, these are equal exactly at \(\alpha(\tau)=2\).
Proof of Theorem 2.: Lemma 2 implies
\[T_{\zeta}^{-1}\sim\alpha B(\alpha,2-\alpha)N(\tau)\mathbb{P}(e^{\zeta\Phi_{1}}>A_ {\zeta})\sim\alpha B(\alpha,2-\alpha)e^{-W^{*}(\eta_{\zeta}(1))+\tau r^{*}( \zeta)}. \tag{71}\]
Simplifying the exponent, we have
\[\tau r_{\zeta}^{*}-W^{*}\left(\eta_{\zeta}(1)\right) =\tau r_{\zeta}^{*}-W^{*}\left(\frac{1}{\zeta}(1+\tau)r_{\zeta}^{*}\right) \tag{72}\] \[=\left(\alpha^{q^{\prime}/q}-\frac{1}{q^{\prime}}-\frac{\alpha^{q^{\prime}}}{q}\right)\zeta^{q^{\prime}}. \tag{73}\]
The prefactor of \(\zeta^{q^{\prime}}\) evaluates to \(0\) at \(\alpha=1\). Since \(q>1\),
\[\frac{d}{d\alpha}\left(\alpha^{q^{\prime}/q}-\frac{1}{q^{\prime}}-\frac{\alpha ^{q^{\prime}}}{q}\right)=\frac{\alpha^{q^{\prime}/q}-\alpha^{q^{\prime}}}{ \alpha(q-1)}<0, \tag{74}\]
so that \(T_{\zeta}\to\infty\) for \(\alpha>1\).
The merge rates in Equation 16 follow from Lemma 2, so it remains to check Equation 14. Applying Lemmas 5 and 2,
\[\mathbb{E}[\nu_{1}^{2}\nu_{2}^{2}] \sim N^{4}\mathbb{E}\left[\frac{U_{1}^{2}U_{2}^{2}}{S_{\zeta}^{4}}\right]\leq N^{4}\mathbb{E}\left[\frac{e^{2\zeta(\Phi_{1}+\Phi_{2})}}{(e^{2\zeta\Phi_{1}}\vee A_{\zeta}^{2})(e^{2\zeta\Phi_{2}}\vee A_{\zeta}^{2})}\right] \tag{75}\] \[=N^{4}\mathbb{E}\left[\frac{e^{2\zeta\Phi_{1}}}{e^{2\zeta\Phi_{1}}\vee A_{\zeta}^{2}}\right]^{2}. \tag{76}\]
Notice that we are again using the idea that the expectation is dominated by the event \(e^{\zeta\Phi_{1}}>A_{\zeta}\), since the final expression above is asymptotic to \((A^{\prime})^{2}N^{4}\mathbb{P}(e^{\zeta\Phi_{1}}>A_{\zeta})^{2}\) for some constant \(A^{\prime}\). It follows that
\[T_{\zeta}\frac{\mathbb{E}[\nu_{1}^{2}\nu_{2}^{2}]}{N^{2}}\to 0. \tag{77}\]
When \(\alpha>2\), Lemma 2 implies
\[T_{\zeta}\frac{\mathbb{E}\left[\nu_{1}^{3}\right]}{N^{2}}\sim\frac{\mathbb{E}\left[\nu_{1}^{3}\right]}{N\mathbb{E}[\nu_{1}^{2}]}=\frac{N^{2}\mathbb{P}(e^{\zeta\Phi_{1}}>A_{\zeta})}{\mathbb{E}[\nu_{1}^{2}]}\to 0. \tag{78}\]
But \(T_{\zeta}\) is an increasing function of \(\alpha\) so this holds for all \(\alpha>2\). It follows that when \(\alpha>2\) the convergence is to the Kingman Coalescent.
## 5 Relationship to the REM
### Limit theorems for the partition function
As noted earlier, when \(\Phi_{i}\) follows a Gaussian distribution, the total offspring number,
\[S_{\zeta}=\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}, \tag{79}\]
can be mapped to the partition function of Derrida's Random Energy Model (REM) and the scaling 35 corresponds to the thermodynamic limit. For the REM, Equation 79 is usually written as \(S_{\zeta}=\sum_{i=1}^{2^{n}}e^{-\beta E_{i}}\), where \(n_{\zeta}=\log_{2}N_{\zeta}\) is the system size, \(E_{i}\) is the energy and \(\beta\) is the inverse temperature. In our notation,
\[\beta=\sqrt{\frac{2\ln 2}{\tau}},\quad E_{i}=-\sqrt{n}\Phi_{i}. \tag{80}\]
Each individual in our model corresponds to one of the \(2^{n}\) spin configurations on the hypercube \(\mathcal{C}_{n}\equiv\{-1,1\}^{n}\). Note that the negative sign in front of \(E_{i}\) is inconsequential because we have assumed \(\mathbb{E}[\Phi_{i}]=0\) and can simply change the sign in Equation 6.
In statistical physics, one defines
\[F_{\zeta}=\frac{1}{n_{\zeta}}\ln S_{\zeta}, \tag{81}\]
and is interested in the _free energy density_\(F=\lim_{\zeta\to\infty}F_{\zeta}\). It is by differentiating \(F\) with respect to temperature, or other parameters in our model, that various classical _thermodynamic_ relations are obtained [18, 20]. Derrida showed that the REM undergoes a phase transition as \(\tau\) is decreased below a critical value (corresponding to \(\alpha=1\) in our notation) into a "frozen" state. In the frozen state the LLN for \(S_{\zeta}\) breaks down and \(F\) is determined by the extremal statistical weights.
There is another phase transition when one looks at the fluctuations in \(F_{n}(\tau)\), or equivalently in \(S_{\zeta}\), which can be understood as a generalization of the GCLT to the case where both the variation in the terms in the sum and their total number are taken to be large. It is therefore natural to examine the connection between these results and ours. This limit theorem for a Gaussian energy distribution is proven in [2] and the extension to distributions of the form Equation 6 is given in [1].
We now state a (slightly watered down) version of the limit theorem for \(S_{\zeta}\) proved in [1].
**Theorem 3** (Exponential Sum Limit Theorem).: _Let_
\[\tilde{\alpha}(\tau)=\left(\frac{q}{q^{\prime}}\tau\right)^{1/q^{\prime}}. \tag{82}\]
_and_
\[Z_{\zeta}=\frac{S_{\zeta}-A_{\zeta}}{B_{\zeta}}. \tag{83}\]
_where_
\[\tau r_{\zeta}^{*}=W^{*}\left(\frac{\ln B_{\zeta}}{\zeta}\right). \tag{84}\]
_For \(1<\tilde{\alpha}(\tau)<2\), \(Z_{\zeta}\) converges in distribution to an \(\alpha\)-stable random variable \(Z_{\tilde{\alpha}}\) for which the characteristic function is_
\[\varphi_{\tilde{\alpha}}(u)\equiv E\left[e^{iZ_{\tilde{\alpha}}u}\right]=\exp\left\{-\Gamma(1-\tilde{\alpha})|u|^{\tilde{\alpha}}e^{-i\pi\tilde{\alpha}\,\mathrm{sgn}(u)/2}\right\} \tag{85}\]
_where the so-called stability parameter, \(\tilde{\alpha}\), is given by Equation 82. When \(\tilde{\alpha}>2\), \(Z_{\zeta}\) obeys a central limit theorem, meaning that it converges to a Gaussian when rescaled by the standard deviation._
Briefly, the \(\alpha\)-stable distributions mentioned in this result are defined by the property that they are equal in distribution to linear combinations of realizations of themselves: \(Z\) is \(\alpha\)-stable if and only if there are constants \(c,d,e\) such that \(Z=cZ_{1}+dZ_{2}+e\) for two random variables \(Z_{1}\) and \(Z_{2}\) equal in distribution to \(Z\). The GCLT states that \(\alpha\)-stable distributions arise as limits of sums of random variables whose variances are not necessarily finite [13, 14]. Our result plays exactly the role of Theorem 3 in the context of the Cannings model, extending the GCLT to the "large but finite" regime.
Much like Theorem 2, the idea behind Theorem 3 is that in the transition regime the sum will be dominated by the maximum. From Equation 35 and Equation 6, we have
\[\mathbb{P}\left(\max_{1\leq i\leq N}U_{i}>u\right) =1-\left(1-\mathbb{P}\left(e^{\zeta\Phi}>u\right)\right)^{N} \tag{86}\] \[\approx N\mathbb{P}\left(e^{\zeta\Phi}>u\right)=e^{\tau r_{\zeta}^{*}-W^{*}(\ln u/\zeta)}. \tag{87}\]
The left-hand side will approach one as \(\zeta\rightarrow\infty\), so in order to obtain a well-defined limit from Equation 86, we need to look at deviations on an increasing scale \(B_{\zeta}\). We then replace \(u\) with \(uB_{\zeta}\) in the exponent of Equation 86 and make the approximation
\[\tau r_{\zeta}^{*}-W^{*}\left(\frac{\ln u+\ln B_{\zeta}}{\zeta}\right)\approx\tau r_{\zeta}^{*}-W^{*}\left(\frac{\ln B_{\zeta}}{\zeta}\right)-\frac{\ln u}{\zeta}(W^{*})^{\prime}\left(\frac{\ln B_{\zeta}}{\zeta}\right). \tag{88}\]
In order for this to have a large \(\zeta\) limit, the first two terms must cancel, indicating that \(B_{\zeta}\) should satisfy Equation 84. Since \(\ln B_{\zeta}=O(\zeta^{q^{\prime}})\), by Equation 84 the coefficient of \(\ln u\) in Equation 88 is independent of \(\zeta\) and given by
\[\tilde{\alpha}\equiv\frac{1}{\zeta}(W^{*})^{\prime}\left(\frac{\ln B_{\zeta}}{\zeta}\right), \tag{89}\]
which simplifies to Equation 82. This plays the role of \(\alpha\) in our analysis of the coalescent process. The two parameters agree at \(\alpha=\tilde{\alpha}=1\), which is the transition to the frozen state; see Figure 3.
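The two exponents are compared numerically in the short sketch below (illustrative only); it reproduces the behaviour of Figure 3: the curves meet at the frozen-state transition \(\alpha=\tilde{\alpha}=1\) (i.e. \(\tau=\tau_{c}\)) and \(\alpha>\tilde{\alpha}\) above it.

```python
def alpha_tilde(tau, q):
    # Equation 82: stability parameter of the partition-function fluctuations
    qp = q / (q - 1.0)
    return ((q / qp) * tau) ** (1.0 / qp)

def alpha_coal(tau, q):
    # Equation 43: tail exponent governing the coalescent
    qp = q / (q - 1.0)
    return ((1.0 + tau) / qp) ** (q / qp)

q = 2.0
tau_c = 1.0 / (q - 1.0)
for tau in (tau_c, 2.0, 4.0, 8.0):
    print(tau, round(alpha_coal(tau, q), 3), round(alpha_tilde(tau, q), 3))
```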
### Gibbs measure
Within the REM one can also study the distribution of the statistical weights
\[\omega_{\zeta,\sigma}=\frac{e^{\zeta\Phi_{\sigma}}}{S_{\zeta}} \tag{90}\]
where \(\sigma\in\mathcal{C}_{n}\). In the large-\(\zeta\) limit these weights approach an infinite dimensional _Gibbs measure_ - ref [10] provides a thorough introduction to this formalism. The question of how the Gibbs measure fluctuates between replicates of the energies is closely related to the coalescent process, since \(\{U_{i}/S\}_{i=1,\ldots,N_{\zeta}}\) (asymptotically) have the same distribution as \(\omega_{\zeta,\sigma}\) when the configurations are projected to the one dimensional lattice \([2^{n}]=\{1,\ldots,2^{n}\}\). The projected Gibbs measure approaches the Lebesgue measure on the unit interval in the high temperature regime [12]. However, the main focus in the context of the REM appears to have been the low temperature regime \(\alpha<1\).
There is already an established connection between coalescent processes and the Gibbs measure via Derrida's generalized REM (GREM). In this model the energies are no longer independent, but drawn from
Figure 3: \(\alpha\) compared to \(\tilde{\alpha}\) as a function of the tail exponent \(q\) for different values of \(\tau\). The lower limit of the plot is the critical point \(\alpha=1\) where both expressions agree. Above this point \(\alpha>\tilde{\alpha}\) for all \(q\).
a Gaussian random field on \(\mathcal{C}_{n}\) whose correlation function depends on a certain ultrametric distance between configurations - see [11] for a precise description. This ultrametric distance induces a hierarchical structure on the configuration space. Ruelle gave a mathematical formulation of Derrida's models [10] and the connection to coalescent processes was made in ref [12] in the context of their abstract cavity method. The authors show that by sampling configurations from the Gibbs measure and constructing a genealogical tree based on the hierarchies induced by the distance, one obtains (up to a time-change) a continuous time coalescent process now referred to as the Bolthausen-Sznitman coalescent. This process is in fact the \(\beta\)-coalescent with \(\alpha=1\).
Theorem 2 makes a different, but closely related connection between coalescent processes and the REM which is as follows. Suppose we sample two configurations \(\sigma\) and \(\sigma^{\prime}\) from the Gibbs measure and set
\[\varrho_{\sigma,\sigma^{\prime}}=1_{\{\sigma=\sigma^{\prime}\}}. \tag{91}\]
Derrida refers to this as the replica overlap. The distribution of \(\varrho\) conditioned on \(\{\Phi_{\sigma}\}_{\sigma\in\mathcal{C}_{n}}\) is determined by (see [13])
\[\mathbb{E}[\varrho_{\sigma,\sigma^{\prime}}|\{\Phi_{\sigma}\}_{\sigma\in \mathcal{C}_{n}}]=\sum_{\sigma\in\mathcal{C}_{n}}\left(\frac{e^{\zeta\Phi_{ \sigma}}}{S_{\zeta}}\right)^{2} \tag{92}\]
Each term in the sum has the same distribution (in the large \(n\) limit) as \((\nu_{1}/N_{\zeta})^{2}\). Therefore, recalling Equation 2 we can see that the average replica overlap is simply the inverse coalescent time:
\[\mathbb{E}[\varrho_{\sigma,\sigma^{\prime}}]=\mathbb{E}[\mathbb{E}[\varrho_{\sigma,\sigma^{\prime}}|\{\Phi_{\sigma}\}_{\sigma\in\mathcal{C}_{n}}]]\sim\frac{\mathbb{E}[(\nu_{1})_{2}]}{N_{\zeta}-1}\sim T_{\zeta}^{-1}. \tag{93}\]
It is well known that in the high temperature regime (\(\tilde{\alpha}>1\))
\[\mathbb{E}[\varrho_{\sigma,\sigma^{\prime}}]\to 0, \tag{94}\]
while in the low temperature phase Derrida derives an expression in terms of the Beta function in Equation 22.
Now suppose we sample \(n\) configurations \(\sigma_{1},\ldots,\sigma_{n}\) at each replicate and consider the chance that \(n-s\) of these configurations are not unique and instead \(k_{i}\) come from configuration \(i\) for \(i=1,\ldots,r\). This is given by
\[N_{\zeta}^{r-s}\,\mathbb{E}\left[\prod_{i=1}^{r}\omega_{\zeta,\sigma_{i}}^{k_{i}}\right], \tag{95}\]
which is asymptotic to the expression appearing in the definition of \(\lambda_{n,k_{1},\ldots,k_{r},s}\) (Equation 10) and in the REM is related to the higher order replica overlaps. We can then ask what the chance of these events is when we take \(T_{\zeta}\) replicates. Therefore, Theorem 2 tells us there is a phase transition in the typical composition of our samples at the critical point \(\alpha=2\). As we have explained above, this transition happens at a lower temperature (higher \(\beta\)) than the breakdown of the CLT for the partition function at \(\tilde{\alpha}=2\).
## 6 Discussion
In this article we have explored asymptotic limits for the Cannings model when both the population size and offspring variation are simultaneously taken to be large. Such limits are not covered by previous results for scale invariant offspring distributions, since in that setting the offspring variation is infinite for finite \(N\). Our analysis hinges on a scaling assumption under which the total number of offspring produced in a generation is related to the partition function of the REM and the offspring counts \(\nu_{i}\) are related to the Gibbs measure. The large population size limit in our model corresponds to the thermodynamic limit of the REM. As with the REM, competition between fluctuations in growth rates (energies in the REM) and averaging over an increasing system-size (our log population size) leads to a form of weak self-averaging and anomalous scaling of the coalescent time.
Our main finding is that the \(\beta\)-coalescent - a previously studied model of coalescence in populations where variation in offspring is infinite - also emerges from models where the tail of the offspring distribution is thin, but large fluctuations are not too unlikely. This is related to the existing GCLT for the Cannings model [14] in the same way that the fluctuation theorem in ref [2] is related to the GCLT for such sums. Our result does not describe the critical point and low temperature regime, although at least for \(q=2\) these can likely be deduced from previous results on the REM - see [15]. The limit coalescent processes are the discrete-time \(\Xi\)-coalescents with simultaneous multiple mergers of lineages described in [13].
Biologically, Theorem 2 suggests the \(\beta\)-coalescent serves as a more universal description of neutral evolution in the presence of highly skewed offspring distributions than might be expected. This also indicates that little information about the demographic structure of a population is contained in the coalescent; rather, one must look to the coalescent time for detailed information about the offspring distribution.
#### Acknowledgments
I am thankful to Daniel Weissman, Takashi Okada, Linh Huynh, Oskar Hallatschek and Ariel Amir for helpful discussions and feedback. I am also grateful to the organizers of the conference "Evolution Beyond the Classical Limits" held at the Banff International Research Station (BIRS), where preliminary work on this manuscript was carried out.
## Appendix A Additional Lemmas
We will assume \(U_{i}>1\) and therefore \(S_{N}>N\). It is straightforward to obtain the results for the more general case where only Equation 7 holds.
**Lemma 3**.: _For \(0<\varepsilon<x\)_
\[\limsup_{\zeta\to\infty}N_{\zeta}T_{\zeta}\mathbb{P}\left(\frac{\nu_{1}}{N_{ \zeta}}>x\right) \leq\limsup_{\zeta\to\infty}NT\mathbb{P}\left(\frac{U_{1}}{S_{ \zeta}}\geq x-\varepsilon\right)\] (A.1)
_and_
\[\limsup_{\zeta\to\infty}N_{\zeta}T_{\zeta}\mathbb{P}\left(\frac{\nu_{1}}{N_{ \zeta}}>x\right) \geq\liminf_{\zeta\to\infty}N_{\zeta}T_{\zeta}\mathbb{P}\left( \frac{U_{1}}{S_{\zeta}}\geq x+\varepsilon\right)\] (A.2)
Proof.: Following the proof of [13, Theorem 4] we condition on \(\{X_{i}\}\) to obtain
\[\lim_{\zeta\to\infty}NT\mathbb{P}(\nu_{1}/N>x) =\lim_{\zeta\to\infty}NT\mathbb{E}[\mathbb{P}(\nu_{1}/N>x|\{X_{i} \})]\] (A.3) \[=\lim_{\zeta\to\infty}NT\mathbb{E}[\mathbb{P}(\nu_{1}/N>x|\{X_{i} \})1_{\{X_{i}/S\leq x-\varepsilon\}}]\] (A.4) \[\qquad+\lim_{\zeta\to\infty}NT\mathbb{E}[\mathbb{P}(\nu_{1}/N>x| \{X_{i}\})1_{\{X_{i}/S\geq x-\varepsilon\}}]\] (A.5)
Using that \(\nu_{1}|\{X_{i}\}\) has a hypergeometric distribution \((S,X_{1},N)\)
\[\mathbb{E}[\mathbb{P}(\nu_{1}/N>x|\{X_{i}\})1_{\{X_{i}/S\leq x- \varepsilon\}}] <\mathbb{P}(\nu_{1}/N>x|X_{1}=S(x-\varepsilon))\mathbb{P}(X_{1}/S \leq x-\varepsilon)\] (A.6) \[=\mathbb{P}(\nu_{1}>Nx|X_{1}/S+\varepsilon=x)\mathbb{P}(X_{1}/S \leq x-\varepsilon)\] (A.7) \[\leq e^{-2\varepsilon^{2}N}\] (A.8)
Since \(T\) grows sub-exponentially with \(N\),
\[\limsup_{\zeta\to\infty}NT\mathbb{P}(\nu_{1}/N>x) =\limsup_{\zeta\to\infty}NT\mathbb{E}[\mathbb{P}(\nu_{1}/N>x|\{X_ {i}\})1_{\{X_{i}/S\geq x-\varepsilon\}}]\] (A.9) \[\leq\limsup_{\zeta\to\infty}NT\mathbb{P}(X_{1}/S\geq x-\varepsilon)\] (A.10)
which is the upper bound.
The lower bound is obtained similarly using the variable \(1_{\{X_{i}/S\geq x+\varepsilon\}}\).
**Lemma 4** (Similar to Lemma 6 of [13]).: _For \(r\geq 1\) and \(k_{1},\ldots,k_{r}\geq 2\),_
\[\mathbb{E}\left[\prod_{i=1}^{r}(\nu_{i})_{k_{i}}\right]\sim N^{\sum_{i=1}^{r} k_{i}}\mathbb{E}\left[\frac{e^{\sum_{i=1}^{r}k_{i}\zeta\Phi_{i}}}{S_{\zeta}^{ \sum_{i=1}^{r}k_{i}}}\right]\] (A.11)
Proof.: Let \(M_{a,b}^{i}\) be the event that individuals \(a\) through \(b\) descend from individual \(i\) in the previous generation and set
\[J_{k_{1},\ldots,k_{r}}=\bigcap_{i=1}^{r}M_{1+\sum_{j<i}k_{j},\;\sum_{j\leq i}k_{j}}^{i}.\] (A.12)
As \(N\to\infty\) (or \(\zeta\to\infty\)),
\[\mathbb{P}(J_{k_{1},\ldots,k_{r}}) =\frac{1}{(N)_{\sum_{i=1}^{r}k_{i}}}\mathbb{E}[(\nu_{1})_{k_{1}} \cdots(\nu_{r})_{k_{r}}]\] (A.13) \[\sim\frac{1}{N^{\sum_{i=1}^{r}k_{i}}}\mathbb{E}\left[\prod_{i=1}^ {r}\nu_{i}^{k_{i}}\right]\] (A.14)
Since \(S_{N}\geq N\),
\[P(J_{k_{1},\ldots,k_{r}})=\mathbb{E}\left[\mathbb{P}\left(J_{k_{1},\ldots,k_{r }}\middle|\{\Phi_{i}\}_{i=1}^{N}\right)\right]=\mathbb{E}\left[\frac{e^{\sum_ {i=1}^{r}k_{i}\zeta\Phi_{i}}}{S_{\zeta}^{\sum_{i=1}^{r}k_{i}}}\right],\] (A.15)
which implies the result.
**Lemma 5** (Similar to Lemma 8 of [13]).: _For each \(k\geq 1\) there is a constant \(B_{k}\) such that_
\[\mathbb{E}[\nu_{1}^{k}]\geq B_{k}N_{\zeta}^{k}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{e^{k\zeta\Phi_{1}}\lor A_{\zeta}^{k}}\right].\] (A.16)
Proof.: By Lemma 4,
\[\mathbb{E}[\nu_{1}^{k}] \sim N^{k}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{S_{\zeta}^{k}}\right]\] (A.17) \[\geq N^{k}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+2(A_{\zeta}-\mu_{\zeta}))^{k}}1_{\{\sum_{i=2}^{N_{\zeta}}e^{\zeta\Phi_{i}}\leq 2(A_{\zeta}-\mu_{\zeta})\}}\right]\] (A.18) \[=N^{k}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+2(A_{\zeta}-\mu_{\zeta}))^{k}}\right]\mathbb{P}\left(\sum_{i=2}^{N_{\zeta}}e^{\zeta\Phi_{i}}\leq 2(A_{\zeta}-\mu_{\zeta})\right)\] (A.19)
By the law of large numbers
\[\mathbb{P}\left(\sum_{i=2}^{N_{\zeta}}e^{\zeta\Phi_{i}}\leq 2(A_{\zeta}-\mu_{ \zeta})\right)\geq\frac{1}{2}\] (A.20)
for large enough \(\zeta\), therefore
\[N^{k}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{S_{\zeta}^{k}}\right] \geq\frac{N^{k}}{2}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+2(A_{\zeta}-\mu_{\zeta}))^{k}}\right]\] (A.21) \[\geq\frac{N^{k}}{2^{k+1}}\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+A_{\zeta})^{k}}\right].\] (A.22)
The result follows from
\[(e^{\zeta\Phi_{1}}+A_{\zeta})^{k} \leq 2^{k}(e^{k\zeta\Phi_{1}}\lor A_{\zeta}^{k})\] (A.23)
**Lemma 6**.: _Assume Equation 6 and Equation 35. Then for \(1<\alpha\)_
\[\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{\left(\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}\right)^{k}}\right]\sim\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+A_{\zeta})^{k}}\right].\] (A.24)
Proof.: The idea of the proof is based on [12] combined with the LLN for \(S_{\zeta}\) proved in [1, Theorem 2.1]. By Lemma 1, for any \(\epsilon>0\) and \(\delta\) such that \((1-\delta)\mathbb{E}[X_{i,N}]>1\), we can pick \(\zeta\) large enough that
\[\mathbb{P}\left(1-\delta\leq\frac{S_{\zeta}}{A_{\zeta}}\leq 1+\delta \right)>1-\varepsilon.\] (A.25)
If \(\delta\) and \(N\) are such that Equation A.25 holds,
\[\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{\left(\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}\right)^{k}}\right] =\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{\left(\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}\right)^{k}}1_{\left\{\sum_{i=2}^{N_{\zeta}}e^{\zeta\Phi_{i}}<(1-\delta)A_{\zeta}\right\}}\right]\] (A.26) \[\qquad+\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{\left(\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}\right)^{k}}1_{\left\{\sum_{i=2}^{N_{\zeta}}e^{\zeta\Phi_{i}}\geq(1-\delta)A_{\zeta}\right\}}\right]\] (A.27) \[\leq 4\varepsilon\,\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+N_{\zeta})^{k}}\right]+\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+(1-\delta)A_{\zeta})^{k}}\right]\] (A.28)
and
\[\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{\left(\sum_{i=1}^{N_{\zeta}}e^{\zeta\Phi_{i}}\right)^{k}}\right] \geq(1-\varepsilon)\mathbb{E}\left[\frac{e^{k\zeta\Phi_{1}}}{(e^{\zeta\Phi_{1}}+(1+\delta)A_{\zeta})^{k}}\right].\] (A.29)
The result follows after taking \(\varepsilon,\delta\to 0\).
|
2303.00412 | Validating Monte Carlo simulations for an analysis chain in H.E.S.S | Imaging Air Cherenkov Telescopes (IACTs) detect very high energetic (VHE)
gamma rays. They observe the Cherenkov light emitted in electromagnetic shower
cascades that gamma rays induce in the atmosphere. A precise reconstruction of
the primary photon energy and the source flux depends heavily on accurate Monte
Carlo (MC) simulations of the shower propagation and the detector response, and
therefore also on adequate assumptions about the atmosphere at the site and
time of a measurement. Here, we present the results of an extensive validation
of the MC simulations for an analysis chain of the H.E.S.S. experiment with
special focus on the recently installed FlashCam camera on the large 28 m
telescope. One goal of this work was to create a flexible and easy-to-use
framework to facilitate the detailed validation of MC simulations also for past
and future phases of the H.E.S.S. experiment. Guided by the underlying physics,
the detector simulation and the atmospheric transmission profiles were
gradually improved until low level parameters such as cosmic ray (CR) trigger
rates matched within a few percent between simulations and observational data.
This led to instrument response functions (IRFs) with which the analysis of
current H.E.S.S. data can ultimately be carried out within percent accuracy,
substantially improving earlier simulations. | Fabian Leuschner, Johannes Schäfer, Simon Steinmassl, Tim Lukas Holch, Konrad Bernlöhr, Stefan Funk, Jim Hinton, Stefan Ohm, Gerd Pühlhofer | 2023-03-01T11:08:55Z | http://arxiv.org/abs/2303.00412v1 | # Validating Monte Carlo simulations for an analysis chain in H.E.S.S.
###### Abstract:
Imaging Air Cherenkov Telescopes (IACTs) detect very high energetic (VHE) gamma rays. They observe the Cherenkov light emitted in electromagnetic shower cascades that gamma rays induce in the atmosphere. A precise reconstruction of the primary photon's energy and the source flux depends heavily on accurate Monte Carlo (MC) simulations of the shower propagation and the detector response, and therefore also on adequate assumptions about the atmosphere at the site and time of a measurement.
Here, we present the results of an extensive validation of the MC simulations for an analysis chain of the H.E.S.S. experiment with special focus on the recently installed FlashCam camera on the large 28 m telescope. One goal of this work was to create a flexible and easy-to-use framework to facilitate the detailed validation of MC simulations also for past and future phases of the H.E.S.S. experiment.
Guided by the underlying physics, the detector simulation and the atmospheric transmission profiles were gradually improved until low level parameters such as cosmic ray (CR) trigger rates matched within a few percent between simulations and observational data. This led to instrument response functions (IRFs) with which the analysis of current H.E.S.S. data can ultimately be carried out within percent accuracy, substantially improving earlier simulations.
## 1 Introduction
MC simulations are crucial for the analysis of data taken by IACTs including the creation of IRFs for these instruments. A prominent toolkit for IACT array simulations is sim_telarray [1] in connection with CORSIKA [2]. This combination is used for MC simulations within one analysis chain of the H.E.S.S. experiment and also for studies of the upcoming Cherenkov Telescope Array (CTA). In this work, we present the validation of simulations for an analysis chain of the H.E.S.S. array. It consists of CT5, a large central telescope (28 m mirror diameter), surrounded by four small telescopes CT1-4 (12 m mirror diameter). Focus is put on the FlashCam camera that was installed at the end of 2019 on CT5 [3], [4]. The goal is to reach consistency between MC simulations and observational data up to the DL3 data level. Repeatability is another important goal of this work which is achieved through a Python code base. Within these proceedings, we report on the various steps of the validation, learned lessons and reached improvements.
Footnote 1: For more information see [https://doi.org/10.3390/universe7100374](https://doi.org/10.3390/universe7100374).
Footnote 2: That is a period of time in which the hardware configuration and instrument response is considered constant. A new phase starts e.g. when hardware is replaced, mirrors are cleaned, software settings are changed, etc.
## 2 Basic MC and single telescope validation
### Basic MC checks
In a first step, single telescope simulations, using a laser with a fixed photon intensity as simulated light source, were used to check the consistency of the charge integration algorithms and the photo electron definition. Thereby, consistency within a few percent between simulations and measurements could be established.
To validate the optical Point Spread Function (PSF) adopted in simulations for each telescope, we use the included ray-tracing feature of sim_telarray which calculates the path of photons through the optical system. The response of the system is evaluated using parallel light as an input. The telescope is pointed at an infinitely distant star at zenith. The off-axis response is probed by introducing a pointing shift between the telescope and the simulated star. An intensity map is created from the photon positions on the camera-lid and a circle is inscribed around the centre of gravity. The radius of this circle is increased until 80 % of the total signal is contained within. This 80 % containment radius is further referred to as the PSF\({}_{80}\) of the telescope.
The PSF\({}_{80}\) is simulated for different telescope elevations \(\Theta\) and compared to measurements obtained within the same hardware phase3. The simulated PSF\({}_{80}\) in sim_telarray is evaluated using the following equation:
\[\mathrm{PSF}_{80}(\Theta)=\sqrt{\mathrm{R}_{\mathrm{min}}^{2}+\mathrm{d}_{1}^{ 2}\cdot(\sin(\Theta)-\sin(\Theta_{0}))^{2}+\mathrm{d}_{2}^{2}\cdot(\cos(\Theta )-\cos(\Theta_{0}))^{2}} \tag{1}\]
Here, \(\mathrm{R}_{\mathrm{min}}\), \(\mathrm{d}_{1}\), \(\mathrm{d}_{2}\) and \(\Theta_{0}\) are free parameters that can be specified in the simulation configuration. These are readjusted until the overall deviation between simulated and measured PSF\({}_{80}\) is minimized. With the new telescope dependent parameters the PSF\({}_{80}\) deviation is reduced to \(<5\,\%\approx 0.4\,\mathrm{mm}\) in the focal plane (compare Figure 1) which is deemed acceptable.
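A direct way to reproduce this adjustment is to implement Equation 1 and fit its free parameters to measured containment radii. The sketch below does this with scipy's curve_fit; the listed data points are placeholders only, not actual H.E.S.S. measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def psf80(theta_deg, R_min, d1, d2, theta0_deg):
    """Equation 1: 80% containment radius as a function of elevation (in degrees)."""
    t = np.deg2rad(theta_deg)
    t0 = np.deg2rad(theta0_deg)
    return np.sqrt(R_min**2
                   + d1**2 * (np.sin(t) - np.sin(t0))**2
                   + d2**2 * (np.cos(t) - np.cos(t0))**2)

# placeholder "measurements" (elevation in deg, PSF80 in mm) -- illustrative only
elev = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
meas = np.array([9.0, 8.1, 7.4, 7.1, 7.3])

popt, _ = curve_fit(psf80, elev, meas, p0=(7.0, 5.0, 5.0, 65.0), maxfev=10000)
print("fitted R_min, d1, d2, theta0:", popt)
```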
### Single telescope consistency
To understand whether the properties of the single telescopes in an array are defined correctly in simulations, parameters such as the trigger rates, the raw pixel intensity, and the contribution of the pedestal as well as its noise need to be validated for each telescope individually.
The measured trigger rates are dominated by hadronic showers. Hence, the investigation of the trigger rates is conducted via proton simulations, applying a correction factor for heavier nuclei, which has been validated beforehand with dedicated simulations to be \(\approx 1.34\) for the trigger rates of the entire H.E.S.S. array. The raw trigger rates are calculated by folding the effective area with a CR proton spectrum. For high accuracy this work uses the Global Spline Fit spectrum by Hans Dembinski et al. [6] as it also considers spectral features deviating from a simple power law. The effective area is calculated by taking into account the amount of simulated and triggered showers, simulated area and solid angle. One has to keep in mind that H.E.S.S. is operated in a hybrid fashion with CT1-5 triggering in stereo and additionally CT5 in mono mode. Hence, for CT1-4 the stereo participation rate and for CT5 the mono rate are the prime quantities to compare [7], [3].
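The folding procedure can be sketched in a few lines. In the following toy example the effective area is built from thrown and triggered shower counts per energy bin, and a simple power law is used as a stand-in for the Global Spline Fit proton spectrum; the throw area, trigger-efficiency curve, flux normalisation and view cone are all assumptions chosen only to make the example run, so the printed rate is not an H.E.S.S. prediction.

```python
import numpy as np

E = np.logspace(-1, 3, 200)                       # primary energy [TeV]
A_sim = np.pi * 400.0**2                          # simulated throw area [m^2] (assumption)
n_sim = 1e6 * (E / E[0]) ** (-1.0)                # thrown showers per energy bin (assumption)
n_trig = n_sim / (1.0 + (0.5 / E) ** 4)           # toy trigger efficiency (assumption)
A_eff = A_sim * n_trig / n_sim                    # effective area [m^2]

# simple power-law stand-in for the CR proton flux [m^-2 s^-1 sr^-1 TeV^-1] (assumption)
flux = 9.6e-2 * E ** (-2.7)
solid_angle = 2 * np.pi * (1 - np.cos(np.deg2rad(5.0)))   # 5 deg view cone (assumption)

rate = np.trapz(A_eff * flux, E) * solid_angle
print(f"toy proton trigger rate: {rate:.1f} Hz")
```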
In addition to the instrument-specific settings (i.e. trigger threshold, Night Sky Background (NSB) values, PSF\({}_{80}\) and reflectivity) some general simulation settings (i.e. simulated energy range, view cone, photon bunch size3 and atmospheric transmission profile) were adjusted to match real measurements. The various adjustments and their effects on the telescope trigger rates are summarised in Table 1.
Footnote 3: Simulated Cherenkov photons are stored in so called photon bunches. A bunch size of five is chosen as a compromise between computing time and accuracy.
The resulting trigger rates can be seen in Figure 2 as a function of zenith angle. For one representative of each of the two H.E.S.S. telescope types, we show both the simulated and real trigger rates. The uncertainty intervals show the investigated systematic uncertainties. For the mirror reflectivity (derived from muon simulations [8]) we accepted an uncertainty of 2 % leading to a systematic uncertainty of \(\approx 12\) % in the trigger rates. The systematic uncertainty arising from the choice of the interaction model in the CORSIKA simulation was investigated by repeating the same simulation set with the models QGSJET-II, EPOS, and SIBYLL [2, 9]. A variation of \(\approx 4\,\%\) in the resulting trigger rates was found. In addition, the assumed systematic uncertainty on these rates for the implementation of H.E.S.S. in sim_telarray is \(10\,\%\).

Figure 1: Optical PSF\({}_{80}\) for CT1 and CT5 vs. elevation based on sim_telarray ray-tracing compared to real data. A fit was made to real measurements and the resulting fit function evaluated at different elevations following [5]. In the simulation, datapoints were directly evaluated without a fitting procedure.
The detector (in H.E.S.S.: photo-multiplier tubes) output in the cameras is offset by the pedestal. While measurements are corrected for the mean of this value, its standard deviation ('pedestal width') contributes as noise, mostly caused by diffuse NSB light [10]. Under given observing conditions4, the amount of NSB-photons reaching the cameras depends primarily on the optical brightness of the observed field. Here, differences of a factor of 4 or greater are possible [11].
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Parameter & Change CT1-4 & Effect CT1-4 & Change CT5 & Effect CT5 \\ \hline Aerosol level & \(50\,\%\) & \(10\,\%\) & \(50\,\%\) & \(12\,\%\) \\ \hline Trigger threshold & \(27\,\%\) & \(41\,\%\) & \(4\,\%\) & \(6\,\%\) \\ \hline Mirror reflectivity & \(2.4\,\%\) & \(15\,\%\) & \(8\,\%\) & \(23\,\%\) \\ \hline NSB & \(67\,\%\) & \(3-6\,\%\) & \(55\,\%\) & \(6\,\%\) \\ \hline PSF\({}_{80}\) & \(20\,\%\) & \(1\,\%\) & \(9\,\%\) & \(<1\,\%\) \\ \hline \end{tabular}
\end{table}
Table 1: Change of simulation parameters and their effects on the CT1-4 stereo participation and CT5 mono trigger rates. If the changes and effects differ between CT1-4 the average value is quoted. Presented are the absolute variations and their absolute effects on the trigger rates.
Figure 2: Simulated and real zenith-dependent trigger rates for CT1 (stereo) and CT5 (mono) with uncertainty bands highlighting the different investigated systematic uncertainties as discussed in the main text.
(11). Therefore, we adjust the pedestal width of the simulations to the median5 of observations' pedestal width. For this adjustment, the NSB should be set to a reasonable value consistent with calculations that reproduces the required pedestal width in simulations. This calculation entails folding the NSB flux at the site of the observatory with the collection area of the telescope and the pixel FoV. Provided an NSB value, sim_telarray handles the mirror and quantum efficiencies. However, possible reflections of NSB light from the ground back into the camera must be taken into account manually. Typical NSB levels are of the order 0.1 GHz for CT1-4 and 0.3 GHz for CT5.
Footnote 5: Mean is discouraged here, as it would be biased by high-value outliers, e.g. from pixels with a star in their Field of View (FoV) that then are automatically switched off.
## 3 Higher level and array validation
### Cleaning and background
In the validated analysis chain, shower images are cleaned after all calibration steps to avoid any major influence of noise on the image reconstruction. In H.E.S.S., cleaning is done using the tail cut method [12]. From the cleaned images the Hillas parameters [13] are derived to describe the image properties. To test the proper description of the hadronic background, the image properties of proton simulations are compared to those of observation runs with no strong gamma-ray source in the FoV (off-runs). The proton simulations were weighted by energy to represent the measured CR proton spectrum [6]. The off-runs were selected to have similar observing conditions as the simulation set, especially in terms of NSB, atmospheric transparency and zenith angle. As an example, Figure 3 shows the distributions of Hillas length divided by shower intensity for CT1 and CT5. There is good agreement between data and simulations, which is also reflected in other image parameters. We therefore conclude that the data-simulation consistency persists also after cleaning. This is crucial as the cleaned images are the basis for the energy & direction reconstruction in Hillas-based analysis algorithms as well as for the gamma-hadron separation.
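The spectral re-weighting of the proton simulations can be sketched as follows. The event arrays, spectral indices and binning below are placeholder assumptions used only to illustrate the technique: re-weighting simulated events from the generated spectrum to the measured one and normalising each distribution to unit sum.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Placeholder events: energies drawn from an assumed E^-2 generation spectrum (TeV),
# plus toy "Hillas length / size" values for simulated protons and off-run data.
e_sim = rng.pareto(1.0, 50_000) + 0.1
length_over_size_sim = rng.lognormal(np.log(1e-3), 0.4, e_sim.size)
length_over_size_off = rng.lognormal(np.log(1e-3), 0.4, 20_000)

# Re-weight each simulated event from the generated index (assumed 2.0)
# to the measured proton index (assumed 2.7): w(E) = E^(gamma_sim - gamma_data).
gamma_sim, gamma_data = 2.0, 2.7
weights = e_sim ** (gamma_sim - gamma_data)

bins = np.logspace(-3.6, -2.4, 40)
for data, w, label in [(length_over_size_sim, weights, "proton MC (re-weighted)"),
                       (length_over_size_off, None, "off-run data")]:
    hist, _ = np.histogram(data, bins=bins, weights=w)
    plt.step(bins[:-1], hist / hist.sum(), where="post", label=label)   # normalised to sum 1
plt.xscale("log"); plt.xlabel("Hillas length / size"); plt.legend(); plt.show()
```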
### High level validation with the Crab Nebula spectrum
Energy and direction reconstruction techniques as well as the gamma-hadron separation rely on matching gamma-ray simulations. Further, instrument response functions such as the effective area, energy dispersion matrix and point spread function are produced from them. To cross-check the overall high level performance it is crucial to eventually compare the derived spectrum to a known steady source. For this purpose, the Crab Nebula is selected as the typical standard candle in VHE astrophysics. Its spectrum was approximated by a power-law model of the form \(\phi_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma}\) to be comparable with the published H.E.S.S. spectrum [12]. It can be derived under different atmospheric conditions using simulations including correspondingly different atmospheric transmission profiles. This was already done as a cross-check in the published paper on the nova RS Ophiuchi [14], for which the monoscopic analysis was based on the simulation configuration derived in this work. The resulting spectral parameters are shown in Figure 4. The monoscopic flux normalisation is consistent with an atmosphere-corrected stereo analysis on the same data sets as well as with the reference Crab spectrum from [12] within 15 %. The index shows a systematic shift by \(\sim 0.2\) that can be explained by the observed hardening of the Crab spectrum towards lower energies. This behaviour becomes more important for the mono analysis with its lower energy threshold.
Figure 4: The Crab spectral parameters for different atmospheric conditions as produced for [14]. The atmospheric conditions are quantified by the Cherenkov transparency coefficient [15]. The mono Crab spectra are a result of the work described here and can be compared to corrected stereo results on the same data sets as well as to the Crab spectrum from [12] with a reference energy of \(E_{0}=1\,\mathrm{TeV}\). The blue shaded areas correspond to different atmospheric conditions during the RS Ophiuchi data taking and are not relevant for this work.
Figure 3: The Hillas length divided by the shower intensity of cleaned images for several observation runs compared to proton simulations at 20 deg zenith. The off-run data set consists of 10 individual observation runs with zenith angles between 15 deg and 25 deg. The thick grey line corresponds to the summed distribution. The lighter grey lines correspond to the individual distributions to show potential run-to-run variability. All distributions were normalised to have a sum of 1.
## 4 Conclusion and outlook
We present a procedure to systematically validate all steps of a Monte Carlo simulation configuration for an array of IACTs by comparing simulated to measured properties. It was applied to an analysis chain of the H.E.S.S. experiment with great success. Using the presented method, simulations for future hardware iterations or phases can be adapted and validated with little effort. The presented procedures will be of use for the upcoming CTA observatory as well. Once CTA is operational, it will be possible to quickly assess whether the assumptions made in current MC simulation configurations are correct. Further, a validation of the MC configurations will be necessary, e.g. when the hardware of CTA is updated in the future. Care was taken that the entire framework is well documented for future application.
In addition to the procedure for validating MC configurations, we developed a scheme to assess the impact of short-term changes in the atmospheric transmission on the response of atmospheric Cherenkov detectors and, finally, to correct for them. This scheme is described in detail in a separate work [16]. It allows more of the data obtained by Cherenkov detectors to be used than before and provides a more reliable estimation of primary photon energies.
|
2310.17056 | Strategizing EV Charging and Renewable Integration in Texas | Exploring the convergence of electric vehicles (EVs), renewable energy, and
smart grid technologies in the context of Texas, this study addresses
challenges hindering the widespread adoption of EVs. Acknowledging their
environmental benefits, the research focuses on grid stability concerns,
uncoordinated charging patterns, and the complicated relationship between EVs
and renewable energy sources. Dynamic time warping (DTW) clustering and k-means
clustering methodologies categorize days based on total load and net load,
offering nuanced insights into daily electricity consumption and renewable
energy generation patterns. By establishing optimal charging and
vehicle-to-grid (V2G) windows tailored to specific load characteristics, the
study provides a sophisticated methodology for strategic decision-making in
energy consumption and renewable integration. The findings contribute to the
ongoing discourse on achieving a sustainable and resilient energy future
through the seamless integration of EVs into smart grids. | Mohammad Mohammadi, Jesse Thornburg | 2023-10-25T23:34:25Z | http://arxiv.org/abs/2310.17056v1 | # Strategizing EV Charging and Renewable Integration in Texas
###### Abstract
Exploring the convergence of electric vehicles (EVs), renewable energy, and smart grid technologies in the context of Texas, this study addresses challenges hindering the widespread adoption of EVs. Acknowledging their environmental benefits, the research focuses on grid stability concerns, uncoordinated charging patterns, and the complicated relationship between EVs and renewable energy sources. Dynamic time warping (DTW) clustering and k-means clustering methodologies categorize days based on total load and net load, offering nuanced insights into daily electricity consumption and renewable energy generation patterns. By establishing optimal charging and vehicle-to-grid (V2G) windows tailored to specific load characteristics, the study provides a sophisticated methodology for strategic decision-making in energy consumption and renewable integration. The findings contribute to the ongoing discourse on achieving a sustainable and resilient energy future through the seamless integration of EVs into smart grids.
Electric vehicles, Smart Grid
## I Introduction
Grid-connected transportation systems have long been considered a promising solution for curbing greenhouse gas and pollutant emissions. Among these solutions, Battery Electric Vehicles (BEVs) emerge as particularly compelling. However, despite their environmental advantages, formidable technical barriers, such as low battery capacity, high costs, and the absence of a widespread charging infrastructure, continue to impede the widespread adoption of plug-in vehicles [1]. The positive impact of electric vehicles on urban pollution is evident as emissions shift from urban centers to fossil fuel chimneys typically located outside cities. Additionally, power plant emissions are more effectively regulated compared to vehicle tailpipes, further contributing to the appeal of electric vehicles [2, 3].
The surge in Electric Vehicles (EVs) is not only fueled by environmental consciousness but is also buoyed by government incentives, exemplified by measures like the Inflation Reduction Act in the US. A global emphasis on sustainability and the reduction of greenhouse gas emissions further propels the rise of EVs. [4, 5] Recently, many countries have implemented policies, primarily in the form of tax incentives, aimed at bolstering the number of electric vehicles in their fleets. These policies seek to diminish pollutant emissions from traffic and enhance air quality, especially in urban areas where the population is significantly exposed to air pollutants, and traffic remains a major emission source. [6]
The repercussions of traffic emissions extend beyond immediate air quality issues, encompassing indirect human health effects resulting from rising atmospheric greenhouse gases and climate-related changes. Anticipated impacts include heightened public health risks due to climate change, extreme weather events, wildfires, and fluctuations in both indoor and ambient air pollutant levels.
While increasing the market share of EVs is a pivotal step toward achieving environmental goals, the actual benefits hinge on numerous factors, such as the energy pathway, energy generation profile, types of air pollutants and greenhouse gases, and the specific type of EV. Notably, EVs do not universally possess the potential to minimize all particulates and greenhouse gas emissions. [7] Studies, such as those conducted in the U.S. state of Texas by Nichols et al. (2015), highlight the nuanced impact of EVs. While they can reduce GHG emissions, NOx, and PM10, they may generate significantly higher emissions of SO2 compared to Internal Combustion Engine (ICE) vehicles [8].
Existing research predominantly focuses on emissions reduction through the introduction of different fuels [9] or the implementation of new urban traffic policies [10]. Enrico's 2015 study, for instance, assumes that electric vehicles replace all vehicles, estimating a reduction of approximately 25.7% in NOx and 14.4% in carbon dioxide (CO2) emissions. Nevertheless, there remains a notable lack of knowledge regarding the estimation of pollutant concentration reduction when vehicles with electric engines are introduced either entirely or partially. Realizing substantial air quality improvement necessitates the widespread adoption of electric vehicles, encompassing both light cars and heavy-duty vehicles [6].
Given the fluctuating electricity generation mix, it is imperative to scrutinize the charging habits of BEV users. Analyzing and, when necessary, controlling these habits can minimize emissions and costs. Notably, recharging vehicles at home can increase emissions from the grid, especially during coincidences with afternoon peaks when drivers arrive home. This increased peak load may necessitate additional generating capacity, leading to higher costs and emissions associated with plug-in vehicles. [2]
Efforts to smooth and ideally flatten the aggregate demand through the strategic shifting of loads in time can assist utilities in avoiding blackouts or brownouts and maintaining a reliable service. [11] While Electric Vehicles (EVs) can function as shiftable loads or energy storage for the power grid, most utilities have traditionally treated them as simple loads. Uncoordinated charging, where EVs start charging immediately upon being plugged in, adds to the demand curve and poses a burden during high-demand times. However, with controlled and coordinated charging aligned with grid needs, EVs can serve as both flexible loads and energy storage. Without such coordination, the widespread adoption of EVs could
potentially create supply challenges for the electric power grid at scale. [4][12]
Looking beyond energy generation and consumption, the type of EV becomes a crucial factor influencing the environmental benefits of electric mobility. EVs are categorized into four main types: Hybrid Electric Vehicles (HEVs), primarily powered by gasoline with a small battery supporting the combustion engine; Plug-in Hybrid Electric Vehicles (PHEVs), powered independently by both gasoline and electricity; Battery Electric Vehicles (BEVs), solely powered by electricity; and Fuel Cell Electric Vehicles (FCEVs), powered by hydrogen. [7]
## II Background
In recent years, the electricity network has undergone a significant transformation driven by the increasing penetration of renewable energy (RE) generation and the modernization of transport infrastructure. Policymakers, engineers, and business leaders are actively seeking alternative energy solutions that are both economically viable and environmentally friendly, given strains on petroleum reserves and the pressing issue of GHG emissions. RE and EVs emerge as promising answers to provide energy cost savings and emission reductions [13]. In Texas specifically, the generation mix includes multiple RE sources. While biomass and hydro make modest contributions, nuclear power emerges as a significant player. Solar energy is substantial, and wind power stands out prominently, reflecting the state's favorable conditions for wind energy generation and the investments therein. This mix of renewable sources showcases a forward-thinking approach, contributing not only to environmental sustainability but also ensuring a resilient and diversified energy portfolio.
Integrating RE sources into the electric grid presents challenges due to the intermittent nature of generation sources and their inconsistency with energy usage [14][15]. This complexity is a potential hurdle in transitioning to a more sustainable energy landscape. Moreover, the electric power generation and transportation sectors are major contributors to petroleum shortages and GHG emissions [13].
### _Smart Grid_
To overcome these challenges for a greener future, the concept of the smart grid has gained prominence. This next-generation power grid employs bidirectional power flow and a robust communication network to create a globally distributed and integrated energy distribution network. The smart grid can align power networks with environmental targets, accommodate EV adoption, and facilitate distributed energy generation with storage capabilities [16].
However, integrating EVs into the electric distribution network poses stability concerns, particularly with the increased penetration of EV charging. The charging patterns, as revealed by a study from the National Renewable Energy Laboratory (NREL), coincide with the grid's peak load demand profile, potentially leading to higher costs for EV owners during peak hours [17].
An EV's charging rate (i.e., the speed of charging the EV battery) is controllable by the EV owner, the grid, or both depending on the equipment. This controllability positions EVs as not only consumers but also as controllable loads in grid systems, allowing EVs to serve as distributed energy storage when charging is managed strategically and vehicle-to-grid (V2G) technology is in place [4].
The power grid faces challenges in meeting the rapid surge in electricity demand, emphasizing the need for higher reliability and efficiency. Current power grids are designed to cater to peak demands, resulting in underutilization and waste of natural resources. Fast-responding generators, such as those that burn fossil fuels, come with significant costs and a substantial GHG footprint.
In response to these challenges, the evolution of the smart grid, coupled with the integration of RE sources, advanced metering infrastructure, smart meters, EVs, and dynamic pricing, has gained momentum [18]. Additionally, the widespread deployment of home energy management systems (HEMS) and communicating devices is expected to upgrade the existing power grid, transforming it into a more intelligent and decentralized system [19]. These advancements collectively contribute to addressing the demand-side management (DSM) problem efficiently, marking a significant stride toward a sustainable and resilient energy future [18].
### _Virtual Power Plants_
The advent of Virtual Power Plants (VPPs) presents a transformative opportunity to alleviate the load on power networks. By generating power locally and sharing it among participants without the need for long-distance transmission at high tension, energy loss factors are either minimized or eliminated. This marks a significant shift in energy dynamics, where participants are no longer passive users but active influencers within the power system, albeit within defined limits--participants are not tasked with directly controlling devices' on-off switches.
At the helm of the typical VPP is a computer system overseen by the Distribution System Operator (DSO), potentially organized on the foundation of an artificial neural network.
Fig. 1: Power Generation Mix in Texas, USA (2022)
Remarkably, even a household with as little as 1 kW of distributed generation capacity (be it PV, FC, or combined heat and power, for example) can support a VPP. The key lies in connecting all these sources and orchestrating their operation to achieve a state of self-balance in the most effective manner. The VPP emphasizes local generation, enabling central generation to operate under more stable conditions. DSOs can optimize peaks in heat and power demands more efficiently, with the incorporation of storage for heat or electricity further enhancing the VPP's operational conditions [20]. An example VPP configuration is shown in Figure 2.
### _Energy Storage Systems and Distributed Energy Resources_
As sporadic renewable generation increases, an energy storage system (ESS) become crucial for ensuring stable, reliable, and consistent EV charging throughout the day [16]. The ESS is a vital tool for adapting power demand variations to the level of power generation in real time, particularly given the weather-related intermittency of RE. These systems can serve as additional sources or energy buffers for non-dispatchable or stochastic generation, such as wind turbines or PV technologies, especially in weak networks [21].
EVs are evolving into distributed energy resources (DERs) that are poised to play a pivotal role in future smart power networks. What sets EVs apart from other DERs is their mobility, allowing them to connect at different points in the grid while maintaining consistent service quality.
VPP generation primarily relies on RE resources, whose production is challenging to perfectly forecast especially on a day-ahead schedule. In addition to storage units, EVs can contribute to balancing forecast errors by leveraging their capacity capabilities. Moreover, the charging of EVs can be strategically implemented during periods of high solar and wind generation [22].
Fleet owners typically charge their EVs with a negotiated flat electricity tariff, but they can unlock additional benefits by actively participating in the wholesale market. This market presence requires fleet owners to determine the quantities of electricity they are willing to store and sell back to the grid, along with setting minimum and maximum prices for each time slot one day ahead. These parameters are then matched with those of other buyers and sellers in the day-ahead electricity wholesale market [23].
However, managing a multitude of DERs under market conditions introduces new challenges. Weather-dependent DER technologies, like solar cells and wind turbines, present fluctuating and non-dispatchable outputs, limiting their contribution and incurring economic penalties. Additionally, isolated operation due to diverse ownership hampers cooperation and communication between neighboring DER units. Aggregating these units into a Virtual Power Plant (VPP) is a strategic solution to address these issues, promoting synergy and enhancing the capability of DERs to serve not only local needs but those of the entire grid [21].
## III Methods
The integration of electric vehicles (EVs) into future smart grids is crucial for a successful energy transition. Under the Virtual Power Plant (VPP) concept, EVs can be viewed as mobile battery storage, contributing to the efficient integration of renewable sources despite their fluctuating generation. Managing EV charging is essential from various perspectives, including energy resilience and grid operations, as studies indicate a significant impact on the wholesale electricity market. [24]
When EVs are left uncoordinated in their charging, it poses challenges for the electric power grid. This is particularly evident when numerous EVs plug into chargers simultaneously, such as commuters arriving home after work, or when many EVs disconnect from chargers simultaneously. Unmanaged, these scenarios can lead to spikes or sharp drops in power demand, respectively. To address this, centralized or decentralized approaches, especially demand response (DR) strategies, can smooth demand shifts and reduce strain on the grid's existing supply and storage infrastructure. Therefore, effective EV and grid interactions are essential for a successful energy transition. [4]
The use of machine learning (ML) algorithms, based on historical data of charging load and user behavior, offers a valuable tool for training and learning trends and patterns. After the training phase, accurate predictions can be made, enhancing EV charging scheduling strategies. ML algorithms have proven capable of providing good forecasts for timeseries data, making them suitable for charging behavior predictions. These predictions, whether used independently or in conjunction with other algorithms, play a crucial role in optimizing EV charging and mitigating potential impacts on the electricity market. [25, 26]
In this paper, two approaches were employed to determine the optimal charging time for electric vehicles (EVs) in Texas, considering the grid situation. One approach is based on the grid load, and the other incorporates both grid load and renewable generation. Detecting the trend of electricity consumption throughout the day is crucial for identifying the optimal charging time. Recognizing that one trend does not sufficiently characterize the entire year, the days were grouped based on similarity, and trends were identified for each group. The data set includes hourly load and renewable generation data for 2022.

Fig. 2: Example structure of a virtual power plant
To analyze load and renewable generation trends on different days, this research utilized the dynamic time warping (DTW) clustering model. The Electric Reliability Council of Texas (ERCOT) provided time-series data, capturing load and renewable generation in Texas. Measuring similarity in time-series data is challenging due to the need to consider the order of elements in the sequences. The DTW algorithm was chosen for its ability to measure the similarity between two temporal sequences by allowing nonlinear warping in the time dimension, so that sequences are matched irrespective of certain nonlinear variations in time [4][27].
DTW clustering was chosen over traditional approaches such as clustering with the Euclidean distance: while the Euclidean distance is suitable for point-wise spatial comparisons, DTW excels at capturing temporal differences in time-series data. Assessing time-series similarity requires a method that leverages both temporal and spatial characteristics [28]. To further analyze the trends identified for each cluster, thresholds were needed to determine the best time for charging and the optimal potential time for vehicle-to-grid (V2G) interactions. The K-means clustering model, an unsupervised machine learning approach, was employed to find these thresholds. K-means clustering groups comparable observations by minimizing the Euclidean distance to centroids, providing valuable insights for optimizing EV charging and V2G interactions [29].
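A minimal sketch of this two-stage clustering is given below, using the tslearn and scikit-learn libraries. The random load matrix, the number of clusters per step, and the rule that turns the k-means centroids into two thresholds are illustrative assumptions; in the study the input would be the 2022 ERCOT hourly load (or net load) reshaped into one 24-value row per day.

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from sklearn.cluster import KMeans

# Placeholder data: 365 days x 24 hourly load values (MW); ERCOT data would go here.
daily_load = np.random.default_rng(1).uniform(35_000, 75_000, size=(365, 24))

# Stage 1: group days with similar intraday shapes; DTW tolerates small shifts in time.
day_clusters = TimeSeriesKMeans(n_clusters=20, metric="dtw", random_state=0)
labels = day_clusters.fit_predict(daily_load[:, :, np.newaxis])

# Stage 2: for one cluster's representative trend, split the hourly values into three
# groups with k-means; the boundaries between groups act as the two thresholds.
trend = day_clusters.cluster_centers_[0].ravel()
value_groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trend.reshape(-1, 1))
centers = np.sort(value_groups.cluster_centers_.ravel())
green_line, red_line = centers[:2].mean(), centers[1:].mean()   # assumed threshold rule

charging_hours = np.where(trend < green_line)[0]   # off-peak: suited for charging
v2g_hours = np.where(trend > red_line)[0]          # peak: candidate V2G window
print("charge:", charging_hours, "| V2G:", v2g_hours)
```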
## IV Results
In this paper, we employed a robust methodology that leverages a comprehensive dataset encompassing the total load and renewable generation data for each hour throughout the entirety of 2022 in the state of Texas. The focal point of this analysis lies in the application of DTW clustering, a technique adept at discerning temporal patterns in sequences. Specifically, we utilized DTW clustering to categorize days into distinct clusters based on two pivotal metrics: total load and net load. The total load represents the aggregate electricity consumption in Texas for each hourly interval, while net load is derived by subtracting the total renewable generation from the total load. This innovative approach allows for a nuanced understanding of how daily electricity consumption and renewable energy generation vary over the course of the year.
Moreover, this approach goes beyond simple classification; it explores the identification of prevalent trends within each group. This stage is essential for gaining insights into the overarching patterns that define different periods throughout the year. To enhance the practicality of the results, we integrated k-means clustering to identify two thresholds within each cluster. These thresholds, revealed through k-means clustering, act as crucial markers to determine the optimal timing for both charging and V2G activities. Through the utilization of a dual clustering method--DTW for classification and k-means for threshold identification--we've developed a sophisticated methodology. This methodology not only categorizes days into meaningful groups but also extracts actionable information for strategic decision-making in the realm of energy consumption and renewable integration.
### _Total Load_
We employed the total load data for each hour throughout the entirety of 2022 in Texas to categorize days into 20 distinct clusters. Through extensive experimentation, our study determined that 20 clusters represent an optimal grouping for our analysis. For each cluster, a representative trend was identified, utilizing k-means clustering to establish two crucial thresholds represented by green and red horizontal lines. In Figure 3, we showcase six illustrative examples of these clusters, emphasizing the varied load ranges observed along the vertical axis. When the trend falls below the green line within a cluster, it designates off-peak times, suggesting this range of hours as ideal for EV charging. Conversely, when the trend exceeds the red line, it signifies peak times, advising against EV charging while presenting an opportunity for V2G activities during these intervals. This approach allows for a clear identification of optimal charging and V2G windows tailored to the specific load characteristics of each cluster.
### _Net Load_
In the Net Load approach, we calculate the net load by subtracting the renewable generation from the total load, representing the demand supported by non-renewable sources. This method offers a distinct advantage over the total load approach, particularly when considering EV charging. The potential issue with relying solely on the total load is that the peaks of renewable generation and load may coincide, alleviating pressure on nonrenewable sources and making V2G less crucial. To address this, the net load approach accounts for the demand supported by non-renewable sources, providing a more nuanced perspective.
To implement this approach, we utilize 20 clusters as determined to be optimal for our analysis. For each cluster, a representative trend is established, and two thresholds are identified to demarcate crucial ranges. Figure 4 illustrates six exemplar clusters, where ranges above the red line signify instances of high load and low renewable generation. Leveraging V2G for EVs as a virtual power plant during these periods can effectively alleviate stress on the power grid, promoting a more environmentally friendly power generation by optimizing the utilization of renewable energy sources. This approach thus enables targeted recommendations for V2G implementation during specific load and renewable generation scenarios within each identified cluster.
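The net-load variant of the same procedure reduces to a simple subtraction before the thresholds are applied. In the sketch below, the hourly profiles and the quantile-based thresholds are placeholders standing in for a cluster's representative trend and its k-means-derived thresholds.

```python
import numpy as np

# Placeholder hourly profiles (MW) for one representative cluster; in the study these
# come from the 2022 ERCOT total load and renewable generation data.
total_load = 1e3 * np.array([48, 45, 43, 42, 42, 44, 50, 56, 60, 62, 63, 64,
                             66, 68, 70, 72, 74, 73, 70, 66, 62, 58, 54, 50], float)
renewables = 1e3 * np.array([20, 21, 22, 22, 21, 19, 16, 14, 13, 14, 16, 18,
                             20, 21, 21, 19, 16, 13, 11, 10, 12, 15, 18, 20], float)

net_load = total_load - renewables   # demand carried by non-renewable sources

# Illustrative thresholds; in the paper they come from k-means on each cluster's trend.
green_line, red_line = np.quantile(net_load, 0.25), np.quantile(net_load, 0.75)

charging_window = np.where(net_load < green_line)[0]   # high renewables / low residual demand
v2g_window = np.where(net_load > red_line)[0]          # stressed hours: discharge EVs to the grid
print("charge during hours:", charging_window, "| V2G during hours:", v2g_window)
```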
## V Conclusions and Future Work
Our exploration into the connection between EVs, smart grids, and renewable energy in Texas illuminates key pathways for sustainable energy transitions. This study reveals
the intricate relationship between the charging patterns of electric vehicles and the dynamics of the grid. It emphasizes the significance of strategic interventions to optimize energy utilization. Leveraging DTW clustering and k-means methodologies, we not only categorized daily energy consumption and renewable generation but also identified tailored windows for EV charging and V2G interactions. This understanding contributes actionable insights for users and grid operators alike, paving the way for a resilient, environmentally conscious energy future. Moving forward, collaborative efforts and policy measures must align to harness the full potential of EVs within smart grids, emphasizing predictive scheduling, VPPs, and a collective commitment to sustainable energy practices. This study guides the integration of EVs into smart grids, leading us toward a future where clean energy and grid stability are connected.
The work described is being developed and tested by Grid Fruit, LLC. Grid Fruit is a private software company that puts unused energy and operational data to work for the grid and its customers [30].
|
2303.10601 | Transfer learning method in the problem of binary classification of
chest X-rays | The possibility of high-precision and rapid detection of pathologies on chest
X-rays makes it possible to detect the development of pneumonia at an early
stage and begin immediate treatment. Artificial intelligence can speed up and
qualitatively improve the procedure of X-ray analysis and give recommendations
to the doctor for additional consideration of suspicious images. The purpose of
this study is to determine the best models and implementations of the transfer
learning method in the binary classification problem in the presence of a small
amount of training data. In this article, various methods of augmentation of
the initial data and approaches to training ResNet and DenseNet models for
black-and-white X-ray images are considered, those approaches that contribute
to obtaining the highest results of the accuracy of determining cases of
pneumonia and norm at the testing stage are identified. | Kolesnikov Dmitry | 2023-03-19T08:35:47Z | http://arxiv.org/abs/2303.10601v1 | # Transfer learning method in the problem of binary classification of chest X-rays
###### Abstract
The possibility of high-precision and rapid detection of pathologies on chest X-rays makes it possible to detect the development of pneumonia at an early stage and begin immediate treatment. Artificial intelligence can speed up and qualitatively improve the procedure of X-ray analysis and give recommendations to the doctor for additional consideration of suspicious images. The purpose of this study is to determine the best models and implementations of the transfer learning method in the binary classification problem in the presence of a small amount of training data. In this article, various methods of augmentation of the initial data and approaches to training ResNet and DenseNet models for black-and-white X-ray images are considered, those approaches that contribute to obtaining the highest results of the accuracy of determining cases of pneumonia and norm at the testing stage are identified.
Transfer learning, Chest X-rays classification, Pneumonia detection, ResNet-18, DenseNet-121
## 1 Introduction
Pneumonia is one of the most common infectious diseases that affects millions of people worldwide, causing complications to the lungs and leading to significant morbidity and mortality [1]. Chest X-ray imaging is a crucial tool for diagnosing pneumonia, which can be a complex and time-consuming process. The current COVID-19 pandemic has further increased the workload of hospitals, making it even more challenging for doctors to accurately diagnose and treat pneumonia in a timely and efficient manner [2].
Interpreting chest X-ray images requires specialized medical training and experience, and the quality of the images can significantly affect the accuracy of the diagnosis [3]. The interpretation of chest X-rays is subjective and can lead to significant variability among radiologists, which can result in missed or delayed diagnoses [4]. Additionally, due to the high volume of patients with pneumonia and other lung infections, doctors may not have enough time to review all X-ray images in detail, leading to potential errors and delays in diagnosis.
To address these challenges, there is a growing interest in developing automated approaches to assist medical professionals in the interpretation of chest X-ray images. Such systems can reduce the time and workload for radiologists, improve the accuracy and consistency of diagnoses, and ultimately lead to better patient outcomes [5], [6].
Recent studies have shown promising results in developing automated approaches to classify chest X-ray images for pneumonia [7], [8]. For example, deep learning algorithms have been used to analyze X-ray images and differentiate between normal and abnormal cases, with high accuracy [9], [10]. Other studies have focused on detecting specific features associated with pneumonia, such as consolidation or infiltrates, to improve diagnostic accuracy [11], [12].
Despite the promising results in automated approaches for pneumonia diagnosis using chest X-ray images, there are challenges that need to be addressed. One major challenge is the variability in image format and quality among different medical organizations, making it difficult to develop a universal approach for image analysis and classification.
To address this challenge, researchers are exploring approaches that can be tailored to the specific features of chest X-ray images in a given medical organization [13]. This requires the development of a system that can be trained on a limited number of images from that organization, while still achieving high levels of accuracy in diagnosis [14].
In conclusion, the need for accurate and efficient diagnosis of pneumonia is critical, especially during the current pandemic and other seasonal infectious disease outbreaks. Developing automated approaches to support medical decision-making in this area is essential to improve the quality and efficiency of care. Tailoring these approaches to the specific features of images in a given medical organization is essential for achieving high levels of accuracy in diagnosis. Further research in this field is necessary to optimize these systems and ensure their effectiveness in real-world clinical settings.
## 2 Related Work
In recent years, there has been an increasing interest in the use of artificial intelligence (AI) techniques for the automatic classification of chest X-ray images. This is due to the potential for AI to provide accurate and efficient diagnoses, especially in situations where there is a shortage of radiologists or a large number of images to be analyzed.
Several approaches have been proposed for the classification of chest X-ray images, including traditional machine learning algorithms such as support vector machines (SVM) and deep learning techniques such as convolutional neural networks (CNNs). While traditional machine learning algorithms have shown some success, deep learning techniques have emerged as the state-of-the-art approach for this task [15], [14].
However, one of the challenges of deep learning techniques is the need for large amounts of data to train a model. In the case of chest X-ray images, obtaining a large dataset can be difficult due to privacy concerns and the difficulty of obtaining labeled data. Therefore, transfer learning has emerged as the best approach for classifying images in the presence of a small number of training photos [14].
Transfer learning is a deep learning technique that involves using a pre-trained model, which has already been trained on a large dataset, and fine-tuning it for a new task with a smaller dataset. The pre-trained model is usually trained on a large dataset, such as ImageNet, which contains millions of labeled images. The model is then adapted to the new task by training it on a smaller dataset, such as a dataset of chest X-ray images, while keeping the earlier learned weights fixed [16].
The advantages of transfer learning include the ability to use a pre-trained model, which has already learned important features from a large dataset, and the ability to train a model with a smaller dataset. This approach can also help to reduce the risk of overfitting, which can occur when training a model with a small dataset [17].
However, the disadvantages of transfer learning include the possibility of the pre-trained model not being suitable for the new task, and the need for a large pre-trained model, which can be computationally expensive to train. Additionally, fine-tuning a pre-trained model requires careful selection of hyperparameters and can be a time-consuming process [16].
There has been a significant amount of research conducted in the field of transfer learning for classifying the presence of pneumonia in chest X-ray images. Deep learning specialists have explored various transfer learning techniques and achieved high levels of accuracy in the classification task [5], [18].
However, the challenge of working with a small number of source images still remains. To address this challenge, a unique approach was proposed in this study that involved a special approach to image preprocessing and augmentation, as well as the implementation of multiple approaches that differ in their use of pre-trained networks with three input channels, even when the dataset provided black-and-white photos.
The proposed approach builds upon the existing research in the field of transfer learning by incorporating novel techniques to overcome the limitations of a small dataset. Specifically, the study focuses on image preprocessing and augmentation, which are critical steps in the training process for deep learning models. By utilizing these techniques, the proposed approach aims to improve the accuracy of the classification task and mitigate the risk of overfitting.
The implementation of multiple approaches that differ in their use of pre-trained networks with three input channels is another key contribution of this study. This allows for a comparison of the effectiveness of different pre-trained networks and provides insights into which networks are best suited for this task.
This will allow applied specialists to immediately train models for the dataset of a particular hospital, bypassing the stage of long-term research and selection of architectures of the transfer learning approach.
## 3 Methods
To conduct the study, a publicly available (open source) dataset of X-ray images, "Chest X-Ray Images Pneumonia", was taken from the Kaggle website. This dataset contains a train-images section, which has 1341 normal images and 3875 images with pneumonia.
### Image pre-processing and augmentation
More than 95% of the images in the dataset are black and white, but there are still, albeit rarely, three-channel RGB images. Therefore, the first stage of processing was the translation of all images into a single-channel format with averaging of channel intensities for color images.
In the validation dataset, only 6 images of each class were initially provided, which is far too small to assess the quality of the network when selecting hyperparameters. Therefore, the test and validation datasets were concatenated and then evenly divided in half, so that the test and validation subsamples each contain 320 images. The proportion between classes is approximately 60% pneumonia and 40% norm.
Since the number of images with pathology in the training dataset significantly exceeded the number of images without it, this disparity had to be reduced for better training. As the initial goal of the work was to study the best principles of network training for classifying chest X-ray images with a small number of training images, downsampling of the pneumonia class was chosen. After applying this method, the resulting dataset contains 1341 normal images and the same number of pathology images.
After that, the values of the mean and standard deviation of intensity in the training sample were determined. These values will be used when implementing input data normalization during network training and testing.
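A minimal sketch of these two preprocessing steps (channel averaging and the training-set statistics) is given below; the directory layout and file extension are assumptions, not taken from the repository.

```python
import numpy as np
from pathlib import Path
from PIL import Image

pixel_sum, pixel_sq_sum, n_pixels = 0.0, 0.0, 0

for path in Path("chest_xray/train").rglob("*.jpeg"):           # assumed folder layout
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    gray = img.mean(axis=2)                                      # average the channels -> single-channel image
    pixel_sum += gray.sum()
    pixel_sq_sum += (gray ** 2).sum()
    n_pixels += gray.size

mean = pixel_sum / n_pixels
std = np.sqrt(pixel_sq_sum / n_pixels - mean ** 2)
print(f"train mean = {mean:.4f}, std = {std:.4f}")               # plugged into transforms.Normalize later
```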
In order to increase the size of the resulting training dataset of images, the augmentation method was employed. This technique involves generating new examples of data based on the existing ones by applying various transformations [19]. Specifically, four different types of transformations were implemented in this study to augment the dataset.
These are the types of transformations that have been implemented:

1. Changing the rotation angle of the image by a random angle from 0 to 20 degrees. Next, the image is compressed to a size of 224 by 224 pixels (to match the input sizes of the ResNet and DenseNet networks). At the last stage, the resulting images are normalized.
2. Symmetrical reflection around the vertical axis. Next, the image is compressed to a size of 224 by 224 pixels (to match the input sizes of the ResNet and DenseNet networks). At the last stage, the resulting images are normalized.
3. Changing the brightness, contrast and saturation parameters. Next, the image is compressed to a size of 224 by 224 pixels (to match the input sizes of the ResNet and DenseNet networks). At the last stage, the resulting images are normalized.
4. First, the image is compressed to a size of 280 by 280, then a 224 by 224 patch is randomly cropped, then with a probability of 0.5 the image is reflected as in transformation No. 2, followed by a small random rotation from 0 to 5 degrees. At the very end, normalization is applied.

Examples of the results of the described transformations are presented in Figure 1.

Figure 1: Transformations for augmentation.
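The four pipelines can be expressed with torchvision transforms roughly as follows; the normalization statistics, the ColorJitter strengths and the use of Grayscale (which approximates the channel averaging described earlier with a luma transform) are assumptions for illustration.

```python
from torchvision import transforms

MEAN, STD = [0.48], [0.24]       # assumed single-channel training statistics
finish = transforms.Compose([transforms.ToTensor(), transforms.Normalize(MEAN, STD)])

augmentations = {
    # 1: random rotation by 0-20 degrees, resize, normalize
    "rotate": transforms.Compose([transforms.Grayscale(1), transforms.RandomRotation((0, 20)),
                                  transforms.Resize((224, 224)), finish]),
    # 2: reflection about the vertical axis, resize, normalize
    "flip":   transforms.Compose([transforms.Grayscale(1), transforms.RandomHorizontalFlip(p=1.0),
                                  transforms.Resize((224, 224)), finish]),
    # 3: brightness / contrast / saturation jitter (strengths assumed), resize, normalize
    "jitter": transforms.Compose([transforms.Grayscale(1),
                                  transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
                                  transforms.Resize((224, 224)), finish]),
    # 4: resize to 280, random 224x224 crop, flip with p=0.5, small rotation, normalize
    "crop":   transforms.Compose([transforms.Grayscale(1), transforms.Resize((280, 280)),
                                  transforms.RandomCrop((224, 224)),
                                  transforms.RandomHorizontalFlip(p=0.5),
                                  transforms.RandomRotation((0, 5)), finish]),
}
```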
Different types of transformations have been implemented since the augmentation method has been shown to be an effective way of increasing the diversity of the dataset and improving the performance of machine learning models. However, it is important to note that using too many or inappropriate types of transformations can lead to overfitting and reduced performance. Therefore, the types and number of transformations used in this study were carefully selected to strike a balance between increasing the dataset size and maintaining the model's ability to generalize.
A study was conducted in which the initial training images were combined with each type of transformation separately. The highest quality indicators on the test dataset after training a custom network were shown by the network trained on a dataset consisting of the initial training images and photos symmetrically reflected relative to the vertical (transformation No. 2). Therefore, it was decided to use this type of augmentation, which doubles the number of training images.
In total, after the preprocessing and augmentation stage, a training dataset was obtained, consisting of 5364 X-ray images, half of which are normal and the second half are pictures with pneumonia.
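This doubling can be sketched with a ConcatDataset, as shown below; the folder path and normalization values are assumptions, with "train_balanced" standing for the downsampled 1341 + 1341 images.

```python
from torch.utils.data import ConcatDataset
from torchvision import datasets, transforms

base = transforms.Compose([transforms.Grayscale(1), transforms.Resize((224, 224)),
                           transforms.ToTensor(), transforms.Normalize([0.48], [0.24])])
flipped = transforms.Compose([transforms.Grayscale(1), transforms.RandomHorizontalFlip(p=1.0),
                              transforms.Resize((224, 224)), transforms.ToTensor(),
                              transforms.Normalize([0.48], [0.24])])

train_set = ConcatDataset([datasets.ImageFolder("chest_xray/train_balanced", transform=base),
                           datasets.ImageFolder("chest_xray/train_balanced", transform=flipped)])
# len(train_set) == 2 * 2682 == 5364, matching the count quoted above.
```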
### Implementation of transfer learning and selection of hyperparameters and model's training structure
ResNet18 and DenseNet121 are both popular deep neural network architectures used in computer vision tasks such as image classification, object detection, and segmentation.
ResNet18 is a variant of the ResNet (Residual Network) architecture, introduced by Microsoft Research in 2015. ResNet18 consists of 18 layers of convolutional neural network (CNN) and has skip connections between layers, which allow the network to be trained more efficiently and to reduce the vanishing gradient problem. ResNet18 has been widely used in transfer learning, where the pre-trained model is fine-tuned on a new dataset with a small number of training images. One of the advantages of ResNet18 is that it can achieve high accuracy with a relatively small number of parameters, making it suitable for deployment on devices with limited computational resources [20].
DenseNet121, on the other hand, is a more recent architecture introduced by Huang et al. in 2016. The name "DenseNet" comes from the fact that the network is densely connected, meaning that each layer receives input not only from its previous layer but also from all previous layers in the network. This allows the network to learn more complex representations and achieve better performance with fewer parameters. DenseNet121 consists of 121 layers of CNN and has been shown to outperform other state-of-the-art architectures on various computer vision tasks [21].
Both of these convolutional neural network architectures were considered in this study on the classification of chest X-Ray Images.
The images that were used to train the networks were black and white, so it became necessary to come up with a concept for working with them, since the original architectures of the DenseNet and ResNet networks imply three-channel images with a size of 224 by 224 pixels at the input.
So 2 different approaches to working with source images were considered:
* Leave the images single-channel and redefine the first convolutional layer, making the value of the number of input channels equal to 1.
* Make the pictures three-channel by copying the value of one original black-and-white channel. That is, the artificial creation of two other channels.
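Both options can be sketched in PyTorch as follows. The batch of random tensors stands in for preprocessed X-ray images, and note that replacing conv1 discards its pretrained weights, so that layer has to be retrained (as in experiment II below).

```python
import torch
from torch import nn
from torchvision import models

weights = models.ResNet18_Weights.IMAGENET1K_V1

# Option 1: accept single-channel input by redefining the first convolution.
model_1ch = models.resnet18(weights=weights)
model_1ch.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Option 2: keep the pretrained stem and copy the grayscale channel three times.
model_3ch = models.resnet18(weights=weights)
x_gray = torch.randn(8, 1, 224, 224)        # a batch of single-channel images
x_rgb_like = x_gray.repeat(1, 3, 1, 1)      # (8, 3, 224, 224)

print(model_1ch(x_gray).shape, model_3ch(x_rgb_like).shape)   # both (8, 1000) before the head is replaced
```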
To determine the best network training strategy, 3 experiments were conducted separately for the ResNet and DenseNet networks. At the same time, the pre-trained weights of neurons were initially uploaded to the network.
* Experiment I: Fixed all convolutional layers completely and trained only the fully connected layers. At the same time, I changed the architecture of the fully connected layer \(fc\,_{new}\), using one hidden layer whose number of neurons serves as a hyperparameter. The best value of the hyperparameter \(n\,_{neurons}\) was determined based on accuracy results on the validation sample. Since the photos are originally black and white, when creating the training datasets I duplicated the channel values so that the number of input channels of the convolutional network was 3, as in the original network (3, 224, 224).
* Experiment II: Fixed all convolutional layers except the first input layer, in which the number of input channels was set to 1 instead of 3, and trained the fully connected layers. At the same time, I changed the architecture of the fully connected layer \(fc\,_{new}\), using one hidden layer whose number of neurons serves as a hyperparameter. The best value of the hyperparameter \(n\,_{neurons}\) was determined based on accuracy results on the validation sample. Since the number of channels at the network input was changed, the network was trained on ordinary black-and-white images of size (1, 224, 224).
* Experiment III: Fully trained the network without changing anything in it (except for the number of output neurons in the fully connected part). Since the photos are originally black and white, when creating the training datasets I duplicated the channel values so that the number of input channels of the convolutional network was 3, as in the original network (3, 224, 224).
The layers available for training, that is, in which weights can change due to gradient descents, are represented in pink on Figure 2.
Figure 2: The transfer learning training strategies.
In experiments I and II, fully connected layers were modified with this architecture:
\[fc_{\mathit{new}}=Sequential(Dropout(p=0.2),\;Linear(n_{\mathit{input}},\,n_{\mathit{neurons}}),\;ReLU(),\;Linear(n_{\mathit{neurons}},\,2))\]
Thus, there is an intermediate layer whose number of neurons takes the values 10, 100 and 500. In the course of these experiments, the best value of this hyperparameter was selected for each of the ResNet-18 and DenseNet-121 networks.
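A minimal PyTorch sketch of this head replacement and of the layer freezing used in experiment I is shown below; ResNet-18 is used only for concreteness, and for DenseNet-121 the same head would replace model.classifier instead of model.fc.

```python
from torch import nn
from torchvision import models

n_neurons = 100                                  # hyperparameter searched over {10, 100, 500}
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Experiment I: freeze every pretrained layer ...
for p in model.parameters():
    p.requires_grad = False

# ... and attach the new head fc_new from the formula above (its parameters are trainable by default).
model.fc = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(model.fc.in_features, n_neurons),
    nn.ReLU(),
    nn.Linear(n_neurons, 2),
)
```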
I also conducted an additional experiment in which the networks were trained without transfer learning, i.e. without pre-trained weights, in order to find out what changes under full training without prior knowledge.
The mini-batch size was chosen to be 30 images, so one training epoch over 5364 images corresponds to performing 179 gradient descents. During training, a learning-rate scheduler was used that reduced the learning rate by a factor of 10 after every 5 epochs, thereby increasing the chances of reaching the global minimum of the loss function, which was the cross entropy.
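The training configuration can be sketched as follows. The optimizer choice, learning rate, epoch count and the dummy model/dataset are assumptions added so the snippet runs on its own, while the batch size of 30, the StepLR schedule and the cross-entropy loss follow the description above.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Stand-ins so the sketch is self-contained; in practice these are the frozen network
# with the fc_new head and the 5364-image augmented dataset built in the previous steps.
model = models.resnet18(num_classes=2)
train_set = TensorDataset(torch.randn(60, 3, 224, 224), torch.randint(0, 2, (60,)))

device = "cuda" if torch.cuda.is_available() else "cpu"
loader = DataLoader(train_set, batch_size=30, shuffle=True)    # 5364 / 30 -> 179 steps per epoch in the paper

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)  # optimizer and lr are assumptions
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)             # lr / 10 every 5 epochs

model.to(device)
for epoch in range(15):                                        # epoch count is an assumption
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```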
## 4 Results
The solution of the project described in this article is presented on the open source platform GitHub in the public repository at the link:
[https://github.com/Koldim2001/transfer_learning_CNN](https://github.com/Koldim2001/transfer_learning_CNN)
As a result, 3 networks using ResNet-18 (one for each experiment) and 3 networks using DenseNet-121 were implemented and trained using the transfer learning method. For experiments I and II, the dependence of the accuracy metric on validation and training was obtained for each of the three hyperparameter values.
For each experiment, the value of the hyperparameter and the number of the training epoch were found at which the highest accuracy value on validation was obtained. Then, a common table was formed for the best networks of each experiment with the resulting metric values for each class (1 - pneumonia, 0 - norm). The results can be seen in Figure 3.
Also, the results of ResNet-18 and DenseNet-121 network training have been added to the table with full training without pre-loaded weights, that is, without the use of transfer learning (no TL).
Graphs of the dependence of the accuracy metric on the number of training epochs for validation and training were also obtained. In this way, 2 graphs were obtained for each of the ResNet-18 and DenseNet-121 networks. These graphs are presented in Figures 4, 5, 6 and 7 and are analyzed in detail in the "Discussion" section of the article.
Figure 3: Metrics of different models.
## 5 Discussion
### ResNet-18
As a result of the study, it was revealed that the best accuracy results on the test were achieved when using a model with a modified first convolutional layer (II experiment) and a fully connected part with the number of neurons of the intermediate layer equal to 100 (90% accuracy).
According to Figure 5, in the first experiment (I), for any value of the hyperparameter \(n_{neurons}\), the accuracy behaves more stably during validation and has low variability relative to the trend during training (while the accuracy values themselves are consistently higher than in all other experiments). For this reason, it is possible to say that training with all convolutional layers fixed has shown better results than all the other considered approaches to ResNet training for this binary classification.
Thus, the high test result in the second experiment may be caused, among other things, by coincidence (the test sample is too small to consider a 2% difference in accuracy statistically significant).
Figure 4: Accuracy on the training dataset for various types of ResNet training.
Figure 5: Validation accuracy in various types of ResNet training.
It was also found that the use of pre-trained neuron weights significantly accelerates learning. In an additional experiment, I trained the ResNet-18 network without pre-trained weights and obtained slower convergence on the training dataset (Figure 4) as well as lower accuracy on validation (Figure 5). This confirms the usefulness of transfer learning technology - the application of knowledge gained in solving one problem to solve a similar one.
In addition, it was noted that by the 6th epoch of training, the validation quality for most models reached a plateau and stopped jumping due to random coincidences of answers. However, as the number of neurons in the hidden layer of the fully connected network increases, overfitting sets in faster, so the optimal number of training epochs depends directly on this hyperparameter.
When creating a medical decision support system, important metrics for assessing the quality of the model are the f1 score and recall. It is important not to miss patients with pneumonia, i.e. to correctly identify all sick patients as sick. For the best models of each of the experiments, the values of f1, recall and precision for class 1 (presence of pneumonia) were determined.
When analyzing the results of the experiment, it can be seen that the model from the second experiment has the highest f1 score = 0.93 and recall = 0.98, while the other 2 transfer learning models have f1 score = 0.90 (recall of 0.88 for the first and 0.98 for the second). Since the best model for each experiment was chosen based on the highest validation accuracy during training, and the validation sample was dominated by class 1 photos, high recall values are expected, so it is more appropriate to evaluate the models by f1 score.
When training without pre-trained network weights, we obtained the lowest values of the f1 metric (0.86), which once again shows the advantage of transfer learning in this classification problem with a small train dataset size.
### DenseNet-121
As a result of the study, it was revealed that the best accuracy results on the test were achieved when using a pre-trained model with only a fully connected layer modified (experiment I) with the number of intermediate layers equal to 10 (92% accuracy).
According to Figure 7, in the first experiment, for any value of the hyperparameter \(n_{neurons}\), the accuracy behaves more stably during validation and has low variability relative to the trend during training (while the accuracy values themselves are consistently higher than those of all other experiments). For this reason, we can say that training with all convolutional layers fixed showed better results than all the other DenseNet training approaches considered for this binary classification.
It was also found that the learning speed in the second experiment (II), for any hyperparameter values, is significantly lower than in the first and third experiments (Figure 6). The network from experiment II therefore needs to be trained for at least 6 epochs in order to achieve consistently high results on validation. Up to 6 epochs, the accuracy value jumps randomly between 70% and 90%, which does not say anything good about the quality of the model.
In addition, it was noted that with full training there is very rapid overfitting, which is why the accuracy on the training set reaches approximately 100% by the 6th epoch, while the results on validation are only beginning to decrease. For this reason, training the entire network is not the right solution.
As was said earlier, when creating a medical decision support system, important parameters in assessing the quality of the model are the f1 score and recall for the pneumonia class. When analyzing the results of the experiments, it can be seen that the model from the first experiment has the highest f1 score = 0.93, but at the same time its recall is only 0.93, while all other models have f1 no higher than 0.9 but higher recall values for the pneumonia class. Since the best model for each experiment was chosen by the highest validation accuracy during training, and the validation sample was dominated by class 1 photos, high recall values are expected, so it is more appropriate to compare the models by f1 score.
## 6 Conclusion
Summing up, the application of the transfer learning method when training convolutional neural networks in the chest X-ray image classification task showed high accuracy values on the test.
As a result of the research carried out, it can be said that the DenseNet network showed better results than ResNet, namely when trained with a frozen convolutional part. In order for black-and-white images to be run through such a network, it is first necessary to duplicate the values of the single channel so that the number of channels of the input layer equals three (as the pre-trained network requires).
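The two steps just described - duplicating the grayscale channel and training only a small fully connected head on top of a frozen pre-trained backbone - can be sketched as follows. This is a minimal illustration assuming a recent PyTorch/torchvision setup (not the paper's actual code), and the head size \(n_{neurons}=10\) is the hyperparameter discussed above rather than a prescribed configuration.

```python
import torch.nn as nn
from torchvision import models, transforms

# grayscale X-rays -> 3 identical channels, as the ImageNet-pretrained network expects
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # duplicate the single channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.densenet121(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the convolutional part (experiment I)
    p.requires_grad = False

n_neurons = 10                        # hidden-layer size studied in the paper
model.classifier = nn.Sequential(     # only this head is trained
    nn.Linear(model.classifier.in_features, n_neurons),
    nn.ReLU(),
    nn.Linear(n_neurons, 2),
)
```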
At the same time, for any value of the hyperparameter (the number of neurons in the intermediate layer), such a network achieves fairly high results, with an accuracy of more than 87%. In the current experiment, conducted with this set of training data, the best choice was a network with 10 neurons in the fully connected layer. This is due to the fact that such a network has a small number of trainable weights and is less susceptible to overfitting. Such a network learns fairly quickly and does not require more than 5 training epochs (179 gradient descent steps per epoch) to obtain consistently high test results.
|
2302.07054 | Compression and information entropy of binary strings from the collision
history of three hard balls | We investigate how to measure and define the entropy of a simple chaotic
system, three hard spheres on a ring. A novel approach is presented, which does
not assume the ergodic hypothesis. It consists of transforming the particles
collision history into a sequence of binary digits. We then investigate three
approaches which should demonstrate the non-randomness of these
collision-generated strings compared with random number generator created
strings: Shannon entropy, diehard randomness tests and compression percentage.
We show that the Shannon information entropy is unable to distinguish random
from deterministic strings. The Diehard test performs better, but for certain
mass-ratios the collision-generated strings are misidentified as random with
high confidence. The zlib and bz2 compression algorithms are efficient at
detecting non-randomness and low information content, with compression
efficiencies that tend to 100% in the limit of infinite strings. Thus
compression algorithm entropy is non-extensive for this chaotic system, in
marked contrast to the extensive entropy determined from phase-space integrals
by assuming ergodicity. | Matej Vedak, Graeme J Ackland | 2023-02-14T14:16:43Z | http://arxiv.org/abs/2302.07054v2 | Compression and information entropy of binary strings from the collision history of three hard balls
###### Abstract
We investigate how to measure and define the entropy of a simple chaotic system, three hard spheres on a ring. A novel approach is presented, which does not assume the ergodic hypothesis. It consists of transforming the particles' collision history into a sequence of binary digits. We then investigate three approaches which should demonstrate the non-randomness of these collision-generated strings compared with random number generator created strings: Shannon entropy, diehard randomness tests and compression percentage. We show that the Shannon information entropy is unable to distinguish random from deterministic strings. The Diehard test performs better, but for certain mass-ratios the collision-generated strings are misidentified as random with high confidence. The zlib and bz2 compression algorithms are efficient at detecting non-randomness and low information content, with compression efficiencies that tend to 100% in the limit of infinite strings. Thus "compression algorithm entropy" is non-extensive for this chaotic system, in marked contrast to the extensive entropy determined from phase-space integrals by assuming ergodicity.
Footnote †: _Journal of Physics Communications_
## 1 Introduction
The validity of statistical physics is determined by its central assumption: a physical system must sample a representative area of its theoretical phase space. This leads to the question of the _ergodic hypothesis_: Points in a moving ergodic system will eventually visit all parts of the phase space that the system moves in. With respect to physical systems, ergodicity is the
assumption which means that thermodynamics properties can be calculated from phase-space integrals. Ergodicity implies that the average behaviour of the system can be deduced from the time-averaged trajectory of a 'typical' point. It is further assumed that sampling over a long time series is equivalent to sampling over a large phase space. Equivalently, ergodicity means that taking a sufficient amount of random samples of the process can represent the average statistical properties of the whole process. Many of the predictions of ergodic theory are proven for hard sphere systems, however, strict ergodicity is a sufficient _but not essential_ condition for this: it is not clear that a deterministic system will actually visit all available microstates [1][2].
Entropy is typically described in terms of the information content of a distribution, and can be measured in terms of either probabilities, or by counting the number of microstates in a macrostate. For a system exhibiting deterministic chaos, the true information content is low - everything is fully determined by the initial conditions. By contrast, the ergodic hypothesis is that the system will fully explore its phase space, so measures based on either probability or counting microstates suggest high entropy.
This is why explorations of ergodicity in relatively simple systems became an important area of research. It is of interest to find systems that are complex enough to exhibit non-trivial ergodicity but simple enough so we can actively examine the phase space in which they move. Many such systems have been explored, and of particular importance are hard-ball systems because of their simplicity (for example, [3]).
Another related concept is "stochastic simulation" - in a computer simulation this can be defined as any trajectory which involves random numbers, e.g. Langevin dynamics. This is distinct from deterministic chaos, for which a simulation maps to Newtonian mechanics and does not use a random number generator. It is possible to use chaotic or stochastic dynamics to explore the same phase space[4].
We investigate one of the simplest systems exhibiting deterministic chaos: three pointlike particles that move on a 1D periodic ring. This specific system was first investigated by one of us 20 years ago [5] as an example of the more general issue of how ergodicity, chaos and integrability allow statistical mechanic models to represent thermodynamic problems[6, 7]. We have previously shown [5, 4] that there are periodic trajectories in this system, such that some regions of the phase space cannot be accessed. The phase space of this system is six dimensional, with axes consisting of three positions and three momenta of the particles. Conservation of energy and momentum reduce the accessible region to a four-dimensional hypersurface within this six-dimensional phase space. Particles continuously explore position space, while coordinates in momentum space remain constant until a collision occurs, after which they take on new constant values. Labelling the initial momenta with \(p_{1}\) and \(p_{2}\) and the subsequent post collision momenta with \(q_{1}\) and \(q_{2}\) of particles with masses \(m_{1}\) and
\(m_{2}\), the update formulae are:
\[q_{1}=\frac{m_{1}-m_{2}}{m_{1}+m_{2}}p_{1}+\frac{2m_{1}}{m_{1}+m_{2 }}p_{2}, \tag{1}\] \[q_{2}=\frac{2m_{2}}{m_{1}+m_{2}}p_{1}+\frac{m_{2}-m_{1}}{m_{1}+m_ {2}}p_{2}. \tag{2}\]
The outcome of every collision can be represented by the operation of a matrix on a vector of momenta \((p_{1},p_{2},p_{3})\), a state vector in the phase space. Any sequence of collisions can be mapped onto a product of such matrices [5].
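For illustration, the update (1)-(2) can be packaged as a \(2\times 2\) matrix acting on the momenta of the colliding pair; the short Python sketch below (ours, purely illustrative, not the simulation code of the paper) applies one such collision to a NumPy momentum vector.

```python
import numpy as np

def collision_matrix(m1, m2):
    """2x2 matrix mapping pre-collision momenta (p1, p2) to
    post-collision momenta (q1, q2), Eqs. (1)-(2)."""
    M = m1 + m2
    return np.array([[(m1 - m2) / M, 2 * m1 / M],
                     [2 * m2 / M,    (m2 - m1) / M]])

def collide(p, i, j, masses):
    """Elastic collision between particles i and j applied to the
    momentum vector p (length 3); returns an updated copy."""
    q = p.copy()
    q[[i, j]] = collision_matrix(masses[i], masses[j]) @ p[[i, j]]
    return q
```

Momentum is conserved by construction, since the columns of the matrix sum to one.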
The spatial probability distribution was found to be triangular for each of the three particles, going to zero where a triple collision occurs (Fig. 1). The analytical derivation of this result can be found in [8]. Momentum space distributions assume a vastly different shape. As outlined in [5], the shape of the three-particle momentum distributions is an arcsine distribution:
\[P(p_{i})\propto(p_{i,max}^{2}-p_{i}^{2})^{-1/2}. \tag{3}\]
This distribution is the projection of an ellipse onto one axis in the 3D momentum phase space (the ellipse being the region that satisfies the conservation of energy and momentum). The mean energy of each particle can be calculated [4]
\[E_{i}=\int\frac{p_{i}^{2}}{2m_{i}}P(p_{i})dp_{i}=\frac{M-m_{i}}{2M}E_{tot}. \tag{4}\]
It is proportional to the total mass of the other two particles. Departure from the usual equipartition result is due to the fact that the system is in the 'molecular-dynamics' ensemble of statistical mechanics, where energy, volume, and the number of particles are conserved, and the total momentum is zero [9]. A generalized version for any number \(N\) of particles can be found [5]:
\[\langle E_{i}\rangle=\frac{\sum_{j}m_{j}-m_{i}}{\sum_{j}m_{j}}\frac{E}{N-1}.\]
which satisfies equipartition in the high-N limit. Distributions of particle positions converge quickly to Gaussianity as the number of particles is increased. Similar results with non-trivial patterns are found in stochastic models in higher dimension[10, 11].
This system has been shown to be equivalent to a billiard problem in a triangular stadium [12]. With the triangular system in mind, it is easy to see that there are periodic orbits (example of a right triangle where the billiard bounces off the two non-hypotenuse sides at a right angle). Since one cannot escape the periodic orbit, and the dynamics are time-reversible, one cannot enter it either. In this way, the microstates of a periodic orbit remain inaccessible to the general case, which violates strict ergodicity. Explorations of whether non-ergodicity invalidates statistical mechanics have already been made [4], where it was shown that this chaotic system samples the phase space efficiently, such that thermodynamic
integrals converge to the same value as if it were ergodic. This paper focuses on information and entropy. We generate binary strings based on particle collisions and compare those strings to completely randomly generated ones. The more random the collision-generated strings turn out to be, the stronger the case for the ergodic hypothesis, and the better the system explores its phase space. To the best of our knowledge, this is a novel approach that has not been explored yet.
## 2 Methods
Start with three particles. Sorting by their starting positions, we index them 0, 1, 2, meaning that at the onset of the simulation \(x_{0}<x_{1}<x_{2}\) (\(x_{i}\in[0,1)\)). Collisions are simulated; suppose that the following collisions happen: particles 1 and 2 collide, particles 0 and 1 collide, particles 0 and 2 collide, particles 0 and 1 collide. In a simple notation, each collision is represented by the particle which does not collide, so this sequence is written as 0212. However, after two particles collide there are only two options for the next collision, so we investigated two encoding methods (EM1 and EM2) to convert this ternary string into a binary string.
1. EM1 After a collision, either the 'left' particle collides next with the third one, or the 'right' particle does. The trajectory is then mapped to a binary string in the following way: A 'left' particle collision is encoded by a 0, collision of a 'right' particle is encoded as 1. Using the above collisions as an example, we will explicitly write out the generated string. The first collision is between particles 1 and 2, and subsequently, particle 1 (on the 'left' of the collision) collides with the third particle, particle 0. At that point, the string we are generating will look like: \[s=0.\] The subsequent collision is between particles 2 and 0; again, 0 was the particle on the 'left' of the previous collision so the string is updated accordingly: \[s=00.\] The final collision in the example is \((0,1)\), where 0 was the particle on the 'right' in the previous collision, so a 1 is appended to the string: \[s=001.\]
2. EM2: Each entry in the ternary string is replaced by 1 if both adjacent entries are the same, or by 0 if they are different. So e.g. 0212 becomes .01., with the dot denoting entries which would require an extended string to generate. Both encodings are sketched in code below.
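A minimal Python sketch of the two encodings follows. It assumes the collision history is supplied as the ternary string of spectator indices, and it hard-codes the ring convention inferred from the worked example above (for the pair (0,2), which collides across the periodic boundary, particle 2 is taken as the 'left' member).

```python
# spectator index -> (left, right) members of the colliding pair on the ring
PAIR_LR = {0: (1, 2), 2: (0, 1), 1: (2, 0)}

def em1(ternary):
    """EM1: 0 if the 'left' member of a collision is the one that collides next."""
    bits = []
    for a, b in zip(ternary[:-1], ternary[1:]):
        common = 3 - int(a) - int(b)          # particle involved in both collisions
        bits.append('0' if common == PAIR_LR[int(a)][0] else '1')
    return ''.join(bits)

def em2(ternary):
    """EM2: 1 if the two neighbours of an entry agree, 0 otherwise (endpoints dropped)."""
    return ''.join('1' if ternary[i - 1] == ternary[i + 1] else '0'
                   for i in range(1, len(ternary) - 1))

assert em1('0212') == '001'
assert em2('0212') == '01'
```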
These encodings produce different binary strings representing the same collision sequence. They are referred to as collision generated strings, as opposed to random strings, created using random
number generators (RNG, we used python's numpy.random module, so they are technically pseudo-random numbers).
Perhaps the most obvious first test to conduct is the inspection of entropy of the strings at hand. The information theory approach was used, and Shannon entropy was utilized:
\[H(X)=-\sum_{i=1}^{2}P(x_{i})\log_{2}(P(x_{i})).\]
In the binary string case, X is the binary string, \(P(x_{1})\) corresponds to the total proportion of 0s in the string, while \(P(x_{2})\) corresponds to the total proportion of 1s in the string. As shown in figure 3, the system maximises its Shannon entropy by equalising the number of "left" and "right" collisions. Thus Shannon entropy is unable to distinguish between low-information deterministic trajectories, and high-information random trajectories.
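A minimal Python sketch of this quantity, and of why it cannot see ordering, is:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy, in bits, of the symbol frequencies of string s."""
    counts, n = Counter(s), len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Any balanced binary string has entropy 1 bit, however its digits are ordered:
print(shannon_entropy("01" * 500))             # 1.0
print(shannon_entropy("0" * 500 + "1" * 500))  # also 1.0
```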
Furthermore, there are many other ways of testing randomness. In this context, we are interested in statistical randomness. A sequence of characters is statistically random if it does not contain any recognizable patterns or regularities. It is important to note that statistical randomness does not necessarily imply true randomness or objective unpredictability. This is due to the fact that any inspected and tested sequence must be finite. Any tests conducted on such a finite sequence can only prove 'local' randomness, even though we are interested in true, 'global' randomness.
The first randomness tests were published by M. G. Kendall and Bernard Babington Smith [13] and originally consisted of four tests:
* The frequency test checked whether the number of 0s, 1s, 2s, and other digits was roughly the same.
* The serial test did the same thing as the frequency test, but for sequences of two digits at a time, comparing the observed frequencies with the hypothetical predictions.
* The poker test tested for specific sequences of five numbers at a time based on hands in the game of poker.
* The gap test looked at the distance between zeroes (00 is a distance of 0, for example).
Since then, randomness tests have increased in complexity and have improved. Perhaps the most famous and popular are the so-called diehard tests [14], a battery of statistical tests that measure the quality of a random number generator [15].
Another way of testing how random a sequence of characters is, is to apply a compression algorithm and see how much the data is compressed. Furthermore, comparing the compression percentage of collision generated strings with that of actual RNG strings provides a meaningful comparison of randomness. This works because the compression algorithms
themselves try to exploit repetitiveness and correlations of the given data, and therefore they will create better compression on non-random data sequences.
Here, we used one diehard test - the Wald-Wolfowitz runs test [16]. It can be used to test the hypothesis that the elements of the sequence are mutually independent. It uses runs - a run is defined as a series of increasing values or a series of decreasing values. For example, the sequence "0010" consists of two runs - there is a single change from "0" to "1" and a single change from "1" to "0". The number of increasing or decreasing values is the length of the run. In a random data set, the probability that the next value is smaller or larger follows a binomial distribution, a fact that forms the basis of the runs test. The test's null hypothesis is that the sequence was produced in a random manner. Define the following values: \(n_{1}\) is the number of positive (increasing) values and \(n_{2}\) is the number of negative (decreasing) values. The expected number of runs is then given by
\[\overline{R}=\frac{2n_{1}n_{2}}{n_{1}+n_{2}}+1,\]
the standard deviation by
\[s_{R}^{2}=\frac{2n_{1}n_{2}(2n_{1}n_{2}-n_{1}-n_{2})}{(n_{1}+n_{2})^{2}(n_{1} +n_{2}-1)}.\]
The test statistic is
\[Z=\frac{R-\overline{R}}{s_{R}}\]
where R is the observed number of runs. The runs test rejects the null hypothesis if \(|Z|>Z_{1-\alpha/2}\) where \(\alpha\) is the desired significance level. For example, if we desire a 5% significance level, the test statistic with \(|Z|\) greater than 1.96 indicates non-randomness.
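A direct implementation of this statistic for a binary string is sketched below (our own illustration), taking \(n_{1}\) and \(n_{2}\) to be the numbers of 0s and 1s and \(R\) the observed number of maximal runs of identical digits.

```python
import math

def runs_test_z(bits):
    """Wald-Wolfowitz runs test statistic Z for a binary string of '0'/'1'."""
    n1, n2 = bits.count("0"), bits.count("1")
    r = 1 + sum(bits[i] != bits[i - 1] for i in range(1, len(bits)))  # observed runs
    r_bar = 2 * n1 * n2 / (n1 + n2) + 1
    s2 = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
          / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (r - r_bar) / math.sqrt(s2)

# |Z| > 1.96 rejects the null hypothesis of randomness at the 5% level.
```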
Finally, we use compression algorithms to detect information content: a truly random string is incompressible. The algorithms we used are zlib [17], bzip2 [18] and a custom run-length encoding algorithm. Zlib and bzip2 are complex algorithms and we will not go into detail about their inner workings. The custom algorithm simply replaces characters that occur in a series with the number of occurrences followed by the character itself. For example, the string \(aabaaabb\) is compressed to \(2a1b3a2b\).
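The three compression measures can be reproduced with a few lines of Python; the run-length encoder below matches the example above, and the compression percentage is simply the relative size reduction. The toy inputs are illustrative only, not the collision strings from which the quoted results were obtained.

```python
import bz2
import random
import zlib
from itertools import groupby

def rle(s):
    """Custom run-length encoding: 'aabaaabb' -> '2a1b3a2b'."""
    return "".join(f"{len(list(g))}{ch}" for ch, g in groupby(s))

def compression_percentage(data, compressor):
    """Relative size reduction in percent; ~0% means essentially incompressible."""
    return 100 * (1 - len(compressor(data)) / len(data))

random_bits = "".join(random.choice("01") for _ in range(100_000)).encode()
patterned = ("0010" * 25_000).encode()
for name, data in [("random", random_bits), ("patterned", patterned)]:
    print(name,
          round(compression_percentage(data, zlib.compress), 1),
          round(compression_percentage(data, bz2.compress), 1))
```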
## 3 Results & Discussion
We start with the verification of the previously mentioned results for a system of three particles on a ring. Example position and velocity distributions can be seen in figure 1. The triangular distribution in position space and the arcsine distribution in velocity space are nicely depicted.
It is interesting to note that, following equation 4, the smaller the mass of a particle, the wider its velocity distribution will be, but its momentum distribution will be less spread out. In figure 2, we can see that the particle with the smallest mass has the narrowest momentum distribution but the widest velocity distribution. Due to momentum conservation, lighter particles must acquire higher velocities to offset the momentum of more massive particles.
Moving on to string generation, Figures 3 and 4 show the information entropy and compression percentages of strings of differing lengths.
Figure 1: Position and velocity probability distributions for three particles on a ring. Random initial conditions. Position probability distributions follow the triangular distribution and each particle is confined to its region, restricted by points where a triple collision occurs. Velocity probability distributions are of the form \(P(v_{i})\propto\frac{1}{\sqrt{v_{i,max}^{2}-v_{i}^{2}}}\) where \(v_{i,max}\) is determined by the masses of the particles and the total energy of the system.
Figure 2: Sample momentum and velocity distribution shown on the same plot. Typical distribution results can be observed - small mass particles have narrow momentum distributions and broad velocity distributions, high mass particles have wide momentum distributions and narrow velocity distributions.
A clear difference between the two types of strings is immediately observed. The Shannon entropy of random strings quickly takes on the value of 1 bit of entropy (\(H(x)=\ln 2\)), while the collision generated strings only converge at longer lengths, to 1 for EM1 and to \(<1\) for EM2.
A similar pattern is seen in the custom run-length encoding algorithm, where collision strings are compressed to \(\sim 70\%\) (EM1) but only \(\sim 93\%\) (EM2) while random strings see no compression at all. However, in this case string length has no substantial effect on the compression percentages.
The zlib and bz2 algorithms exploit these long runs of identical digits, among other more complex methods, to achieve compression. They compress collision generated strings substantially better than random strings, and become increasingly effective for large strings. Notably, they are equally good at compressing EM1 and EM2 strings.
These results are not surprising, as it is evident that for some mass configurations of the system there are periodic orbits. What is interesting is the fact that a 'general' system is much less 'random' than an actual RNG string. Perhaps certain combinations of masses produce strings that appear more random. To that end, we utilize the Wald-Wolfowitz runs test on strings generated by EM1 for a variety of mass ratios. In Figure 5 we can see that
Figure 3: Performance of the chosen algorithms on both collision generated strings from EM1 (blue) and random generated (orange) strings. Top left is the Shannon entropy comparison, top right is the zlib algorithm compression percentage, bottom left is the bz2 algorithm compression percentage, and bottom right is the custom compression algorithm compression percentage (example compression: \(aabccc\to 2a1b3c\)). Results are averaged over 40 runs of randomized initial starting conditions for positions, momenta and masses
the Runs test Z score is very sensitive to mass ratio. In most cases Z is quite high and collision strings would not be accepted as random binary strings. However, there are a few regions of low Z scores where the collision string would be assessed as coming from a random distribution.
This region of low Z scores was further investigated (Figure 6), and mass ratios that produce a 95% confidence of randomness were found. There are slight variations in the final results due to the inherent chaotic properties of the system. Different initial starting conditions (positions, velocities) will give out similar, but slightly different results. However there is a clear structure to the region. To emphasize this, a cut was made along the ratio \(m_{2}/m_{3}=0.87\). The minimum average Z score was found to be 2.08, which is slightly above the 5% confidence value of 1.96.
It is interesting to compare these randomness test results with the results of compression algorithms, Figure 7. The custom compression algorithm produces the same results as the Z score of the runs test does. A compression percentage of 100% is obtained in a similar area of the mass ratios. However, the bz2 algorithm does not conform to this result. The compression percentage is lower than the one found in Figure 3 for random strings. It does not increase anywhere in the mass ratio space and only gets smaller in certain areas. The bz2 algorithm probably exploits correlations within the string generation mechanism, correlations which the runs test cannot detect. What is interesting is that the areas in which
Figure 4: As Figure 3, with EM2 collision generated strings.
the custom algorithm performs relatively better are the areas in which the bz2 algorithm performs relatively worse, and vice versa.
While none of this is definite proof of the ergodicity of the system it does tell us which mass ratios are relatively better at exploring the phase space of the whole system.
Figure 5: Results of the Wald-Wolfowitz runs test on collision generated strings with various mass ratios. For each combination of mass ratios, ten strings were produced from randomized initial positions and velocities, the results were averaged and shown in the figure. On the left, we see that most of the input space produces relatively high Z scores. There are a few valleys of low Z scores. Variance in the Z score is reasonably low, barring a few areas with close to two equal masses, enforcing the belief that mass ratios rather than initial velocities ‘control’ the randomness of the generated strings. The mass ratios were varied for a total of 20 equally spaced datapoints, ranging from 0.01 to 1. For each mass ratio combination, ten \(100,000\)-long strings were created, and their average compression ratios were calculated.
Figure 6: Z scores through an \(m_{2}/m_{3}\) cut of the mass ratios space are depicted on the left. Minimum value of \(Z=2.08\) occurs for \(m_{1}/m_{3}=0.097\). Although not small enough to confirm statistical randomness with 5% confidence, it is reasonable to assume that the surrounding area might contain some such mass ratios. Data were sampled from 50 mass ratios; 40 experiments with randomized initial conditions were conducted for each mass ratio. A zoomed in area of the runs test Z score for differing \(m_{1}/m_{3}\) and \(m_{2}/m_{3}\) ratios is depicted on the right. \(m_{1}/m_{3}\) ratio runs from 0.08 to 0.12 while \(m_{2}/m_{3}\) ratio goes from 0.8 to 0.9. For each combination of mass ratios, ten experiments were conducted with randomized initial positions and velocities. Mass ratios were varied from 0.01 to 1 for a total of 20 different mass ratios. No intrinsic structure in the distribution of the scores is readily visible.
## 4 Conclusion
We investigated the relationship between information content and entropy for a dynamical system of three particles on a ring. The method involves the creation of binary strings representing the particles' collision history. These collision generated strings were compared to random number generator (RNG) strings. This comparison gives us a 'benchmark' against which we can test all our results.
We started with the inspection of information theory entropy, showing that the Shannon entropy is the same for random and EM1 strings, but smaller for EM2 strings. The convergence with string length is noticeably faster for the random strings. Since EM1 and EM2 encode the same information, this implies that the Shannon entropy is not a good measure of information content, presumably because it misses long-range correlations.
Quasi-randomness was tested using the Wald-Wolfowitz runs test. It was found that mass ratios \(\frac{m_{1}}{m_{3}}\in[0.1,0.2]\) and \(\frac{m_{2}}{m_{3}}\in[0.4,1]\) produce the lowest runs test scores and are the hardest to distinguish from true randomness. Further exploration of this area of interest produces some deterministic strings that pass the randomness test with 95% confidence.
We then applied various compression algorithms on differing string lengths. As expected, RNG strings do not get compressed at all. The custom compression algorithm achieved some compression, essentially independent of original string length, but significantly different between EM1 and EM2. On the other hand, bz2 and zlib become more effective at compression as the string length grows. This means that an "entropy" based on bz2/zlib compression efficiency is _non-extensive_. This non-extensivity is of a different nature from
Figure 7: Custom compression algorithm and bz2 algorithm compression results in the plane of mass ratios. The custom compression algorithm reproduces results similar to the runs test, while the bz2 algorithm does not. Bz2 algorithm compresses the strings much better regardless of the mass ratios, and the results do not look similar to the results obtained for RNG strings. For a total of 20 equally spaced datapoints, mass ratios were varied from 0.01 to 1. Ten strings of length \(100,000\) were produced for each mass ratio combination, and their compression ratios were averaged.
Tsallis or Renyi entropies which involve different ways of combining p\({}_{i}\) - which here are simply \(\frac{1}{2}\). Rather, "zlib-entropy non-extensivity" arises because of long-period repetitions in the string which require a different definition of the phase space to account for the fact that each bit depends on the previous one.
Because the trajectories are deterministic, the actual information content is just the initial conditions, thus asymptotic 100% compression is in principle possible at the limit of large enough strings. The compression by bz2 appears to be approaching 100% as string length goes to infinity.
Since the entropy of both collision generated strings and RNG strings is 1 bit, the non-randomness of the collision generated strings must come from the distribution of the digits within the string and not from unequal amounts of digits in the string. bz2 compression uses the Burrows-Wheeler transform and zlib uses Lempel-Ziv, both of which exploit repeated sequences of any length. This implies that compression is achieved due to the existence of patterns at all length scales, a hidden fractal nature emerging from the chaotic dynamics.
This highlights a stark contrast between our deterministic system and an equivalent stochastic system sampled in the same constant energy, length and momentum ensemble. Properties such as pressure and temperature can be measured directly from momentum transfer and kinetic energy; they can also be measured by phase space integrals assuming ergodicity, and the results are the same. By contrast, the information content of our trajectory - defined by compression of the collision sequence - is asymptotically zero, whereas a stochastic system has an extensive information content. Our use of the compression entropy gives a quantitative way to show how this chaotic system differs from an ergodic one. Like the FPU [6] paradox, we show that strict ergodicity is a sufficient, but not necessary condition to extract thermodynamic properties from a system.
Overall, we have shown that the deterministic, non-ergodic, three particle on a ring system samples phase space in a way that maximises the entropy, and produces the same energy and momentum distributions as those obtained by phase space integration[5, 12, 4]. The Shannon entropy is equivalent to the thermodynamic entropy from phase space integration, despite the fact that the information content of the strings is low, as demonstrated by the high compressibility. By contrast, although the bz2 algorithm demonstrates the low information content, it is non-extensive and therefore neither equivalent to thermodynamic entropy nor useful for deriving thermodynamic properties.
Compression algorithms are an attractive and simple way to estimate[19] information content in binary strings, however this information content cannot, in general, be related to a thermodynamic entropy. |
2301.11366 | Isomorphism classes of cut loci for a cube | We prove that a face of a cube can be optimally partitioned into 193
connected sets on which the cut locus, or ridge tree, is constant up to isomorphism
as a labeled graph. These are 60 connected open sets, curves bounding them, and
intersection points of curves. Polynomial equations for the curves are
provided. Sixteen pairs of sets support the same cut locus class. We present
the 177 distinct cut locus classes. | Donald M Davis, Manyi Guo | 2023-01-26T19:21:32Z | http://arxiv.org/abs/2301.11366v2 | # Isomorphism classes of cut loci for a cube
###### Abstract.
We prove that a face of a cube can be optimally partitioned into 193 connected sets on which the cut locus, or ridge tree, is constant up to isomorphism as a labeled graph. These are 60 connected open sets, curves bounding them, and intersection points of curves. Polynomial equations for the curves are given. Sixteen pairs of sets support the same cut locus class. We present the 177 distinct cut locus classes.
Key words and phrases: cut locus, ridge tree, geodesic, cube, star unfolding, Voronoi diagram. 2000 _Mathematics Subject Classification_: 52B10, 53C22, 52C30
## 1. Introduction
The _cut locus_, or ridge tree, of a point \(P\) on a convex polyhedron is the set of points \(Q\) for which there is more than one shortest path from \(P\) to \(Q\). Each cut locus is a tree whose leaves are corner points1 of the polyhedron. In [1] and [2], methods were developed for determining the cut locus of a point, which we describe in Section 3 and utilize.
Footnote 1: We reserve the term _vertex_ for vertices of the star unfolding and cut locus.
The cut locus of \(P\) varies continuously with \(P\) unless \(P\) is a corner point of the polyhedron, but its combinatorial structure can change abruptly. We think of a cut locus as a graph with some vertices labeled by corner points of the polyhedron. We define an equivalence relation for these labeled graphs by edge-preserving vertex bijections that preserves labels, and denote by \(\mathbf{L}\) the equivalence class of a cut locus \(L\).
In this paper, we consider the cut loci for a cube and find a complete decomposition of a face of a cube into connected subsets on which \(\mathbf{L}\) is constant. The subsets are connected open sets, curves bounding these sets, and single points where the curves intersect. These are accurately rendered in Figure 1.1; Figure 2.1 gives an expanded version of regions in the left quadrant of 1.1. We find that there are 193 connected
subsets altogether, but of these there are 16 pairs which have the same \(\mathbf{L}\) and so there are 177 distinct \(\mathbf{L}\) on a face of a cube.
**Figure 1.1. Decomposition of a face into subsets on which \(\mathbf{L}\) is constant**
In Section 2, we give a precise statement of results, including equations of the curves bounding the regions, and the labeled graphs for a representative set of \(\mathbf{L}\). In Section 3, we present some preliminary information and tools from [2] and [1] needed in our work. Section 5 sketches a proof that the regions and isomorphism classes of their cut loci are as described, and in Section 6, we complete the proof.
In Figure 1.2, we picture a cube and a typical cut locus on it. This shows the numbering of the corner points of the cube that we will use throughout. We use the back face of that cube as the domain for our points \(P\).
**Figure 1.2. A cube with labeled corner points, and the cut locus for the middle point of an edge highlighted**
One motivation for this work was [3], which considered geodesic motion-planning rules on a cube. Another was [1], which considered bounds for the number of equivalence classes of cut loci on a convex polyhedron.
## 2. Statement of results
In this section, we state our main result, an optimal decomposition of a face of a cube into 193 connected subsets on which the isomorphism class \(\mathbf{L}\) of the cut locus is constant, together with a depiction of these \(\mathbf{L}\). Proofs of all claims will appear in Sections 5 and 6. Because of the omnibus nature of our result, we do not organize it into "Theorems."
We show that a face of the cube is composed of 60 connected open sets on which \(\mathbf{L}\) is constant, together with 48 curves which bound these regions. Except for the boundary of the square and its diagonals, each of these curves is given by a 2-variable polynomial equation of degree 2 or 3 with integer coefficients. Some of these curves have constant \(\mathbf{L}\), while others are divided into two or three adjacent portions, on each of which \(\mathbf{L}\) is constant, yielding 96 curve portions with constant \(\mathbf{L}\). There are 58 distinct \(\mathbf{L}\)'s on the regions and 86 on the curves. There are 37 points of intersection of these curves, giving 33 additional \(\mathbf{L}\).
We find it convenient to use \(0\leq x\leq 8\) and \(-4\leq y\leq 4\) as the coordinates of \(P=(x,y)\) in our face. Figure 2.1 depicts the 15 open regions in the quadrant \(Q_{1}=\{(x,y):0\leq x\leq 4,|y|\leq 4-x\}\). In Figure 2.1, the \(x\)-axis is stretched by a factor of nearly 5 in order to better display the regions. Figure 1.1 depicts the whole square, illustrating how regions in the other three quadrants are copies of the regions
in the quadrant \(Q_{1}\) rotated around the center of the square. We will explain how the \(\mathbf{L}\) in the regions in these quadrants are obtained by permuting the corner numbers 1-8 in \(\mathbf{L}\).
**Figure 2.1. Regions in quadrant \(Q_{1}\)**
In Figure 2.2, we present the \(\mathbf{L}\) for points in regions \(A\)-\(I\) in Figure 2.1. The \(\mathbf{L}\) in the primed regions in Figure 2.1 are obtained by applying the permutation \(\tau=(1\ 4)(2\ 3)(5\ 8)(6\ 7)\) to the corner numbers in the \(\mathbf{L}\) of the corresponding unprimed region \(D\)-\(I\). Note that the graphs which appear in Figure 2.2 represent isomorphism classes of labeled graphs, and so whether an edge points to the left or right is irrelevant, as is the vertical orientation of the graph.
**Figure 2.2. L in regions.**
Each region \(R\) in the top quadrant in Figure 1.1 is obtained from the corresponding region \(R_{0}\) in quadrant \(Q_{1}\) by a clockwise rotation of \(\pi/2\) around the center of the square. The \(\mathbf{L}\) for \(R\) is obtained from that of \(R_{0}\) by applying the permutation \(\sigma=(1\;4\;3\;2)(5\;8\;7\;6)\) to the corner numbers at vertices. Similarly, regions along the right edge are a \(\pi\)-rotation of \(R_{0}\) and have their \(\mathbf{L}\) obtained using the permutation \(\sigma^{2}=(1\;3)(2\;4)(5\;7)(6\;8)\). Finally, a clockwise rotation of \(3\pi/2\) applies \(\sigma^{3}=(1\;2\;3\;4)(5\;6\;7\;8)\) to the numbers at vertices of \(\mathbf{L}\). One can check that, for the 15 regions \(R_{0}\) in \(Q_{1}\), the \(\mathbf{L}\) for \(\sigma^{i}R_{0}\), \(0\leq i\leq 3\), are distinct except that \(\sigma^{2+\varepsilon}\mathbf{L}_{A}=\sigma^{\varepsilon}\mathbf{L}_{A}\) for \(\varepsilon=0,1\), yielding 58 distinct \(\mathbf{L}\) for the regions on the face, each \(\mathbf{L}\) having six degree-3 vertices. The notation \(\mathbf{L}_{A}\) refers to the \(\mathbf{L}\) of points in the region \(A\).
There are five curves and their vertical reflections which bound pairs of regions in Figure 2.1. A single curve usually bounds more than one pair of regions. Its \(\mathbf{L}\) will be different for different pairs. In every case, the \(\mathbf{L}\) for the curve portion is obtained by collapsing to a point one edge of the \(\mathbf{L}\) for each region which it bounds.
For each curve, we list in (2.3) the pairs of regions which it bounds, followed by its equation. Then in Figure 2.4, we present the \(\mathbf{L}\) for the various curve portions. For example, the first curve, which appears almost horizontal in Figure 1.1 but is actually an arc of a circle with large radius, bounds regions B and D, and then has a short
portion bounding regions E and I, and its \(\mathbf{L}\) for each of these portions is presented in Figure 2.4. The intersection point of these two portions has a different \(\mathbf{L}\), which will be described, along with its coordinates, later in this section.
\[BD,EI \quad x^{2}+y^{2}-24y+16=0 \tag{2.3}\] \[DE,BI,CI^{\prime} \quad y^{3}+(3x+12)y^{2}+(x^{2}+40x-16)y+3x^{3}-44x^{2}+304x-192=0\] \[EF \quad y^{3}+(x-12)y^{2}+(x^{2}+8x-16)y+x^{3}-20x^{2}-240x+192=0\] \[FG,HA,CH^{\prime} \quad x^{3}-4x^{2}+(y^{2}+8y-80)x-4y^{2}+64=0\] \[GA,FH \quad x^{3}-12x^{2}+(y^{2}-24y+112)x+4y^{2}-64=0\]
**Figure 2.4. L on curves.**
There are also \(\mathbf{L}\) for points on the left edge of the square and on the half diagonal \(h\) connecting the center of the face with the upper left corner. These are shown in Figure 2.5. These are the first cases where a corner point does not appear at a leaf, but rather at a degree-2 vertex of the cut locus. Applying the permutations \(\sigma\), \(\sigma^{2}\), and \(\sigma^{3}\) described earlier gives the \(\mathbf{L}\)'s on the other edges and half diagonals. However, \(\sigma^{2+\varepsilon}\mathbf{L}_{h}=\sigma^{\varepsilon}\mathbf{L}_{h}\). Combining with those described above yields 86 distinct \(\mathbf{L}\)'s on portions of curves.
**Figure 2.5.**\(\mathbf{L}\) on left edge and upper-left half diagonal.
Not including the \(y\)-axis, Figure 2.1 has eight intersection points of curves. Three below the \(x\)-axis are obtained by vertical reflection, and their \(\mathbf{L}\) is obtained using the usual permutation \(\tau\). We list the other five, denoting them by the regions abutting them, and include their coordinates.
\[BDEI (0.6413,0.7045)\] \[EFHCI (0.7085,0.7085)\] \[FGHA (0.8,1.6)\] \[BII^{\prime}C (0.6989,0)\] \[CHH^{\prime}A (0.7757,0)\]
More precisely, the \(0.7085\) is \(6-2\sqrt{7}\), while the \(0.7045\), \(0.6989\), and \(0.7757\) are roots of the polynomials \(37y^{4}-816y^{3}+304y^{2}-3456y+2560\), \(3x^{3}-44x^{2}+304x-192\), and \(x^{3}-4x^{2}-80x+64\), respectively. The \(\mathbf{L}\) for the five vertices are shown in Figure 2.6. The \(BD\) curve intersects the \(y\)-axis at \((0,0.685)\), but this point does not give a new \(\mathbf{L}\), since \(\mathbf{L}\) is constant on the \(y\)-axis (for \(|y|<4\)).
**Figure 2.6. L for intersection points.**
**Figure 3.1. Example of cuts with respect to \(P\)**
These decompose \(M\) as the union of eight polygons, with \(P\) at a vertex of each, and edges 12, 23, 34, 41 and 15, 26, 37, and 48 at far ends of the polygons. The _star unfolding_ of \(P\) is obtained by first gluing to the 1234 square the polygons with far edges 12, 23, 34, and 41. This will expose new edges 15, 26, 37, and 48, and we then glue the other four polygons to the corresponding edges. See Figure 3.2. This yields a polygon with eight vertices corresponding to corner points of the cube, and eight corresponding to occurrences of the point \(P\), which we number as in Figure 3.2. This is the star unfolding, \(S\), of the point \(P\).
**Figure 3.2. A star unfolding \(S\)**
Recall that our coordinates for the 5678 face of the cube are \(0\leq x\leq 8\) and \(-4\leq y\leq 4\). We will initially consider points \(P\) in the quadrant \(Q_{1}\) given by
\(-4\leq y\leq 4\) and \(0\leq x\leq 4-|y|\). Points in other quadrants will be considered later by rotating the cube.
We use \((v,w)\) as the coordinate system for the plane containing \(S\), with \((0,0)\) at the midpoint of segment 2-3 in Figure 3.2, and sides of the two squares having length 8. The coordinates of the points labeled 1-8 are, respectively, \((-8,4)\), \((0,4)\), \((0,-4)\), \((-8,-4)\), \((-8,12)\), \((8,4)\), \((8,-4)\), and \((-8,-12)\). The coordinates \((v_{\alpha},w_{\alpha})\) of the points \(P_{\alpha}\) are as in (3.3).
\[P_{1}=(-16-x,-y), P_{5}=(16-x,-y),\] \[P_{2}=(-12-y,12+x), P_{6}=(12-y,-12+x), \tag{3.3}\] \[P_{3}=(-8+x,16+y), P_{7}=(-8+x,-16+y),\] \[P_{4}=(12+y,12-x), P_{8}=(-12+y,-12-x).\]
For each point \(P_{\alpha}\), \(1\leq\alpha\leq 8\), its _Voronoi cell_\(C_{\alpha}\) is the set of points \(Q\) in \(S\) such that
\[d(Q,P_{\alpha})\leq d(Q,P_{\beta})\]
for \(1\leq\beta\leq 8\). The points of \(S\) which lie in more than one \(C_{\alpha}\) comprise the cut locus \(L_{P}\) of \(P\). In Figure 3.4, we show the Voronoi cells and cut locus of the point \(P\) in Figure 3.1.
**Figure 3.4. Voronoi cells and cut locus of \(P\)**
All segments in the cut locus are portions of perpendicular bisectors \(\bot_{\alpha,\beta}\) of the segments joining \(P_{\alpha}\) and \(P_{\beta}\). One needs to consider how various \(\bot_{\alpha,\gamma}\) intersect to decide when a portion of \(\bot_{\alpha,\beta}\) is closer to \(P_{\alpha}\) and \(P_{\beta}\) than to other \(P_{\gamma}\)'s. An example of this is discussed in Section 4.
## 4. Determination of a cut locus
We used Maple to help us find the cut locus for many points in quadrant \(Q_{1}\). We illustrate here with the top half of the cut locus of the point \(P=(x,y)=(1.5,0.5)\). Substituting these values into the equations (3.3), we obtain the coordinates of the points \(P_{\alpha}=(v_{\alpha},w_{\alpha})\) for \(1\leq\alpha\leq 8\). The equation of the perpendicular bisector, \(\bot_{\alpha,\beta}\), of the segment connecting points \(P_{\alpha}\) and \(P_{\beta}\) is
\[\begin{cases}w=\dfrac{w_{\alpha}+w_{\beta}}{2}+\dfrac{v_{\alpha}-v_{\beta}}{w_ {\beta}-w_{\alpha}}\big{(}v-\dfrac{v_{\alpha}+v_{\beta}}{2}\big{)}&\{\alpha, \beta\}\neq\{1,5\}\\ v=-x&\{\alpha,\beta\}=\{1,5\}.\end{cases} \tag{4.1}\]
Maple plots a selected batch of these lines \(\bot_{\alpha,\beta}\) in a specified grid. The grid in Figure 4.2 is \([-3.5,0.5]\times[0,3]\). Here we have included just those relevant for the top half of the cut locus, which appears in red. Other lines such as \(\bot_{2,4}\) and \(\bot_{2,5}\) would usually be considered for possible relevance. Trying to do this sort of analysis for the top and bottom halves of the cut locus together leads to an unwieldy collection of perpendicular bisectors. In Section 6, we show that it suffices to consider the top and bottom parts separately. When crucial intersection points are very close together, we can change the grid to effectively zoom in.
**Figure 4.2. Finding a cut locus.**
Points equidistant from \(P_{\alpha}\) and \(P_{\beta}\) lie on \(\bot_{\alpha,\beta}\). In Figure 4.2, the line \(\bot_{\alpha,\beta}\) is annotated with \(\alpha\) on one side and \(\beta\) on the other, indicating the side closer to \(P_{\alpha}\) or \(P_{\beta}\). The Voronoi cell for a point \(P_{\alpha}\) is bounded by portions of lines \(\bot_{\alpha,\beta}\) for various \(\beta\), with \(\alpha\) on the cell side of each \(\bot_{\alpha,\beta}\). For example, the Voronoi cell for \(P_{3}\) is bounded by portions of \(\bot_{3,2}\), \(\bot_{3,1}\), \(\bot_{3,5}\), and \(\bot_{3,4}\), reading from left to right in Figure 4.2.
Although we use the various \(\bot_{\alpha,\beta}\) to determine the cut loci, the eventual description of the cut locus is in terms of the corner points of the cube at certain vertices of the cut locus. As seen in Figure 3.2, the corner points on lines \(\bot_{1,2}\), \(\bot_{2,3}\), \(\bot_{3,4}\), and \(\bot_{4,5}\) are 1, 5, 2, and 6, respectively, and so the top half of the cut locus of the point \(P=(1.5,0.5)\) is as depicted in Figure 4.3.
**Figure 4.3. Top half of a cut locus.**
## 5. Proofs
In this section, we show how the regions and curves and their cut loci are obtained.
The coordinate systems are as described in Section 3. Let \([8]=\{1,2,3,4,5,6,7,8\}\). For \(P=(x,y)\in Q_{1}\) and \(\alpha\in[8]\), let \(P_{\alpha}\) be the point in the star unfolding described earlier. Its coordinates \((v_{\alpha},w_{\alpha})\) are linear expressions, (3.3), in \(x\) and \(y\). In Figure 3.2, we depict a typical star unfolding. The vertices \(P_{1},\ldots,P_{8}\) are the focal points for the Voronoi cells, while the vertices with numbered labels correspond to the corner points of the cube.
For \(\alpha,\beta\in[8]\), let \(\bot_{\alpha,\beta}\left(P\right)\) denote the perpendicular bisector of the segment connecting \(P_{\alpha}\) and \(P_{\beta}\). Its equation is (4.1). Note that \(\bot_{\alpha,\alpha+1}\) has as its extreme point in the star unfolding the corner point \(1\), \(5\), \(2\), \(6\), \(7\), \(3\), \(8\), and \(4\), for \(\alpha=1,\ldots,8\). Although our results about \(\mathbf{L}\) are described in terms of the corner points, our work is done in terms of the \(\alpha\).
For \(S=\{\alpha,\beta,\gamma\}\subset[8]\), let \(\pi_{S}(P)\) denote the intersection of \(\bot_{\alpha,\beta}\left(P\right)\) and \(\bot_{\beta,\gamma}\left(P\right)\) (and \(\bot_{\alpha,\gamma}\left(P\right)\), as \(\pi_{S}(P)\) is the center of the circle passing through \(P_{\alpha}\), \(P_{\beta}\), and \(P_{\gamma}\).). Let \(L_{P}\) denote the cut locus of \(P\). It is formed from portions of various \(\bot_{\alpha,\beta}\left(P\right)\) which are closer to \(P_{\alpha}\) and \(P_{\beta}\) than to any other \(P_{\gamma}\). The degree-\(3\) vertices of \(L_{P}\) are certain \(\pi_{S}(P)\). From now on, we will usually omit the \((P)\) and the set symbols in subscripts.
A transition from one isomorphism class of \(L_{P}\) to another as \(P\) varies will occur when \(\pi_{\alpha,\beta,\gamma}\) passes through another \(\bot_{\beta,\delta}\). This is illustrated in Figure 5.1. The references there to \(t\) and \(t_{0}\) will be used in Section 6. In the left side of the figure, \(\pi_{\alpha,\beta,\gamma}\) is part of \(L_{P}\) since it is closer to \(P_{\alpha}\), \(P_{\beta}\), and \(P_{\gamma}\) than to any other \(P\)-point, but as \(P\) changes and \(\pi_{\alpha,\beta,\gamma}\) moves across \(\bot_{\beta,\delta}\), it is now closer to \(P_{\delta}\), and so is not part of \(L_{P}\).
**Figure 5.1. Transition.**
We will show in Section 6 that this type of transition is the only way to change from one \(\mathbf{L}\) to another.
We assume first that \(\{\alpha,\beta,\gamma\}\) does not contain both \(1\) and \(5\). Then the \(v\)-coordinate of \(\pi_{\alpha,\beta,\gamma}\) is found by equating the right hand side of (4.1) for \((\alpha,\beta)\) and \((\beta,\gamma)\), using the formulas for \(v_{\alpha}\), \(w_{\alpha}\), etc., in terms of \(x\) and \(y\) given in (3.3). This yields a formula for \(v=v_{\alpha,\beta,\gamma}\) in terms of \(x\) and \(y\).
The relationship between \(x\) and \(y\) such that \(\pi_{\alpha,\beta,\gamma}\) and \(\pi_{\alpha,\beta,\delta}\) coincide (and hence a transition might occur) is the equation \(v_{\alpha,\beta,\gamma}=v_{\alpha,\beta,\delta}\). This yields a fourth-degree equation. We let Maple do the work. For example, we find the equation for \((\alpha,\beta,\gamma,\delta)=(2,3,4,5)\) as follows.
v[1]:=-16-x: w[1]:=-y: v[2]:=-12-y: w[2]:=12+x: v[3]:=-8+x: w[3]:=16+y: v[4]:=12+y: w[4]:=12-x: v[5]:=16-x: w[5]:=-y:
a:=2: b:=3: c:=4: d:=5:
A:=solve((w[a]+w[b])/2+(v[a]-v[b])/(w[b]-w[a])*(v-(v[a]+v[b])/2) =(w[a]+w[c])/2+(v[a]-v[c])/(w[c]-w[a])*(v-(v[a]+v[c])/2),v):
B:=solve((w[a]+w[b])/2+(v[a]-v[b])/(w[b]-w[a])*(v-(v[a]+v[b])/2) =(w[a]+w[d])/2+(v[a]-v[d])/(w[d]-w[a])*(v-(v[a]+v[d])/2),v):
simplify(numer(A)*denom(B)-numer(B)*denom(A))
Note that the expressions for \(A\) and \(B\) will be rational expressions, and so the last line gives a polynomial, which is then set equal to \(0\).
For \((a,b,c,d)=(2,3,4,5)\), it yields
\[(y^{3}+(3x+12)y^{2}+(x^{2}+40x-16)y+3x^{3}-44x^{2}+304x-192)(4+y-x)(=0).\]
The cubic factor is the second of the five equations listed in (2.3). If we use \((a,b,c,d)=(1,2,3,4)\), we obtain the vertical reflection of the \(EF\) curve of (2.3).
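The elimination above can also be reproduced with open-source tools. The following sympy sketch is just a cross-check of the Maple computation for \((a,b,c,d)=(2,3,4,5)\); the printed factorization contains the cubic and the factor \(4+y-x\) displayed above, possibly up to constant multiples or cancellation of factors shared with the denominators.

```python
import sympy as sp

x, y, v = sp.symbols('x y v')

# Star-unfolding coordinates (3.3) of P_1, ..., P_5, copied from the Maple session above.
V = {1: -16 - x, 2: -12 - y, 3: -8 + x, 4: 12 + y, 5: 16 - x}
W = {1: -y, 2: 12 + x, 3: 16 + y, 4: 12 - x, 5: -y}

def bisector_w(a, b):
    # w as a function of v on the perpendicular bisector of P_a P_b, cf. (4.1)
    return (W[a] + W[b]) / 2 + (V[a] - V[b]) / (W[b] - W[a]) * (v - (V[a] + V[b]) / 2)

def v_of_pi(a, b, c):
    # v-coordinate of pi_{a,b,c}, the intersection of the bisectors for (a,b) and (a,c)
    return sp.solve(sp.Eq(bisector_w(a, b), bisector_w(a, c)), v)[0]

# Transition condition for (a,b,c,d) = (2,3,4,5): pi_{2,3,4} and pi_{2,3,5} coincide.
difference = sp.together(v_of_pi(2, 3, 4) - v_of_pi(2, 3, 5))
print(sp.factor(sp.fraction(difference)[0]))
```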
For \(\{1,\beta,\gamma,5\}\), since \(\bot_{1,5}\) is the line \(v=-x\), we do A above for 1, b, and c, omit B, and simplify (numer(A)+x\(\cdot\)denom(A)). This yields (beginning a practice of often omitting commas)
\[\begin{array}{ll}1235&\quad x^{3}-4x^{2}+(y^{2}+8y-80)x-4y^{2}+64\ (=0)\\ 1245&\quad x(x^{2}+y^{2}+24y+16)\ (=0)\\ 1345&\quad x^{3}-12x^{2}+(y^{2}+24y+112)x+4y^{2}-64\ (=0).\end{array}\tag{5.2}\]
For each of these five cases, if 2, 3, and 4 are replaced by 8, 7, and 6, respectively, the equation is obtained by replacing \(y\) by \(-y\). Altogether we have ten equations. Compare with equations (2.3).
We describe \(\mathbf{L}_{P}\) for \(P\) in the top half of \(Q_{1}\) by the sets \(S\) for which \(\pi_{S}\) is a degree-3 vertex of \(\mathbf{L}_{P}\). Later in this section we will explain how we translate this description to the description involving corner points of the cube, which appeared in Section 2. For example, the case \(P=(1.5,0.5)\) considered in Section 4 has \(\pi_{S}\) for \(S=123\), 135, and 345 in its top half. Maple plotting of perpendicular bisectors shows that the bottom half of this \(\mathbf{L}_{P}\) is essentially a flip of the top half, so has \(\pi_{S}\) for \(S=178\), 157, and 567.
In Section 6, we show that the only possible transitions from one \(\mathbf{L}\) to another are of the type illustrated in Figure 5.1, where an \(\alpha\beta\gamma\delta\) intersection bounds one region whose \(\mathbf{L}\) has \(\pi_{\alpha,\beta,\gamma}\) and \(\pi_{\alpha,\gamma,\delta}\) vertices and another with \(\pi_{\alpha,\beta,\delta}\) and \(\pi_{\beta,\gamma,\delta}\). The point is that the 4-set defining the bounding curve must have two 3-subsets in each of the regions on either side of it. So, for example, the 1568 curve could not bound a region
containing \(\mathbf{L}_{(1.5,0.5)}\) because there are not two of the six 3-sets \(S\) for \(\mathbf{L}_{(1.5,0.5)}\) listed in the previous paragraph which are contained in \(\{1,5,6,8\}\).
Of the ten equations determined above, all except the ones corresponding to 1568 and 1245 intersect the top half of \(Q_{1}\) in a curve which we denote as \(x=\theta_{\alpha\beta\gamma\delta}(y)\) for \(0\leq y\leq 4\). Each \(y\) has three \(x\) values as solutions, but we neglect those that are complex or outside the region \(0\leq x\leq 4-y\). The equation for 1245 does not intersect this region, and the one for 1568 does so only for \(0.685\leq y\leq 1.07\).
Maple shows that, for \(1.6<y<4\),
\[\theta_{1345}(y)<\theta_{2345}(y)<\theta_{1578}(y)<\theta_{1678}(y)<\theta_{5 678}(y)<\theta_{1234}(y)<\theta_{1235}(y)<\theta_{1567}(y),\]
and that for \(0\leq y\leq 4\) all eight of these curves satisfy \(0\leq\theta_{\alpha\beta\gamma\delta}(y)\leq 0.83\). For \(P\) in quadrant \(Q_{1}\), \(\mathbf{L}_{P}\) has the type of the case \(P=(1.5,0.5)\) considered above, with degree-3 vertices corresponding to \(S=123\), 135, 345, 178, 157, and 567, until a transition occurs. This will define region \(A\) in Figure 2.1.
Now let \(1.6<y<4\). Since there are no \(\alpha\beta\gamma\delta\) intersections of the eight types in the above string of inequalities in the region \(\mathcal{R}=\{(x,y):0\leq y\leq 4,\,0.83\leq x<4-y\}\), and, as noted above, a 1568 intersection cannot affect \(\mathbf{L}_{(1.5,0.5)}\), we conclude that for all \((x,y)\) in \(\mathcal{R}\), \(\mathbf{L}_{(x,y)}=\mathbf{L}_{(1.5,0.5)}\), with degree-3 vertices 123, 135, 345, 178, 157, and 567. For this, we also need an observation in Section 6 that no other \(\alpha\beta\gamma\delta\) can have an effect.
As we move from the right, when the point \(P=(\theta_{1567}(y),y)\) is encountered, there is a transition from 157 and 567 to 156 and 167. This is region \(G\), with 123, 135, 345, 178, 156, and 167.
Next we encounter \(P=(\theta_{1235}(y),y)\), and this causes a transition to 125, 235, 345, 178, 156, and 167. This is region \(F\). The next two potential transitions at 1234 and 5678 do not effect a change, because neither of these 4-sets contain two 3-subsets which are vertices of region-\(F\) cut loci. Next at \(P=(\theta_{1678}(y),y)\) we have a transition, changing 178 and 167, leading to region \(E\) described by 125, 235, 345, 168, 156, and 678. The next potential transition, 1578, does not effect a change, but then 2345 does, to 125, 234, 245, 168, 156, and 678 in region \(D\). Finally, 1345 does not effect a change because it does not have two 3-sets of region \(D\).
Before we discuss other ranges of values of \(y\), we point out that when a curve is crossed, it gives a degree-4 vertex of the cut locus, as shown in the middle part of
Figure 5.1. Thus, for points \(P\) on the curve \(GA\) separating regions \(G\) and \(A\), \(\mathbf{L}_{P}\) has vertices abutting regions 123, 135, 345, 178, and 1567 of the star unfolding, and similarly for points on the other curves crossed in the above analysis. We also note that \(\theta_{1567}(1.6)=0.8=\theta_{1235}(1.6)\).
The same procedure is followed for other intervals of values of \(y\), arranging the 4-sets \(S\) according to the order of \(\theta_{S}(y)\), and then working from right-to-left to see whether the transitions are effective, i.e., whether \(S\) contains two 3-sets which are vertices of the region under consideration. For \(0.7085<y<1.6\), the only change from the above order which causes a different transition is that \(\theta_{1235}(y)\) is now greater than \(\theta_{1567}(y)\), so the 1235 change takes place first, leading to region \(H\) with vertices 125, 235, 345, 157, 567, and 178.
The most interesting point is \((6-2\sqrt{7},6-2\sqrt{7})\approx(.7085,.7085)\), which lies on all of \(\theta_{1567}\), \(\theta_{1678}\), \(\theta_{1568}\), \(\theta_{1578}\), and \(\theta_{5678}\).3 These five curves reverse their order at \(y=6-2\sqrt{7}\). For \(6-2\sqrt{7}<y<.715\),
Footnote 3: To see this remarkable fact, recall that these five curves are obtained by replacing \(y\) by \(-y\) in the polynomials in (5.2) and the paragraph preceding it. After doing this, let \(y=x\). Each of the resulting polynomials equals \(x^{2}-12x+8\) times a linear factor.
\[\theta_{2345}(y)<\theta_{1578}(y)<\theta_{1678}(y)<\theta_{5678}(y)<\theta_{15 67}(y)<\theta_{1568}(y)<\theta_{1234}(y)<\theta_{1235}(y),\]
which has the transitions described in the preceding paragraph, but for \(.7045<y<6-2\sqrt{7}\),
\[\theta_{2345}(y)<\theta_{1568}(y)<\theta_{1567}(y)<\theta_{5678}(y)<\theta_{16 78}(y)<\theta_{1578}(y)<\theta_{1234}(y)<\theta_{1235}(y),\]
which has a different order of transitions. Let \(.7045<y<6-2\sqrt{7}\). After the 1235 change, the next one is 1578, leading to region \(C\) with vertices 125, 235, 345, 158, 567, and 578. The next transition is due to 5678, leading to region \(I\) with vertices 125, 235, 345, 158, 568, and 678. The next transition is due to 1568, which brings us into region \(E\), with vertices 125, 235, 345, 168, 156, and 678, which were already seen when considering larger values of \(y\). Finally, a 2345 transition brings us into region \(D\) as above.
The 2345 and 1568 curves intersect at \(y\approx 0.7045\), so for \(y<0.7045\), the 2345 transition precedes the 1568 transition, leading to region \(B\) with vertices 125, 234,
245, 158, 568, and 678. For \(y>.685\), there will be a 1568 transition into region \(D\), but for \(y<.685\), there is no 1568 transition since \(\theta_{1568}(y)<0\) if \(y<.685\).
This completes the description of the regions of the top half of quadrant \(Q_{1}\) with constant \(\mathbf{L}\), described in terms of the Voronoi cells. Now we translate this description into one which has the cube's corner numbers at the leaves, which is the description given in Section 2, and is needed for giving permuted descriptions in other quadrants. In Figure 5.3, we show how the top half of the cut loci appear in terms of Voronoi cells, and list the regions in Figure 2.1 in which they appear. Each edge leading to a leaf is a perpendicular bisector separating Voronoi cells \(i\) and \(i+1\) for some \(i\) mod 8. For \(i=1,2,3,4\), the corner point at the end of this bisector is 1, 5, 2, 6, respectively, as can be seen in Figure 3.2. The reader can check that this labeled diagram is consistent with the \(\mathbf{L}\) in Figure 2.2.
**Figure 5.3. Top half of cut loci.**
The \(\mathbf{L}\) for points on a curve separating two regions is obtained from that of each of the two regions by collapsing a segment in which the two regions differ. For example, the \(\mathbf{L}\) for the \(BD\) curve in Figure 2.4 is obtained from those in region \(B\) or \(D\) in Figure 2.2 by collapsing the segment connecting the edges leading to corner points \(4\) and \(7\). Similarly, the \(\mathbf{L}\) for points of intersection of two curves is obtained by collapsing a segment in the \(\mathbf{L}\) of each. For example, the \(\mathbf{L}\) for point \(BDEI\) in Figure 2.6 is obtained from those of curves \(BD\) and \(EI\) in Figure 2.4 by collapsing in each the highest vertical interval.
The \(\mathbf{L}\)'s in Figures 2.5 and 2.7 are different from those seen previously in that they have a corner point labeling a degree-\(2\) vertex. In these cases, the choice of cuts is not unique, but, of course, the cut locus does not depend on the choice. We comment briefly on the \(\mathbf{L}\) in these cases.
If \(P\) is on the left edge of the cube, the \(\mathbf{L}\) is as seen in Figure 1.2. If \(P\) is at a corner point of the cube, the cut locus consists of segments from the corner point opposite \(P\) to each of the other corner points. If \(P\) is at the center of a face \(F\), the cut locus consists of the diagonals of the opposite face \(F^{\mathrm{op}}\) and the four edges of the cube connecting \(F\) and \(F^{\mathrm{op}}\).
If \(P=(x,4-x)\) with \(0<x<4\) is on the half-diagonal, then \(\bot_{4,5}\) is the line \(w=4\), which intersects the point in the star-unfolding corresponding to corner point \(2\). Then the short segment connecting the point \(\pi_{3,4,5}\) in Figure 3.4 with the point labeled \(2\) will have collapsed to a point. In the \(A\) diagram in Figure 2.2, this is the collapse of the vertical segment from the point labeled \(2\). This can be seen in terms of the Voronoi cells in the \(A\)-part of Figure 5.3. A similar thing happens to the vertical segment leading to the point labeled \(8\), as the equation of \(\bot_{7,8}\) is \(v=-8\). Also the lines \(\bot_{1,7}\), \(\bot_{3,5}\), \(\bot_{1,5}\), \(\bot_{1,3}\), and \(\bot_{5,7}\) all intersect at \((v,w)=(-x,4-x)\).
In Section 2, we discussed how a permutation \(\tau\) (resp. \(\sigma\)) applied to corner points yields \(\mathbf{L}\) in the vertical flip (resp. \(90\)-degree clockwise rotation) of a region or curve. Here we give a brief explanation of the reason for that. Such a motion applied to a point \(P\) in the region has the same effect on geodesics from \(P\), and hence on \(L_{P}\). Referring to Figure 3.1, we see that, for example, the corner point \(8\) in \(L_{P}\) will be replaced by \(5\) (resp. \(7\)), which expands to the asserted permutations.
## 6. No other transitions
In this section, we present a proof that there are no regions other than those described earlier.
Suppose \(\mathbf{L}_{P_{0}}\neq\mathbf{L}_{P_{1}}\). Let \(P(t)=(1-t)P_{0}+tP_{1}\), and, for any \(3\)-subset \(S\) of [8], let \(\pi_{S}(t)=\pi_{S}(P(t))\), a path in the \(vw\)-plane. For each \(S\) such that \(\pi_{S}(0)\) is a vertex of \(L_{P_{0}}\), let
\[t_{0}(S)=\sup\{t\in[0,1]:\pi_{S}(t^{\prime})\text{ is a vertex of }L_{P(t^{\prime})}\;\forall t^{ \prime}<t\},\]
and let
\[t_{0}=\min\{t_{0}(S):\pi_{S}(0)\text{ is a vertex of }L_{P_{0}}\}.\]
Finally, let \(S=\{\alpha,\beta,\gamma\}\) satisfy \(t_{0}(S)=t_{0}\). The first transition in moving from \(P_{0}\) to \(P_{1}\) will involve \(\pi_{S}(P(t_{0}))\).
**Proposition 6.1**.: _There exists \(\delta\in[8]-S\) and a decomposition of \(S\) as \(\{\beta\}\cup\{\alpha,\gamma\}\) such that \(\pi_{\{\alpha,\gamma,\delta\}}(0)\) is a vertex of \(L_{P_{0}}\), and \(\pi_{S}(t_{0})=\pi_{\{\alpha,\gamma,\delta\}}(t_{0})\) is a common vertex of \(L_{P(t_{0})}\)._
Proof.: There exists \(\varepsilon>0\) and \(\delta\in[8]-S\) such that for \(t\) in the interval \((t_{0},t_{0}+\varepsilon)\), \(\pi_{S}(t)\) is closer to \(P_{\delta}\) than it is to \(P_{\alpha}\), \(P_{\beta}\), and \(P_{\gamma}\). The path \(\pi_{S}\) crosses \(\bot_{\delta,\eta}\left(P(t_{0})\right)\) for some \(\eta\in[8]\), and \(\eta\) must equal \(\alpha\), \(\beta\), or \(\gamma\), since for \(t\) in some interval \((t_{0}-\varepsilon^{\prime},t_{0})\), \(\pi_{S}(t)\) is closer to \(P_{\alpha}\), \(P_{\beta}\), and \(P_{\gamma}\) than it is to any other \(P_{\eta}\). Without loss of generality, say \(\eta=\beta\). Then \(\pi_{S}(t_{0})\) intersects \(\bot_{\beta,\delta}\left(P(t_{0})\right)\), and so all six perpendicular bisectors from \(\{\alpha,\beta,\gamma,\delta\}\) intersect in \(L_{P(t_{0})}\). By minimality of \(t_{0}\), since \(\pi_{\{\alpha,\gamma,\delta\}}(t_{0})\in L_{P(t_{0})}\), we conclude that \(\pi_{\{\alpha,\gamma,\delta\}}(0)\in L_{P_{0}}\). See Figure 5.1 for a depiction of this transition.
**Theorem 6.2**.: _There are no transitions except those claimed earlier in the manuscript._
Proof.: Let \(S_{1}=\{2,3,4\}\), \(S_{2}=\{1,5\}\), and \(S_{3}=\{6,7,8\}\). Recall that all of our asserted regions in \(Q_{1}\) have \(\mathbf{L}\) with three vertices from \(S_{1}\cup S_{2}\) and three from \(S_{2}\cup S_{3}\). If \(\mathbf{L}_{P_{0}}\neq\mathbf{L}_{P_{1}}\) with \(\mathbf{L}_{P_{0}}\) in one of our regions, and \(t_{0}\) is as above, so that we are considering the first transition in moving from \(P_{0}\) to \(P_{1}\), then the set \(\{\alpha,\beta,\gamma,\delta\}\) involved in the transition must either contain \(S_{2}\) or else equal one of \(\{1,2,3,4\}\), \(\{2,3,4,5\}\), \(\{1,6,7,8\}\), or \(\{5,6,7,8\}\). This is true since sets with elements of type
\(S_{1}S_{2}S_{3}S_{3}\), \(S_{1}S_{1}S_{2}S_{3}\), \(S_{1}S_{1}S_{3}S_{3}\), \(S_{1}S_{1}S_{1}S_{3}\), or \(S_{1}S_{3}S_{3}S_{3}\) do not contain two 3-subsets of the type of the vertices of \(L_{P_{0}}\). In our earlier determination of the regions in \(Q_{1}\), we considered the four specific sets listed above (containing a single 1 or 5), and also all sets with elements of type \(S_{1}S_{1}S_{2}S_{2}\) and \(S_{2}S_{2}S_{3}S_{3}\). It remains to consider \(\{1,5,\alpha,\beta\}\) with \(\alpha\in S_{1}\) and \(\beta\in S_{3}\). If \(P\in Q_{1}\), we use Maple, similarly to (5.2), to see that, for \(\alpha\in S_{1}\) and \(\beta\in S_{3}\), \(\pi_{\{1,\alpha,\beta\}}(P)\) does not lie on \(\bot_{1,5}\left(P\right)\). Thus there can be no transitions other than the ones described earlier in the paper.
We explain briefly the Maple work that led to this conclusion. We follow steps that led to (5.2) but using one of \(\{\beta,\gamma\}\) in \(S_{1}\) and one in \(S_{3}\). We obtain equations similar to (5.2). We plot them and find that there are no solutions satisfying \(-4<y<4\), \(0<x<4-|y|\).
|
2310.07245 | Crowd Counting in Harsh Weather using Image Denoising with Pix2Pix GANs | Visual crowd counting estimates the density of the crowd using deep learning
models such as convolution neural networks (CNNs). The performance of the model
heavily relies on the quality of the training data that constitutes crowd
images. In harsh weather such as fog, dust, and low light conditions, the
inference performance may severely degrade on the noisy and blur images. In
this paper, we propose the use of Pix2Pix generative adversarial network (GAN)
to first denoise the crowd images prior to passing them to the counting model.
A Pix2Pix network is trained using synthetic noisy images generated from
original crowd images and then the pretrained generator is then used in the
inference engine to estimate the crowd density in unseen, noisy crowd images.
The performance is tested on JHU-Crowd dataset to validate the significance of
the proposed method particularly when high reliability and accuracy are
required. | Muhammad Asif Khan, Hamid Menouar, Ridha Hamila | 2023-10-11T07:22:37Z | http://arxiv.org/abs/2310.07245v1 | # Crowd Counting in Harsh Weather using Image Denoising with Pix2Pix GANs
###### Abstract
Visual crowd counting estimates the density of the crowd using deep learning models such as convolutional neural networks (CNNs). The performance of the model heavily relies on the quality of the training data that constitutes crowd images. In harsh weather such as fog, dust, and low-light conditions, the inference performance may severely degrade on noisy and blurred images. In this paper, we propose the use of a Pix2Pix generative adversarial network (GAN) to first denoise the crowd images prior to passing them to the counting model. A Pix2Pix network is trained using synthetic noisy images generated from original crowd images, and the pretrained generator is then used in the inference engine to estimate the crowd density in unseen, noisy crowd images. The performance is tested on the JHU-Crowd dataset to validate the significance of the proposed method, particularly when high reliability and accuracy are required.
crowd counting, CNN, density estimation, GAN, Pix2Pix
## I Introduction
Crowd density estimation is a computer vision task that aims to estimate the density and count of a crowd from crowd images and videos. Crowd counting has applications in several domains such as public safety, event planning, transportation, security & surveillance, resource allocation, urban planning, healthcare, and environmental monitoring. Although several methods have traditionally been applied [1, 2, 3, 4], convolutional neural networks (CNNs) are the most widely used and effective approach. A CNN-based crowd-counting model is trained using a dataset of annotated crowd images. The images are point-annotated by placing a dot on every head position in the image. This produces a sparse localization map, i.e., a matrix of zeros and ones in which a one marks a head position. To train the model, the localization map is transformed into a density map by applying a Gaussian kernel to each head position. The crowd model then regresses the target density maps. Over the years, several state-of-the-art crowd-counting CNN architectures have been proposed [5]. Some examples include CrowdCNN [6], MCNN [7], CMTL [8], CSRNet [9], TEDnet [10], SANet [11], SASNet [12], MobileCount [13], LCDnet [14], and DroneNet [15]. These models incorporate numerous architectural improvements and training methods, resulting in incremental gains in counting accuracy on benchmark crowd-counting datasets.
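As a concrete illustration of the target-generation step just described, the sketch below turns dot annotations into a density map with a fixed-bandwidth Gaussian; the kernel width, image size, and head coordinates are illustrative assumptions, not the settings used by any particular model cited above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, height, width, sigma=4.0):
    """Turn point (head) annotations into a training target whose sum equals the count."""
    target = np.zeros((height, width), dtype=np.float32)
    for x, y in points:                       # (x, y) head positions in pixel coordinates
        target[int(y), int(x)] += 1.0         # sparse localization map of zeros and ones
    return gaussian_filter(target, sigma)     # Gaussian-smoothed density map

heads = [(120, 80), (130, 85), (300, 200)]    # hypothetical annotations
dmap = density_map_from_points(heads, height=480, width=640)
print(dmap.sum())                             # ~3: the count is preserved by the smoothing
```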
Nevertheless, the prediction performance of these models degrades when the crowd images are blurry and noisy. Noisy crowd images are common in many practical scenarios such as rain, fog, dust storms, object motion, etc. When high accuracy is desired, it is essential that the crowd model be highly robust against such image distortions. To provide highly accurate counting performance, this work proposes to enhance the crowd image prior to passing it through the counting model. Although traditional image enhancement methods can be used for image denoising, more powerful methods such as generative adversarial networks (GANs) [16] offer superior capabilities for image enhancement across several image attributes (brightness, pixel distortion, illumination, contrast, etc.) and produce a clearer, enhanced version of the noisy crowd image. Pix2Pix [17] is a general-purpose image-translation GAN used in many tasks such as map-to-aerial-photo, black-and-white-to-color, sketch-to-color, day-to-night, thermal-to-color, and semantic-map-to-photo translation. This paper proposes the use of Pix2Pix GANs for crowd image denoising and enhancement prior to passing the images to the crowd model for density estimation. Pix2Pix GANs have been applied previously in various image-translation tasks [18, 19, 20] with significant improvements in results.
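The two-stage inference just outlined can be summarized in a few lines of PyTorch-style pseudocode. The sketch below treats the pretrained Pix2Pix generator and the counting CNN as black-box modules; the tensor shape and input normalization are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import torch

@torch.no_grad()
def estimate_count(noisy_image: torch.Tensor,
                   generator: torch.nn.Module,
                   counter: torch.nn.Module) -> float:
    """noisy_image: a (1, 3, H, W) tensor scaled to [-1, 1], the range Pix2Pix generators expect."""
    generator.eval()
    counter.eval()
    denoised = generator(noisy_image)    # stage 1: GAN-based denoising/enhancement
    density = counter(denoised)          # stage 2: density-map regression
    return density.sum().item()          # crowd count = integral of the density map
```

Here the generator carries weights obtained from the Pix2Pix training on synthetic pairs, and the counter can be any of the density-estimation CNNs cited above.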
The contributions of this paper are as follows:
* A two-stage crowd density estimation method is proposed for applications requiring high accuracy and robustness against extreme weather and environmental conditions, obtained by integrating a Pix2Pix GAN with a density estimation model.
* A general-purpose Pix2Pix GAN is trained on crowd datasets of pairs of clear and noisy images. We synthetically add several noise and distortion effects to mimic real-world harsh weather scenarios; a minimal sketch of such synthetic corruption appears after this list.
* The pretrained generator of the Pix2Pix GAN is then used as a preprocessor for several crowd density estimation models. The prediction performance of the GAN-aided crowd-counting models is compared with that of the standard crowd-counting models to validate the significance of the proposed scheme.
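One simple way to build the paired (clear, noisy) training data is sketched below; the blur radius, haze strength, noise level, and input file name are assumed parameters chosen for illustration, not necessarily the corruption pipeline used to produce the training set described above.

```python
import numpy as np
from PIL import Image, ImageFilter

def synthesize_harsh_weather(clean, fog=0.5, blur=2.0, noise_std=10.0):
    """Return a synthetically corrupted copy of a clean crowd image (PIL RGB)."""
    img = np.asarray(clean.filter(ImageFilter.GaussianBlur(blur)), dtype=np.float32)
    img = (1.0 - fog) * img + fog * 255.0                    # flat fog/haze veil
    img += np.random.normal(0.0, noise_std, img.shape)       # sensor-style Gaussian noise
    return Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))

clean = Image.open("crowd.jpg").convert("RGB")   # hypothetical input file
noisy = synthesize_harsh_weather(clean)          # (noisy, clean) form one Pix2Pix training pair
```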
2301.09529 | Implication in sharply paraorthomodular and relatively paraorthomodular
posets | In this paper we show that several classes of partially ordered structures
having paraorthomodular reducts, or whose sections may be regarded as
paraorthomodular posets, admit a quite natural notion of implication, that
admits a suitable notion of adjointness. Within this framework, we propose a
smooth generalization of celebrated Greechie's theorems on amalgams of finite
Boolean algebras to the realm of Kleene lattices. | Ivan Chajda, Davide Fazio, Helmut Länger, Antonio Ledda, Jan Paseka | 2023-01-23T16:27:11Z | http://arxiv.org/abs/2301.09529v1 | # Implication in sharply paraorthomodular and relatively paraorthomodular posets
###### Abstract
In this paper we show that several classes of partially ordered structures having paraorthomodular reducts, or whose sections may be regarded as paraorthomodular posets, admit a quite natural notion of implication, that admits a suitable notion of adjointness. Within this framework, we propose a smooth generalization of celebrated Greechie's theorems on amalgams of finite Boolean algebras to the realm of Kleene lattices.
**AMS Subject Classification:** 03G12, 03G25, 03G10, 03B47, 03B60, 06A11, 06C15
**Keywords:** Paraorthomodular poset, relatively paraorthomodular poset, sharply paraorthomodular poset, orthogonal poset, Kleene poset, weakly Boolean poset, Kleene lattice, logics of quantum mechanics, orthomodular lattice, orthomodular poset, amalgam of Kleene lattices, atomic amalgam, adjointness, Sasaki implication
## 1 Introduction
Janusz Czelakowski's early contributions concern quantum logic and the foundation of quantum mechanics. More precisely, he deepened Kochen and Specker's results on partial Boolean algebras (PBAs), which are partial algebras that generalize projection operators over separable Hilbert spaces [19]. Among other achievements, Czelakowski's work provided, on the one hand, a novel characterization of PBAs embeddable into Boolean algebras. On the other hand, it clarified why PBAs can be regarded as the algebraic semantic of a logic in its own right. See e.g. [10, 11, 12] for details.
The interest in partial Boolean algebras is motivated by their coping quite well with indeterminacy arising when non-commuting observables (self-adjoint linear operators) are considered. In fact, partial Boolean algebras are endowed with a binary operation \(\vee\) yielding the algebraic counterpart of a partial disjunction which is defined for pairs of commensurable observables. Therefore, although rooted in a common concrete model (i.e. projections over Hilbert spaces), Kochen and Specker's approach to the foundation of quantum mechanics differs from Birkhoff and von Neumann's quantum logic [3]. In fact, the former avoids interpretation difficulties stemmed by the latter, which assumes that conjunctions/disjunctions of quantum propositions always exist, even if they concern incompatible observables.
Quoting Czelakowski and Suppes,
"They (_Birkhoff and von Neumann_) require that the structure of experimental propositions be a lattice, and thus that the conjunction of two meaningful propositions in the lattice also be in lattice, but.their requirement seems too strict when one proposition expresses a possible result of measuring position and the other measuring momentum. The fact that the closed linear subspaces of a complex separable Hilbert space form such a lattice is not sufficient, in my view, to maintain that it expresses the more restricted logic of experimental propositions, in spite of the central importance of this lattice of subspaces in the formulation of the theory".
Indeed, as mentioned in [10, 21],
"What experimental meaning can one attach to the meet and join of two given experimental propositions".
Among classes of partial Boolean algebras, a prominent role is played by _transitive_ partial Boolean algebras (see e.g. [10]) which, in turn, are orthomodular posets whose compatibility relation is _regular_ (see e.g. [2, 18, 20] for details).
Let us recall that a bounded poset \(\mathbf{P}=(P,\leq,^{\prime},0,1)\) with a complementation which is an antitone involution is called orthomodular if it is orthogonal, i.e. \(x\lor y\) exists for \(x,y\in P\) whenever \(x\leq y^{\prime}\), and it satisfies the orthomodular law
1. \(x\leq y\) implies \(x\vee(y\wedge x^{\prime})=y\).
Moreover, if \(\mathbf{P}\) is lattice-ordered, i.e. it is an _orthomodular lattice_, then (OM) can be expressed in the form of an identity
1. \(x\vee\big{(}(x\lor y)\wedge x^{\prime}\big{)}\approx x\lor y\)
and its dual form
1. \(x\wedge\big{(}(x\wedge y)\lor x^{\prime}\big{)}\approx x\wedge y\).
A different abstract counterpart of the set of quantum events has been suggested within the so-called unsharp approach to quantum theory. Such structures (i.e. effect algebras or MV-algebras) represent a suitable algebraic counterpart of quantum effects, namely bounded self-adjoint linear operators on a Hilbert space that satisfy the Born's rule. However, since the canonical order induced by Born's rule on the set of effects on a Hilbert space is not a lattice in general, effect algebras do not lend themselves to a smooth logico-algebraic investigation.
For this reason, in 2016 Roberto Giuntini, Francesco Paoli and one of the present authors proposed the notion of a paraorthomodular lattice as a natural generalization of the concept of an orthomodular lattice. These structures find a concrete realization in the set of effects equipped with Olson's spectral ordering, see e.g. [15] or [16]. However, although paraorthomodular lattices allow one to overcome difficulties arising from the effect-algebra framework in a very natural and physically motivated fashion, they may also raise interesting philosophical questions, paralleling, in some sense, what happens for their "sharp relatives".
In [5] paraorthomodular posets were introduced as a framework in which the order-negation reducts of quantum structures ranging from orthomodular lattices and effect algebras to, of course, paraorthomodular lattices can be treated against a common backdrop. In the same work, a preliminary investigation of sharply paraorthomodular posets was provided. These structures play for paraorthomodular lattices the role that orthomodular posets play within the theory of orthomodular lattices. Therefore, the natural question arises whether paraorthomodular posets may be regarded as the semantic counterpart of a logic in its own right.
In this paper we address the issue of showing that several classes of partially ordered structures having paraorthomodular reducts, or whose "sections" (in the sense of section 5 below) may be regarded as paraorthomodular posets, admit a quite natural notion of implication.
In particular, this paper is organized as follows: after dispatching some preliminary notions in section 2, in section 3 we discuss a possible smooth notion of implication in sharply paraorthomodular posets. In section 4, we provide a generalization of R. Greechie's celebrated theorems [17] on amalgams of finite Boolean algebras to the realm of Kleene lattices. In sections 5 and 6, we discuss how the theory of paraorthomodular posets carries over in two different degrees of freedom: _relatively_ paraorthomodular posets and _relatively_ paraorthomodular join-semilattices. Finally, in section 7, we address the question of whether there exists a binary connective that forms an adjoint pair with the implication introduced earlier.
## 2 Basic concepts
Let \(\mathbf{P}=(P,\leq)\) be a poset, \(A,B\subseteq P\) and \(a,b\in P\). We define \(A\leq B\) if and only if \(x\leq y\) for all \(x\in A\) and all \(y\in B\). Instead of \(A\leq\{b\}\), \(\{a\}\leq B\) and \(\{a\}\leq\{b\}\) we simply write \(A\leq b\), \(a\leq B\) and \(a\leq b\), respectively. The sets
\[L(A) :=\{x\in P\mid x\leq A\},\] \[U(A) :=\{x\in P\mid A\leq x\}\]
are called the _lower cone_ and _upper cone_ of \(A\), respectively. Instead of \(L(A\cup B)\), \(L(A\cup\{b\})\), \(L(\{a,b\})\) and \(L\big{(}U(A)\big{)}\) we simply write \(L(A,B)\), \(L(A,b)\), \(L(a,b)\) and \(LU(A)\), respectively. Analogously, we proceed in similar cases. On \(2^{P}\) we introduce four binary relations \(\leq\), \(\leq_{1}\), \(\leq_{2}\) and \(\approx_{2}\) as follows:
\[A\leq B\text{ if and only if for every }x\in A\text{ and for every }y\in B\text{ we have }x\leq y,\] \[A\leq_{1}B\text{ if and only if for every }x\in A\text{ there exists some }y\in B\text{ with }x\leq y,\] \[A\leq_{2}B\text{ if and only if for every }y\in B\text{ there exists some }x\in A\text{ with }x\leq y,\] \[A\approx_{2}B\text{ if and only if }A\leq_{2}B\text{ and }B\leq_{2}A.\]
Recall that \(A\leq y\) if and only if \(A\leq_{1}y\), and similarly \(x\leq B\) if and only if \(x\leq_{2}B\).
A unary operation \({}^{\prime}\) on \(P\) is called an _antitone involution_ if
\[x,y\in P\text{ and }x\leq y\text{ together imply }y^{\prime}\leq x^{\prime}\]
and if it satisfies the identity
\[x^{\prime\prime}\approx x.\]
Note that if \({\bf P}\) has an antitone involution \({}^{\prime}\) and a bottom element \(0\), then it has also a top element \(1\), namely \(0^{\prime}\). We will call \(x,y\in P\)_orthogonal_ to each other, shortly \(x\perp y\), if \(x\leq y^{\prime}\) or, equivalently, \(y\leq x^{\prime}\).
By an _orthogonal poset_ we mean a bounded poset \((P,\leq,{}^{\prime},0,1)\) with an antitone involution satisfying the following condition:
\[x,y\in P\mbox{ and }x\perp y\mbox{ together imply that }x\lor y\mbox{ exists.}\]
Here and in the following \(x\lor y\) denotes the supremum of \(x\) and \(y\).
The concept of a paraorthomodular poset was introduced in [5] as a generalization of the concept of a paraorthomodular lattice introduced in [16]. Recall that a bounded _poset_\({\bf P}=(P,\leq,{}^{\prime},0,1)\) with an antitone involution is called _paraorthomodular_ if it satisfies the following condition:
* \(x,y\in P\), \(x\leq y\) and \(x^{\prime}\wedge y=0\) together imply \(x=y\).
It should be remarked that \(x^{\prime}\wedge y=0\) is equivalent to \(L(x^{\prime},y)=0\). The second notation was used in our previous paper [5]. Moreover, if the paraorthomodular poset \({\bf P}\) is lattice-ordered then it is called a _paraorthomodular lattice_.
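For finite posets, condition (P) can be tested mechanically. The Python sketch below is our own illustration (not part of the formal development): it encodes a bounded poset by its order relation and antitone involution, uses the remark above that \(x^{\prime}\wedge y=0\) amounts to \(L(x^{\prime},y)=\{0\}\), and reports that the six-element benzene ring fails (P); the concrete element names are assumptions of the sketch.

```python
from itertools import product

def lower_cone(elems, leq, xs):
    return {z for z in elems if all(leq[(z, x)] for x in xs)}

def is_paraorthomodular(elems, leq, inv, zero):
    # Condition (P): x <= y and x' ∧ y = 0 (i.e. L(x', y) = {0}) together imply x = y.
    for x, y in product(elems, repeat=2):
        if leq[(x, y)] and lower_cone(elems, leq, [inv[x], y]) == {zero} and x != y:
            return False
    return True

# The six-element benzene ring: 0 < a < b' < 1 and 0 < b < a' < 1, with ' as indicated.
elems = ["0", "a", "b", "a'", "b'", "1"]
order = ({("0", e) for e in elems} | {(e, "1") for e in elems}
         | {(e, e) for e in elems} | {("a", "b'"), ("b", "a'")})
leq = {(u, w): (u, w) in order for u, w in product(elems, repeat=2)}
inv = {"0": "1", "1": "0", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}

print(is_paraorthomodular(elems, leq, inv, "0"))   # False: a <= b' and a' ∧ b' = 0, yet a != b'
```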
Given a poset \({\bf P}\), an element \(y\in P\) is said to be a _complement_ of \(x\in P\) if \(L(x,y)=L(P)\) and \(U(x,y)=U(P)\). \({\bf P}\) is said to be _complemented_ if each element of \(P\) has a complement in \({\bf P}\). Moreover, if \({\bf P}\) is a bounded poset with antitone involution which is orthogonal and \({}^{\prime}\) is a complementation, then \({\bf P}\) is an orthomodular poset. Namely, as shown e.g. in [18], for a complementation \({}^{\prime}\), (P) is equivalent to (OM).
An orthogonal paraorthomodular poset is called _sharply paraorthomodular_ (see [5]). Of course, any paraorthomodular lattice is sharply paraorthomodular. We call a sharply paraorthomodular poset \({\bf P}\)_regular_ provided it satisfies the following identity, for any \(x,y\in P\):
* \(x\wedge x^{\prime}\leq y\lor y^{\prime}\).1 Footnote 1: It is worth observing that paraorthomodular lattices defined in [15] coincide with regular paraorthomodular lattices in our framework.
Note that, by virtue of orthogonality, (2) is well-defined. We call a distributive, regular paraorthomodular lattice a _Kleene lattice_.
A poset \({\bf P}\) is called _distributive_ if one of the following equivalent LU-identities is satisfied:
\[L\big{(}U(x,y),z\big{)} \approx LU\big{(}L(x,z),L(y,z)\big{)},\] \[U\big{(}L(x,z),L(y,z)\big{)} \approx UL\big{(}U(x,y),z\big{)},\] \[U\big{(}L(x,y),z\big{)} \approx UL\big{(}U(x,z),U(y,z)\big{)},\] \[L\big{(}U(x,z),U(y,z)\big{)} \approx LU\big{(}L(x,y),z\big{)},\] \[L\big{(}U(x_{1},x_{2},\ldots,x_{n}),z\big{)} \approx LU\big{(}L(x_{1},z),L(x_{2},z),\ldots,L(x_{n},z)\big{)},\] \[U\big{(}L(x_{1},x_{2},\ldots,x_{n}),z\big{)} \approx UL\big{(}U(x_{1},z),U(x_{2},z),\ldots,U(x_{n},z)\big{)}.\]
Recall that a _Boolean poset_ is a bounded distributive complemented poset. Since the complementation in a Boolean poset \(\mathbf{P}\) is unique and antitone, \(\mathbf{P}\) is a poset with antitone involution.
An orthomodular poset \(\mathbf{P}\) is called _weakly Boolean_[22] if for every \(a,b\in P\) the condition \(a\wedge b=a\wedge b^{\prime}=0\) implies \(a=0\).
Let \(\mathbf{P}=(P,\leq)\) be a poset. In what follows, for every subset \(A\) of \(P\) let \(\operatorname{Max}A\) and \(\operatorname{Min}A\) denote the set of all maximal and minimal elements of \(A\), respectively. We say that \(\mathbf{P}\) is
1. _mub-complete_[1] if for every upper bound \(x\) of a finite subset \(M\) of \(P\) there is a minimal upper bound of \(M\) below \(x\),
2. _mlb-complete_ if for every lower bound \(x\) of a finite subset \(M\) of \(P\) there is a maximal lower bound of \(M\) above \(x\),
3. _mlub-complete_ if it is both mub-complete and mlb-complete.
If \(\mathbf{P}=(P,\leq,^{\prime},0,1)\) is a bounded poset with an antitone involution then \(\mathbf{P}\) is mub-complete if and only if it is mlb-complete.
A poset \(\mathbf{P}\) is said to satisfy the _Ascending Chain Condition_ (ACC) if it has no infinite ascending chains. The notion of _Descending Chain Condition_ (DCC) is defined dually. If \(\mathbf{P}\) satisfies the ACC then every non-empty subset of \(P\) has at least one maximal element. The dual statement holds for the DCC.
Evidently, if \(\mathbf{P}\) satisfies the ACC then \(\mathbf{P}\) is mlb-complete and the dual statement holds for the DCC. Moreover, every finite poset and every lattice is mlub-complete.
The bounded poset \(\mathbf{P}\) has the _maximality property_[22] if for every \(a,b\in P\) the set \(L(a,b)\) has a maximal element. Clearly, every mlb-complete poset has the property of maximality. Recall also that every weakly Boolean orthomodular poset with the property of maximality is a Boolean algebra [22, Theorem 4.2].
We sometimes extend an operator \(*\colon P^{2}\to 2^{P}\) to a binary operation on \(2^{P}\) by
\[A*B:=\bigcup_{x\in A,y\in B}(x*y)\]
for all \(A,B\in 2^{P}\).
## 3 What implication for sharply paraorthomodular posets?
Let \((P,\leq,^{\prime},0,1)\) be an orthogonal mlb-complete poset. If \(y\in P\) and \(A\subseteq P\) then we will denote the set \(\{y\lor a\mid a\in A\}\) by \(y\lor A\) provided that all the elements \(y\lor a\) exist. Analogously, we proceed in similar cases. Now we introduce the operator \(\to\colon P^{2}\to 2^{P}\) as follows:
1. \(x\to y:=y\vee\operatorname{Max}L(x^{\prime},y^{\prime})\).
Since \(\operatorname{Max}L(x^{\prime},y^{\prime})\leq y^{\prime}\), \(x\to y\) is correctly defined.
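On a finite poset the set \(x\to y\) can be computed directly from (I1). The sketch below is again our own illustration (with assumed element names); it evaluates the operator on the orthomodular lattice \(\mathrm{MO}_{2}\), the horizontal sum of two four-element Boolean algebras, and its output agrees with the case distinctions derived in Theorem 3.2(iii) below.

```python
from itertools import product

def lower_cone(elems, leq, xs):
    return {z for z in elems if all(leq[(z, x)] for x in xs)}

def upper_cone(elems, leq, xs):
    return {z for z in elems if all(leq[(x, z)] for x in xs)}

def maximal(leq, S):
    return {s for s in S if not any(leq[(s, t)] and s != t for t in S)}

def minimal(leq, S):
    return {s for s in S if not any(leq[(t, s)] and s != t for t in S)}

def join(elems, leq, a, b):
    m = minimal(leq, upper_cone(elems, leq, [a, b]))
    assert len(m) == 1        # unique least upper bound; exists in our use case since b <= a' (orthogonality)
    return next(iter(m))

def implication(elems, leq, inv, x, y):
    # (I1): x -> y = y v Max L(x', y'), in general a set of elements of P
    return {join(elems, leq, y, a)
            for a in maximal(leq, lower_cone(elems, leq, [inv[x], inv[y]]))}

# MO_2: atoms a, a', b, b' with a'' = a and b'' = b; an orthomodular (hence paraorthomodular) lattice.
elems = ["0", "a", "a'", "b", "b'", "1"]
order = {("0", e) for e in elems} | {(e, "1") for e in elems} | {(e, e) for e in elems}
leq = {(u, w): (u, w) in order for u, w in product(elems, repeat=2)}
inv = {"0": "1", "1": "0", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}

print(implication(elems, leq, inv, "0", "a"))   # {'1'}   (x <= y and ' is a complementation)
print(implication(elems, leq, inv, "b", "a"))   # {'a'}   (here Max L(b', a') = {0})
```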
It is worth noticing that every _orthomodular poset_ (see e.g. [6], [8], [13] and [14]) is paraorthomodular, but there are examples of paraorthomodular posets (or even lattices, indeed) which are not orthomodular, see [16].
**Example 3.1**.: _The paraorthomodular posets depicted in Fig. 1 are neither orthomodular nor sharply paraorthomodular:_
**Theorem 3.2**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset, \(\to\) defined by \((\mathrm{I}1)\) and \(x,y,z\in P\). Then the following hold:_
1. \(y\leq x\to y\)_,_
2. \(x\leq y\) _implies_ \(y\to z\leq_{1}x\to z\)_,_
3. \(x\to y\approx\left\{\begin{array}{ll}y\lor y^{\prime}&\text{ if }x\leq y,\\ y\lor(x^{\prime}\wedge y^{\prime})&\text{ if }x\perp y,\\ x^{\prime}\lor y&\text{ if }y\leq x,\end{array}\right.\)__
4. \((x\to y)\to y\approx y\vee\big{(}y^{\prime}\wedge\operatorname{Min}U(x,y) \big{)}\)_,_
5. \(\big{(}(x\to y)\to y\big{)}\to y\approx y\vee\Big{(}y^{\prime}\wedge\big{(}y \vee\operatorname{Max}L(x^{\prime},y^{\prime})\big{)}\Big{)}\)_._
Let us note that if \({}^{\prime}\) is a complementation (e.g., if \(\mathbf{P}\) is orthomodular) then the first line of (iii) can be written in the form
1. \(x\leq y\) implies \(x\to y=1\).
Implication in orthomodular lattices is usually defined by \(x\to y:=y\vee(x^{\prime}\wedge y^{\prime})\).
Proof of Theorem 3.2.:
1. follows directly from (I1).
2. Assume \(x\leq y\) and \(z\in P\). Then \(y^{\prime}\leq x^{\prime}\). Let \(u\in y\to z\). Then there exists some \(w\in\operatorname{Max}L(y^{\prime},z^{\prime})\) with \(z\lor w=u\). Since \(y^{\prime}\leq x^{\prime}\) we have \(w\in L(x^{\prime},z^{\prime})\). Thus there exists some \(t\in\operatorname{Max}L(x^{\prime},z^{\prime})\) with \(w\leq t\). Now \[u=z\lor w\leq z\lor t\in x\to z\] proving \(y\to z\leq_{1}x\to z\).
3. \(x\to y\approx y\vee\operatorname{Max}L(x^{\prime},y^{\prime})\approx \left\{\begin{array}{ll}y\lor y^{\prime}&\text{ if }x\leq y,\\ y\lor(x^{\prime}\wedge y^{\prime})&\text{ if }x\perp y,\\ y\lor x^{\prime}=x^{\prime}\lor y&\text{ if }y\leq x.\end{array}\right.\)
4. \[(x\to y)\to y \approx\bigcup_{z\in x\to y}(z\to y)\approx\bigcup_{z\in y\lor \operatorname{Max}L(x^{\prime},y^{\prime})}\big{(}y\vee\operatorname{Max}L(z ^{\prime},y^{\prime})\big{)}\approx\] \[\approx y\vee\big{(}y^{\prime}\wedge\operatorname{Min}U(x,y) \big{)}.\]
5. According to (iv) and (iii) we have \[\big{(}(x\to y)\to y\big{)}\to y \approx\Big{(}y\vee\big{(}y^{\prime}\wedge\operatorname{Min}U(x, y)\big{)}\Big{)}\to y\approx\] \[\approx\bigcup_{w\in\operatorname{Min}U(x,y)}\Big{(}\big{(}y\lor( y^{\prime}\wedge w)\big{)}\to y\Big{)}\approx\] \[\approx y\vee\Big{(}y^{\prime}\wedge(y\vee\operatorname{Max}L(x ^{\prime},y^{\prime})\big{)}\Big{)}.\]
If \(\mathbf{P}\) is, moreover, sharply paraorthomodular, we can prove a bit more.
**Lemma 3.3**.: _Let \((P,\leq,{}^{\prime},0,1)\) be a sharply paraorthomodular mlb-complete poset, \(\to\) defined by \((\mathrm{I}1)\) and \(a,b\in P\). Then the following hold:_
* \(b^{\prime}\to b=b\)_._
* _If_ \(a\perp b\) _and_ \(b\wedge b^{\prime}=0\) _then_ \(a\to b=b\) _if and only if_ \(a=b^{\prime}\)_._
* \(a\to b=1\) _implies_ \(a\leq b\)_._
Proof.:
* \(b^{\prime}\to b=b\vee(b\wedge b^{\prime})=b\).
* Assume \(a\perp b\) and \(b\wedge b^{\prime}=0\). First suppose \(a\to b=b\). Then \(b\vee\mathrm{Max}\,L(a^{\prime},b^{\prime})=a\to b=b\). Hence \(\mathrm{Max}\,L(a^{\prime},b^{\prime})\leq b\). This shows \(\mathrm{Max}\,L(a^{\prime},b^{\prime})\leq b\wedge b^{\prime}=0\), i.e. \(\mathrm{Max}\,L(a^{\prime},b^{\prime})=0\) and therefore \(L(a^{\prime},b^{\prime})=0\), i.e. \(a^{\prime}\wedge b^{\prime}=0\) which together with \(a\leq b^{\prime}\) and (P) yields \(a=b^{\prime}\). The other implication follows from (i).
* Assume \(a\to b=1\). Let \(c\in\mathrm{Max}\,L(a^{\prime},b^{\prime})\). Then \(c\leq b^{\prime}\) and \(b\lor c=1\), i.e. \(c^{\prime}\wedge b^{\prime}=0\). Applying (P) we conclude \(c=b^{\prime}\). Hence \[b^{\prime}=c\in\mathrm{Max}\,L(a^{\prime},b^{\prime})\subseteq L(a^{\prime},b ^{\prime})\subseteq L(a^{\prime})\] and thus \(b^{\prime}\leq a^{\prime}\), i.e. \(a\leq b\).
As shown above, the operator \(\to\) in sharply paraorthomodular posets satisfies several nice properties and hence it can be considered as the connective _implication_ in the logic formalized by means of paraorthomodular posets (see [5] for the connection of this logic with the logic of quantum mechanics, see also [13] and [16] for the lattice version). Hence the natural question arises whether this logic can be based on this connective only. Contrary to the case of orthomodular posets treated from this point of view in [8], this is not possible since we cannot derive the order \(\leq\) by means of the implication \(\to\) only. However, we can get a condition on an orthogonal poset \(\mathbf{P}\) equipped with \(\to\) formulated in this language such that \(\mathbf{P}\) becomes a sharply paraorthomodular one.
**Theorem 3.4**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset and \(\to\) defined by \((\mathrm{I}1)\). Then the following are equivalent:_
* \(\mathbf{P}\) _is paraorthomodular._
* _If_ \(x\to y=1\) _then_ \(x\leq y\)_._
Proof.: (i) \(\Rightarrow\) (ii):
Assume that \(x\to y=1\). Let \(z\in\mathrm{Max}\,L(x^{\prime},y^{\prime})\). Then \(y\lor z=1\), i.e., \(y^{\prime}\wedge z^{\prime}=0\). Since \(z\leq y^{\prime}\) we have according to (P) that \(z=y^{\prime}\), i.e., \(y^{\prime}\leq x^{\prime}\). We conclude \(x\leq y\).
(ii) \(\Rightarrow\) (i):
If \(x\leq y\) and \(x^{\prime}\wedge y=0\) then
\[y\to x=y^{\prime}\lor x=(x^{\prime}\wedge y)^{\prime}=0^{\prime}=1\]
which implies \(y\leq x\) by (ii) and together with \(x\leq y\) yields \(x=y\)
In the case of lattices, (I1) reduces to
* \(x\to y:=y\vee(x^{\prime}\wedge y^{\prime})\).
The next lemma shows that \(\to\) defined by (I2) is antitone in the first variable provided the poset in question is a lattice.
**Lemma 3.5**.: _Let \((L,\vee,\wedge,{}^{\prime})\) be a lattice with an antitone mapping \({}^{\prime}\) and \(\to\) defined by (I2). Then \(x\leq y\) implies \(y\to z\leq x\to z\)._
Proof.: \(x\leq y\) implies \(y^{\prime}\leq x^{\prime}\) and hence \(y\to z=z\vee(y^{\prime}\wedge z^{\prime})\leq z\vee(x^{\prime}\wedge z^{ \prime})=x\to z\).
## 4 Amalgams of Kleene lattices
In this section we provide a generalization of celebrated R. Greechie's theorems [17] on amalgams of finite Boolean algebras to the realm of Kleene lattices. Specifically, we will show how to obtain sharply paraorthomodular posets and paraorthomodular lattices by "gluing" together Kleene lattices pairwise sharing a common atomic subalgebra of at most four elements. Interestingly enough, we will show that _any_ amalgam of Kleene lattices \(\mathscr{K}\) yields a paraorthomodular poset \(\mathbf{K}\). Such a structure is also a sharply paraorthomodular poset (paraorthomodular lattice) provided that its building Kleene blocks do not form loops of order 3 (loops of order 3 or 4). Analogous results have been already obtained for (lattice) effect algebras in terms of pastings of MV algebras, see e.g. [9, 23].
Let \(\mathscr{K}=\{\mathbf{K}_{i}\}_{i\in I}\) be a family of finite Kleene lattices such that \(|K_{i}|\geq 6\), for any \(i\in I\). Moreover, let \(K_{i}\cap K_{j}\) (\(i\neq j\)) be either \(\{0,1\}\) or an atomic Kleene subalgebra of both \(K_{i}\) and \(K_{j}\) (on which operations coincide) such that \(|K_{i}\cap K_{j}|\leq 4\). Also, we assume that any \(x\in K_{i}\cap K_{j}\smallsetminus\{0,1\}\) is either an atom or a co-atom in both \(K_{i}\) and \(K_{j}\). We call \(\mathscr{K}\) a _pasted family_ of Kleene lattices, and any \(\mathbf{K}_{i}\in\mathscr{K}\) an _initial block_. Note that \(\mathbf{K}_{i}\cap\mathbf{K}_{j}\not\cong\mathbf{K}_{3}\), where \(\mathbf{K}_{3}\) is the three-elements Kleene lattice, since otherwise one would have \(\mathbf{K}_{i}\cong\mathbf{K}_{3}\cong\mathbf{K}_{j}\). In fact, assume by way of contradiction that \(\mathbf{K}_{i}\cap\mathbf{K}_{j}\cong\mathbf{K}_{3}\); let \(a\in K_{i}\smallsetminus K_{3}\). Clearly one must have \(a\parallel b\), where \(b\) is the fixed point of \(\mathbf{K}_{3}\). By regularity, one must have \(0=a\wedge a^{\prime}<b\), and so \(\mathbf{K}_{i}\) contains a sublattice isomorphic to \(\mathbf{N}_{5}\), which is impossible. Therefore, \(\mathbf{K}_{i}\cap\mathbf{K}_{j}\) must be orthoisomorphic either to the two-element Boolean algebra \(\mathbf{B}_{2}\), or to the four-element Boolean algebra \(\mathbf{B}_{4}\), or to the four-element Kleene lattice:
Fig. 3
Let \(\mathscr{K}=\{{\bf K}_{i}\}_{i\in I}\) be a pasted family of Kleene lattices. Set \(K=\bigcup_{i\in I}K_{i}\), and let \(\leq\,\subseteq\,K^{2}\) be such that, for any \(x,y\in K\),
\[x\leq y\text{ if and only if there exists }i\in I\text{ such that }x\leq^{{\bf K}_{i}}y.\]
Customary arguments yield the following
**Lemma 4.1**.: _Let \(\mathscr{K}=\{{\bf K}_{i}\}_{i\in I}\) be a pasted family of Kleene lattices. Then \((K,\leq,0,1)\) is a bounded partially ordered set._
The following lemma is obvious
**Lemma 4.2**.: _Let \(\mathscr{K}=\{{\bf K}_{i}\}_{i\in I}\) be a pasted family of Kleene lattices. Upon defining, for any \(x\in K\), \(x^{\prime}=x^{\prime{\bf K}_{i}}\), for some \(i\in I\) such that \(x\in K_{i}\), one has that \((K,\leq,^{\prime},0,1)\) is a bounded poset with antitone involution._
Given a pasted family of Kleene lattices \(\mathscr{K}=\{{\bf K}_{i}\}_{i\in I}\), we will call the bounded poset with antitone involution \((K,\leq,^{\prime},0,1)\) obtained from a pasted family of Kleene lattices \(\mathscr{K}=\{{\bf K}_{i}\}_{i\in I}\) as by Lemma 4.1 and Lemma 4.2 an _atomic amalgam_ of Kleene lattices. Note that, if any \({\bf K}_{i}\in\mathscr{K}\) is a Boolean algebra (for any \(i\in I\)), then our concept coincides with the one developed by R. Greechie (cf. [2, p. 144]). The following result shows that, indeed, any atomic amalgam of Kleene lattices yields a paraorthomodular poset.
**Lemma 4.3**.: _Let \(\mathscr{K}\) be a pasted family of Kleene lattices, then \({\bf K}=(K,\leq,^{\prime},0,1)\) is a paraorthomodular poset._
Proof.: Of course, \((K,\leq,^{\prime},0,1)\) is a bounded poset with antitone involution by Lemma 4.1 and Lemma 4.3. Now, suppose towards a contradiction that \({\bf K}\) is not paraorthomodular. Then, by [5, Theorem 2.5], \({\bf K}\) contains a strong sub-poset orthoisomorphic to the Benzene ring \({\bf B}_{6}\).
**Lemma 4.5**.: _Let \(K=\bigcup_{i\in I}K_{i}\) be an atomic amalgam of a pasted family of Kleene lattices \(\{\mathbf{K}_{i}\}_{i\in I}\). Then, for any \(i,j\in I\) such that \(i\neq j\), \(K_{i}\cup K_{j}\) is the base set of a paraorthomodular lattice which is isomorphic to the amalgam pasting together the lattices \((K_{i},\wedge,\vee)\) and \((K_{j},\wedge,\vee)\) along the pasted sublattice \((K_{i}\cap K_{j},\wedge,\vee)\)._
Proof.: Note that, since \(K_{i}\cap K_{j}\) is finite, it is a closed plexus in the sense of [2, p.139]. Moreover, \(K_{i}\cup K_{j}\) is a regularly pasted amalgam of lattices which, by the first part of [2, Theorem 3.4] is the universe of a lattice upon defining greatest lower bounds and least upper bounds as in the proof of [2, Lemma 3.3]. Moreover, by Lemma 4.3, \((K_{i}\cup K_{j},\wedge,\vee,{}^{\prime},0,1)\) is a paraorthomodular lattice.
Let \(\mathbf{A}=(A,\leq)\) be a poset. For any \(x,y\in A\), we say that \(x\) is _covered by_ \(y\), written \(x\lessdot y\), provided that \(x<y\) and, for any \(z\in A\), \(x\leq z\leq y\) implies \(x=z\) or \(y=z\).
**Lemma 4.6**.: _Let \(\mathscr{K}=\{\mathbf{K}_{i}\}_{i\in I}\) be a pasted family of Kleene lattices and let \(x,y\in K\) be such that \(x\leq^{\mathbf{K}_{i}}y\) for some \(i\in I\). Then the following holds:_
1. _If_ \(x\lessdot y\) _in_ \(\mathbf{K}\) _then_ \(x\lessdot^{\mathbf{K}_{i}}y\) _in_ \(\mathbf{K}_{i}\)_._
2. _If_ \(y\neq x^{\prime}\) _then_ \(x\lessdot y\) _in_ \(\mathbf{K}\) _if and only if_ \(x\lessdot^{\mathbf{K}_{i}}y\) _in_ \(\mathbf{K}_{i}\)_._
Proof.:
1. If \(x\lessdot y\) in \(\mathbf{K}\) and \(x\lessdot^{\mathbf{K}_{i}}z<^{\mathbf{K}_{i}}y\) then \(x<z<y\) would hold also in \(\mathbf{K}\), a contradiction.
2. Let us suppose that \(x\lessdot^{\mathbf{K}_{i}}y\) but \(x<z<y\) in \(\mathbf{K}\) for some \(z\in K\). Therefore, for some \(j,k\in I\), one has \(x<^{\mathbf{K}_{j}}z<^{\mathbf{K}_{k}}y\). We have \(x\in K_{i}\cap K_{j}\), \(z\in K_{j}\cap K_{k}\), and \(y\in K_{i}\cap K_{k}\). Observe also that \(i\neq j\) and \(i\neq k\). Let us distinguish several cases depending on the nature of \(x\) and \(y\). If \(x=0\), then \(y\) is an atom in \(\mathbf{K}_{i}\) and hence also in \(\mathbf{K}_{k}\). Similarly, if \(y=1\), then \(x\) is a co-atom in \(\mathbf{K}_{i}\) and in \(\mathbf{K}_{j}\). This means that \(z=x\) or \(z=y\), a contradiction. Therefore, let us suppose without loss of generality that \(x\neq 0\) and \(y\neq 1\). Hence \(x\) is an atom in both \(\mathbf{K}_{i}\) and \(\mathbf{K}_{j}\) and \(y\) is a co-atom in both \(\mathbf{K}_{i}\) and \(\mathbf{K}_{k}\). Now, if \(j\neq k\), then \(z\) must be either an atom in \(\mathbf{K}_{k}\) and hence in \(\mathbf{K}_{j}\), and in this case \(z=x\), or a co-atom in \(\mathbf{K}_{j}\) and hence in \(\mathbf{K}_{k}\), and so \(z=y\), which is impossible. So we must have \(j=k\). But this means that \(\mathbf{K}_{i}\cap\mathbf{K}_{j}=\{0,x,y=x^{\prime},1\}\), contradicting our assumptions.
**Remark 4.7**.: _Observe that, when one deals with the general case of Kleene lattices, the assumption of Lemma 4.6\((\mathrm{ii})\) is necessary. Indeed, let us consider the amalgam of Kleene lattices \(\mathbf{K}\)_
Fig. 5
_Note that \(a\lessdot a^{\prime}\) in the isomorphic copy of \(\mathbf{K}_{3}\times\mathbf{B}_{2}\)\(\{0,1,a,a^{\prime},b,b^{\prime}\}\), but \(a<c<a^{\prime}\) in \(\mathbf{K}\)._
Borrowing the analogous notion for Boolean algebras, given a natural number \(n\geq 3\), we say that initial blocks \(\mathbf{K}_{i_{1}},\ldots,\mathbf{K}_{i_{n}}\) of an atomic amalgam \(\mathbf{K}\) form an _atomic loop_ of order \(n\) for \(\mathbf{K}\) provided that the following conditions are satisfied:
1. For any \(1\leq j<n\), \(|K_{i_{j}}\cap K_{i_{j+1}}|=4\). Moreover, \(|K_{i_{n}}\cap K_{i_{1}}|=4\).
2. For any other pair of indexes \(1\leq j,k\leq n\) not mentioned in the above item, \(K_{i_{j}}\cap K_{i_{k}}=\{0,1\}\).
3. For any \(1\leq j<k<l<n\), \(K_{i_{j}}\cap K_{i_{k}}\cap K_{i_{l}}=\{0,1\}\).
Note that item (3) ensures that,for any \(1\leq j<n\), \(\{0,1\}=K_{i_{j}}\cap K_{i_{j+1}}\cap K_{i_{j+2}}=\{0,1,a_{j},a_{j}^{\prime}\} \cap\{0,1,a_{j+1},a_{j+1}^{\prime}\}\). Reasoning in a similar way one has that the atoms \(a_{1},\ldots,a_{n}\) must be pairwise distinct.
**Lemma 4.8**.: _Let \(\mathscr{K}=\{\mathbf{K}_{i}\}_{i\in I}\) be a pasted family of Kleene lattices which does not contain an atomic loop of order \(3\). If \(x,y\in K_{i}\), then \(x\vee^{\mathbf{K}_{i}}y\) is the least upper bound of \(x\) and \(y\) in \(\mathbf{K}\)._
Proof.: Let us assume without loss of generality that \(x\parallel y\). Let \(z\in K\) be such that \(x,y\leq z\). If \(z=1\) then \(x\vee^{\mathbf{K}_{i}}y\leq z\). Suppose now that \(z\neq 1\). Then there exists \(j,k\in I\) such that \(x<^{\mathbf{K}_{j}}z\) and \(y<^{\mathbf{K}_{k}}z\), \(x\) is an atom in both \(\mathbf{K}_{j}\) and \(\mathbf{K}_{i}\), \(y\) is an atom in both \(\mathbf{K}_{k}\) and \(\mathbf{K}_{i}\), and \(z\) is a co-atom in both \(\mathbf{K}_{j}\) and \(\mathbf{K}_{k}\). If \(j,i,k\) are pairwise distinct, then \((\mathbf{K}_{i},\mathbf{K}_{j},\mathbf{K}_{k})\) forms an atomic loop of order \(3\). Therefore, one must have \(i=j\) or \(j=k\). Therefore, by Lemma 4.5, \(K_{i}\cup K_{j}\cup K_{k}\) forms a paraorthomodular lattice and so we have again \(x\vee^{\mathbf{K}_{i}}y\leq z\).
**Remark 4.9**.: _Recall that in any Kleene lattice \(\mathbf{K}\) it holds that_
\[x\wedge y=0\text{ implies }x\leq y^{\prime}.\]
**Theorem 4.10**.: _An atomic amalgam of Kleene lattices is a sharply paraorthomodular poset if and only if it does not contain any loop of order \(3\)._
Proof.: Suppose that the atomic amalgam \(\mathbf{K}\) of Kleene lattices does not contain an atomic loop of order \(3\). For any \(x,y\in K\), if \(x\leq y^{\prime}\), then there exists \(i\in I\) such that \(x,y\in K_{i}\). But then \(x\lor y\) exists and equals \(x\vee^{\mathbf{K}_{i}}y\), by Lemma 4.8. Moreover, \(\mathbf{K}\) is a paraorthomodular poset by Lemma 4.3 and so it is sharply paraorthomodular. Conversely, let us assume that \(\mathbf{K}\) contains an atomic loop of order \(3\), say \((\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3})\). Assume that \(K_{1}\cap K_{2}=\{0,1,a_{1},a_{1}^{\prime}\}\), \(K_{2}\cap K_{3}=\{0,1,a_{2},a_{2}^{\prime}\}\), \(K_{3}\cap K_{1}=\{0,1,a_{3},a_{3}^{\prime}\}\), where \(a_{1},a_{2},a_{3}\) are pairwise distinct atoms. We show that \(a_{1}\lor a_{3}\) does not exist. Let us assume by way of contradiction that this supremum exists. Since \(a_{1}\wedge a_{2}=0=a_{2}\wedge a_{3}\), by Remark 4.9, one must have \(a_{1}\lor a_{3}\leq a_{2}^{\prime}\).
Now, let us suppose that \(a_{1}\vee^{\mathbf{K}_{1}}a_{3}\) equals \(a_{1}^{\prime}\) or \(a_{3}^{\prime}\). Let us assume without loss of generality that \(a_{1}\vee^{\mathbf{K}_{1}}a_{3}=a_{1}^{\prime}\). Then one has \(a_{1}\leq^{\mathbf{K}_{i}}a_{1}\lor a_{3}\leq^{\mathbf{K}_{j}}a_{1}^{\prime}\). If \(i=1\) or \(j=1\), then one must have \(a_{1}\lor a_{3}=a_{1}^{\prime}\) and so \(a_{1}^{\prime}\leq a_{2}^{\prime}\), i.e. \(a_{1}=a_{2}\), which is absurd. So we must
have \(i\neq 1\) and \(j\neq 1\). But then \(a_{1}\lor a_{3}\) must be either an atom, or a co-atom, which is equally impossible. Therefore, we can assume without loss of generality that \(a_{1}\vee^{\mathbf{K}_{1}}a_{3}\) is not equal to \(a_{1}^{\prime}\) or \(a_{3}^{\prime}\). Note that \(a_{1}\lessdot^{\mathbf{K}_{1}}a_{1}\vee^{\mathbf{K}_{1}}a_{3}\). In fact, if \(a_{1}<x<a_{1}\vee^{\mathbf{K}_{1}}a_{3}\), then the following would be a sub-lattice of \(\mathbf{K}_{1}\):
2. If \(x\) and \(y\) are in distinct blocks, then let \(x\leq^{\mathbf{K}_{i}}z\) and \(y\leq^{\mathbf{K}_{j}}z\) with \(z\neq 1\) and consider \(u\in K\) such that \(x\leq^{\mathbf{K}_{k}}u\), \(y\leq^{\mathbf{K}_{l}}u\) but \(z\not\leq u\) (and so \(u\neq 1\)). By hypothesis, one has \(i\neq j\). Therefore \(z\) must be a co-atom. Now, if \(j\neq l\) and \(i\neq k\), since necessarily \(k\neq l\), one has that \((\mathbf{K}_{i},\mathbf{K}_{j},\mathbf{K}_{l},\mathbf{K}_{k})\) is a loop of order \(4\), against our assumptions. Therefore, one must have \(j=l\) or \(i=k\). In both cases, it can be seen that one must have \(u=z\). Again a contradiction.
## 5 Relatively paraorthomodular posets
In the sequel we introduce posets closely related to paraorthomodular ones.
A _relatively paraorthomodular poset_ is an ordered quintuple \(\big{(}P,\leq,(^{x};x\in P),0,1\big{)}\) such that \((P,\leq,0,1)\) is a bounded poset and for every \(x\in P\), \(([x,1],\leq,{}^{x},x,1)\) is a paraorthomodular poset. Hence, a bounded poset \((P,\leq,0,1)\) is relatively paraorthomodular if in each principal filter \([x,1]\) there exists an antitone involution \({}^{x}\) such that
1. \(x\leq y\leq z\) and \(y^{x}\wedge z=x\) imply \(y=z\).
**Example 5.1**.: _An example of a poset \((\)even a lattice\()\) where every proper interval \([x,1]\) can be equipped with an antitone involution, but which is not relatively paraorthomodular, is visualized in Fig. 7._
_It is evident how to introduce the involution in every interval \([x,1]\); for example, in \([a,1]\) we have \(a^{a}:=1\), \((b^{\prime})^{a}:=b^{\prime}\) and \(1^{a}:=a\). However this poset is not paraorthomodular and hence also not relatively paraorthomodular since \(a\leq b^{\prime}\) and \(a^{\prime}\wedge b=0\), but \(a\neq b^{\prime}\). If, however, the involution in \([0,1]\) is defined as in Fig. 2 then this poset is relatively paraorthomodular._
**Example 5.2**.: _Examples of relatively paraorthomodular posets which are not lattices are visualized in Fig. 1._
It is natural to ask whether relatively paraorthomodular lattices and posets can also be converted into a certain logic, i.e. whether we can introduce a connective implication \(\rightarrow\) having some expected properties.
For mub-complete relatively paraorthomodular posets we can define
(I3) \(x\to y:=\big{(}\operatorname{Min}U(x,y)\big{)}^{y}\)
where \(A^{y}:=\{a^{y}\mid a\in A\}\). Since \(y\leq U(x,y)\) we have \(\operatorname{Min}U(x,y)\subseteq[y,1]\) and hence \(x\to y\) is well-defined.
Several interesting properties of the operator \(\to\) defined by (I3) in a relatively paraorthomodular poset are listed in the following result.
**Theorem 5.3**.: _Let \(\mathbf{P}=\big{(}P,\leq,(\overset{z}{\cdot};z\in P),0,1\big{)}\) be a relatively paraorthomodular mub-complete poset, \(\to\) defined by (I3) and \(x,y\in P\). Then the following hold:_
* (i) \(y\leq x\to y\)_,_
* (ii) \(x\leq y\) _if and only if_ \(x\to y=1\)_,_
* (iii) \(x\to y=\left\{\begin{array}{ll}1&\text{if }x\leq y,\\ (x\lor y)^{y}&\text{if }x\lor y\text{ exists},\\ x^{y}&\text{if }y\leq x,\end{array}\right.\)__
* (iv) \((x\to y)\to y\approx\operatorname{Min}U(x,y)\)_,_
* (v) \(\big{(}(x\to y)\to y\big{)}\to y\approx x\to y\)_._
Proof.:
* follows directly from (I3).
* The following are equivalent: \(x\leq y\), \(\operatorname{Min}U(x,y)=y\), \(\big{(}\operatorname{Min}U(x,y)\big{)}^{y}=1\), \(x\to y=1\).
* \(x\to y=\big{(}\operatorname{Min}U(x,y)\big{)}^{y}=\left\{\begin{array}{ll}y^ {y}=1&\text{if }x\leq y,\\ (x\lor y)^{y}&\text{if }x\lor y\text{ exists},\\ x^{y}&\text{if }y\leq x.\end{array}\right.\)
* \((x\to y)\to y\approx\bigcup_{z\in x\to y}(z\to y)\approx\bigcup_{z\in \big{(}\operatorname{Min}U(x,y)\big{)}^{y}}\big{(}\operatorname{Min}U(z,y) \big{)}^{y}\approx\) \[\approx\bigcup_{w\in\operatorname{Min}U(x,y)}\big{(}\operatorname{Min}U(w^{y},y) \big{)}^{y}\approx\bigcup_{w\in\operatorname{Min}U(x,y)}w^{yy}\approx \operatorname{Min}U(x,y).\]
* According to (iv) and (iii) we have \[\big{(}(x\to y)\to y\big{)}\to y \approx\operatorname{Min}U(x,y)\to y\approx\bigcup_{z\in \operatorname{Min}U(x,y)}(z\to y)\approx\bigcup_{z\in\operatorname{Min}U(x,y) }z^{y}\approx\] \[\approx\big{(}\operatorname{Min}U(x,y)\big{)}^{y}\approx x\to y.\]
Surprisingly, the implication as defined by (I3) can be used for a characterization of paraorthomodularity; see the next theorem.
**Theorem 5.4**.: _Let \((P,\leq,0,1)\) be a bounded mub-complete poset, for all \(x\in P\) let \(([x,1],\overset{x}{\cdot},x,1)\) be a bounded poset with an antitone involution and let \(\to\) be defined by (I3). Then the following are equivalent:_
1. \((P,\leq,{}^{0},0,1)\) _is paraorthomodular._
2. \(x\leq y^{0}\) _and_ \(x\to y=y\) _together imply_ \(x=y^{0}\)_._
Proof.: The following are equivalent: \(x\to y=y\), \(\big{(}\operatorname{Min}U(x,y)\big{)}^{y}=y\), \(\operatorname{Min}U(x,y)=1\), \(U(x,y)=1\), \(x\lor y=1\), \(x^{0}\wedge y^{0}=0\). Hence (ii) is equivalent to (ii'):
(ii') \(x\leq y^{0}\) and \(x^{0}\wedge y^{0}=0\) imply \(x=y^{0}\).
But this is nothing else than (P) for \((P,\leq,{}^{0},0,1)\).
Our results can be illuminated by the following example.
**Example 5.5**.: _Consider the relatively paraorthomodular poset depicted in Fig. 1(a). On the intervals, define antitone involutions as follows: \([a,1]\): \((a^{\prime})^{a}:=b^{\prime}\), \((b^{\prime})^{a}:=a^{\prime}\), \(a^{a}:=1\), \(1^{a}:=a\), \([b,1]\): \((a^{\prime})^{b}:=b^{\prime}\), \((b^{\prime})^{b}:=a^{\prime}\), \(b^{b}:=1\), \(1^{b}:=b\) and trivially in the two-element intervals \([a^{\prime},1]\) and \([b^{\prime},1]\). Then the table for the operator \(\to\) defined by \(x\to y:=\big{(}\operatorname{Min}U(x,y)\big{)}^{y}\) is as follows:_
\[\begin{array}{c|cccccc}\to&0&a&b&a^{\prime}&b^{\prime}&1\\ \hline 0&1&1&1&1&1&1\\ a&a^{\prime}&1&\{a^{\prime},b^{\prime}\}&1&1&1\\ b&b^{\prime}&\{a^{\prime},b^{\prime}\}&1&1&1&1\\ a^{\prime}&a&b^{\prime}&b^{\prime}&1&b^{\prime}&1\\ b^{\prime}&b&a^{\prime}&a^{\prime}&a^{\prime}&1&1\\ 1&0&a&b&a^{\prime}&b^{\prime}&1\end{array}\]
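For instance, reading the principal filters off the involutions just defined (so that \([a,1]=\{a,a^{\prime},b^{\prime},1\}\) and \([b,1]=\{b,a^{\prime},b^{\prime},1\}\); this reading of Fig. 1(a) is ours, but it is consistent with the table), two sample entries are obtained as follows:

\[a\to b=\big{(}\operatorname{Min}U(a,b)\big{)}^{b}=\{a^{\prime},b^{\prime}\}^{b}=\{a^{\prime},b^{\prime}\},\qquad b^{\prime}\to a=\big{(}\operatorname{Min}U(b^{\prime},a)\big{)}^{a}=(b^{\prime})^{a}=a^{\prime},\]

since \(a^{\prime}\) and \(b^{\prime}\) are the two minimal upper bounds of \(\{a,b\}\), while \(a\leq b^{\prime}\) gives \(\operatorname{Min}U(b^{\prime},a)=\{b^{\prime}\}\).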
If we assume a certain compatibility condition, we can prove a stronger result.
**Theorem 5.6**.: _Let \((P,\leq,0,1)\) be a bounded mub-complete poset and for every \(x\in P\) let \({}^{x}\) be an antitone involution on \(([x,1],\leq)\) such that the following compatibility condition holds:_
(C) \(x\leq y\leq z\) _implies_ \(z^{y}=z^{x}\lor y\)_._
_Moreover, let \(\to\) be defined by_ (I3)_. Then the following are equivalent:_
1. \(\mathbf{P}=\big{(}P,\leq,(^{x};x\in P),0,1\big{)}\) _is relatively paraorthomodular._
2. \(x\to y=1\) _implies_ \(x\leq y\)_._
Proof.: (i) \(\Rightarrow\) (ii):
This follows from Theorem 5.3.
(ii) \(\Rightarrow\) (i):
We want to prove that \(([x,1],{}^{x},x,1)\) is paraorthomodular. If \(x\leq y\leq z\) and \(y^{x}\wedge z=x\), then
\[z\to y=\big{(}\operatorname{Min}U(z,y)\big{)}^{y}=z^{y}=z^{x}\lor y=(z\wedge y^ {x})^{x}=x^{x}=1\]
according to (C), and hence \(z\leq y\) according to (ii), which together with \(y\leq z\) yields \(y=z\).
## 6 Relatively paraorthomodular join-semilattices
More interesting results can be obtained if we assume our paraorthomodular poset to be a join-semilattice. Then definition (I3) of the implication reduces to the following one:
(I4) \(x\to y:=(x\lor y)^{y}\)
where \((x\lor y)^{y}\) denotes the result of applying the involution \({}^{y}\) in the interval \([y,1]\) to \(x\lor y\). Since \(x\lor y\in[y,1]\), \((x\lor y)^{y}\) is well-defined.
**Example 6.1**.: _Consider the lattice \(\mathbf{L}=(L,\leq,{}^{\prime},0,1)\) depicted in Fig. 8:_
(Fig. 8: Hasse diagram of the lattice \(\mathbf{L}\); the labels shown include \(a\), \(b\), \(c\), \(d\), \(a^{\prime}\), \(b^{\prime}\) and \(1=0^{\prime}\). Diagram omitted.)
_Evidently, this poset is paraorthomodular. Consider e.g. the interval \([a,1]\). Then we define an involution \({}^{a}\) as follows: \(a^{a}:=1\), \((d^{\prime})^{a}:=d^{\prime}\), \(1^{a}:=a\); similarly for the intervals \([b,1]\), \([c,1]\) and trivially for two-element intervals \([d^{\prime},1]\), \([c^{\prime},1]\), \([b^{\prime},1]\) and \([a^{\prime},1]\). For \([d,1]\) we define_
\[d^{d}:=1,\ (a^{\prime})^{d}:=d^{\prime},\ (b^{\prime})^{d}:=c^{\prime},\ (c^{\prime})^{d}:=b^{\prime},\ (d^{\prime})^{d}:=a^{\prime}\text{ and, of course, }1^{d}:=d.\]
_One can easily check that every interval of the form \([x,1]\) is a paraorthomodular poset and hence \(\mathbf{L}\) is a relatively paraorthomodular lattice._
**Remark 6.2**.: _Assume that \(\mathbf{L}=(L,\vee,\wedge,{}^{\prime},0,1)\) is an orthomodular lattice and \(a,b\in L\) with \(a\leq b\). It is well-known \((\)cf. e.g. [18]\()\) that then \(\big{(}[a,b],\vee,\wedge,x\mapsto(x^{\prime}\lor a)\wedge b,a,b\big{)}\) is an orthomodular lattice, too. Hence \(\mathbf{L}\) is relatively paraorthomodular and satisfies \((\mathrm{C})\) since_
\[x\leq y\leq z\text{ implies }z^{y}=z^{\prime}\lor y=z^{\prime}\lor x\lor y=z^{x}\lor y\]
_and \((\mathrm{I}4)\) has the form_
\[x\to y=(x\lor y)^{y}=\big{(}y\vee(x\lor y)^{\prime}\big{)}\wedge 1=y\vee(x^{ \prime}\wedge y^{\prime})\]
_since \(x\lor y\in[y,1]\). We see that this definition of \(x\to y\) coincides with \((\mathrm{I}2)\)._
**Lemma 6.3**.: _Let \(\big{(}L,\vee,({}^{x};x\in L),0,1\big{)}\) be a relatively paraorthomodular join-semilattice and \(\rightarrow\) be defined by \((\mathrm{I}4)\). Then \(x\leq y\) implies \(y\to z\leq x\to z\)._
Proof.: \(x\leq y\) implies \(x\lor z\leq y\lor z\) and hence \(y\to z=(y\lor z)^{z}\leq(x\lor z)^{z}=x\to z\).
## 7 Adjointness
Having defined the connective implication \(\rightarrow\), e.g., as shown in the previous sections, the question arises if there exists a binary connective \(\odot\) such that they form a so-called _adjoint pair_, i.e.
\[x\odot y\leq z\text{ if and only if }x\leq y\to z.\]
It is worth noticing that in the case that \(\odot\) and \(\rightarrow\) are binary operations, either of the two operations \(\odot\) and \(\rightarrow\) uniquely determines the other one. Namely, for given \(x\) and \(y\), the element \(x\odot y\) is the smallest element \(z\) satisfying \(x\leq y\to z\), and for given \(y\) and \(z\), the element \(y\to z\) is the greatest element \(x\) satisfying \(x\odot y\leq z\).
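In symbols, the preceding observation reads
\[x\odot y=\min\{z\mid x\leq y\to z\}\qquad\text{and}\qquad y\to z=\max\{x\mid x\odot y\leq z\}.\]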
Let us note that we do not require \(\odot\) to be associative or commutative. We will show that the requirement that \(\odot\) and \(\rightarrow\) form an adjoint pair is not realistic for paraorthomodular lattices or posets and hence we consider only the following two pairs of weaker conditions (A) and (B), and (A)\({}_{21}\) and (B)\({}_{12}\) respectively:
(A) \(x\odot y\leq z\) implies \(x\leq y\to z\), (A)\({}_{21}\) \(x\odot y\leq_{2}z\) implies \(x\leq_{1}y\to z\),
(B) \(x\leq y\to z\) implies \(x\odot y\leq z\). (B)\({}_{12}\) \(x\leq_{1}y\to z\) implies \(x\odot y\leq_{2}z\).
When we work with posets instead of lattices, the expressions \(x\odot y\) and \(y\to z\) can be subsets of the poset in question, and hence we sometimes use the relations \(\leq_{1}\) and \(\leq_{2}\) instead of the partial order relation \(\leq\).
In what follows, we will consider also another \(\rightarrow\), the so-called Sasaki implication.
To simplify our reasoning, for an orthogonal mlb-complete poset \(\mathbf{P}\) or a lattice \(\mathbf{L}\) with an involution \({}^{\prime}\) we will use the following notation:
\[x\rightarrow_{I}y=y\vee\operatorname{Max}L(x^{\prime},y^{\prime}),\quad x\rightarrow_{S}y=x^{\prime}\vee\operatorname{Max}L(x,y),\quad x\odot_{S}y=y\wedge\operatorname{Min}U(x,y^{\prime}).\]
Note that in the case when \(\mathbf{L}\) is a lattice with an antitone involution, \(x\odot_{S}y\) is the so-called Sasaki projection and \(x\rightarrow_{S}y\) is the so-called Sasaki implication. Moreover, \(x\rightarrow_{I}y=y^{\prime}\rightarrow_{S}x^{\prime}\).
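As an illustration (ours, not taken from the text): in the six-element orthomodular lattice \(\mathrm{MO}_{2}\), consisting of \(0\), \(1\) and four pairwise incomparable atoms \(a,a^{\prime},b,b^{\prime}\) with \({}^{\prime}\) the orthocomplementation, one computes
\[a\odot_{S}b=b\wedge(a\lor b^{\prime})=b\wedge 1=b,\qquad a\rightarrow_{S}b=a^{\prime}\vee(a\wedge b)=a^{\prime}\vee 0=a^{\prime},\]
and, e.g., \(a\odot_{S}b\leq b\) together with \(a\leq b\rightarrow_{S}b=b^{\prime}\vee(b\wedge b)=1\) gives one instance of the adjointness studied below.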
In fact we have the following lemma.
**Lemma 7.1**.: _Let \(\mathbf{L}=(L,\vee,\wedge,{}^{\prime})\) be a lattice with an antitone involution \({}^{\prime}\) on \(\mathbf{L}\). Then_ (A) _and_ (B) _for the connectives \(\odot_{S}\) and \(\rightarrow_{S}\) are equivalent, respectively._
Proof.: Assume that (A) holds. Let \(a,b,c\in L\) and assume \(a\leq b\rightarrow_{S}c\). Then \(a\leq b^{\prime}\vee(c\wedge b)\). Put \(x=c^{\prime}\), \(y=b\), and \(z=a^{\prime}\). Hence \(z^{\prime}\leq y^{\prime}\vee(x^{\prime}\wedge y)\). We obtain \(x\odot_{S}y=y\wedge(x\lor y^{\prime})\leq z\). By (A) we have \(x\leq y\rightarrow_{S}z=y^{\prime}\vee(z\wedge y)\), i.e., \(c^{\prime}\leq b^{\prime}\vee(a^{\prime}\wedge b)\). We conclude that \(a\odot_{S}b=b\wedge(a\lor b^{\prime})\leq c\) and (B) holds. The reverse implication follows analogously.
Moreover, for the Sasaki implication \(\rightarrow_{S}\), we obtain an adjoint pair provided the lattice in question satisfies even weaker conditions, see the following result.
**Theorem 7.2**.: _Let \(\mathbf{L}=(L,\vee,\wedge,{}^{\prime})\) be a lattice with an involution \({}^{\prime}\). Then the following conditions are equivalent._
1. _The orthomodular identities_ (\(\operatorname{OI}_{\vee}\)) _and_ (\(\operatorname{OI}_{\wedge}\)) _hold in_ \(\mathbf{L}\)_._
2. \(x\odot_{S}y\leq z\) _if and only if_ \(x\leq y\to_{S}z\)_._
Proof.: (i) \(\Rightarrow\) (ii):
\(a\odot_{S}b\leq c\) implies
\[a \leq a\lor b^{\prime}\stackrel{{\rm(OI_{V})}}{{=}}b^{ \prime}\vee\big{(}(a\lor b^{\prime})\wedge b\big{)}=b^{\prime}\vee\big{(}b \wedge(a\lor b^{\prime})\wedge b\big{)}=b^{\prime}\vee\big{(}(a\odot_{S}b) \wedge b\big{)}\leq\] \[\leq b^{\prime}\vee(c\wedge b)=b\to_{S}c.\]
Similarly, \(a\leq b\to_{S}c\) implies
\[a\odot_{S}b =b\wedge(a\lor b^{\prime})\leq b\wedge\big{(}(b\to_{S}c)\lor b^{ \prime}\big{)}=b\wedge\big{(}b^{\prime}\vee(c\wedge b)\lor b^{\prime}\big{)}\] \[=b\wedge\big{(}(c\wedge b)\lor b^{\prime}\big{)}\stackrel{{ \rm(OI_{\wedge})}}{{=}}c\wedge b\leq c.\]
(ii) \(\Rightarrow\) (i):
Let \(a,b\in L\). Since \(a\odot_{S}b^{\prime}\leq a\odot_{S}b^{\prime}\) we obtain
\[a\leq b^{\prime}\to_{S}(a\odot_{S}b^{\prime})=b\vee\big{(}b^{\prime}\wedge[b^{\prime}\wedge(a\lor b)]\big{)}=b\vee\big{(}b^{\prime}\wedge(a\lor b)\big{)}.\]
Hence \(a\lor b\leq b\vee\big{(}b^{\prime}\wedge(a\lor b)\big{)}\). Because the converse inequality is evident we have
\[a\lor b=b\vee\big{(}b^{\prime}\wedge(a\lor b)\big{)}.\]
Similarly, since \(a\to_{S}b\leq a\to_{S}b\) we obtain
\[b\geq(a\to_{S}b)\odot_{S}a=a\wedge\big{(}a^{\prime}\vee[a^{\prime}\vee(a\wedge b )]\big{)}=a\wedge\big{(}a^{\prime}\vee(a\wedge b)\big{)}.\]
Clearly, \(a\wedge\big{(}a^{\prime}\vee(a\wedge b)\big{)}\leq a\wedge b\) and the converse inequality is evident. We conclude that
\[a\wedge\big{(}a^{\prime}\vee(a\wedge b)\big{)}=a\wedge b.\]
For the case of lattices, we can state and prove the following result showing that condition (A) for the connectives \(\odot_{S}\) and \(\to_{I}\) already yields orthomodularity.
**Theorem 7.3**.: _Let \(\mathbf{L}=(L,\vee,\wedge,^{\prime},0,1)\) be a bounded lattice with an antitone involution and assume_ (A) _for \(\odot_{S}\) and \(\to_{I}\). Then \(\mathbf{L}\) is orthomodular._
Proof.: For \(a,b\in L\), each of the following statements implies the next one:
\[a\odot_{S}b^{\prime} \leq a\odot_{S}b^{\prime},\] \[a \leq b^{\prime}\to_{I}(a\odot_{S}b^{\prime}),\] \[a \leq\big{(}b^{\prime}\wedge(a\lor b)\big{)}\vee\Big{(}b\wedge \big{(}b\vee(a^{\prime}\wedge b^{\prime})\big{)}\Big{)},\] \[a \leq\big{(}b^{\prime}\wedge(a\lor b)\big{)}\lor b,\] \[a\lor b \leq\big{(}b^{\prime}\wedge(a\lor b)\big{)}\lor b.\]
The converse inequality is evident. Thus \(a\lor b=\big{(}b^{\prime}\wedge(a\lor b)\big{)}\lor b\).
Our next goal is to obtain a result analogous to Theorem 7.2 but formulated for posets. Here we work with subsets of a given poset \(P\) instead of its elements, thus we should use the equality sign "\(\approx_{2}\)" instead of "\(=\)".
**Theorem 7.4**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mub-complete poset. Then the following conditions are equivalent:_
* (i) \(\mathbf{P}\) _is an orthomodular poset._
* (ii) \(\mathbf{P}\) _satisfies the identity_ \(x\vee\big{(}\operatorname{Min}U(x,y)\wedge x^{\prime}\big{)}\approx\operatorname{Min}U(x,y)\)_._
* (iii) \(\mathbf{P}\) _satisfies the condition_ \((\operatorname{OM}_{UE})\;\;x\vee\big{(}\operatorname{Min}U(x,y)\wedge x^{\prime}\big{)}\approx_{2}\operatorname{Min}U(x,y)\)_._
Proof.: (i) \(\Rightarrow\) (ii):
This follows immediately from \(x\vee(z\wedge x^{\prime})=z\) for all \(z\in\operatorname{Min}U(x,y)\).
(ii) \(\Rightarrow\) (iii):
It is transparent.
(iii) \(\Rightarrow\) (i):
Assume that \(x\leq y\). Then \(\operatorname{Min}U(x,y)=\{y\}\) and \(x\vee\big{(}\operatorname{Min}U(x,y)\wedge x^{\prime}\big{)}=\{x\vee(y\wedge x ^{\prime})\}\). Hence \(x\vee(y\wedge x^{\prime})\leq y\leq x\vee(y\wedge x^{\prime})\). We conclude that \(y=x\vee(y\wedge x^{\prime})\).
Now we are ready to formulate our result for posets. Let us note that the operation \(\odot\) defined for lattices is now replaced by a binary operator.
**Lemma 7.5**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset. Then_ (A) _and_ (B)_, and_ (A)_\({}_{21}\) _and_ (B)_\({}_{12}\) _for the connectives_ \(\odot_{S}\) _and_ \(\to_{S}\) _are equivalent, respectively._
Proof.: We only show that (A) and (B) are equivalent. The remaining equivalence follows the same proof steps.
Assume (A). Let \(a,b,c\in P\). Suppose that \(a\leq b\to_{S}c\). Then \(a\leq b^{\prime}\vee\operatorname{Max}L(c,b)\). Put \(x=c^{\prime}\), \(y=b\), and \(z=a^{\prime}\). Hence \(z^{\prime}\leq y^{\prime}\vee\operatorname{Max}L(x^{\prime},y)\). We obtain \(x\odot_{S}y=y\wedge\operatorname{Min}U(x,y^{\prime})\leq z\). By (A) we have \(x\leq y\to_{S}z=y^{\prime}\vee\operatorname{Max}L(z,y)\), i.e., \(c^{\prime}\leq b^{\prime}\vee\operatorname{Max}L(a^{\prime},b)\). We conclude that \(a\odot_{S}b=b\wedge\operatorname{Min}U(a,b^{\prime})\leq c\) and (B) holds.
Following the same reasoning we obtain the converse, i.e., (B) implies (A).
As for lattices in Theorem 7.2, we obtain, for orthogonal mlb-complete posets the following theorem:
**Theorem 7.6**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset. Then the following are equivalent:_
* (i) \(\mathbf{P}\) _is orthomodular._
* (ii) \(x\odot_{S}y\leq_{2}z\) _if and only if_ \(x\leq_{1}y\to_{S}z\)_._
Proof.: (i) \(\Rightarrow\) (ii):
Let \(a,b,c\in P\). Suppose that \(a\leq_{1}b\to_{S}c\). Then \(a\leq_{1}b^{\prime}\vee\operatorname{Max}L(c,b)\). There is \(z\in\operatorname{Max}L(c,b)\) such that \(a\leq b^{\prime}\lor z\in U(a,b^{\prime})\). We compute:
\[a\odot_{S}b=b\wedge\operatorname{Min}U(a,b^{\prime})\leq_{2}b\wedge(b^{\prime }\lor z)=z\leq c.\]
We conclude that (B)\({}_{12}\) and by Lemma 7.5 also (A)\({}_{21}\) hold for \(\odot_{S}\) and \(\to_{S}\). Hence we conclude (ii).
(ii) \(\Rightarrow\) (i):
Assume that \(a\leq b\), \(a,b\in P\). Let us compute:
\[b\odot_{S}a^{\prime}=a^{\prime}\wedge\operatorname{Min}U(a,b)=\{a^{\prime} \wedge b\}\leq_{2}b.\]
Hence by (A)\({}_{21}\) we obtain \(b\leq_{1}a^{\prime}\to_{S}b=\{a\vee(a^{\prime}\wedge b)\}\leq b\), i.e., \(b=a\vee(a^{\prime}\wedge b)\).
The next result is analogous to Theorem 7.3 but formulated for posets.
**Theorem 7.7**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal poset and assume_ (A)\({}_{21}\)_for \(\odot_{S}\) and \(\to_{I}\). Then \(\mathbf{P}\) is orthomodular._
Proof.: Let \(a,b\in P\) and \(a\leq b\). Then \(b\odot_{S}a^{\prime}=a^{\prime}\wedge(b\lor a)=a^{\prime}\wedge b\) and \(\{b\odot_{S}a^{\prime}\}\leq_{2}\{b\odot_{S}a^{\prime}\}\). Therefore \(b\leq_{1}a^{\prime}\to_{I}(a^{\prime}\wedge b)=(a^{\prime}\wedge b)\vee\operatorname{Max}L(a,a\lor b^{\prime})=\{a\vee(a^{\prime}\wedge b)\}\leq b\). We conclude that \(a\vee(a^{\prime}\wedge b)=b\).
**Corollary 7.8**.:
* _If_ \(\mathbf{L}=(L,\vee,\wedge,{}^{\prime},0,1)\) _is a paraorthomodular lattice and if_ \(\odot_{S}\) _and_ \(\to_{I}\) _satisfy_ (A) _then_ \(\mathbf{L}\) _is orthomodular._
* _If_ \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) _is a sharply paraorthomodular poset and if_ \(\odot_{S}\) _and_ \(\to_{I}\) _satisfy_ (A)\({}_{21}\) _then_ \(\mathbf{P}\) _is orthomodular._
**Proposition 7.9**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset and assume that there exists a binary operator \(\odot_{I}\) on \(P\) such that \(x\odot_{I}y\leq z\) if and only if \(x\leq y\to_{I}z\) for all \(x,y,z\in P\). Then_
* (i) \(x\odot_{I}x^{\prime}=\{0\}\) _for all_ \(x\in P\)_,_
* (ii) \(\mathbf{P}\) _is an orthomodular poset,_
* (iii) \(x\odot_{I}y\leq_{1}\operatorname{Max}L(x,y)\leq x,y\) _for all_ \(x,y\in P\)_,_
* (iv) \(\mathbf{P}\) _is weakly Boolean._
Proof.: (i) Let \(x\in P\). Then \(x\leq\{x\}=0\vee\operatorname{Max}L(x,1)=x^{\prime}\to_{I}0\). Hence also \(x\odot_{I}x^{\prime}\leq\{0\}\), i.e., \(x\odot_{I}x^{\prime}=\{0\}\).
* Let \(x,y\in P\), \(x\leq y\). From (i) we know that \(y\odot_{I}y^{\prime}=\{0\}\leq x\). Therefore \(y\leq y^{\prime}\to_{I}x=x\vee\operatorname{Max}L(y,x^{\prime})=\{x\vee(y \wedge x^{\prime})\}\). Hence \(y\leq x\vee(y\wedge x^{\prime})\leq y\), i.e., \(y=x\vee(y\wedge x^{\prime})\).
* Let \(x,y\in P\). By (ii), Theorem 3.2, (i) and (iii), and the fact that every orthomodular poset is complemented we know that \(x\leq y\to_{I}x\) and \(x\leq y\to_{I}y=\{y\lor y^{\prime}\}=\{1\}\), respectively. Hence \(x\odot_{I}y\leq x\) and \(x\odot_{I}y\leq y\). We conclude \(x\odot_{I}y\leq_{1}\operatorname{Max}L(x,y)\).
* Let \(x,y\in P\) such that \(x\wedge y=0\) and \(x\wedge y^{\prime}=0\). From (iii) we conclude \(x\odot_{I}y=\{0\}\) and \(x\odot_{I}y^{\prime}=\{0\}\). By adjointness and Theorem 3.2 (iii) we have \(x\leq y\to_{I}0=\{y^{\prime}\}\) and \(x\leq y^{\prime}\to_{I}0=\{y\}\). Since \(\mathbf{P}\) is complemented we obtain \(x\leq y\wedge y^{\prime}=0\).
**Proposition 7.10**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal Boolean poset with maximality property. Then \(\mathbf{P}\) is a Boolean algebra._
Proof.: Recall that \(\mathbf{P}\) is orthomodular. Namely, let \(x,y\in P\), \(x\leq y\). Since \(\mathbf{P}\) is a distributive poset we can compute:
\[LU\big{(}x,L(x^{\prime},y)\big{)}=L\big{(}U(x,x^{\prime}),U(x,y)\big{)}=L \big{(}\{1\},U(y)\big{)}=LU(y).\]
Let us show that \(\mathbf{P}\) is a weakly Boolean poset. Suppose \(x,y\in P\) such that \(x\wedge y=0\) and \(x\wedge y^{\prime}=0\). Again by distributivity we obtain
\[L(x)=L\big{(}x,U(y,y^{\prime})\big{)}=LU\big{(}L(x,y),L(x,y^{\prime})\big{)}= LU\big{(}\{0\},\{0\}\big{)}=L(0).\]
From [22, Theorem 4.2] we conclude that \(\mathbf{P}\) is a Boolean algebra.
Finally, we are going to show that \(\odot_{I}\) and \(\to_{I}\) can form an adjoint pair in orthogonal mlb-complete posets only in a very specific case, i.e., if the poset in question is a Boolean algebra.
**Theorem 7.11**.: _Let \(\mathbf{P}=(P,\leq,{}^{\prime},0,1)\) be an orthogonal mlb-complete poset. Then the following conditions are equivalent:_
* (i) _There exists a binary operator_ \(\odot_{I}\) _on_ \(\mathbf{P}\) _such that_ \[x\odot_{I}y\leq z\text{ if and only if }x\leq y\to_{I}z\] _for all_ \(x,y,z\in P\)_._
* (ii) \(\mathbf{P}\) _is a Boolean algebra._
Proof.: (i) \(\Rightarrow\) (ii):
From Proposition 7.9 we know that \(\mathbf{P}\) is a weakly Boolean poset which has the maximality property. From [22, Theorem 4.2] we obtain that \(\mathbf{P}\) is a Boolean algebra.
(ii) \(\Rightarrow\) (i):
Clearly, \(x\to_{I}y=y\vee(x^{\prime}\wedge y^{\prime})=(y\lor x^{\prime})\wedge(y\lor y^ {\prime})=y\lor x^{\prime}=(x^{\prime}\lor x)\wedge(y\lor x^{\prime})=x^{ \prime}\vee(x\wedge y)=x\to_{S}y\) and \(x\odot_{S}y=y\wedge(x\lor y^{\prime})=(y\wedge x)\vee(y\wedge y^{\prime})=x \wedge y\).
Since \(\mathbf{P}\) is an orthomodular lattice, from Theorem 7.2 we immediately obtain that
\[x\odot_{S}y\leq z\text{ if and only if }x\leq y\to_{S}z\text{ if and only if }x\leq y\to_{I}z\]
for all \(x,y,z\in P\). Hence \(\odot_{I}\) exists, and \(\odot_{I}=\odot_{S}=\wedge\) and \(x\to_{I}y=x\to_{S}y=y\lor x^{\prime}\) realize the required adjointness.
**Corollary 7.12**.: _Let \(\mathbf{L}=(L,\vee,\wedge,{}^{\prime},0,1)\) be a bounded lattice with an antitone involution. Then the following conditions are equivalent:_
* _There exists a binary operation_ \(\odot_{I}\) _such that_ \(x\odot_{I}y\leq z\) _if and only if_ \(x\leq y\to_{I}z\) _for all_ \(x,y,z\in L\)_._
* \(\mathbf{L}\) _is a Boolean algebra._
## Acknowledgements
Ivan Chajda and Helmut Langer gratefully acknowledge the Austrian Science Fund (FWF), project I 4579-N, and the Czech Science Foundation (GACR), project 20-09869L, entitled "The many facets of orthomodularity". Antonio Ledda gratefully acknowledges the support of MIUR within the project PRIN 2017: "Logic and cognition. Theory, experiments, and applications", CUP: 2013YP4N3.
|
2308.06311 | Large Sums of Fourier Coefficients of Cusp Forms | Let $N$ be a fixed positive integer, and let $f\in S_k(N)$ be a primitive
cusp form given by the Fourier expansion $f(z)=\sum_{n=1}^{\infty}
\lambda_f(n)n^{\frac{k-1}{2}}e(nz)$. We consider the partial sum
$S(x,f)=\sum_{n\leq x}\lambda_f(n)$. It is conjectured that $S(x,f)=o(x\log x)$
in the range $x\geq k^{\epsilon}$. Lamzouri proved in arXiv:1703.10582
[math.NT] that this is true under the assumption of the Generalized Riemann
Hypothesis (GRH) for $L(s,f)$. In this paper, we prove that this conjecture
holds under a weaker assumption than GRH. In particular, we prove that given
$\epsilon>(\log k)^{-\frac{1}{8}}$ and $1\leq T\leq (\log k)^{\frac{1}{200}}$,
we have $S(x,f)\ll \frac{x\log x}{T}$ in the range $x\geq k^{\epsilon}$
provided that $L(s,f)$ has no more than $\epsilon^2\log k/5000$ zeros in the
region $\left\{s\,:\, \Re(s)\geq \frac34, \, |\Im(s)-\phi| \leq
\frac14\right\}$ for every real number $\phi$ with $|\phi|\leq T$. | Claire Frechette, Mathilde Gerbelli-Gauthier, Alia Hamieh, Naomi Tanabe | 2023-08-11T17:55:55Z | http://arxiv.org/abs/2308.06311v1 | # Large sums of Fourier coefficients of cusp forms
###### Abstract.
Let \(N\) be a fixed positive integer, and let \(f\in S_{k}(N)\) be a primitive cusp form given by the Fourier expansion \(f(z)=\sum_{n=1}^{\infty}\lambda_{f}(n)n^{\frac{k-1}{2}}e(nz)\). We consider the partial sum \(S(x,f)=\sum_{n\leq x}\lambda_{f}(n)\). It is conjectured that \(S(x,f)=o(x\log x)\) in the range \(x\geq k^{\epsilon}\). Lamzouri proved in [8] that this is true under the assumption of the Generalized Riemann Hypothesis (GRH) for \(L(s,f)\). In this paper, we prove that this conjecture holds under a weaker assumption than GRH. In particular, we prove that given \(\epsilon>(\log k)^{-\frac{1}{8}}\) and \(1\leq T\leq(\log k)^{\frac{1}{200}}\), we have \(S(x,f)\ll\frac{x\log x}{T}\) in the range \(x\geq k^{\epsilon}\) provided that \(L(s,f)\) has no more than \(\epsilon^{2}\log k/5000\) zeros in the region \(\left\{s:\operatorname{Re}(s)\geq\frac{3}{4},\,|\operatorname{Im}(s)-\phi|\leq\frac{1}{4}\right\}\) for every real number \(\phi\) with \(|\phi|\leq T\).
Key words and phrases: Modular forms, Sums of Fourier coefficients, zeros of \(L\)-functions, mean values of multiplicative functions.

2010 Mathematics Subject Classification: Primary 11F30; secondary 11F11, 11F12, 11M41.

The research of Claire Frechette is supported by NSF grant DMS-2203042, and the research of Alia Hamieh is supported by NSERC Discovery grant RGPIN-2018-06313.
## 1. Introduction
**Corollary 1.2**.: _Let \(f\in S_{k}(N)\) be a primitive cusp form. Let \(\epsilon\) and \(T\) be real numbers with \(\epsilon\geq(\log k)^{-1/8}\) and \(1\leq T\leq(\log k)^{1/200}\). Suppose that for every real \(\phi\) with \(|\phi|\leq T\) the region_
\[\left\{s\,:\,\text{Re}(s)\geq\frac{3}{4},\,|\text{Im}(s)-\phi|\leq\frac{1}{4}\right\} \tag{1.1}\]
_contains no more than \(\epsilon^{2}\log k/5000\) zeroes of \(L(s,f)\). Then for all \(x\geq k^{\epsilon}\), we have_
\[\left|\sum_{n\leq x}\lambda_{f}(n)\right|\ll\frac{x\log x}{T}. \tag{1.2}\]
Proof.: We choose \(L=\epsilon^{2}\log k/8\) in Theorem 1.1. For any \(x\geq k^{\epsilon}\), we observe that the region given in (1.3) is contained in the region given in (1.4) since
\[\frac{L\log(k(\log k)^{1/\gamma})}{(\log x)^{2}}\leq\frac{2L\log k}{(\log x)^ {2}}=\frac{2}{8}\left(\frac{\epsilon\log k}{\log x}\right)^{2}\leq\frac{1}{4}.\]
**Remark 1.3**.: _Let \(N(T,f)\) be the number of zeros \(\rho=\beta+it\) of \(L(s,f)\) such that \(0\leq\beta\leq 1\) and \(|t|\leq T\). By [7, Theorem 5.38], we have_
\[N(T,f)=\frac{T}{\pi}\log\frac{NT^{2}}{(2\pi e)^{2}}+O\left(\log\left(N(T+k) \right)\right),\]
_for \(T\geq 2\). Hence, \(L(s,f)\) has \(O(\log k)\) zeros in the critical strip up to height \((\log k)^{\frac{1}{200}}\). Corollary 1.2 implies that if (1.2) is false for \(x=k^{\epsilon}\), then a positive proportion \(\gg\epsilon^{2}\) of these zeros lie off the critical line._
In this paper we adapt the strategy of proof employed in [4]. The main idea can be found in Proposition 4.3, where we prove that the inequality \(|S(e^{y},f)|\geq e^{y}y^{1-\frac{1}{100}}\) yields a lower bound for a certain sum taken over non-trivial zeros of \(L(s,f)\). This is accomplished by first employing Lemma 4.1, which yields a relation between \(S(e^{y},f)\) and \(S(e^{y},f,\phi)=\sum_{n\leq e^{y}}\lambda_{f}(n)n^{-i\phi}\) for some real number \(\phi\), via various applications of results from the theory of mean values of multiplicative functions such as Corollary 3.3 (Lipschitz Theorem) and Proposition 3.5 (Halász's Theorem). Then we use Lemma 4.2, which is an application of Plancherel's formula relating an integral expression involving \(L(1-\gamma+i(\phi+\xi),f)\) (for some \(0<\gamma\leq\frac{1}{2}\)) as \(\xi\) varies in \(\mathbb{R}\) with an integral expression involving the twisted partial sums \(S(e^{y},f,\phi)\) as \(y\) varies in \(\mathbb{R}\). To tie these relations together and establish the conclusion of Proposition 4.3, we resort to Lemma 2.1, which uses the classical explicit formula for \(L(s,f)\) to furnish an upper bound for \(L(1-\gamma+i(\phi+\xi),f)\) in terms of a sum taken over the non-trivial zeros of \(L(s,f)\).
The paper is structured as follows: Section 2 provides some analytic tools and preliminaries. In Section 3, we delve into some key estimates regarding mean values of divisor-bounded multiplicative functions. These estimates are used in Section 4 to establish a couple of lemmas, which are crucial to proving Proposition 4.3. Finally, the proof of the main theorem is presented in Section 5.
### Conventions and Notation
In this work, we adopt the following conventions and notation. Given two functions \(f(x)\) and \(g(x)\) we write \(f(x)\ll g(x)\), \(g(x)\gg f(x)\) or \(f(x)=O(g(x))\) to mean there exists some positive constant \(M\) such that \(|f(x)|\leq M|g(x)|\) for \(x\) large enough. The notation \(f(x)\asymp g(x)\) is used when both estimates \(f(x)\ll g(x)\) and \(f(x)\gg g(x)\) hold simultaneously. We write \(f(x)=o(g(x))\) when \(g(x)\neq 0\) for sufficiently large \(x\) and \(\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=0\). The letter \(p\) will be exclusively used to represent a prime number.
## 2. Analytic Tools and Preliminaries
Throughout this paper, the set of all primitive cusp forms in \(S_{k}(N)\) is denoted as \(H_{k}(N)\). The \(L\)-function associated to \(f\in H_{k}(N)\) is given by the Dirichlet series
\[L(s,f)=\sum_{n=1}^{\infty}\frac{\lambda_{f}(n)}{n^{s}}\]
which is absolutely convergent for \(\operatorname{Re}(s)>1\). In this region, the \(L\)-function can be represented as the Euler product
\[L(s,f) =\prod_{p|N}\left(1-\lambda_{f}(p)p^{-s}\right)^{-1}\prod_{p\nmid N}\left(1-\lambda_{f}(p)p^{-s}+p^{-2s}\right)^{-1}\] \[=\prod_{p}\left(1-\frac{\alpha_{1,f}(p)}{p^{s}}\right)^{-1}\left(1-\frac{\alpha_{2,f}(p)}{p^{s}}\right)^{-1}, \tag{2.1}\]
where \(\alpha_{1,f}(p)\) and \(\alpha_{2,f}(p)\) are referred to as the \(p\)-th local parameters of \(f\). Since the Ramanujan conjecture for classical modular forms is known, thanks to Deligne's work, we have \(|\alpha_{j,f}(p)|=1\) for all \(p\nmid N\).
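To make the role of the local data concrete, the following sketch (ours, not part of the paper; the dictionary `lam_p` of prime eigenvalues \(\lambda_{f}(p)\) and the level `N` are placeholder inputs to be supplied) expands the Euler factors in (2.1): for \(p\mid N\) the factor \(\left(1-\lambda_{f}(p)p^{-s}\right)^{-1}\) gives \(\lambda_{f}(p^{m})=\lambda_{f}(p)^{m}\), while for \(p\nmid N\) the factor \(\left(1-\lambda_{f}(p)p^{-s}+p^{-2s}\right)^{-1}\) gives the Hecke recursion \(\lambda_{f}(p^{m+1})=\lambda_{f}(p)\lambda_{f}(p^{m})-\lambda_{f}(p^{m-1})\); multiplicativity then produces \(\lambda_{f}(n)\) and the partial sums \(S(x,f)\).

```python
def hecke_coefficients(x, lam_p, N):
    """Return {n: lambda_f(n)} for n <= x supported on the primes supplied in lam_p.

    lam_p : dict {p: lambda_f(p)} of Hecke eigenvalues at primes (placeholder data).
    N     : level of the form; primes dividing N use the ramified Euler factor.
    """
    lam = {1: 1.0}
    for p, lp in lam_p.items():
        if p > x:
            continue
        # lambda_f at the powers of p, read off from the local factor in (2.1)
        powers = {p: lp}
        prev, cur, q = 1.0, lp, p
        while q * p <= x:
            q *= p
            if N % p == 0:
                cur, prev = lp * cur, cur            # lambda_f(p^m) = lambda_f(p)^m
            else:
                cur, prev = lp * cur - prev, cur     # Hecke recursion for p not dividing N
            powers[q] = cur
        # extend by multiplicativity: every n already stored is coprime to p
        for n, ln in list(lam.items()):
            for q, lq in powers.items():
                if n * q <= x:
                    lam[n * q] = ln * lq
    return lam


def partial_sum(x, lam_p, N):
    """S(x, f) = sum_{n <= x} lambda_f(n), restricted to n supported on the supplied primes."""
    return sum(hecke_coefficients(x, lam_p, N).values())
```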
The completed \(L\)-function which we denote by \(\Lambda(s,f)\) (see below) can be analytically continued to an entire function of order \(1\) and satisfies a functional equation that relates its value at \(s\) to its value at \(1-s\). In particular, we have the Hadamard factorization
\[\Lambda(s,f): =N^{s/2}\cdot 2^{(3-k)/2}\sqrt{\pi}(2\pi)^{-s}\Gamma\left(s+\frac{k-1 }{2}\right)L(s,f) \tag{2.2}\] \[=e^{A+Bs}\prod_{\rho}\ \left(1-\frac{s}{\rho}\right)e^{s/\rho},\]
for some \(A,B\in\mathbb{C}\), where \(B\) satisfies the property \(\mathrm{Re}(B)=\sum_{\rho}-\mathrm{Re}\left(\frac{1}{\rho}\right)\). See [7, Equations 5.4, 5.23, and 5.86] for the details.
To conclude this section, we prove a useful lemma that will be instrumental in the upcoming sections.
**Lemma 2.1**.: _Suppose \(\gamma\) is a real number such that \(0<\gamma\leq\frac{1}{2}\) and \(t\) is any real number. Then_
\[|L(1-\gamma+it,f)|\ll\frac{1}{\gamma^{2}}\mathrm{exp}\left(\sum_{\rho}\frac{2 \gamma^{2}}{|1+\gamma+it-\rho|^{2}}\right).\]
Proof.: Let \(s_{0}=1+\gamma+it\) and \(s_{1}=1-\gamma+it\). We begin by considering the ratio of \(L\)-functions evaluated at \(s_{0}\) and \(s_{1}\) using the functional equation:
\[\left|\frac{L(s_{1},f)}{L(s_{0},f)}\right|=\left|\frac{\Lambda(s_{1},f)}{ \Lambda(s_{0},f)}\right|\left(\frac{N}{4\pi^{2}}\right)^{\gamma}\left|\frac{ \Gamma\left(s_{0}+\frac{k-1}{2}\right)}{\Gamma\left(s_{1}+\frac{k-1}{2}\right) }\right|.\]
An application of Stirling's formula, together with the Hadamard factorization (2.2), yields
\[\left|\frac{L(s_{1},f)}{L(s_{0},f)}\right|\asymp(N(k^{2}+t^{2}))^{\gamma}\prod _{\rho}\frac{|s_{1}-\rho|}{|s_{0}-\rho|}. \tag{2.3}\]
Rearranging, we get
\[\frac{|s_{1}-\rho|}{|s_{0}-\rho|}=\left(1-\frac{|s_{0}-\rho|^{2}-|s_{1}-\rho| ^{2}}{|s_{0}-\rho|^{2}}\right)^{1/2}=\left(1-\frac{4\gamma\mathrm{Re}(1-\rho )}{|s_{0}-\rho|^{2}}\right)^{1/2}\]
and the inequality \(1-u\leq e^{-u}\), valid for every real \(u\), then gives
\[\frac{|s_{1}-\rho|}{|s_{0}-\rho|}\leq\exp\left(-\frac{2\gamma\mathrm{Re}(1- \rho)}{|s_{0}-\rho|^{2}}\right)=\exp\left(-2\gamma\mathrm{Re}\left(\frac{1}{s _{0}-\rho}\right)+\frac{2\gamma^{2}}{|s_{0}-\rho|^{2}}\right).\]
Substituting this into (2.3), we have
\[\left|\frac{L(s_{1},f)}{L(s_{0},f)}\right|\asymp(N(k^{2}+t^{2}))^{\gamma}\prod _{\rho}\exp\left(-2\gamma\mathrm{Re}\left(\frac{1}{s_{0}-\rho}\right)+\frac{ 2\gamma^{2}}{|s_{0}-\rho|^{2}}\right). \tag{2.4}\]
On the other hand, by taking the logarithmic derivative of (2.2) and applying Stirling's formula, we obtain
\[-\operatorname{Re}\left(\frac{L^{\prime}(s_{0},f)}{L(s_{0},f)}\right)=\frac{1}{2 }\log(N(k^{2}+t^{2}))-\sum_{\rho}\operatorname{Re}\left(\frac{1}{s_{0}-\rho} \right)+O(1). \tag{2.5}\]
To bound the left-hand side of (2.5), observe that
\[-\frac{L^{\prime}}{L}(s,f)=\sum_{n\geq 1}\frac{\Lambda_{f}(n)}{n^{s}},\]
where \(\Lambda_{f}\) is supported on prime powers and satisfies the identities
\[\Lambda_{f}(p)=\lambda_{f}(p)\log p\qquad\text{ and }\qquad\Lambda_{f}(p^{m})= \sum_{j=1}^{2}\alpha_{j,f}(p)^{m}\log p,\]
where \(\alpha_{j,f}(p)\) is the \(p\)-th local parameter of \(f\) as in (2.1). Therefore,
\[\left|\frac{L^{\prime}(s_{0},f)}{L(s_{0},f)}\right|\leq\sum_{n\geq 1}\left| \frac{\Lambda_{f}(n)}{n^{1+\gamma+it}}\right|\leq\sum_{p^{m}}\frac{|\alpha_{1,f}(p)^{m}+\alpha_{2,f}(p)^{m}|\log p}{p^{m(1+\gamma)}}\leq 2\sum_{p^{m}} \frac{\log p}{p^{m(1+\gamma)}}=2\sum_{n\geq 1}\frac{\Lambda(n)}{n^{1+\gamma}},\]
where \(\Lambda(n)\) is the Von Mangoldt function. Therefore, we may write
\[-\operatorname{Re}\left(\frac{L^{\prime}(s_{0},f)}{L(s_{0},f)}\right)\leq \frac{2}{\gamma}+O(1). \tag{2.6}\]
Applying this bound to (2.5) and taking the exponential of both sides give
\[(N(k^{2}+t^{2}))^{\gamma}\exp\left(-2\gamma\sum_{\rho}\operatorname{Re}\left( \frac{1}{s_{0}-\rho}\right)\right)\ll 1.\]
Going back to (2.4), we have
\[\left|\frac{L(s_{1},f)}{L(s_{0},f)}\right| \asymp(N(k^{2}+|t|^{2}))^{\gamma}\prod_{\rho}\exp\left(-2\gamma \operatorname{Re}\left(\frac{1}{s_{0}-\rho}\right)\right)\prod_{\rho}\exp \left(\frac{2\gamma^{2}}{|s_{0}-\rho|^{2}}\right)\] \[\ll\prod_{\rho}\exp\left(\frac{2\gamma^{2}}{|s_{0}-\rho|^{2}} \right).\]
Notice that
\[|L(s_{0},f)|\leq\sum_{n\geq 1}\frac{|\lambda_{f}(n)|}{n^{1+\gamma}}\leq\sum_{n \geq 1}\frac{\tau(n)}{n^{1+\gamma}}=\zeta^{2}(1+\gamma)=\left(\frac{1}{\gamma}+ O(1)\right)^{2},\]
and therefore we have the desired result
\[|L(s_{1},f)|\ll\frac{1}{\gamma^{2}}\prod_{\rho}\exp\left(\frac{2\gamma^{2}}{|s_{ 0}-\rho|^{2}}\right).\qed\]
## 3. Key Ingredients from Pretentious Number Theory
This section highlights crucial results regarding the mean values of divisor-bounded multiplicative functions. While the works of Granville-Harper-Soundararajan [2] and Mangerel [10] encompass many of these statements, we require specific variations to suit our setting.
For two multiplicative functions \(h,g:\mathbb{N}\to\mathbb{C}\), and \(x\in\mathbb{R}_{\geq 0}\), we define the distance \(\mathbb{D}(h,g;x)^{2}\) by
\[\mathbb{D}(h,g;x)^{2}=\sum_{p\leq x}\frac{1-Re(h(p)\bar{g}(p))}{p}.\]
In practice, we will only use this notion for \(h\) such that \(|h(n)|\leq\tau(n)\) and \(g(n)=n^{it}\). The distance function is related to the Dirichlet series \(H(s)=\sum_{n\geq 1}h(n)n^{-s}\) as follows. Let \(x\geq 3\), and let \(h\) be such that \(|h(p)|\leq 2\) for all \(p\) and \(h(n)\ll_{\epsilon}n^{\epsilon}\) uniformly in \(n\). Then
\[H\left(1+\frac{1}{\log x}+it\right)\asymp\log x\exp\left(-\mathbb{D}(h,n^{it} ;x)^{2}\right).\]
To see this, we use [9, Lemma 2.2.15] which states that
\[\log H\left(1+\frac{1}{\log x}+it\right)=\sum_{p\leq x}\frac{h(p)p^{-it}}{p}+O (1). \tag{3.1}\]
Here, the error term is independent of \(t\). The logarithm in (3.1) is taken with respect to the principal branch, so taking real parts gives
\[\log\left|H\left(1+\frac{1}{\log x}+it\right)\right|=\sum_{p\leq x}\frac{ \operatorname{Re}(h(p)p^{-it})}{p}+O(1)=\log\log x-\mathbb{D}(h,n^{it},x)^{2} +O(1),\]
where the last asymptotic follows from Mertens' estimate.
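Numerically, \(\mathbb{D}(h,n^{it};x)^{2}\) is just a finite sum over primes, so it is straightforward to evaluate; the sketch below (ours; the dictionary `h_p` of values \(h(p)\) is a placeholder to be supplied) implements the definition directly.

```python
import cmath
import math


def primes_up_to(x):
    """Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i:: i] = [False] * len(sieve[i * i:: i])
    return [p for p in range(2, x + 1) if sieve[p]]


def distance_squared(h_p, t, x):
    """D(h, n^{it}; x)^2 = sum_{p <= x} (1 - Re(h(p) * p^{-it})) / p."""
    total = 0.0
    for p in primes_up_to(x):
        hp = h_p.get(p, 0.0)  # primes missing from h_p are treated as h(p) = 0
        total += (1.0 - (hp * cmath.exp(-1j * t * math.log(p))).real) / p
    return total
```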
Using this distance function, Granville-Harper-Soundararajan prove the following version of Halász's Theorem [2, Corollary 1.2].
**Theorem 3.1** (Halász's Theorem).: _Let \(h\) be a multiplicative function such that \(|h(n)|\leq\tau(n)\) for all \(n\in\mathbb{N}\), and set \(H(s)=\sum_{n\geq 1}h(n)n^{-s}\). Then, for \(M=\max_{|t|\leq(\log x)^{2}}|H(1+\frac{1}{\log x}+it)|\), we have_
\[\frac{1}{x}\sum_{n\leq x}h(n)\ll(M+1)e^{-M}\log x+\frac{(\log\log x)^{2}}{ \log x}.\]
Throughout the remainder of this section, we will focus exclusively on the case of \(h(n)=\lambda_{f}(n)\), where \(f\in H_{k}(N)\). We will commence by stating a version of the so-called Lipschitz Theorem, based on the work of [2].
**Proposition 3.2** (Lipschitz Theorem).: _Suppose \(f\in H_{k}(N)\). Let \(\phi\) be a number in the range \(|t|\leq(\log x)^{2}\) for which the function \(t\mapsto|L(1+\frac{1}{\log x}+it,f)|\) reaches its maximum. Then for all \(1\leq w\leq x^{1/3}\) we have_
\[\left|\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}-\frac{1}{x/w}\sum_{n \leq x/w}\lambda_{f}(n)n^{-i\phi}\right|\ll\log\left(\frac{\log x}{\log ew} \right)\left(\frac{\log w+(\log\log x)^{2}}{\log x}\right)^{2-\frac{4}{\pi}} \log x.\]
Proof.: The proof of this version differs only in very minor details from the proof of [2, Theorem 1.5] where the authors prove that the same upper bound holds for \(\left|\frac{1}{x^{1+i\phi}}\sum_{n\leq x}\lambda_{f}(n)-\frac{1}{(x/w)^{1+i\phi}} \sum_{n\leq x/w}\lambda_{f}(n)\right|\). The reader is referred to [2] for the detailed exposition.
We will apply the Lipschitz bound in Proposition 3.2 as follows.
**Corollary 3.3**.: _Let \(f\in H_{k}(N)\). Let \(\phi\) be a number in the range \(|t|\leq(\log x)^{2}\) for which the function \(t\mapsto|L(1+\frac{1}{\log x}+it,f)|\) reaches its maximum. Then for all \(x^{2/3}\leq z\leq x^{3/2}\) we have_
\[\left|\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}-\frac{1}{z}\sum_{n\leq z }\lambda_{f}(n)n^{-i\phi}\right|\ll\left(\frac{1+\left|\log\frac{x}{z}\right| }{\log x}\right)^{2-\frac{4}{\pi}+o(1)}\log x.\]
Proof.: Let \(z=x/w\) in Proposition 3.2. Then if \(x^{2/3}\leq z\leq x\), we have
\[\left|\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}-\frac{1}{ z}\sum_{n\leq z}\lambda_{f}(n)n^{-i\phi}\right| \ll\log\left(\frac{\log x}{\log\frac{ex}{z}}\right)\left(\frac{ \log\frac{ex}{z}+(\log\log x)^{2}}{\log x}\right)^{2-\frac{4}{\pi}}\log x\] \[\ll\log\log x\left(\frac{\log\frac{ex}{z}+(\log\log x)^{2}}{\log x }\right)^{2-\frac{4}{\pi}}\log x\] \[\ll\left(\frac{1+\left|\log\frac{x}{z}\right|}{\log x}\right)^{2 -\frac{4}{\pi}+o(1)}\log x.\]
For the interval \(x\leq z\leq x^{3/2}\), we repeat the argument, interchanging the roles of \(x\) and \(z\).
We will additionally state the following analogue of [10, Corollary 3.9] for later use.
**Lemma 3.4**.: _Let \(f\in H_{k}(N)\) and \(\phi\) be as in Proposition 3.2. Then_
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)=\frac{x^{i\phi}}{1+i\phi}\cdot\frac{1} {x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}+O\left((\log x)^{-1+\frac{4}{\pi}}( \log\log x)^{5-\frac{8}{\pi}}\right).\]
Proof.: We will show the equivalent statement that
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}=\frac{1+i\phi}{x^{1+i\phi} }\sum_{n\leq x}\lambda_{f}(n)+O\left(|\phi|(\log x)^{-1+\frac{4}{\pi}}(\log \log x)^{5-\frac{8}{\pi}}\right).\]
By partial summation, we have
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}=\frac{1}{x}\int_{1}^{x}u^{- i\phi}d\left\{\sum_{n\leq x}\lambda_{f}(n)\right\}=\frac{1}{x^{1+i\phi}}\sum_{n \leq x}\lambda_{f}(n)+\frac{i\phi}{x}\int_{1}^{x}\frac{1}{u^{1+i\phi}}\sum_{ n\leq u}\lambda_{f}(n)du.\]
We split the integral into two pieces as follows:
\[\int_{1}^{x}\frac{1}{u^{1+i\phi}}\sum_{n\leq u}\lambda_{f}(n)du=\int_{1}^{x/( \log x)^{2}}\frac{1}{u^{1+i\phi}}\sum_{n\leq u}\lambda_{f}(n)du+\int_{x/(\log x )^{2}}^{x}\frac{1}{u^{1+i\phi}}\sum_{n\leq u}\lambda_{f}(n)du.\]
For the first integral, we use the trivial bound, so
\[\frac{i\phi}{x}\int_{1}^{x/(\log x)^{2}}\frac{1}{u^{1+i\phi}}\sum_{n\leq u} \lambda_{f}(n)du\ll\frac{|\phi|}{x}\int_{1}^{x/(\log x)^{2}}\frac{1}{u}\sum_{n \leq u}|\lambda_{f}(n)|du\leq\frac{|\phi|}{x}\int_{1}^{x/(\log x)^{2}}\log u\,du \leq\frac{|\phi|}{\log x}.\]
Since \(w=x/u\) is in the range of Proposition 3.2, the second integral is equal to
\[\frac{i\phi}{x}\int_{x/(\log x)^{2}}^{x}\left(\frac{1}{x^{1+i\phi }}\sum_{n\leq x}\lambda_{f}(n)+O\left(\log\left(\frac{\log x}{\log ew}\right) \left(\frac{\log w+(\log\log x)^{2}}{\log x}\right)^{2-\frac{4}{\pi}}\log x \right)\right)du\] \[= \frac{i\phi}{x}\left(\frac{1}{x^{1+i\phi}}\sum_{n\leq x}\lambda_ {f}(n)\right)\int_{x/(\log x)^{2}}^{x}du+\frac{i\phi}{x}\cdot O\left(\log \log x\left(\frac{(\log\log x)^{2}}{\log x}\right)^{2-\frac{4}{\pi}}\log x \right)\int_{x/(\log x)^{2}}^{x}du\] \[= \frac{i\phi}{x^{1+i\phi}}\sum_{n\leq x}\lambda_{f}(n)+O\left(| \phi|\frac{(\log\log x)^{5-\frac{8}{\pi}}}{(\log x)^{2-\frac{4}{\pi}}}\log x \right).\]
Combining the two integrals gives the desired result.
We conclude with a result that will play a pivotal role in the next section.
**Proposition 3.5**.: _Let \(f\in H_{k}(N)\). Let \(\phi\in[-(\log x)^{2},(\log x)^{2}]\) be the point at which the maximal value \(M=\max_{|t|\leq(\log x)^{2}}|L(1+\frac{1}{\log x}+it,f)|\) is attained. Then,_
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)\ll\log x\left(\frac{(M+1)e^{-M}}{1+|\phi|}+\frac{(\log\log x)^{5-\frac{8}{\pi}}}{(\log x)^{2-\frac{4}{\pi}}}\right).\]
Proof.: Using Theorem 3.1, we have
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}\ll(M+1)e^{-M}\log x+\frac{( \log\log x)^{2}}{\log x}.\]
Applying Lemma 3.4, we get
\[\frac{1}{x}\sum_{n\leq x}\lambda_{f}(n) \ll\left|\frac{x^{i\phi}}{1+i\phi}\right|\left((M+1)e^{-M}\log x +\frac{(\log\log x)^{2}}{\log x}\right)+O\left(\frac{(\log\log x)^{5-\frac{8} {\pi}}}{(\log x)^{1-\frac{4}{\pi}}}\right)\] \[\ll\frac{(M+1)e^{-M}}{1+|\phi|}\log x+O\left(\frac{(\log\log x)^{ 2}}{\log x}\right)+O\left(\frac{(\log\log x)^{5-\frac{8}{\pi}}}{(\log x)^{1- \frac{4}{\pi}}}\right).\]
The first error term is smaller than the second, so it is subsumed into the second one.
## 4. Necessary Lemmas
In this section, we establish the key ingredients required for the proof of the main theorem; namely Lemmas 4.1 and 4.2, and Proposition 4.3. Across all the statements, we assume \(f\in H_{k}(N)\).
**Lemma 4.1**.: _Let \(y_{0}\geq 4\) and assume that \(|S(e^{y_{0}},f)|\geq y_{0}e^{y_{0}}y_{0}^{-1/100}\). There exists a real number \(\phi=\phi(y_{0})\) with \(|\phi|\ll y_{0}e^{y_{0}}/|S(e^{y_{0}},f)|\) such that, for any \(y\in\mathbb{R}\),_
\[\left|\frac{S(e^{y},f,\phi)}{e^{y}}-\frac{S(e^{y_{0}},f,\phi)}{e^{y_{0}}} \right|\ll\left(\frac{\log y_{0}+|y-y_{0}|}{y_{0}}\right)^{2-\frac{4}{\pi}+o( 1)}\max\{y,y_{0}\}.\]
_Moreover, for any \(\epsilon>0\), we have_
\[S(e^{y_{0}},f,\phi)=(1+i\phi)e^{-i\phi y_{0}}S(e^{y_{0}},f)+O(e^{y_{0}}y_{0}^{- 1+\frac{4}{\pi}+\epsilon}). \tag{4.1}\]
Proof.: We set \(x=e^{y_{0}}\) in Proposition 3.5. There exists \(\phi\in[-(y_{0})^{2},(y_{0})^{2}]\) such that
\[\frac{1}{e^{y_{0}}}\sum_{n\leq e^{y_{0}}}\lambda_{f}(n)\ll\frac{(1+M)e^{-M}y_ {0}}{1+|\phi|}+y_{0}^{-1+\frac{4}{\pi}}(\log y_{0})^{5-\frac{8}{\pi}}. \tag{4.2}\]
We first show that \(1+|\phi|\ll y_{0}e^{y_{0}}/|S(e^{y_{0}},f)|\). Starting from (4.2), we get
\[\frac{|S(e^{y_{0}},f)|}{y_{0}e^{y_{0}}}\leq C\left(\frac{(1+M)e^{-M}}{1+|\phi |}+\frac{1}{y_{0}^{2-\frac{4}{\pi}-\epsilon}}\right),\]
for some absolute positive constant \(C\). Rearranging the above equation yields
\[1+|\phi|\leq\frac{C(M+1)e^{-M}}{\frac{|S(e^{y_{0}},f)|}{y_{0}e^{y_{0}}}-Cy_{0} ^{-2+\frac{4}{\pi}+\epsilon}}\leq\frac{2C(M+1)e^{-M}}{\frac{|S(e^{y_{0}},f)|} {y_{0}e^{y_{0}}}}\]
where the last inequality holds as long as
\[\frac{|S(e^{y_{0}},f)|}{y_{0}e^{y_{0}}}\geq 2Cy_{0}^{-2+\frac{4}{\pi}+\epsilon},\]
which happens for all \(y_{0}\gg_{C}1\) since \(\frac{|S(e^{y_{0}},f)|}{y_{0}e^{y_{0}}}\geq y_{0}^{-1/100}\). Hence,
\[1+|\phi|\ll\frac{y_{0}e^{y_{0}}}{|S(e^{y_{0}},f)|}.\]
The first assertion follows from Corollary 3.3 with \(x=e^{y_{0}}\) and \(z=e^{y}\). The second assertion follows from Lemma 3.4 by taking \(x=e^{y_{0}}\).
**Lemma 4.2**.: _Let \(\phi\) be a real number, \(T\) a positive real number, and \(\gamma\) a real number such that \(0\leq\gamma\leq\frac{1}{2}\). Set \(S(x,f,\phi):=\sum_{n\leq x}\lambda_{f}(n)n^{-i\phi}\). Then_
\[\sqrt{2\pi T}\int_{-\infty}^{\infty}\frac{S(e^{y},f,\phi)}{e^{y}}\exp\left( \gamma y-\frac{T}{2}y^{2}\right)dy=\int_{-\infty}^{\infty}\frac{L(1-\gamma+i \phi+i\xi,f)}{1-\gamma+i\xi}\exp\left(-\frac{\xi^{2}}{2T}\right)d\xi.\]
Proof.: We apply Plancherel's formula with \(g(y)=\frac{S(e^{y},f,\phi)}{e^{y}}\exp(\gamma y)\) and \(h(y)=\exp\left(-\frac{T}{2}y^{2}\right)\). The result follows by computing
\[\widehat{g}(\xi) =\int_{-\infty}^{\infty}\frac{S(e^{y},f,\phi)}{e^{y}}\exp(\gamma y-i\xi y)dy\] \[=\sum_{n\geq 1}\lambda_{f}(n)n^{-i\phi}\int_{\log n}^{\infty}e^{y(\gamma-1-i\xi)}dy\] \[=\sum_{n\geq 1}\lambda_{f}(n)n^{-i\phi}\frac{n^{\gamma-1-i\xi}}{1-\gamma+i\xi}=\frac{L(1-\gamma+i\phi+i\xi,f)}{1-\gamma+i\xi},\]
and
\[\widehat{h}(\xi)=\int_{-\infty}^{\infty}\exp\left(-\frac{T}{2}y^{2}-iy\xi \right)dy=\sqrt{\frac{2\pi}{T}}\exp\left(-\frac{\xi^{2}}{2T}\right).\]
The next proposition combines the previous two lemmas to derive a lower bound for a certain sum over non-trivial zeros of \(L(s,f)\) under the assumption that \(S(x,f)\) is large.
**Proposition 4.3**.: _Let \(y_{0}\) be large with \(|S(e^{y_{0}},f)|=:y_{0}e^{y_{0}}/Q\geq y_{0}e^{y_{0}}y_{0}^{-1/100}\), and let \(\phi\) be as in Lemma 4.1. If \(CQ^{3}/y_{0}\leq\gamma\leq 2/5\) for a suitably large constant \(C\), then there exists \(\xi\) with \(|\xi|\leq 2\sqrt{\gamma\log(k^{\gamma}\log k)/y_{0}}\) such that_
\[\sum_{\rho}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho|^{2}}\geq\frac{y_{0}}{4}.\]
Proof.: Set \(T=\gamma/y_{0}\), and note that \(T\leq 1\). Applying Lemma 4.1, we have that
\[\sqrt{2\pi T}\int_{-\infty}^{\infty}\frac{S(e^{y},f,\phi)}{e^{y}} \exp\left(\gamma y-\frac{T}{2}y^{2}\right)dy\] \[\quad=\sqrt{2\pi T}\exp\left(\frac{\gamma y_{0}}{2}\right)\int_{ -\infty}^{\infty}\left(\frac{S(e^{y_{0}},f,\phi)}{e^{y_{0}}}+O\left(\frac{\log y _{0}+|y-y_{0}|^{2/3}}{y_{0}^{2/3}}y_{0}\right)\right)\exp\left(-\frac{T}{2}(y- y_{0})^{2}\right)dy\] \[\quad=\sqrt{2\pi T}\exp\left(\frac{\gamma y_{0}}{2}\right)\left( \frac{(1+i\phi)S(e^{y_{0}},f)}{e^{y_{0}(1+i\phi)}}+O\left(y_{0}^{1/3}\right) \right)\int_{-\infty}^{\infty}\exp\left(-\frac{T}{2}(y-y_{0})^{2}\right)dy\] \[\qquad\qquad\qquad+O\left(y_{0}^{1/3}\sqrt{2\pi T}\exp\left( \frac{\gamma y_{0}}{2}\right)\int_{-\infty}^{\infty}\left((\log y_{0}+|y-y_{0 }|^{2/3})\right)\exp\left(-\frac{T}{2}(y-y_{0})^{2}\right)dy\right).\]
Noting that
\[\int_{-\infty}^{\infty}\exp\left(-\frac{T}{2}(y-y_{0})^{2}\right)dy=\sqrt{ \frac{2\pi}{T}}\]
and
\[\int_{-\infty}^{\infty}\left((\log y_{0}+|y-y_{0}|^{2/3})\right)\exp\left(- \frac{T}{2}(y-y_{0})^{2}\right)dy=\log y_{0}\sqrt{\frac{2\pi}{T}}+\left(\frac {2}{T}\right)^{5/6}\Gamma\left(\frac{5}{6}\right),\]
the integral above equals
\[2\pi\exp\left(\frac{\gamma y_{0}}{2}\right)\left(\frac{(1+i\phi)S(e ^{y_{0}},f)}{e^{y_{0}(1+i\phi)}}+O\left(y_{0}^{1/3}+y_{0}^{1/3}\log y_{0}+\left( \frac{y_{0}}{T}\right)^{1/3}\right)\right)\] \[= 2\pi\exp\left(\frac{\gamma y_{0}}{2}\right)\left(\frac{(1+i\phi)S (e^{y_{0}},f)}{e^{y_{0}(1+i\phi)}}+O\left(\frac{y_{0}^{2}}{\gamma}\right)^{ \frac{1}{3}}\right).\]
Using our lower bound on \(\gamma\), we see that this integral is in magnitude \(\geq\pi y_{0}\exp\left(\frac{\gamma y_{0}}{2}\right)/Q\). So it follows from our assumption on \(S(e^{y_{0}},f)\) and Lemma 4.2 that
\[\frac{\pi y_{0}}{Q}\exp\left(\frac{\gamma y_{0}}{2}\right) \leq\int_{\mathbb{R}}\frac{|L(1-\gamma+i(\phi+\xi),f)|}{|1-\gamma+ i\xi|}\exp\left(-\frac{\xi^{2}}{2T}\right)\,d\xi\] \[\leq\left(\max_{\xi\in\mathbb{R}}\frac{|L(1-\gamma+i(\phi+\xi),f )|}{|1-\gamma+i\xi|}\exp\left(-\frac{\xi^{2}}{4T}\right)\right)\int_{\mathbb{ R}}\exp\left(-\frac{\xi^{2}}{4T}\right)\,d\xi.\]
Thus,
\[\max_{\xi\in\mathbb{R}}\frac{|L(1-\gamma+i(\phi+\xi),f)|}{|1-\gamma+i\xi|}\exp \left(-\frac{\xi^{2}}{4T}\right)\geq\frac{\pi y_{0}\exp\left(\frac{\gamma y_{0 }}{2}\right)}{Q}\frac{1}{2\sqrt{T\pi}}=\frac{y_{0}\exp\left(\frac{\gamma y_{0} }{2}\right)}{2Q}\sqrt{\frac{\pi y_{0}}{\gamma}}. \tag{4.3}\]
If \(\operatorname{Re}(s)=\sigma>1/2\), we have that
\[|L(s,f)| \leq\left|s\int_{1}^{\infty}\frac{S(x,f)}{x^{s+1}}\,dx\right|\leq \left|s\right|\int_{1}^{\infty}\frac{\min(x\log x,(xk)^{1/2}\log(xk)^{1/2})}{ x^{\sigma+1}}\,dx\] \[=\frac{\left|s\right|}{k^{\sigma-1}}\left(\frac{k^{\sigma}}{( \sigma-1)^{2}}-\frac{\log k}{\sigma-1}-\frac{1}{(\sigma-1)^{2}}+\frac{2\log k }{2\sigma-1}+\frac{2}{(2\sigma-1)^{2}}\right),\]
where we recall that \(k\) is the weight of \(f\). It follows that, if \(|\xi|>2\sqrt{\gamma\log(k^{\gamma}\log k)/y_{0}}\), we have
\[\frac{|L(1-\gamma+i(\phi+\xi),f)|}{|1-\gamma+i\xi|}\exp\left(- \frac{\xi^{2}}{4T}\right)\] \[\leq\left|\frac{1-\gamma+i(\phi+\xi)}{1-\gamma+i\xi}\right|k^{ \gamma}\left(\frac{k^{-\gamma}}{\gamma^{2}}+\frac{\log k}{\gamma}-\frac{1}{ \gamma^{2}}+\frac{2\log k}{1-2\gamma}+\frac{2}{(1-2\gamma)^{2}}\right)\exp \left(-\frac{\xi^{2}}{4T}\right)\] \[=\left|\frac{1-\gamma+i(\phi+\xi)}{1-\gamma+i\xi}\right|\left( \frac{k^{\gamma}\log k}{\gamma}+\frac{2k^{\gamma}\log k}{1-2\gamma}-\frac{k^{ \gamma}-1}{\gamma^{2}}+\frac{2k^{\gamma}}{(1-2\gamma)^{2}}\right)\exp\left(- \frac{\xi^{2}}{4T}\right)\] \[\leq\left|\frac{1-\gamma+i(\phi+\xi)}{1-\gamma+i\xi}\right| \left(\frac{k^{\gamma}\log k}{\gamma}+\frac{3k^{\gamma}\log k}{1-2\gamma} \right)\frac{k^{-\gamma}}{\log k}\] \[\leq\frac{7(1+2|\phi|)}{\gamma}.\]
Since \(CQ^{3}/y_{0}\leq\gamma\), the right-hand side of (4.3) is greater than
\[\frac{\pi^{1/2}C^{\frac{3}{2}}Q^{\frac{7}{2}}\exp(\frac{CQ^{3}}{2})}{\gamma^{2}}\]
which is larger than \((7+14Q)/\gamma\) with a suitably large \(C\). Therefore, the maximum on the left-hand side of (4.3) cannot be attained in this range of \(\xi\). Thus, there exists \(\xi\) with \(|\xi|\leq 2\sqrt{\gamma\log(k^{\gamma}\log k)/y_{0}}\) such that
\[|L(1-\gamma+i(\phi+\xi),f)|\geq\frac{y_{0}}{2Q}\exp\left(\frac{\gamma y_{0}}{2} \right)\sqrt{\frac{\pi y_{0}}{\gamma}}|1-\gamma+i\xi|\exp\left(\frac{\xi^{2}}{ 4T}\right)\geq\frac{y_{0}}{2Q}\exp\left(\frac{\gamma y_{0}}{2}\right)\sqrt{ \frac{\pi y_{0}}{\gamma}}.\]
By utilizing this bound in conjunction with Lemma 2.1, we obtain
\[\frac{1}{\gamma^{2}}\exp\left(\sum_{\rho}\frac{2\gamma^{2}}{|1+\gamma+i(\phi+ \xi)-\rho|^{2}}\right)\gg\frac{y_{0}}{2Q}\exp\left(\frac{\gamma y_{0}}{2} \right)\sqrt{\frac{\pi y_{0}}{\gamma}}.\]
Consequently,
\[\exp\left(2\gamma\left(\sum_{\rho}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho|^{ 2}}-\frac{y_{0}}{4}\right)\right)\gg\frac{\gamma^{2}y_{0}}{2Q}\sqrt{\frac{\pi y _{0}}{\gamma}}\geq\frac{1}{2}\sqrt{\pi}C^{\frac{3}{2}}Q^{\frac{7}{2}},\]
since \(\gamma y_{0}\geq CQ^{3}\). Choosing \(C\) large enough ensures the right-hand side is \(\geq 1\). Hence,
\[\sum_{\rho}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho|^{2}}-\frac{y_{0}}{4}\geq 0,\]
as desired.
## 5. Proof of Theorem 1.1
This section is devoted to the proof of Theorem 1.1. Let \(\phi,\gamma\), and \(\xi\) be as given in Proposition 4.3. Let \(Y=\sqrt{\frac{\log(k^{\gamma}\log k)}{y_{0}}}\), and suppose that \(|1+i\phi-\rho|\geq 100Y^{2}\). Then,
\[|1+\gamma+i(\phi+\xi)-\rho| \geq\left|1+50Y^{2}+i\phi-\rho\right|-\left(50Y^{2}+|\xi|\right)\] \[\geq\left|1+50Y^{2}+i\phi-\rho\right|-\left(50Y^{2}+2Y\right)\] \[\geq\frac{1}{2}\left|1+50Y^{2}+i\phi-\rho\right|.\]
Therefore, applying (2.5) and (2.6), along with the bound \(|\phi|\ll Q\leq y_{0}^{1/100}\), we have
\[\sum_{\begin{subarray}{c}\rho\\ |1+i\phi-\rho|>100Y^{2}\end{subarray}}\frac{\gamma}{|1+\gamma+i(\phi+\xi)- \rho|^{2}} \leq\frac{4\gamma}{50Y^{2}}\sum_{\begin{subarray}{c}\rho\\ |1+i\phi-\rho|>100Y^{2}\end{subarray}}\frac{1}{|1+50Y^{2}+i\phi-\rho|}\] \[\leq\frac{2\gamma}{25Y^{2}}\sum_{\begin{subarray}{c}\rho\\ |1+i\phi-\rho|>100Y^{2}\end{subarray}}\operatorname{Re}\left(\frac{1}{1+50Y^{2 }+i\phi-\rho}\right)\] \[\leq\frac{2\gamma}{25Y^{2}}\left(\frac{1}{2}\log(k^{2}+y_{0}^{1/5 0})+\frac{1}{100Y^{2}}+O(1)\right)\] \[\leq\frac{2y_{0}}{25\log k}\left(\frac{1}{2}\log(k^{2}+y_{0}^{1/5 0})+\frac{y_{0}}{100\gamma\log k}+O(1)\right).\]
Letting \(y_{0}=\log x\), with \(\sqrt{\log k}\leq\log x\leq\frac{1}{2}\log k\), we get
\[\sum_{\begin{subarray}{c}\rho\\ |1+i\phi-\rho|>100Y^{2}\end{subarray}}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho| ^{2}} \leq\frac{2\log x}{25\log k}\left(\frac{1}{2}\log(2k^{2})+\frac{( \log x)^{2}}{100CQ^{3}\log k}+O(1)\right)\] \[\leq\frac{2\log x}{25\log k}\left(\log k+\frac{\log x}{200CQ^{3}} +O(1)\right)\] \[\leq\frac{2}{25}\left(\log x+\frac{\log x}{400CQ^{3}}+O(1)\right) \tag{5.1}\] \[\leq\frac{2}{25}\left(\log x+A\log x\right)\leq\frac{9}{100}\log x,\]
where the last inequality follows from taking \(A\) small enough, which is possible by choosing \(C\) sufficiently large. Using Proposition 4.3 and (5.1) gives
\[\sum_{\begin{subarray}{c}\rho\\ |1+i\phi-\rho|\leq 100Y^{2}\end{subarray}}\frac{\gamma}{|1+\gamma+i(\phi+\xi)- \rho|^{2}} =\sum_{\rho}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho|^{2}}-\sum_{ \begin{subarray}{c}\rho\\ |1+i\phi-\rho|>100Y^{2}\end{subarray}}\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho |^{2}}\] \[\geq\frac{\log x}{4}-\frac{9}{100}\log x=\frac{4}{25}\log x. \tag{5.2}\]
Since
\[\frac{\gamma}{|1+\gamma+i(\phi+\xi)-\rho|^{2}}\leq\frac{1}{\gamma},\]
the left-hand side of (5.2) is at most \(1/\gamma\cdot\#\{\rho\,:\,|1+i\phi-\rho|<100Y^{2}\}\). Recalling that
\[Y^{2}=\frac{\log(k^{\gamma}\log k)}{\log x}=\frac{\gamma\log(k(\log k)^{1/ \gamma})}{\log x},\]
we conclude that
\[\#\left\{\rho\,:\,|1+i\phi-\rho|<\frac{100\gamma\log(k(\log k)^{1/\gamma})}{ \log x}\right\}\geq\frac{4\gamma\log x}{25}.\]
The proof of the theorem is completed by setting \(L:=100\gamma\log x\).
## Acknowledgements
This project originated from Women In Numbers 6 Research Workshop that took place at Banff International Research Station in March 2023. The authors express their utmost gratitude to the organizers for the invaluable opportunity provided by the workshop.
|
2306.08848 | Datasheets for Machine Learning Sensors: Towards Transparency,
Auditability, and Responsibility for Intelligent Sensing | Machine learning (ML) sensors are enabling intelligence at the edge by
empowering end-users with greater control over their data. ML sensors offer a
new paradigm for sensing that moves the processing and analysis to the device
itself rather than relying on the cloud, bringing benefits like lower latency
and greater data privacy. The rise of these intelligent edge devices, while
revolutionizing areas like the internet of things (IoT) and healthcare, also
throws open critical questions about privacy, security, and the opacity of AI
decision-making. As ML sensors become more pervasive, it requires judicious
governance regarding transparency, accountability, and fairness. To this end,
we introduce a standard datasheet template for these ML sensors and discuss and
evaluate the design and motivation for each section of the datasheet in detail
including: standard datasheet components like the system's hardware
specifications, IoT and AI components like the ML model and dataset attributes,
as well as novel components like end-to-end performance metrics, and expanded
environmental impact metrics. To provide a case study of the application of our
datasheet template, we also designed and developed two examples for ML sensors
performing computer vision-based person detection: one an open-source ML sensor
designed and developed in-house, and a second commercial ML sensor developed by
our industry collaborators. Together, ML sensors and their datasheets provide
greater privacy, security, transparency, explainability, auditability, and
user-friendliness for ML-enabled embedded systems. We conclude by emphasizing
the need for standardization of datasheets across the broader ML community to
ensure the responsible use of sensor data. | Matthew Stewart, Pete Warden, Yasmine Omri, Shvetank Prakash, Joao Santos, Shawn Hymel, Benjamin Brown, Jim MacArthur, Nat Jeffries, Sachin Katti, Brian Plancher, Vijay Janapa Reddi | 2023-06-15T04:24:13Z | http://arxiv.org/abs/2306.08848v3 | # Datasheets for Machine Learning Sensors
###### Abstract
Machine learning (ML) sensors offer a new paradigm for sensing that enables intelligence at the edge while empowering end-users with greater control of their data. As these ML sensors play a crucial role in the development of intelligent devices, clear documentation of their specifications, functionalities, and limitations is pivotal. This paper introduces a standard datasheet template for ML sensors and discusses its essential components including: the system's hardware, ML model and dataset attributes, end-to-end performance metrics, and environmental impact. We designed, developed, and provide example datasheets for two ML sensors performing computer vision-based person detection: one an open-source ML sensor designed and developed in-house, and a second commercial ML sensor. We discuss and evaluate the design and motivation for each section in detail. In particular, we highlight how these datasheets can facilitate better understanding and utilization of sensor data in ML applications, and we provide objective measures upon which system performance can be evaluated and compared. Together, ML sensors and their datasheets provide greater privacy, security, transparency, explainability, auditability, and user-friendliness for ML-enabled embedded systems. We conclude by emphasizing the need for standardization of datasheets across the broader ML community to ensure the responsible and effective use of sensor data.
## 1 Introduction
The recent emergence of tiny machine learning (TinyML), a branch of ML dedicated to ultra-low power devices, has opened the door to a myriad of new possibilities for intelligent sensing at the edge by leveraging embedded systems [44; 8; 26]. TinyML enables resource-constrained devices to perform complex computations with low latency and minimal energy consumption, making it particularly suitable for applications such as the Internet of Things (IoT), wearables, and smart sensors. However, integrating TinyML models into physical sensor systems can be complex, often requiring a deep understanding of ML algorithms and embedded systems. This knowledge barrier can hinder the widespread adoption of on-device intelligence.
To address these challenges, the "ML sensor" has been proposed as an innovative solution that tightly couples the TinyML model with the physical sensor, effectively offloading the computational burden from the application processor [45]. The ML sensor architecture introduces useful layers of abstraction both at the hardware level and at the level of the full integrated device, creating a fully self-contained intelligent sensor module. ML sensors, however, also present a new challenge: the lack of transparency [38; 36]. Unlike traditional sensors that come with datasheets providing hardware and operating characteristics, ML sensors lack such documentation. This absence hampers developers' ability to assess sensor suitability and independently evaluate performance.
To promote transparency and responsible innovation in embedded ML systems, in this work we design a template for developing ML sensor datasheets. Importantly, this template was developed as a collaboration between academia and industry to ensure its applicability and usefulness in real-world commercial settings. As such, through the lens of a case study that uses computer vision for person detection, we not only developed and manufactured the first open-source ML sensor and its datasheet, but also present a datasheet for a commercial sensor that respects the intellectual property considerations of its manufacturer, and evaluate both sensors' end-to-end performance through an experimental study.
Our datasheet template includes sections for traditional sensor specifications while also capturing ML model characteristics, dataset details, and other important considerations such as environmental impact, regulatory compliance, and end-to-end performance. As such, our datasheet template enables an ML sensor to offer transparency, auditability, and user-friendliness to system integrators and developers, simplifying, robustifying, and securing the deployment of TinyML into production embedded systems and applications. Furthermore, this approach allows developers to focus on designing and optimizing models without the need for extensive hardware expertise, thereby fostering rapid innovation and application of these new emerging technologies. This approach has already sparked interest in the wider community, leading to ongoing and active discussion with other major organizations such as the Data Nutrition Project [22] and Consumer Reports [1], as well as numerous other hardware developers outside of this collaboration.
In summary, this paper's contributions are to promote **responsible innovation in ML sensors by:**
* Proposing and designing a comprehensive template for developing ML sensor datasheets, addressing the current transparency challenge and bridging the knowledge gap between ML algorithms and embedded systems.
* Pioneering the development of the first-ever open-source ML sensor, setting a benchmark for the industry and promoting accessibility and innovation in the TinyML community.
* Demonstrating the application of the datasheet by developing an open-source reference example of a datasheet for our open-source ML sensor (included in full as Appendix B).
* Showcasing the universal applicability of our datasheet template for both open-source and proprietary devices through collaboration with a commercial partner, for which we created a tailored datasheet that respects and incorporates intellectual property considerations (included in full as Appendix C).
* Evaluating the end-to-end performance of both open-source and commercial ML sensors in an IRB-approved experimental study, providing insights into their practical application and analysis.
## 2 Background and Related Work
Historically, datasheets have been instrumental in detailing the physical attributes of hardware, including sensors. These documents outline features like power consumption, operating temperature, and application-specific parameters such as detection limits and measurement frequency. This information is critical for developers to ascertain sensor suitability for their specific applications and serves as a reference for quality assurance, especially in performance-critical workflows.
The concept of a datasheet has been extended to other domains, including ML. Recent research underscores the importance of thorough documentation for ML datasets, covering data collection, cleaning, labeling, and intended use [9; 47; 41]. While these studies targeted specific datasets, datasheets for datasets was proposed as a more general framework for documenting dataset characteristics [18]. The data nutrition label offers a similar diagnostic framework presenting a standardized view of a dataset's important attributes [22; 11]. IBM has also introduced the idea of factsheets to document various features of ML services in order to bolster trust [7]. Beyond datasets, short documents accompanying trained ML models, known as model cards, have been proposed to provide benchmarked evaluations under diverse conditions relevant to intended application domains [31]. Efforts have also been made to include relevant privacy and security information to IoT devices [15; 16]. More recently, efforts have been made to characterize operational and embodied emissions of hardware devices, including TinyML, to help quantify their environmental impact in domains such as water usage, carbon emissions, and eutrophication potential [20; 37]. The growing trends in ML documentation and the
increasing use of ML have highlighted the need for ethical considerations [10]. The trend towards responsible innovation reinforces the importance of transparency, auditability, accountability, and socially responsible practices for developers creating devices that may pervade society at scale [34].
Table 1 highlights the novelty of the proposed ML sensor datasheet. Unlike prior works, ML sensors uniquely encompass integrated hardware, software, and machine learning elements, and as such an ML sensor amalgamates diverse concepts. Therefore, while our work builds (in part) on prior developments, we assert the need to augment a traditional sensor datasheet with vital ML elements into a comprehensive and extendable datasheet.
## 3 The ML Sensor Paradigm
An ML sensor is a self-contained system utilizing on-device ML to interpret a complex set of sensor data and relay high-level information through a straightforward interface to a wider system. The features of an ML sensor deviate from those of traditional on-device applications. As Figure 1(a) shows, rather than transmitting data to an application processor, the processing occurs directly within the sensor itself. This approach prioritizes data locality, bringing about enhanced privacy and security as raw data is never transmitted. Only summarized traits of the data extracted by an on-sensor ML model are conveyed off-sensor. This distinctive attribute signifies a fundamental shift in data handling, which makes ML sensors a significant evolution in sensor technology.
ML sensors present unique challenges due to their distinct architecture. Being devices that execute processing on the sensor itself, they demand an intricate balance of processing power, data handling, and privacy concerns. In traditional models, sensors and processors are distinct components, which allows for the division of labor. The sensor is primarily focused on data acquisition, while the processor handles the data processing. However, in the case of ML sensors, these roles are merged, putting a considerable load on the sensor. Another challenge is that of computational resource allocation. Sensors must be lightweight and energy-efficient while also being capable of deploying complex ML algorithms. Ensuring that an ML sensor has the requisite processing power without overwhelming its physical constraints is a significant design hurdle.
| Datasheets \(\downarrow\) | Env. Impact | Dataset | Model | Hardware | End-to-End | Privacy & Security |
| --- | :---: | :---: | :---: | :---: | :---: | :---: |
| ML Sensors (our work) | **Y** | **Y** | **Y** | **Y** | **Y** | **Y** |
| Model Cards [31] | **N** | **N** | **Y** | **N** | **N** | **N** |
| Traditional Sensor Datasheet | **N** | **N** | **N** | **Y** | **N** | **N** |
| Data Nutrition Label [22, 11] | **N** | **Y** | **N** | **N** | **N** | **N** |
| Datasheets for Datasets [18] | **N** | **Y** | **N** | **N** | **N** | **N** |
| IoT Security/Privacy Label [15, 16] | **N** | **N** | **N** | **N** | **N** | **Y** |

Table 1: Comparison of ML sensor datasheets with other datasheet types.
Figure 1: (a) The ML sensor paradigm. (b) Examples of existing ML sensors; _(top)_ Seeed Studio’s SenseCAP LoRaWAN sensor [39], _(bottom left)_ our own person detection sensor whose design is publicly accessible, and _(bottom right)_, Useful Sensor’s person sensor [40]. We will be using our person detection sensor as the prime example for how we developed the ML sensor datasheet.
Data privacy and security also present unique challenges. While ML sensors provide enhanced privacy due to on-device processing, they also need to ensure data security at the sensor level. This might include secure execution of ML models and secure transmission of processed data. Furthermore, implementing ML at the sensor level requires the development of models small enough to be deployed on the sensor, but still complex enough to accurately process the data they are designed to interpret. Designing these efficient yet effective models remains challenging. Finally, as the sensor needs to adapt to changes in the environment through ML, there is the challenge of continuous learning and model updates. Ensuring the ML sensor can handle this while upholding privacy and security assurances, as well as without increased power consumption or latency, is an area of ongoing research.
Figure 1(b) shows examples of existing ML sensors that can be used for person detection. These sensors can determine whether a person is present within view of the on-device camera. Such ML-enabled devices typically come with hardware specifications and information on utilizing the sensor's capabilities, but often have limited information on (1) what data the on-device models are trained on, (2) the architecture of the model and how it performs on various related benchmarks, (3) environmental impact, (4) how the device performs in response to changing environmental parameters anticipated during deployment, and (5) information related to privacy, security, and compliance.
## 4 The ML Sensor Datasheet
A datasheet is a document detailing the features and characteristics of a product. Datasheets are a standard component of commercially available sensors and enable users to evaluate device suitability. In this section, and as diagrammed in Figure 2, we present what we believe should be the sections of an ML sensor datasheet. Our format captures both traditional sensor components, as well as machine learning components, environmental impact, responsible AI analysis, and end-to-end system performance metrics. This datasheet format is designed to capture both best-practices from the literature, as well as our experience developing an open-source, commercially-relevant ML sensor. We next describe each section of the datasheet template along with select plots and figures drawn from our open-source person detection ML sensor datasheet which can be found in Appendix B. We also note that a full datasheet for the Useful Sensors commercial person detection ML sensor can be found in Appendix C. We discuss some of the key differences between the datasheets in Section 5.
Figure 2: Schematic of the proposed ML sensor datasheet encompassing 10 distinct sections that combine traditional sensor datasheet components with machine learning components, environmental impact and responsible AI analysis, and end-to-end system performance metrics.
### Description
_What are high-level characteristics of the sensor?_ The description section of the ML sensor datasheet provides an introduction to the device for both technical and non-technical audiences. On the technical side, it includes intricate details about the device's specifications, architecture, and operational principles. For non-technical readers, it offers a more accessible description, explaining the sensor's purpose and function in plain language. This section also highlights key features of the ML sensor, such as high sensitivity, low power consumption, robust data processing capabilities, and its adaptability to various environmental conditions. Additionally, it presents a list of common applications where the sensor could be beneficial, such as predictive maintenance in industrial settings, environmental monitoring, healthcare diagnostics, autonomous vehicles, and smart home systems. In the context of our person detection sensor (Figure 1(b)), the description would be "a device that predicts whether an individual is present in the view of the camera and outputs a corresponding signal response."
### Dataset Nutrition Label
_What data is the model trained on?_ To evaluate the dataset used in training the on-device model, we utilize the second-generation Dataset Nutrition Label [11; 22]. This label communicates high-level dataset information to end-users, including (1) the sources of the dataset (i.e., governmental, commercial, academic), (2) licensing details of the dataset, (3) data modality, and (4) context-specific information (e.g., human-labeled, contains information about human individuals), amongst other information. This label promotes transparency and accountability by providing detailed information about the context, content, and quality of dataset(s) used in training the ML model. As such, it fosters responsible development and deployment of models by making it easier for developers, researchers, and stakeholders to assess data quality and potential biases such as sampling, measurement, and label bias [30; 27]. The data nutrition label for the person detection sensor shown in Figure 1(b) highlights that the dataset, the Visual Wake Words dataset [12], is from an upstream source (MS-COCO [28]), contains information about humans obtained without consent, and that the dataset is not currently managed or updated by any entity.
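To make the structure of such a label concrete, the short sketch below renders the headline fields listed above in machine-readable form. The field names and the license placeholder are our own illustrative choices and do not follow the official Dataset Nutrition Label schema.

```python
# Hypothetical machine-readable rendering of the headline label fields described above.
dataset_nutrition_label = {
    "dataset": "Visual Wake Words",
    "upstream_source": "MS-COCO",
    "source_type": "academic",
    "license": "see upstream dataset terms",  # placeholder; consult the label itself
    "modality": "images",
    "human_labeled": True,
    "contains_human_subjects": True,
    "consent_obtained": False,
    "actively_maintained": False,
}

for field, value in dataset_nutrition_label.items():
    print(f"{field}: {value}")
```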
### Model Characteristics
_What are the characteristics of the trained model?_ This section of the datasheet provides insights into the specific ML model operating within the sensor. This includes important details such as the type of the ML model used, the size of the model in terms of parameters, the type and size of input data it can process, and the nature of output it generates. This section also discusses the model's performance metrics, such as accuracy, precision, recall, F1 score, or receiver operating characteristics (ROC), measured on a relevant validation dataset. It may also address the model's robustness to variations in input data, its sensitivity to noise, and its generalization capabilities. This section is vital for users to understand the underlying technology of the sensor, its computational requirements, its performance under different operating conditions, and ultimately, its suitability for their specific use cases. Figure 3 shows some example model characteristics of the ML sensor running a MobileNetV1 architecture [23] trained for person detection. The ROC curve shows that the optimal threshold value lies around 0.52 to balance false positives and false negatives, which were valued equally. The confusion matrix shows the accuracy of the model on the test set using this specific threshold value.
Figure 3: ROC curve _(left)_, precision-recall curve _(center)_, and confusion matrix _(right)_ for the person detection ML sensor evaluated on a test set. The confusion matrix was calculated with the optimal threshold value of 0.52.
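As an illustration of how such an operating point can be derived, the sketch below selects a threshold from validation scores using Youden's J statistic, which weights false positives and false negatives equally as described above, and then builds the corresponding confusion matrix. The synthetic labels and scores are placeholders standing in for a real labelled validation set.

```python
import numpy as np
from sklearn.metrics import roc_curve, confusion_matrix

# Placeholder validation data: ground-truth person/no-person labels and model scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.7 * y_true + 0.3 * rng.random(1000), 0.0, 1.0)

# ROC curve over all candidate thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Youden's J statistic: maximise (true positive rate - false positive rate).
best = np.argmax(tpr - fpr)
threshold = thresholds[best]

# Confusion matrix at the chosen operating point.
y_pred = (y_score >= threshold).astype(int)
cm = confusion_matrix(y_true, y_pred)
print(f"chosen threshold = {threshold:.2f}")
print(cm)
```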
### Sensor Performance Analysis
_How does the device perform as a whole with changing environmental parameters?_ The end-to-end performance analysis section of the datasheet provides an encompassing evaluation of the sensor's performance from data acquisition to data processing and output generation. This holistic performance analysis may include metrics such as data collection rate, latency, power consumption, and accuracy of the sensor's outputs under a range of conditions. Additionally, it highlights the performance of the ML model when deployed on the sensor hardware, taking into account aspects such as data preprocessing, inference speed, and model accuracy. The analysis could also encompass how the sensor's performance scales with changes in workload or environmental conditions. This section is crucial as it helps potential users understand not only the isolated performance of the sensor's components but also how they work together to provide a coherent service. This understanding is vital when integrating the sensor into larger systems or evaluating its fit for particular use-cases.
We present an exemplary case study of end-to-end performance analysis on our open-source person detection sensor in Figure 4, using data from three sensors to capture device variability. For a detailed description of the experimental study please see Appendix A. In particular, we assess our device's performance under a set of different lighting conditions and distances on a diverse group of volunteers with a range of skin tones and genders. These analyses provide examples of both device efficiency under changing environmental conditions, a common type of analysis on standard sensor datasheets, as well as possible demographic biases embedded within the ML model. Figure 4(a) shows that lighting conditions had little impact on performance, likely as a result of the high contrast testing environment, while Figure 4(b) shows that performance degraded sharply when distance increased from 3 to 5 meters. Figures 4(c) and 4(d) show that the model performed slightly better on men than on women and demonstrated a skin tone bias which favored lighter skin tones, warning of potential biases in the open-source pipeline used to develop the particular model on the device. We note that in particular, the diversity of clothing worn by study participants was not captured in this data and may have had a significant effect on our results.
Figure 4: End-to-end performance analysis of the ML sensor tested on 38 volunteers under controlled laboratory conditions. Skin tone was estimated using the Monk Skin Tone (MST) Scale [14].
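A minimal sketch of how such end-to-end trial results can be aggregated is given below; the column names and example rows are hypothetical stand-ins for the study's actual trial log rather than the data behind Figure 4.

```python
import pandas as pd

# Hypothetical trial log: one row per presentation of a participant to the sensor.
trials = pd.DataFrame({
    "lighting": ["bright", "bright", "dim", "dim"],
    "distance_m": [1, 3, 1, 5],
    "skin_tone_mst": [3, 7, 5, 9],
    "detected": [True, True, True, False],
})

# Detection rate broken down by each environmental or demographic factor.
by_lighting = trials.groupby("lighting")["detected"].mean()
by_distance = trials.groupby("distance_m")["detected"].mean()
by_skin_tone = trials.groupby("skin_tone_mst")["detected"].mean()

print(by_lighting, by_distance, by_skin_tone, sep="\n\n")
```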
### Security and Privacy
_What security and privacy features does the ML sensor have?_ The IoT security and privacy label is aimed at enhancing consumer awareness and facilitating informed decision-making when purchasing smart devices [15]. This label aims to promote transparency and empower consumers, allowing them to make well-informed choices in an increasingly connected world. The label is structured in two distinct layers: a primary layer, which conveys essential privacy and security information in a concise and easily digestible manner, and a secondary layer, which delves into further detail for experts and more technically inclined users [16]. The label covers privacy-related aspects such as data collection, retention and transmission practices, security mechanisms (e.g., encryption, automatic security updates), as well as the types of sensors present on the device and their associated data modalities. The primary layer is intended for display on product packaging or online shopping platforms, while the secondary layer is accessible through the QR code on the primary layer, offering further valuable insights for those seeking a deeper understanding of a device's potential risks and safeguards. For our ML sensor, the IoT security and privacy label shows that there is only a camera on the device collecting data continuously, but this data is not being stored or transmitted off-device.
### Device Diagrams
_What does the device size, shape, and layout look like?_ The device diagram section of the ML sensor datasheet provides visual depictions and physical dimensions of the device. It includes detailed diagrams that illustrate the sensor's internal components and their interconnections, offering insights into the design and operation of the sensor. For non-technical audiences, these diagrams can provide a more intuitive understanding of the device, beyond what text descriptions can offer. These diagrams include form factor information which describes the physical shape, size, and layout of the sensor, since this data is crucial in planning the sensor's integration into various systems and devices. Details for our open-source person detector are shown in Figure 5, again with the full datasheets for both devices provided in Appendix B and C.
### Hardware Characteristics
_How do I use, power, and interface with the device?_ This section of the datasheet provides an overview of the physical and functional attributes of the device. It contains specifics about the sensor's integral hardware components, including the processor type, memory capacity, power requirements, and durability under different environmental conditions. In addition, it includes detailed information about the communication protocols supported by the sensor, such as Wi-Fi, Bluetooth, or cellular connectivity, along with data transfer rates. This data is crucial in determining the sensor's compatibility with existing hardware infrastructure. For instance, it can inform whether the sensor can efficiently transmit data over a specific network or whether it can endure specific environmental conditions. Figure 5 _(left)_ shows our ML sensor with a square form factor and dimensions \(27.2\) mm \(\times\) \(27.7\) mm. It employs the industry-standard Inter-Integrated Circuit (I\({}^{2}\)C) interface via a Qwiic connector [3], allowing a data transfer rate of up to 100 kbit/s. Figure 5 _(middle)_ and _(right)_ show the data standard and open-source schema we developed for communication [2]. The sensor communicates through a single byte with values from 0 to 255. The device can accept voltages in the range 3.5-5.5 V with a 40 mA operating current.
Figure 5: _(left)_ Device diagram of person detection ML sensor, _(middle)_ standard for data communication, and _(right)_ schema for communication of data off-sensor.
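As a concrete illustration of this interface, the sketch below reads the single status byte over I\({}^{2}\)C using the smbus2 Python library. The bus number, the 7-bit device address, and the interpretation of the byte as an approximate confidence are assumptions made for illustration and should be checked against the device diagrams and communication schema above.

```python
from smbus2 import SMBus

I2C_BUS = 1          # assumed host I2C bus number (e.g. bus 1 on a Raspberry Pi)
SENSOR_ADDR = 0x62   # placeholder 7-bit address; consult the datasheet for the real value

with SMBus(I2C_BUS) as bus:
    # The sensor reports a single byte in [0, 255] encoding the person-detection result.
    raw = bus.read_byte(SENSOR_ADDR)
    confidence = raw / 255.0  # assumed linear mapping of the byte to a confidence
    print(f"raw = {raw}, approximate confidence = {confidence:.2f}")
```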
### Environmental Impact
_How does the device affect the environment during its lifecycle?_ There are currently around 15 billion IoT devices with projections of billions more to come each year [42]. However, embedding smart computing into everyday objects has looming environmental consequences through increased electronic waste [21]. With this widespread deployment, it is essential to consider the environmental impact of these devices. Therefore, another component we advocate to be included in the datasheet is an "environmental impact" section that outlines the device's footprint. As such, we generated a sample of what this section might look like as part of the datasheet for our sensor specifically.
We captured the carbon footprint of our ML sensor using the methodology of Prakash et al. [37]. The calculator includes fields for processing, sensing, power supply, memory, and more, enabling us to input specifications from our bill of materials. Furthermore, we also capture the carbon footprint for the ML sensor's model training, transport, and three-year use. While training costs can be amortized over multiple sensor deployments, we consider them separately to provide a conservative carbon footprint estimate. The total footprint of our ML sensor, including embodied and operational carbon, is approximately \(2.34\) kg CO\({}_{2}\)-eq. Figure 6 shows that the majority of the footprint is attributable to the power supply and camera sensor. We note that other important environmental impact indicators, such as freshwater eutrophication, should ideally also be included in future datasheets. However, this would require broader information about upstream products and manufacturing processes which are not freely available. To address this, compliance and certification mechanisms could provide an avenue for incorporating a broader range of factors into the environmental impact analysis.
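The bookkeeping behind such an estimate can be sketched as follows. Every number below is an illustrative placeholder (they are not the values entered into the calculator and are not chosen to reproduce the 2.34 kg CO\({}_{2}\)-eq total), but the structure, embodied footprints summed with an operational term of average power times usage time times grid carbon intensity, mirrors the calculation described above.

```python
# Illustrative footprint bookkeeping; all values are placeholders in kg CO2-eq.
embodied_kg = {
    "power_supply": 0.9,
    "camera_sensor": 0.7,
    "processor_and_memory": 0.3,
    "transport": 0.1,
    "model_training": 0.1,
}

# Operational footprint over an assumed three-year deployment.
avg_power_w = 0.1        # assumed average power draw in watts
hours = 3 * 365 * 24
grid_kg_per_kwh = 0.4    # assumed grid carbon intensity
operational_kg = avg_power_w / 1000 * hours * grid_kg_per_kwh

total_kg = sum(embodied_kg.values()) + operational_kg
print(f"embodied: {sum(embodied_kg.values()):.2f} kg CO2-eq")
print(f"operational: {operational_kg:.2f} kg CO2-eq")
print(f"total: {total_kg:.2f} kg CO2-eq")
```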
### Compliance and Certification
_Which international regulations and industry standards does the device conform to?_ The compliance and certification section of the ML sensor datasheet catalogs the sensor's alignment with various regulatory and industry standards. It lists the certifications the sensor has achieved, signifying thorough testing and validation by recognized certification bodies. These may encompass international data privacy regulations like GDPR [35], radio frequency usage guidelines like FCC regulations, or industry-specific requirements like HIPAA [4] or FDA standards in healthcare. This section could also showcase adherence to voluntary industry-specific best practices, such as ISO 26262 standard [25] for autonomous vehicles or IEC 61508 [24] for industrial automation systems.
Beyond simply listing certifications, this section offers an in-depth understanding of the sensor's capabilities and boundaries, denoting aspects like its Ingress Protection rating or compatibility with certain environmental conditions. Compliance with these standards vouches for the sensor's reliability, safety, and overall quality, instilling confidence in developers and end-users about its dependable operation. It communicates that the sensor has been meticulously designed and manufactured to meet or surpass specific standards, providing assurance in its performance and longevity. This section serves as a key reference for users evaluating the sensor's suitability for their needs. While our own ML sensor has not obtained specific certifications or verification of compliance to standards, this would be appropriate for commercial devices. Currently, no certification body or standards exist specifically for ML sensors, but a mechanism could be implemented that is tied to existing non-profit entities (e.g., the TinyML Foundation or the International Telecommunication Union).
Figure 6: Carbon footprint breakdown by component of our ML sensor _(left)_. Units are in kg CO\({}_{2}\)-eq. Using the TinyML Footprint Calculator from Prakash et al. [37]_(right)_, we compute the footprint including the environmental cost for sensor transportation and ML model training. The total carbon footprint, including embodied and operational footprint, is approximately 2.34 kg CO\({}_{2}\)-eq.
## 5 Discussion and Limitations
Our datasheet template finds relevance in numerous practical applications, including predictive maintenance in industrial settings [32], environmental monitoring [33; 19], healthcare diagnostics [43], autonomous vehicles [13], and smart homes [46]. By detailing the hardware characteristics and conformity with industry and regulatory standards, the datasheet provides developers and users with a dependable tool to assess sensor suitability for their specific use-cases. In the remainder of this section we discuss the differences between open-source and commercial ML sensor datasheets, limitations to our current approach that will require improvements as new sensors emerge, as well as additional directions for future work. Importantly, we find that our high-level template can be easily adapted for both a wide range of current and future applications but additional development is needed to specify detailed metrics and domain-specific requirements.
### Open-Source vs. Commercial Comparison
At a high-level, the datasheet template was found to be applicable for both our open-source sensor, as well as the commercial sensor, with changes only necessary in a limited number of sections. Sections where changes were necessary were mainly in the data nutrition label and the model characteristics in order to obfuscate aspects of the commercial partner's intellectual property, such as proprietary datasets, models, and training procedures. This obfuscation was critical to enable industrial collaboration and care will need to be taken in the future to ensure that the level of obfuscation balances transparency and intellectual property.
That said, we found it challenging to directly compare end-to-end results from both devices due to the differing approaches utilized by the two sensors. In particular, the commercial sensor utilized a face detection bounding-box model with a detection threshold set at \(\sim 0.6\), whereas our open-source sensor focused on person detection within the full image. This, along with differences in camera specification, meant that the open-source device was better at detecting individuals over longer distances as compared to the commercial sensor, but the commercial sensor had a wider angle of detection than the open-source sensor. This suggests that future research is needed to design and build methods to fairly compare and evaluate different output types for similar applications in order to provide relevant comparisons as the diversity of ML sensor devices grows.
### Limitations
One limitation of our datasheet is the need to test the adaptability of the template to diverse sensor types, like those implementing basic neural network operations [29] or event cameras used in VR/AR [6; 17]. While event-based cameras have different properties than CMOS cameras and utilize alternative approaches such as spiking networks over convolutional networks, datasheets for sensors using either camera type will retain similar sections such as optical properties of the camera and the network training process. We believe that incremental refinement of the datasheet template will address this challenge. A second limitation is that the datasheet relies on the accuracy and honesty of the information provided by the manufacturers or developers, with the potential risk of misinformation, misinterpretation, or lack of updates to the datasheet after product updates. That said, as mentioned in Section 4.9, oversight mechanisms such as certification from a trusted third-party entity could resolve this concern, and the use of blockchain technologies could aid in auditability [5].
As mentioned earlier, to ensure the generalizability of our approach, we are actively collaborating with the Data Nutrition Project, Consumer Reports, and hardware developers to construct a more extensible template with standardized metrics to help make the datasheet more broadly applicable across a wider set of sensors and use cases. In this way, we hope to support the unique requirements of myriads of real-world use-cases which will both enhance the datasheet's effectiveness and applicability, and ensure it remains a robust tool for developers, integrators, and end-users going forward.
### Future Work and Directions
As such, in looking towards the future, our datasheet template opens up numerous avenues for further exploration and specification. For example, in the field of healthcare, refining the template to accommodate the unique needs and regulations for medical devices could be incredibly valuable. This could involve detailing the sensor's biocompatibility, sterilization procedures, or patient data
privacy protocols. Similarly, for industrial applications such as predictive maintenance or process control, future research could focus on expanding the datasheet's sections on durability, reliability under extreme conditions, or integration with industrial control systems. In the realm of autonomous vehicles, the datasheet could be optimized to elaborate on aspects such as real-time performance, resilience to hacking, or interoperability with other vehicle components. For consumer applications like smart home systems, the datasheet could be further simplified and made more accessible to non-technical users, while retaining key information about data privacy, power consumption, or cross-device compatibility.
Finally, despite our focus on improving transparency into the privacy and security implications of ML sensors, fool-proof methods to eliminate harmful applications do not exist, necessitating careful consideration when designing these devices. To facilitate **responsible innovation in ML sensors**, five key underlying principles should be considered as we continue to collaboratively develop and refine our datasheet template and build out additional concrete examples:
1. **Encourage Transparency.** Require publicly available datasheets detailing essential properties of ML sensors, allowing product integrators and end-users to be aware of limitations.
2. **Address Ethical Challenges.** Recognize that traditional ML ethical concerns persist in the ML sensor paradigm, with additional considerations needed for running ML locally.
3. **Establish Third-Party Audits and Certifications.** Collaborate with organizations to develop recognized standards, certification processes, and third-party auditing mechanisms.
4. **Prioritize Privacy and Security.** Implement built-in safeguards against accessing personal data and ensure secure hardware to prevent potential misuse by malicious actors.
5. **Minimize Risks.** Limit factors like connectivity and updatability to mitigate potential risks, while acknowledging the impossibility of eliminating all possible harmful applications.
## 6 Statement of Ethics
Our datasheets were developed in accordance to stringent ethical standards, with careful data collection and processing in line with legal and ethical guidelines. Throughout the study (i.e., previous sections) we have aimed to demonstrate unbiased data representation and transparency regarding sensor attributes. Aware of potential dual-use implications, we have proposed a set of principles to foster responsible innovation of ML sensors and their datasheets to help mitigate possible misuse.
## 7 Conclusion
The advent of ML sensors has brought forward the necessity for transparent, comprehensive, and standard documentation of edge ML systems. This paper has introduced a new datasheet template tailored for ML sensors, synthesizing essential aspects of traditional hardware datasheets with key elements of machine learning and responsible AI. Our template provides a detailed account of ML sensor attributes such as hardware, ML model, dataset, end-to-end performance, and environmental impact. These datasheets are designed to empower end-users and developers with a thorough understanding of ML sensors' capabilities and limitations, thereby fostering responsible and effective use. Datasheets for two real-world ML sensors were designed and developed to illustrate the practical application of these datasheets, highlighting their potential to enhance transparency, auditability, and user-friendliness in ML-enabled systems for both open-source and commercial devices. Moving forward, it is crucial for the ML community to recognize the value of these datasheets and work towards their widespread adoption and standardization. We hope that this research can catalyze further discussion and exploration in this critical area of ML technology.
## Acknowledgements
The authors would like to thank the team at Useful Sensor for providing us with their proprietary ML sensors to evaluate. We also give special thanks to Eliza Grinnell, for her help with designing the end-to-end performance study environment, to Matt Taylor, for his assistance with the data nutrition labels, to facilities management in the Harvard SEC for helping us to set up the experimental study environment, and to all of the participants involved in the experimental study. |
2307.07852 | Determination of Lens Mass Density Profile from Strongly-Lensed
Gravitational-Wave Signals | As the interferometers detecting gravitational waves are upgraded, improving
their sensitivity, the probability of observing strong lensing increases. Once
a detection is made, it will be critical to gain as much information as
possible about the lensing object from these observations. In this work, we
present a methodology to rapidly perform model selection between differing mass
density profiles for strongly lensed gravitational wave signals, using the
results of the fast strong lensing analysis pipeline GOLUM. We demonstrate the
validity of this methodology using some illustrative examples adopting the
idealised singular isothermal sphere and point mass lens models. We take
several simulated lensed signals, analyse them with GOLUM and subject them to
our methodology to recover both the model and its parameters. To demonstrate
the methodology's stability, we show how the result varies with the number of
samples used for a subset of these injections. In addition to the analysis of
simulations, we also apply our methodology to the gravitational wave event pair
GW191230--LGW200104, two events with similar frequency evolutions and sky
locations, which was analysed in detail as a potential lensing candidate but
ultimately discarded when considering the full population and the uncertain
nature of the second event. We find a preference for the singular isothermal
sphere model over the point mass, though our posteriors are much wider than for
the lensed injections, in line with the expectations for a non-lensed event.
The methodology developed in this work is made available as part of the
Gravelamps package of software. | Mick Wright, Justin Janquart, Martin Hendry | 2023-07-15T17:02:55Z | http://arxiv.org/abs/2307.07852v2 | # Determination of Lens Mass Density Profile from Strongly-Lensed Gravitational-Wave Signals
###### Abstract
As the interferometers tasked with the detection of gravitational waves are upgraded and their sensitivity is improved, the probability of observing strong lensing will also increase. Once such a detection is made, it will be critical to gain as much information as possible about the lensing object from the observations. To that end, in this work we present a methodology to rapidly perform model selection between differing mass density profiles for strongly lensed gravitational wave signals, using the results of the fast strong lensing analysis pipeline GOLUM. We demonstrate the validity of this methodology by taking several simulated injections of lensed signals that have been analysed using GOLUM and seeking to recover the model used to create the injection as well as the parameters used in that model. Using these injections, we also demonstrate our method's stability depending on the number of samples used, with the inclusion of every hundredth sample being the approximate limit at which stability is reached. In addition to the analysis of simulations, we also apply our methodology to the gravitational wave event pair GW191230-LGW200104, two events with similar frequency evolutions and sky locations, as expected of lensed pairs, which was analysed in detail as a potential lensed candidate but ultimately discarded when considering the full population and the uncertain nature of the second event. We find a preference for the singular isothermal sphere model over the point mass one, though our posteriors are much wider than for lensed injections, in line with the non-lensed status of the event. The methodology developed in this work is made available as part of the Gravelamps package of lens mass profile model selection software.
## 1 Introduction
In the wake of a rapidly increasing number of gravitational wave (GW) detections, with 90 confirmed detections as of the latest catalog of events, GWTC-3 (The LIGO Scientific Collaboration et al., 2021), and with the coming years set to deliver significant improvements in the sensitivity of the existing network of ground-based GW detectors, currently comprising Advanced LIGO (Aasi et al., 2015), Advanced Virgo (Acernese et al., 2015), and KAGRA (Akutsu et al., 2021; Aso et al., 2013; Somiya, 2012), the opportunities are manifest for the nascent field of GW astronomy to pursue a wide variety of scientific investigations.
One example of a particularly active area of GW research is that of GW lensing (Hannuksela et al., 2019; Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2023; Janquart et al., 2023). Analogously to the deflection of light by a strong gravitational field (Einstein, 1936)--which formed one of the first major tests of Einstein's general relativity (GR) (Dyson et al., 1920)--GW signals may be lensed by intervening objects such as stars, black holes, galaxies and galaxy clusters (Ohanian, 1974; Wang et al., 1996; Takahashi and Nakamura, 2003; Smith et al., 2017; Mishra et al., 2021).
As of the end of the third observing run of the current ground-based detector network, searches for lensed GW signals have not yet yielded a confirmed detection (Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2023; Janquart et al., 2023). However, with the increased detection rate of GW events expected during the next observing run, there is a significant probability that one or more of these detections may be identified as _strongly lensed_ (Smith et al., 2017, 2018; Ng et al., 2018; Li et al., 2018; Wierda et al., 2021; Xu et al., 2022; Smith et al., 2023). In this scenario the GW signal would be lensed by an intervening galaxy (Dai et al., 2017; Ng et al., 2018; Robertson et al., 2020) or galaxy cluster (Smith et al., 2017, 2018; Robertson et al., 2020), forming multiple distinct "images". These images would be potentially magnified, time shifted, and overall phase shifted copies of the original signal that may be detected as repeated events (Dai and Venumadhav, 2017; Janquart
et al., 2021; Liu et al., 2021; Lo & Magana Hernandez, 2021; Janquart et al., 2023).
Detection of such a lensed event would not only be of enormous scientific interest in itself, but would also immediately leverage additional scientific benefit from the information carried by lensed GW signals from compact binary coalescence events--much as the detection of GW signals from such events in the first place has opened the door to completely new ways in which to investigate these phenomena. Examples of the kinds of investigations that could be enabled by lensed GW signals include, but are not limited to, improving the localisation of detected GW events (Hannuksela et al., 2020; Wempe et al., 2022), precision cosmography (Sereno et al., 2011; Liao et al., 2017), tests of the speed of gravity (Fan et al., 2017), or tests of general relativity itself (Ezquiaga & Zumalacarregui, 2020; Goyal et al., 2021).
Moreover, investigations of lensed GW systems may also reveal important information about the nature of the lens. For example, it has been demonstrated that the mass density profile of the lens will influence the properties of lensed GW signals (Takahashi & Nakamura, 2003; Cao et al., 2014), so that observation of the latter may allow the former to be constrained. Investigations have therefore been carried out into how effectively GW observations can allow the characterisation of the signal and thus constraint of the lens model parameters in both the microlensing (Mishra et al., 2021)--where lenses would be individual compact objects such as stars or black holes--and strong lensing (Herrera-Martin et al., 2019; Hannuksela et al., 2020; Wempe et al., 2022; Tambalo et al., 2022) regimes, for specific models of lens. Previous work has also taken the additional step of performing model selection for microlensed GW signals (Wright & Hendry, 2022). However, it is important to note that within that work, the presented methodology was applicable to only single images and this proved to be insufficient to distinguish between lensing models when observing the individual images of strongly-lensed GW signals.
In this work, we present a new methodology for rapidly performing model selection on the results of model-agnostic joint parameter estimation analyses of strongly lensed multiplets. We demonstrate the effectiveness of this methodology by using it to analyse 65 simulated pairs of lensing signals and, in addition, investigate the pair GW191230 and LGW200104 which were considered by the LVK lensing search to be the pair of events with the highest likelihood of lensing--albeit we stress that the non-lensed hypothesis was still preferred and that LGW200104 is disfavoured as a real GW event by its extremely low \(p_{\rm astro}\)(Janquart et al., 2023).
The work is structured as follows. In Section 2, we introduce the basics of strong lensing including the theoretical basis, some potential models of lens, as well as the means by which strong lensing may be searched for model-agnostically in GW data. Section 3 then outlines the methodology by which we may perform model selection on the results of this model agnostic analysis. Section 4 demonstrates the testing that has been done to verify the validity of the methodology both on the set of injections and on the trigger data outlined above. Finally, we present the conclusions and future outlook from this work in Section 5.
## 2 Strong lensing of gravitational waves
In general, lensing of GWs is an amplification process. A standard, non-lensed GW signal--which is described by the phase amplitude \(\phi_{obs}(w,\boldsymbol{\eta})\) where \(w\) is the dimensionless frequency and \(\boldsymbol{\eta}\) is the displacement of the source from the optical axis--experiences an _amplification_ to become the observed lensed signal, \(\phi^{L}_{obs}(w,\boldsymbol{\eta})\). The ratio between these wave amplitudes is termed the _amplification factor_ (\(F\)) and is such that the relation between the unlensed (\(h(f)\)) and lensed (\(h^{L}(f)\)) GW strains is:
\[h^{L}(f)=F(f)\times h(f). \tag{1}\]
This amplification factor varies depending upon the mass density profile of the lensing object. However, it can be calculated for any profile using a single general expression (Schneider et al., 1992; Takahashi & Nakamura, 2003):
\[F(w,y)=\frac{w}{2\pi i}\int\mathrm{d}^{2}x\exp\left[iwT\left(x,y\right)\right], \tag{2}\]
where \(y\) is a dimensionless form of the source displacement (\(\boldsymbol{\eta}\) as described above) and the function \(T(x,y)\) yields the dimensionless time delay.
In the case of strong lensing, the mass of the lensing object is very high (which yields \(w\gg 1\)) and the geometric optics approximation is valid. This results in only the stationary points of the time delay function contributing to the integral, reducing it to a summation over these points (Nakamura & Deguchi, 1999). These stationary points correspond to the individual images produced in the strong lensing process and the amplification of the \(j^{\rm th}\) such image is given by (Takahashi & Nakamura, 2003; Dai & Venumadhav, 2017; Ezquiaga et al., 2021)
\[F_{j}(f)=\sqrt{|\mu_{j}|}\exp\left[2\pi ift_{j}-i\pi n_{j}\,{\rm sign}(f)\right]. \tag{3}\]
As can be seen, this yields three separate observable effects on the lensed signals: the magnification (\(\mu_{j}\)) of the signal's amplitude, the time delay (\(t_{j}\)) between the signals due to the difference in path length between the images, and the Morse phase (\(n_{j}\)). This final property is an overall phase shift of the waveform that may take one of three distinct values, \(n=0,1/2,1\) depending on whether the image corresponds to a minimum, saddle point, or maximum of the time delay function respectively.
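The short sketch below, written against Eqs. (1) and (3), shows how these three effects act on an unlensed frequency-domain strain: each image is rescaled by the square root of its magnification, shifted by its time delay, and given its Morse phase. The power-law waveform and the image parameters are toy values chosen only to keep the example self-contained.

```python
import numpy as np

def image_amplification(f, mu, t_delay, morse_n):
    """Geometric-optics amplification factor of a single image, following Eq. (3)."""
    return np.sqrt(np.abs(mu)) * np.exp(2j * np.pi * f * t_delay
                                        - 1j * np.pi * morse_n * np.sign(f))

def lensed_strain(h_unlensed, f, mu, t_delay, morse_n):
    """Lensed image strain, following Eq. (1): h_L(f) = F(f) h(f)."""
    return image_amplification(f, mu, t_delay, morse_n) * h_unlensed

# Toy unlensed strain on a positive-frequency grid (placeholder, not a real waveform).
f = np.linspace(20.0, 512.0, 2048)
h = f ** (-7.0 / 6.0) * np.exp(1j * f / 100.0)

# Type-I (minimum, n = 0) and type-II (saddle point, n = 1/2) images with assumed parameters.
h_image_1 = lensed_strain(h, f, mu=3.0, t_delay=0.0, morse_n=0.0)
h_image_2 = lensed_strain(h, f, mu=-1.5, t_delay=3600.0, morse_n=0.5)
```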
### Lensing Models
As has been mentioned above, the general functional form of the amplification factor is given by Eq. (2), but its final expression will vary for each different model for the mass density profile of the lens. There are many such models that have been considered to describe possible lenses. Here, we briefly describe the two models that have been used in this work as examples, to demonstrate the validity of the methodology. These models were chosen due to their ability to be solved analytically, which allows for direct testing of the method, as well as the fact that they produce two and only two images, which again allows a complete treatment of these two models as test cases. In follow-up work we will expand our analysis to consider more general mass density profiles that provide large multiplet image sets, so that the effect of detecting a subset of images may be taken into consideration.
#### 2.1.1 Point Mass
The simplest model of the lensing object is that of a point mass. Such a lens produces two images, regardless of the lens-source configuration. One image will be a minimum of the time delay function, and one will be at a saddle point. Consequently, following on from Eq. (3), the amplification factor is (Takahashi and Nakamura, 2003)
\[F_{geo}^{\text{PM}}(w,y)=|\mu_{+}|^{1/2}-i|\mu_{-}|^{1/2}\,e^{iw\Delta T}. \tag{4}\]
The image magnifications for the point mass case are given in terms of the dimensionless source position, \(y\), by \(\mu_{\pm}=1/2\pm(y^{2}+2)/(2y\sqrt{y^{2}+4})\) and the time delay between the two images is given by \(\Delta T=y\sqrt{y^{2}+4}/2+\ln\Big{(}(\sqrt{y^{2}+4}+y)/(\sqrt{y^{2}+4}-y)\Big{)}\).
#### 2.1.2 Singular Isothermal Sphere
As a step up in complexity from the point mass profile, the Singular Isothermal Sphere (SIS) is widely used to describe the dark matter halos of galaxies, due to its ability to describe the flat rotation curves observed for these systems (Binney and Tremaine, 1987). The SIS profile models the galaxy as an extended luminous matter object embedded within a larger dark matter halo. However, the weakness of this profile is that it suffers from a non-physical central singularity.
For an SIS lens, the configuration of source and lens alters the number of images produced: a single image is produced at higher source positions, while two are produced at lower ones. Consequently, the amplification factor--again following Eq. (3)--is piecewise and given by (Takahashi and Nakamura, 2003)
\[F_{geo}^{\text{SIS}}(w,y)=\begin{cases}|\mu_{+}|^{1/2}-i|\mu_{-}|^{1/2}e^{iw \Delta T}&\text{if }y<1\\ |\mu_{+}|^{1/2}&\text{if }y\geq 1\end{cases}, \tag{5}\]
where in this case the magnifications are given by \(\mu_{\pm}=\pm 1+1/y\) and the time delay is given by \(\Delta T=2y\).
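For reference, a direct numerical transcription of the geometric-optics expressions in Eqs. (4) and (5) is given below; the values of \(w\) and \(y\) used in the final line are arbitrary and serve only to illustrate the evaluation.

```python
import numpy as np

def F_point_mass(w, y):
    """Geometric-optics amplification factor for a point-mass lens, Eq. (4)."""
    mu_plus = 0.5 + (y**2 + 2) / (2 * y * np.sqrt(y**2 + 4))
    mu_minus = 0.5 - (y**2 + 2) / (2 * y * np.sqrt(y**2 + 4))
    dT = y * np.sqrt(y**2 + 4) / 2 + np.log((np.sqrt(y**2 + 4) + y) / (np.sqrt(y**2 + 4) - y))
    return np.sqrt(np.abs(mu_plus)) - 1j * np.sqrt(np.abs(mu_minus)) * np.exp(1j * w * dT)

def F_sis(w, y):
    """Geometric-optics amplification factor for a singular isothermal sphere, Eq. (5)."""
    mu_plus = 1 + 1 / y
    if y >= 1:
        return np.sqrt(np.abs(mu_plus)) + 0j
    mu_minus = -1 + 1 / y
    dT = 2 * y
    return np.sqrt(np.abs(mu_plus)) - 1j * np.sqrt(np.abs(mu_minus)) * np.exp(1j * w * dT)

print(F_point_mass(w=50.0, y=0.3), F_sis(w=50.0, y=0.3))
```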
### Strong Lensing Identification
There are a number of means by which one may identify a strong lensing multiplet. These range from examining the overlap between the GW source posteriors derived from the individual images, as outlined in Haris et al. (2018), which is speedy but does not provide confirmation of lensing status--and more importantly in our context does not provide detailed information on the observed lensing parameters--to full joint parameter estimation of the two image candidates using the method described in Liu et al. (2021); Lo and Magana Hernandez (2021); Janquart et al. (2023), which is complete but computationally expensive. In this work, we focus on the methodology presented in Janquart et al. (2021, 2023) implemented in the python package GOLUM (Janquart et al., 2022) as a middle ground in which events are provided with sufficiently accurate estimates on the observable lensing parameters, as well as providing confirmation of lensing status. Whilst we refer the reader to Janquart et al. (2021, 2023) for a complete explanation of how this identification is performed, we briefly outline the methodology here--for simplicity considering the two-image case only, although the method is valid for any number of images.
A pair of lensed images, \(h_{L}^{j}\left(t_{j};\boldsymbol{\theta},\boldsymbol{\Lambda}_{j}\right)\), are described in terms of the binary parameters, \(\boldsymbol{\theta}\) and the individual image parameters, \(\boldsymbol{\Lambda}_{j}\) for the \(j^{\text{th}}\) image. When examined under the lensing hypothesis, \(\mathcal{H}_{L}\), under which the binary parameters should be identical, the joint evidence neglecting selection effects is given by (Liu et al., 2021; Lo and Magana Hernandez, 2021)
\[\begin{split} p\left(d_{1},d_{2}|\mathcal{H}_{L}\right)& =\int p\left(d_{1}|\boldsymbol{\theta},\boldsymbol{\Lambda}_{1} \right)p\left(d_{2}|\boldsymbol{\theta},\boldsymbol{\Lambda}_{2}\right)\\ &\quad\times p\left(\boldsymbol{\theta},\boldsymbol{\Lambda}_{1}, \boldsymbol{\Lambda}_{2}\right)\text{d}\theta\text{d}\boldsymbol{\Lambda}_{1} \text{d}\boldsymbol{\Lambda}_{2}\end{split}, \tag{6}\]
where the term \(p\left(\mathbf{\theta},\mathbf{\Lambda}_{1},\mathbf{\Lambda}_{2}\right)\) is the prior, and the terms \(p\left(d_{j}|\mathbf{\theta},\mathbf{\Lambda}_{j}\right)\) are the individual likelihoods (Veitch and Vecchio, 2010). This may be compared with the joint evidence in the unlensed hypothesis, \(\mathcal{H}_{U}\), which is the product of the individual likelihoods: \(p(d_{1},d_{2}|\mathcal{H}_{U})=p(d_{1}|\mathcal{H}_{U})p(d_{2}|\mathcal{H}_{U})\). The two are compared in the "_coherence ratio_":
\[\mathcal{C}_{U}^{L}=\frac{p(d_{1},d_{2}|\mathcal{H}_{L})}{p(d_{1}|\mathcal{H}_ {U})p(d_{2}|\mathcal{H}_{U})}\,. \tag{7}\]
Calculation of this joint evidence and coherence ratio in full is the basis for complete joint parameter estimation such as that outlined in Liu et al. (2021); Lo and Magana Hernandez (2021).
However, the process of calculating the coherence ratio may be sped up by instead considering the _conditional_ evidence alongside the individual evidences. This allows acceleration of computation by means of importance sampling and a look-up table--the full details of which are described in Janquart et al. (2021, 2023). To outline how this is done, the joint evidence is rewritten in terms of the conditional evidence as
\[p\left(d_{1},d_{2}|\mathcal{H}_{L}\right)=p\left(d_{1}|\mathcal{H}_{L}\right) p\left(d_{2}|d_{1},\mathcal{H}_{L}\right). \tag{8}\]
The conditional evidence \(p\left(d_{2}|d_{1},\mathcal{H}_{L}\right)\) may be evaluated as a "marginalised" likelihood of the form
\[p\left(d_{2}|d_{1},\mathcal{H}_{L}\right)=\int\left\langle p\left(d_{2}|\mathbf{ \Theta},\mathbf{\Phi}\right)\right\rangle_{p\left(\mathbf{\Theta}|d_{1}\right)}p(\mathbf{ \Phi})\mathrm{d}\mathbf{\Phi} \tag{9}\]
where the binary parameters, \(\mathbf{\theta}\), have been replaced with the effective parameters, \(\mathbf{\Theta}\) which absorb the lensing magnification into the observed luminosity distance (\(D_{L}^{\text{obs}}=D_{L}/\sqrt{\mu}\), where \(D_{L}\) is the source luminosity distance) and the time delay into the observed coalescence time (\(t_{c}^{\text{obs}}=t_{c}+t\), where \(t_{c}\) is the coalescence time without lensing and \(t\) is the time delay), and the individual image parameters \(\mathbf{\Lambda}_{j}\) have been replaced with the relative image parameters \(\mathbf{\Phi}\), i.e. the parameters relative to the first of the images.
The first term of the integrand is simply the likelihood of the second event averaged over the posterior samples of the first event. Whilst this term is already faster to compute than the full joint evidence from Eq. (6), it is accelerated further because reusing the posterior samples of the first event allows the construction of a lookup table, enabling rapid evaluation of Eq. (9).
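To make the structure of Eq. (9) concrete, the sketch below shows a deliberately naive Monte Carlo version of the posterior-averaged likelihood: the effective parameters \(\boldsymbol{\Theta}\) are represented by posterior samples from the first event and the relative image parameters \(\boldsymbol{\Phi}\) by draws from their prior. The lookup-table optimisation used by GOLUM is omitted, and the names here are ours rather than part of the package.

```python
import numpy as np

def conditional_evidence_log(theta_samples, phi_prior_draws, log_likelihood_2):
    """Naive Monte Carlo estimate of log p(d2 | d1, H_L), following Eq. (9).

    theta_samples    : posterior samples of the effective parameters Theta
                       from the first image (iterable of parameter sets)
    phi_prior_draws  : prior draws of the relative image parameters Phi
    log_likelihood_2 : callable returning log p(d2 | Theta, Phi)
    """
    outer = []
    for phi in phi_prior_draws:
        # Inner average: likelihood of the second event over the posterior
        # samples of the first event, computed in log space for stability.
        log_l = np.array([log_likelihood_2(theta, phi) for theta in theta_samples])
        outer.append(np.logaddexp.reduce(log_l) - np.log(len(log_l)))
    # Outer average over the prior draws of Phi approximates the integral.
    outer = np.array(outer)
    return np.logaddexp.reduce(outer) - np.log(len(outer))
```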
This culminates in the full calculation of the coherence ratio as
\[\mathcal{C}_{U}^{L}=\frac{p\left(d_{1}|\mathcal{H}_{L}\right)}{p\left(d_{1}| \mathcal{H}_{U}\right)}\frac{p\left(d_{2}|d_{1},\mathcal{H}_{L}\right)}{p\left( d_{2}|\mathcal{H}_{U}\right)}. \tag{10}\]
We note that this coherence ratio is not a full Bayes factor as it does not include the selection effects outlined in Lo and Magana Hernandez (2021). Nevertheless, their inclusion can be done straightforwardly as a post-processing step to the GOLUM analysis.
## 3 Model Selection of Lensed Gravitational Wave Signals
Strong lensing identification methodologies as outlined above have created a framework to answer the question "_for a given pair of images, are these images lensed?_" but have generally not answered the question of "_by what?_". In large part this is because the search methodologies are deliberately model-agnostic, so as not to misidentify an event pair because the method assumes an incorrect model. Some initial steps towards identifying the lensing object have nevertheless been made. Janquart et al. (2022) suggested reweighting the detection statistic using model information from catalogs built from differing models, such as those found in Haris et al. (2018); More and More (2022), or Wierda et al. (2021). Similarly, the methodology presented in Lo and Magana Hernandez (2021) describes the inclusion of the model in the calculation of selection effects to achieve a more complete Bayes factor in the context of "lensed vs unlensed". Whilst the first method can deal with more realistic models, it requires building an extended catalog whenever one wants to explore a new lens model, under the assumption of particular source and lens populations, and is therefore more prone to systematics than the direct application of a lens model. The second method, on the other hand, requires an analytical expression for the magnification probability distribution, which restricts the models it can consider. We note that the latter method could also use the results from a catalog as a model for the magnification probability distribution, but it would then suffer the same caveats as the method presented in Janquart et al. (2022). In both cases, the efforts to apply these methods to candidate lensed signals are detailed in Janquart et al. (2023).
An additional complication for more detailed analysis of strongly lensed signals is the need to detect and identify multiple images. For instance, analysis of individual images with a specific lens model, as outlined in Wright and Hendry (2022) for the geometric optics case, does not successfully differentiate between lens models, owing to the complete degeneracies between the effects of strong lensing and other binary source parameters when only a single image is considered.
A means of performing model selection based on the output of these detection pipelines, one that includes the multiple images and does not require resampling or the construction of extended catalogs (which in turn requires assumptions about population models), would be beneficial to future lensing searches, and it is such a methodology that we present here.
The goal of this methodology is to find the evidence for a given lens model, here termed \(\mathcal{H}_{\text{mod}}\), which consists of a set of lens parameters, \(\boldsymbol{\Psi}\). By direct application of Bayes' theorem, the evidence for that model is given by:
\[p\left(d_{1},d_{2}|\mathcal{H}_{\text{mod}}\right)=\\ \frac{p\left(d_{1},d_{2}|\boldsymbol{\Theta},\boldsymbol{\Phi}, \boldsymbol{\Psi},\mathcal{H}_{\text{mod}}\right)p\left(\boldsymbol{\Theta}, \boldsymbol{\Phi},\boldsymbol{\Psi}|\mathcal{H}_{\text{mod}}\right)}{p\left( \boldsymbol{\Theta},\boldsymbol{\Phi},\boldsymbol{\Psi}|d_{1},d_{2},\mathcal{H} _{\text{mod}}\right)}. \tag{11}\]
To simplify the following, we define the evidence for a chosen model as \(\mathcal{Z}=p\left(d_{1},d_{2}|\mathcal{H}_{\text{mod}}\right)\) and the model likelihood as \(\mathcal{L}\left(\mathcal{H}_{\text{mod}}\right)=p\left(d_{1},d_{2}|\boldsymbol{\Theta},\boldsymbol{\Phi},\boldsymbol{\Psi},\mathcal{H}_{\text{mod}}\right)\). We consider the calculation of the inverse evidence which, as will be made clear, is easier to evaluate and may then be trivially inverted to recover the evidence. From the above, it is given by
\[\frac{1}{\mathcal{Z}}=\frac{p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}, \boldsymbol{\Psi}|d_{1},d_{2},\mathcal{H}_{\text{mod}}\right)}{\mathcal{L} \left(\mathcal{H}_{\text{mod}}\right)p\left(\boldsymbol{\Theta},\boldsymbol{ \Phi},\boldsymbol{\Psi}|\mathcal{H}_{\text{mod}}\right)}. \tag{12}\]
The posterior forming the numerator of Eq. (12) may be expanded further using the chain rule to yield
\[p\left(\boldsymbol{\Theta},\boldsymbol{\Phi},\boldsymbol{\Psi} |d_{1},d_{2},\mathcal{H}_{\text{mod}}\right)=\\ p\left(\boldsymbol{\Psi}|\boldsymbol{\Theta},\boldsymbol{\Phi},d_{1 },d_{2},\mathcal{H}_{\text{mod}}\right)\\ \times p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}, \mathcal{H}_{\text{mod}}\right). \tag{13}\]
The lattermost term of Eq. (13) is, in fact, insensitive to the lens model, since the apparent lensing parameters and the effective BBH parameters may be fully determined from the data. As such, \(p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2},\mathcal{H}_{\text{ mod}}\right)\equiv p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}\right)\), and Eq. (12) becomes
\[\frac{1}{\mathcal{Z}}=\frac{p\left(\boldsymbol{\Psi}|\boldsymbol{\Theta}, \boldsymbol{\Phi},d_{1},d_{2},\mathcal{H}_{\text{mod}}\right)p\left( \boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}\right)}{\mathcal{L}\left( \mathcal{H}_{\text{mod}}\right)p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}, \boldsymbol{\Psi}|\mathcal{H}_{\text{mod}}\right)}. \tag{14}\]
This may be solved by sampling \(p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}\right)\), computing the remaining terms for that sample and averaging the ratio over all samples, i.e.
\[\frac{1}{\mathcal{Z}}=\left\langle\frac{p\left(\boldsymbol{\Psi}|\boldsymbol{ \Theta},\boldsymbol{\Phi},d_{1},d_{2},\mathcal{H}_{\text{mod}}\right)}{ \mathcal{L}\left(\mathcal{H}_{\text{mod}}\right)p\left(\boldsymbol{\Theta}, \boldsymbol{\Phi},\boldsymbol{\Psi}|\mathcal{H}_{\text{mod}}\right)}\right\rangle _{p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}\right)} \tag{15}\]
The quantity over which we sample, \(p\left(\boldsymbol{\Theta},\boldsymbol{\Phi}|d_{1},d_{2}\right)\), is the output of a model-independent joint parameter estimation pipeline (whether full joint parameter estimation or a GOLUM-style approach). In addition, the prior and likelihood are known and easily calculable, with the joint likelihood evaluated using the implementation from Janquart et al. (2023) applied to the produced lensed waveform. Finally, the numerator, the conditional probability of the lens parameters given the model, may be computed from the relations between the observable parameters and the model parameters as outlined in Sections 2.1.1 and 2.1.2.
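A minimal sketch of the estimator in Eq. (15) is given below. All probability terms are passed in as callables, since their concrete forms depend on the lens model and on the joint-PE output; the names and the per-sample structure are ours, not those of any specific implementation.

```python
import numpy as np

def log_model_evidence(samples, log_joint_likelihood, log_model_prior,
                       log_psi_given_observables):
    """Estimate log Z = log p(d1, d2 | H_mod) via the estimator of Eq. (15).

    samples                   : iterable of (Theta, Phi, Psi) triples, where
                                Psi has been obtained from (Theta, Phi) through
                                the model's observable-to-parameter relations
    log_joint_likelihood      : log L(H_mod) = log p(d1, d2 | Theta, Phi, Psi, H_mod)
    log_model_prior           : log p(Theta, Phi, Psi | H_mod)
    log_psi_given_observables : log p(Psi | Theta, Phi, d1, d2, H_mod)
    """
    log_ratios = []
    for theta, phi, psi in samples:
        numerator = log_psi_given_observables(psi, theta, phi)
        denominator = (log_joint_likelihood(theta, phi, psi)
                       + log_model_prior(theta, phi, psi))
        log_ratios.append(numerator - denominator)
    log_ratios = np.array(log_ratios)
    # The sample mean of the ratio estimates 1/Z, so log Z is minus its log.
    log_inverse_z = np.logaddexp.reduce(log_ratios) - np.log(len(log_ratios))
    return -log_inverse_z
```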
As a result, Eq. (15) offers a means by which to compute the evidence for a given lens model directly from the output of the pipelines that would be used to claim a lensing detection. It also bypasses the need for sampling an extended parameter space meaning that the computation would be extremely rapid upon the completion of the model-agnostic search.
We implement Eq. (15) in the Gravelamps package (Wright et al., 2022) for the models outlined above, in which \(\boldsymbol{\Psi}\) reduces to the redshifted lens mass, \(M_{Lz}\), and the dimensionless source position, \(y\). Whilst these models are relatively simple, they allow the method to be tested in a controlled environment in which the observable-to-lens-parameter conversion can be calculated analytically and therefore monitored closely. However, the implementation within Gravelamps will allow the method to be extended to more complex, realistic models, leveraging its existing capabilities to rapidly compute lens amplification factors (Wright and Hendry, 2022).
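To illustrate the kind of observable-to-parameter conversion involved, the sketch below inverts the standard point-mass-lens relations in the geometric-optics limit, where the image magnifications are \(\mu_{\pm}=1/2\pm(y^{2}+2)/(2y\sqrt{y^{2}+4})\) and the time delay is \(\Delta t=(4GM_{Lz}/c^{3})\left[y\sqrt{y^{2}+4}/2+\ln\left((\sqrt{y^{2}+4}+y)/(\sqrt{y^{2}+4}-y)\right)\right]\): the observed relative magnification fixes \(y\), and the observed time delay then fixes \(M_{Lz}\). This is a sketch of the textbook relations, not of the Gravelamps code itself.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.constants import G, c

M_SUN = 1.98892e30  # solar mass in kg

def relative_magnification(y):
    """|mu_- / mu_+| for a point-mass lens at dimensionless source position y."""
    root = np.sqrt(y**2 + 4)
    mu_plus = 0.5 + (y**2 + 2) / (2 * y * root)
    mu_minus = 0.5 - (y**2 + 2) / (2 * y * root)
    return abs(mu_minus / mu_plus)

def time_delay(y, m_lz_solar):
    """Time delay between the two images, in seconds, for a point-mass lens."""
    root = np.sqrt(y**2 + 4)
    phase = y * root / 2 + np.log((root + y) / (root - y))
    return 4 * G * m_lz_solar * M_SUN / c**3 * phase

def lens_parameters_from_observables(mu_rel, delta_t):
    """Invert (relative magnification, time delay) into (y, M_Lz in solar masses)."""
    # The relative magnification decreases monotonically from 1 (y -> 0)
    # towards 0 (y -> infinity), so a bracketed root find suffices.
    y = brentq(lambda yy: relative_magnification(yy) - mu_rel, 1e-6, 50.0)
    m_lz = delta_t / time_delay(y, 1.0)  # the delay is linear in M_Lz
    return y, m_lz

print(lens_parameters_from_observables(mu_rel=0.3, delta_t=10.0))
```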
## 4 Investigation of Methodological Performance
In order to investigate the performance of the methodology in realistic scenarios, it has been subjected to a number of tests. The primary test was its performance on a constructed set of 75 lensed GW signal pairs with known parameters, which allowed us to assess both the ability of the method to identify the lensing model and the recovery of the model-specific lensing parameters. The stability of the evidence calculation was also investigated by varying the number of samples considered. Finally, whilst a real lensing event remains unavailable, the methodology was exercised on real-world data by considering the event pair GW191230-LGW200104: the pair determined by the lensing searches in Janquart et al. (2023) to have the highest significance among the ultimately discarded candidates.
### Injection Set Investigation
The main test of methodological performance was to investigate the results of subjecting a series of simulated lensing events to the method. The testing set consisted of 75 pairs of lensed GW signals. To consider a realistic testing set, the mass, spin, and redshift priors for these were chosen to reflect the inferred population from the GWTC-3 data, i.e. in line with Abbott et al. (2022)\({}^{1}\). The other parameters were drawn from the usual priors, see Janquart et al. (2021) for example. The events were chosen to be lensed by SIS lenses with a uniform prior on the redshifted lens mass between \(10^{6}M_{\odot}\) and \(10^{9}M_{\odot}\) and a power-law prior on the source position with \(\alpha=1\) over the range of source positions resulting in multiple images, avoiding the direct hit and caustic cases, i.e. \(y=0\) and \(y=1\) respectively.
Footnote 1: To be precise, we took the maximum likelihood parameters of the population models given in (LIGO Scientific Collaboration and Virgo Collaboration and KAGRA Collaboration, 2023).
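For reference, a minimal sketch of how such lens-parameter draws might be realised is given below; the cut-offs excluding the degenerate end points are our choice and are not specified in the text, and the injection machinery itself (built on GOLUM) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_events = 75

# Uniform prior on the redshifted lens mass between 1e6 and 1e9 solar masses.
m_lz = rng.uniform(1e6, 1e9, size=n_events)

# Power-law prior p(y) proportional to y^alpha with alpha = 1, restricted to
# the SIS multiple-image regime and excluding the direct-hit (y = 0) and
# caustic (y = 1) cases via small cut-offs (assumed values).
alpha = 1
y_min, y_max = 1e-3, 1.0 - 1e-3
u = rng.uniform(size=n_events)
y = (u * (y_max**(alpha + 1) - y_min**(alpha + 1)) + y_min**(alpha + 1))**(1.0 / (alpha + 1))
```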
Once the source and lens parameters were drawn, the lensed signal pairs were constructed and injected into a detector network consisting of the two LIGO detectors and Virgo, with noise representative of the expectations for O4. These were then subjected to joint parameter estimation using the GOLUM pipeline (Janquart et al., 2021, 2023) operating with the Nessai (Williams et al., 2021) nested sampler. The signal injection and recovery were done using the _IMRPhenomXPHM_ waveform (Pratten et al., 2021). The resulting samples were then subjected to the methodology and the evidences for each of the models compared to arrive at the resultant Bayes factor.
Figure 1 shows the distribution of the recovered Bayes factors comparing the SIS and point mass lensing models in the form of both the raw histogram of the data as well as the inferred distribution constructed using a kernel-density estimator (KDE). All of the events considered demonstrate a positive log Bayes factor, indicating a consistent preference for the true SIS model. The minimum log Bayes factor of the considered events was 1.44, the maximum was 7.98, and the average log Bayes factor for the true SIS model across all of the events was 3.37. This is sufficiently large to demonstrate a consistent identification and preference for the correct model in these injections.
In addition to the model identification, the method produces candidate posteriors for the lens parameters in each of the models. These too may be interrogated to assess the ability of the method to constrain and recover the model-specific parameters. Figure 2 illustrates an example of the recovered posteriors for the parameters of a single event from the testing set for the SIS model, with the true values indicated. In this case, the parameters demonstrate relatively tight constraints, with the true values lying inside the constructed posteriors.
As expected, the tightness and accuracy of the constraints on the model-specific lensing parameters are correlated with the tightness and accuracy of the constraints on the model-agnostic lensing observables. This is demonstrated in Figure 3, which contains a subset of the total runs. In this figure, it can be seen that cases with a relatively tight constraint on the relative magnification yield a relatively tight constraint on the source position. Similarly, in the case of the fifth pairing in the figure, the failure to constrain the relative magnification accurately in the model-agnostic search results in a failure to constrain the source position accurately.
### Stability of Methodological Performance
Whilst the methodology is relatively inexpensive to perform over all the samples from a given GOLUM investigation, preliminary results may be sought more quickly by using fewer samples. Similarly, the number of samples will differ from one event to the next; therefore, one metric that needs to be investigated for the method is the number of samples that yields a stable result.
Figure 4 shows the evolution of the Bayes factor calculation for a number of events as progressively more samples are included. This was done by taking the full sample set and calculating the Bayes factor from every \(n^{\rm th}\) sample, scaled by the maximal value calculated so that multiple events can be overlaid. As can be seen from the figure, the estimated value of the Bayes factor fluctuates significantly when a small number of samples is used and narrows as a greater number of samples is included, demonstrating that the solution is stabilising, with an approximate preliminary value becoming representative at the inclusion of every \(100^{\rm th}\) sample.
Figure 1: Distribution of the log Bayes factors comparing the SIS and point mass lensing models for the injection set. Shown in black is the raw histogram of these with the red overlay showing the inferred distribution from a kernel-density estimator. All events analysed show a positive Bayes factor which indicates that the SIS was correctly preferred in all cases.
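A sketch of this convergence check is given below: the joint sample set is thinned by increasing factors, the log Bayes factor is recomputed for each thinned set, and the values are scaled so that several events can be overlaid. The use of the full-set value as the reference, and the names, are our simplifications.

```python
def convergence_curve(samples, log_bayes_factor,
                      thinning_factors=(1000, 500, 200, 100, 50, 20, 10, 5, 2, 1)):
    """Scaled log Bayes factor computed from every n-th joint posterior sample.

    samples          : sequence of joint posterior samples
    log_bayes_factor : callable mapping a set of samples to a log Bayes factor
    """
    reference = log_bayes_factor(samples)  # estimate from the full sample set
    return {n: log_bayes_factor(samples[::n]) / reference for n in thinning_factors}
```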
In addition to the overall fluctuation of the Bayes factor, a number of events demonstrated that, at low sampling rates, the estimated value of the Bayes factor can appear to cluster around a value different from the one to which the estimate later converges. An example of this is shown in Figure 5. The figure also indicates an approximate point at which a preliminary result, obtained from a limited number of samples, is robustly representative of the true Bayes factor. Our results again indicate that including every \(100^{\rm th}\) sample is sufficient in this respect.
### Example Deployment on GW191230-LGW200104
As discussed in the introduction, as of the end of the third observing run, the LVK lensing searches which include joint parameter estimation analyses appropriate for this method have yet to yield any confirmed detections (Hannuksela et al., 2019; Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2023). Consequently, there is no real lensing data on which to perform a test case analysis. To represent such a test case, we deploy the method on the event pair GW191230-LGW200104, which was identified to have the highest Bayes factor of the ultimately discarded candidates from searches thus far (The LIGO Scientific Collaboration et al., 2023; Janquart et al., 2023). Similarly to Janquart et al. (2023), we stress that we do not claim that this pair is a genuine lensing event; however, we treat it as though it were for the purposes of this test deployment. Similarly, owing to the relatively high SNR of the trigger LGW200104, it is treated as a real GW event despite its very low \(p_{\rm astro}\) of 1%, which suggests this is unlikely in actuality (Janquart et al., 2023).
The pair has been re-analysed using the GOLUM pipeline and the Nessai nested sampler in order to yield a model-agnostic search result equivalent to those of the injection set. Performing the model selection method on the results yielded a preference for the SIS model with a \(\log_{10}\) Bayes factor of 3.65. This is consistent with the finding of Janquart et al. (2023) that consideration of the SIS model improved the preference for the pair as compared to the raw model-agnostic investigation, though Janquart et al. (2023) found the greatest improvement with the SIE model. The posteriors from the GOLUM investigation as well as the reconstructed posteriors for the SIS parameters are shown in Figure 6. These are fairly broad, as might be expected when including a trigger that is likely not a genuine lensing event or indeed a genuine GW event. Additionally, we note that there is distinct railing in the reconstructed source position posterior against the direct hit case, i.e. \(y=0\), which may also be an indication of the probably unlensed nature of the candidate pair. However, should this have been a genuine lensed event pair, the median values of these reconstructed parameter posteriors for the SIS model would suggest a \(5.08^{+0.94}_{-2.09}\times 10^{10}M_{\odot}\) lens at a source position of \(0.23^{+0.15}_{-0.14}\).
Figure 2: Constructed candidate posteriors on the redshifted lens mass (left) and source position (right). The raw histogram of the samples is presented in black, the inferred posterior distribution using a KDE in blue, and the vertical red dotted line indicates the true parameter value. In this instance, the posteriors indicate a tight constraint with the true values of the parameters inside the posterior and close to its mode, a successful recovery.
Figure 3: Comparisons between the posterior on the relative magnification as determined by the GOLUM pipeline and the constructed posterior on the source position for the SIS model using the method presented in this work. The true value of these parameters is indicated by the red vertical dotted line. As can be seen, the constraint on the model parameter depends upon both the width and accuracy of the lensing-observable constraint.
## 5 Conclusion
With the increasing sensitivity of the interferometers, the detection of gravitational lensing of a GW event becomes increasingly likely. When such a detection occurs, the nature of the lens, as well as of the source, will be studied. One of the fundamental questions about the lens will be how its mass is distributed. We present here a method to rapidly determine the mass density profile using the output of the model-agnostic strong lensing joint parameter estimation pipelines, without the need to build extended catalogs of lenses under assumptions about the source and lens populations. This method has been implemented within the python package Gravelamps (Wright et al., 2022) to augment its pre-existing capabilities to determine mass density profiles for microlensing events.
We have demonstrated the efficacy of the method using a pair of relatively simple lens models--the point mass and the SIS lenses--though it is not restricted to these models. In order to test the methodology's performance, it was confronted with an injected set of 75 lensed BBH pairs following the observed distribution of events during the third observing run of the LVK detectors. In each case, the correct lensing model was identified, with the recovery of the model-specific lens parameters in line with the recovery of the observable, model-agnostic lensing parameters.
To test the stability of the calculated results, we observed the behaviour of the Bayes factors as progressively more samples were included. Preliminary results computed from every \(100^{\rm th}\) sample were already representative of the final solution obtained using all of the samples.
Finally, whilst there remains no real lensing data on which to deploy the methodology, we test deployment on actual data by examining the GW191230-LGW200104 pair. It had the highest Bayes factor of the ultimately discarded pairs. Our analysis indicated that if the pair were genuinely lensed, which we stress is unlikely, then the preferred lens of the two models examined would be a \(5.08^{+0.94}_{-2.09}\times 10^{10}M_{\odot}\) SIS lens at a dimensionless displacement from the optical axis of \(0.23^{+0.15}_{-0.14}\). This is in line with the increased support for the pair when considering the SIS model in Janquart et al. (2023)--though we note that the SIE model was the most preferred in that case.
Having demonstrated the viability of the method, future work will investigate more realistic models for the mass density profile of the lens, such as the aforementioned SIE or the Navarro-Frenk-White model. For these more sophisticated models, additional work will be needed to map rapidly from the lensing observables to the model-specific parameters in the absence of the relatively simple analytical relationships that exist for the point mass and SIS models considered in this work.
Figure 4: The resulting value of the Bayes factors found by using every \(n^{\rm th}\) sample of the full set, scaled by the maximal value from the calculations to allow multiple events to be shown. As can be seen, a preliminary estimate of the Bayes factor, from applying the methodology using every \(100^{\rm th}\) sample, is robustly representative of the final estimate using all of the samples.
Figure 5: The results of the stability analysis for an event which demonstrates an initial deviation from the later stabilisation value. As for the main set, by approximately the inclusion of every \(100^{\rm th}\) sample, the preliminary result becomes representative of the final result.
## Acknowledgements
The authors thank Chris Van Den Broeck for useful discussions and insights on related topics. MW acknowledges the support of the Science and Technologies Facilities Council (STFC) of the United Kingdom. JJ is supported by the research programme of the Netherlands Organisation for Scientific Research (NWO). MH acknowledges additional support from the Science and Technologies Facilities Council (Ref. ST/L000946/1).
This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
The authors are grateful for computational resources provided by the LIGO Laboratory and the Leonard E Parker Center for Gravitation, Cosmology, and Astrophysics at the University of Wisconsin-Milwaukee and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459, and PHY-1626190 and PHY-1700765 respectively.
Figure 6: _Top_: Posteriors on the relative lensing parameters for the GW191230–LGW200104 pair as determined by the GOLUM investigation. _Bottom_: The reconstructed posteriors for the source position and redshifted lens mass as calculated by the methodology presented in this work.